Hacker News | next_xibalba's comments

Never buying these again. Bought the 1st generation and they died just after the warranty expired.

Remember that exec tech salaries are extreme outliers. I worked for an exec in manufacturing. He had full P&L responsibility for a business segment with ~150 employees, $27 million in revenue at 40% gross margins, and a production plant. His total comp was ~$300k.

Now just think of the comp levels in sectors like government, education, etc.


The number of people in the category is simply impossible to square with any normal person's definition of "top executive".

If you click the link it mentions "general and operations managers". They're tossing a lot of different roles into the category.


> Remember that exec tech salaries are extreme outliers.

It's the combination of tech and big or fast growing companies.

People who operate in FAANG or Silicon Valley bubbles (or who spend too much time on Blind) can lose track of what salaries look like in the rest of the world.

I often share Buffer's open salary page because their compensation is actually pretty normal from all of the data I've seen and hiring I've done: https://buffer.com/salaries

Every time it gets posted there are comments from people aghast that the software engineers "only" make $200K and in disbelief that the CEO's salary is "only" $300K.


This is why I no longer go to the theater. The norms around how one behaves in theaters have been destroyed (at least to my preferences).

According to OECD [1], population growth outran capital, housing, and infrastructure. So it's kind of like they didn't have enough "slots" to plug all of these additional people into.

They don't claim this is the only or even the primary cause of Canada's weak per-capita GDP growth, though. As you would expect, there are many, many causes.

[1] https://www.oecd.org/content/dam/oecd/en/publications/report...


Yeah, I think this is the real answer here, not the elaborate social signaling/insider conspiracy takes. These are people who are communicating non-stop and are mostly boomers who did not grow up on keyboards.

> RSI

Wait, we have another acronym to track. Is this the same as or different from AGI and/or ASI?


Some people should definitely be getting Repetitive Strain Injury from all the hyping up of LLMs.

Recursive self improvement. It's when AI speeds up the development of the next AI.

Recursive Self Improvement

Have you considered using Gemini?

Google seems to be on a hot streak with their models, and, since they're playing from behind, I'd expect favorable pricing and terms. But, I don't know anyone who is using or talking about Gemini. All the chatter seems to be Anthropic vs. OpenAI.


Because Gemini, despite what the stats say, still produces garbage once the problem gets harder. It nails it under lab conditions, but on messy reality, creativity, or even code quality it's a far cry from Opus or the latest GPT-5.4, by a long shot. And it always has been. It's pretty good inside GSuite because of the integrations, but standalone it's near worthless compared to even grok-code-fast, which doesn't think much at all (but damn, it is fast). At this point Google keeps throwing pots of AI noodles at every wall in reach to see what sticks, which is more a kind of desperation that still works to boost Wall Street high scores, but not exactly a streak or a breakthrough. Just rapid-fire shotgun launches to see if anything sticks. No one serious talks about Gemini because it's still not worth considering for real things outside shiny presentations and artificial benchmarks.

Gemini schools the other two when doing code reviews.

I used to think tokens were a commodity, but it's becoming clear that the jagged frontier differs enough, even for the easiest use case of SWE, that there's room for two if not three providers of different foundational models. It isn't winner-takes-all; they're all winning together. Cursor isn't properly taking advantage of the situation yet.


My experience exactly. The more "real" the problems become, the more other models become unsuitable compared to Claude, the sole exceptions being DeepSeek/Kimi, which, while strictly speaking not better on metrics and basic tasks, are more interesting and handle odd, totally out-of-domain stuff better than the US models. An example: code I wrote for a hypercomplex sedenion-based artificial neural network broke Claude so badly it started saying it was ChatGPT and couldn't evaluate/run code. Similar experience with all the US models, which are characterized by being extremely brittle at the fringes, though Claude least among them. Meanwhile the Chinese models are less capable at cookie-cutter stuff but keep swinging when things get really weird and unusual. It's as if the US models optimize for the lowest minima achievable, and God help you if the distribution changes, while the Chinese models seem to optimize for the flattest minima, giving poorer quality across the board but far more robust behaviour.

I've tried. It's just not very good compared to either mentioned alternative.

I can't even use 3.1 with Gemini CLI, not sure why.

Is this trolling, or are you serious? Those are all, IMO, hideous! They are blocky and unrefined and look like they'd be uncomfortable to wear.

While I wouldn't call them ugly, I agree that they look just like, well, watches to me. I wouldn't pay a premium for any of them.

Since I'm not into luxury watches, a common occurrence is:

Me to a stranger: "Wow, cool watch! Which one is it?"

Stranger: "Random cheap brand I found on Amazon."

If you've never heard of Omega, Rolex, etc, chances are you won't be able to distinguish a cheap watch from an expensive watch. It's just the brand.

(OK, OK, chances are the material is a lot better - scratchproof, etc. - but it probably still costs less than 10% of the sale price to make.)


Of course they look like normal watches; that's the point. However, if you paid for one, you would get an extremely polished watch, rare/high-quality materials, hand-checked for every imperfection, etc., as opposed to an almost identical-looking watch from a brand like, say, Orient, in which you would be able to find minor imperfections even as a non-enthusiast.

I dunno. Having looked at cheaper watches, I don't see imperfections. I'll grant that over time they'll show up (quicker wear and tear).

Here's the thing: Ever since I was a kid, the following features were basic:

1. Tells time

2. Tells date

3. Stopwatch

4. Alarm

5. Chrono (yes, I used that a lot for years).

All this for $50 or less.

I'm assuming the >$100 watches have all this? If not, IMO, the watch is simply failing at the very basics. It shouldn't even be called a watch.

Then Tier 2:

- Timer

- Multiple alarms

- World time

I pay extra for these (and use all of them).

The next tier (Tier 3):

- No batteries and/or solar. Definitely no manual winding.

The next tier:

- Stuff like GPS, sunrise/sunset, etc.

Personally, only after Tier 3 would I consider paying extra for all the things you mentioned. But paying $300+ for a watch without a timer or world time? You've been scammed.

How much have you paid for a pen? Montblanc ballpoint pens can exceed $1000. Apply everything you said to a (ballpoint) pen - one that will most likely write worse than a good $50 fountain pen - and you'll see how people view what you are saying.


These particular watches are not to my taste either (though I don't find them ugly), but they are some of the most popular examples of what the watch community finds very pretty.

This is effectively a contract. You can put anything you want in a contract, but contracts are enforceable only to the extent they comply with the law (statutes, case law, the constitution, etc.)

So to settle this, someone needs to violate this license and get sued. Or maybe proactively sue?


Remember: this applies to all LLM-generated code, not just chardet. No LLM-generated code is copyrightable, and thus cannot be licensed. The legal challenge could come in any context where LLMs have been used and the code placed under any license (proprietary or otherwise).

Which is going to cause a collision between the "not copyrightable" and "derived from copyrighted work" angles.


If we were able to give the Ukrainians fully automated kill bots, and those kill bots enabled Ukraine to swiftly expel the Russians from their territories, would that not be a good thing? Or would you rather the meat grinder continue to destroy Ukraine's young men to satisfy some moral purity threshold?

If we could give Taiwan killbots that would ensure China could never invade, or at least could never occupy Taiwan, would that be good or bad? I have a feeling I know what the Taiwanese would say.

While we're at it, should we also strip out all the machine learning/AI driven targeting systems from weapons? We might feel good about it, but I would bet my life savings that our future adversaries will not do the same.


You seem to see everything from a binary perspective. China bad, Taiwan good. Russia bad, Ukraine good.

The world is more nuanced than that.

But to answer your question. No we should not give anyone automatic kill bots. Automatic kill bots shouldn’t even be a thing.


Yes, I think Russia's invasion of Ukraine is quite clearly a binary Russia=bad, Ukraine=good. Same for the impending Chinese invasion of Taiwan. Perhaps you could explain the nuances under which Russia was the good guy? Better yet, maybe you could explain it to the Ukrainians who have been displaced, or the family members of those who have been killed, or the soldiers who have been permanently maimed?

Whether you or I like it or not, automatic kill bots will be a thing. It will only be a question of which countries have them and which do not.


And there is evidence automated killbots were already used in Gaza (not that that's a good thing).

Generally, in war, there are no rules, and someone is going to make automated killbots, and I expect one place to see them quite soon is in the Russia-Ukraine war. And yes, I'm hoping the good guys use them and win over the bad guys. And yes, there are good guys and bad guys in that conflict.


The thing about building fully automated kill bots is that then you've built fully automated kill bots.


Fully automated kill bots are coming, whether any of us like it or not. The question is, which militaries will have them, and which militaries will be sitting ducks? China is pursuing autonomous weapons at full speed.

Personally, I think it'd be great to have the Anthropic people at the table in the creation of such horrors, if only to help curb the excesses and incompetencies of other potential offerings.


Rephrasing your "inquiry" to highlight how short-sighted this is:

If giving the Ukrainians nuclear warheads could help them defeat Russia, then isn't that good? Wouldn't using nuclear warheads to eradicate Russia end the war almost immediately?

Like, why are we even bothering with automated killing robots? That's stupid. We already have nukes, and they're the ultimate weapon, so just do that.

Do you not see how this greedy line of logic could easily lead to the destruction of not just the US, but the entire human race?

This is LITERALLY the plot line of Terminator. Literally. "Hey guys let's build skynet, isn't that a good idea??"

Like... do you not hear yourself? What is not clicking here?


> This is LITERALLY the plot line of Terminator. Literally.

No, it's not. Skynet was a recursively self-improving ASI. You are conflating an autokill bot with, apparently, an ASI that can embody and replicate itself.

> If giving the ukranians nuclear warheads could help them default Russia, then isn't that good?

Surely you can recognize how an autokill bot and a thermonuclear weapon are different, right? These are categorically different concepts. What's more, Russia is a nuclear-armed opponent with, reportedly, dead man's hand systems that would launch its entire nuclear arsenal even if its command structure were destroyed in a nuclear first strike.

I'll just repeat the basic point here: autokill bots are coming. Whether any of us like it or not. Just like nuclear weapons. If I could wave a magic wand and eliminate all weapons of mass destruction in the world, I would. But that's not reality. So, walk me through how you think this plays out if we don't develop them, but Russia, China, etc. do?

I can't think of a more clear cut case of moral, justified deployment of autokill bots than to aid Ukraine in expelling the Russian invaders.


> No, it's not. Skynet was a recursively self improving ASI. You are conflating an autokill bot and, apparently, an ASI that can embody and replicate itself.

I never said it was any of that. The point of Terminator is that decision-making around war was taken out of the hands of humans, and then nobody could control it.

You people really don't get it, do you? Skynet doesn't need to be evil, or conscious, or self-improving. It can be good, very good. But when WE don't control it, we don't know the consequences of what we created. Nobody saw AI psychosis coming, but we created it by making the models good, by making the models listen to you and agree with you.

For fuck's sake, you could make an automated system that just signs postcards and, if you give it enough access, it could wipe out the human race. Not because it's evil (it might not even have an understanding of evil) but because we don't control it, and it will meet its own goals without concern for us because it's not human.

> autokill bots are coming. Whether any of us like it or not.

Inevitability is not an argument, and I won't humor it. It's cognitively lazy and dishonest. With this reasoning you can justify ANYTHING. Rape, murder, nuclear warfare, killing and eating children. This reasoning is bad and stupid and nobody should do it anymore.


> you could make an automated system that just signs postcards and, if you give it enough access, it could wipe out the human race.

I mean this sincerely. You really ought to stop reading Bostrom and Yudkowsky. It is very hard to take this kind of hysteria seriously.

> Inevitability is not an argument, and I won't humor it.

It is, and I don't care what you will or won't humor. Just answer me this: how will you convince all the other countries of the world not to build terminators? The leading example of "it is inevitable" is of course China, which is already testing and deploying semi-autonomous robots throughout its national security apparatus. If your answer is "just because they do it doesn't mean anyone else should," then you're not to be taken seriously on this topic.

> killing and eating children

I'd really like to know what convoluted scenario you could conjure in which one would argue that killing and eating children is inevitable.


Saying something is hysteria is also not an argument. Again, it's just intellectually lazy. Just because you refuse to take problems seriously doesn't mean they cease to exist, it just means you lack critical thinking.

And, as for eating and killing children, it's easy: starvation. If you're hungry enough you'll eat children. All it takes is a supply chain disruption, much more likely than nuclear war even.

So why not eat the children now? It's gonna happen anyway.

It's true that I am jumping the gun here. We don't need an apocalypse for AI to suck ass. It sucks ass right now and is causing massive problems. We should probably focus on that.


Saying something is intellectually lazy is not an argument. It's just intellectually lazy. Just because you refuse to take China's unrestrained development of autokill bots seriously doesn't mean they won't do it, it just means you lack critical thinking.

Perhaps you could write Xi a nicely worded letter informing him that he really shouldn't let his military-industrial complex develop autokill bots. When he inevitably realizes the error of his ways (mostly due to you accusing him of intellectual laziness), he'll no doubt shut down autokill bot development. Taiwan and India will rest easy and praise your hard-working intellect. Then we can shift all societal resources to focusing on LLMs and why you think they suck.


Young Ukrainian man (24 y.o.) here, living and working in the police 30 kilometres away from the actual frontline.

No, thanks, we don't need those "fully automated kill bots". There's absolutely no guarantee that they wouldn't kill the operator (I mean, the one who directs them) or human ally.

We're pretty much fine with drone technology we have.

But for me personally, that's not the most important point. What is more important - and what almost no one in the Western countries seems to realise (no offence, but many Westerners seem to be kind of binary-minded: it's either 0xFFFFFF or 0x000000, no middle ground at all) - is that on the Russian side, the soldiers are not "fully automated kill bots" either. Sure, there are a lot of... let's say war criminals. Yes, for sure. But en masse they are the same young men that you can see on the Ukrainian side. Moreover, many people in Ukraine have relatives in Russia, and there have already been cases where two siblings were in different armies, literally fighting each other. So in my opinion, "fully automated kill bots" are not an option here. At least unless you deploy them in Moscow and St. Petersburg to neutralize all of the Russian elites, the military command, and the other decision-makers of the current regime.

