Hacker News | rogerrogerr's comments

Ground vehicles consistently have radio conventions that just don't fit into the aviation world. It feels like a contributor to this accident; you can hear the controller's brain skip a couple of gears trying to understand the goofy word order from the truck.

Pilots and controllers speak the same language in the same order; ground vehicles just kinda say stuff.

The aviation-ized version of your proposal would be something like this:

> tower truck 1 short of 4 at delta, red status lights


Not a chance any airport is letting firetrucks drive around in the movement area without comms to tower/ground.

AI-proof is probably the wrong way to look at it, but there is substantial advantage in being in one of the _last_ to be automated industries. Social safety nets and such are probably set up by the time the robots come for the last jobs.

>Social safety nets and such are probably set up by the time the robots come for the last jobs.

What makes you think "Social safety nets" will be the solution the élites land on?

If we were to wargame out different scenarios, we'd likely find there are a lot of potential solutions to the problem of large masses of people who are not useful to the cause of productivity in your society.

Giving non-élites a social safety net is actually one of the most resource intensive solutions. Not saying our oligarchs would not choose that solution. Just pointing out that it would severely impact their bottom lines. More than almost any other solution in fact.


If elites do not provide a social safety net, why would the masses respect their elite status and resource endowments anyway?

Unless you are suggesting billionaires build private armies in some sort of neo-feudalism, there are no elites who are not dependent on the existing social structure.


Private companies literally are building drone armies right now. Are you sure their use will be limited to Ukraine and the Middle East?

Yeah that's why my argument is us proles cannot wait for rock bottom, we have to get these guys now

On the upside they'll all generationally churn out of life, acting as a forcing function on future decisions.

Time isn't linear. No guarantees we march right along handing batons to the next age group. Which generation will the future elites making the choices come from?

Millennials and GenZ (despite a blip towards Trump in 2024, they blipped hard away from him as his policies of 2025 hit them hardest) are trending progressive as they age.

And Millennials and GenZ outnumber a GenX population that is the only cohort to not sour on Trump. GenX influence will rapidly shrink as Boomers churn out.

No linear time. No single clock all living things tick to. Meaning the population composition is not guaranteed to exist such that the old ways are the future. No guarantee 50 year middle managers waiting patiently end up elites in control. They might be too copy paste and conservative.

https://fortune.com/2025/08/07/gen-x-ceos-decreasing-baby-bo...


I don't know about this analysis?

Number one, Trump won the presidency on the strength of his support from younger generations of Americans. It remains to be seen whether or not those younger generations will turn against Trumpism.

Number two, GenX. Not only is GenX the generation that voted against Trumpism the most, statistically speaking, they are also the smallest generation, i.e. the least statistically relevant where votes are concerned. (Which is why it didn't really matter that they voted against Trump.)

I agree with your assertion that the Boomers will churn out. I disagree that it will matter that Boomers churn out. Mainly because support for Trump-like policies is, again, strongest among the younger generations. The younger generations are literally how the guy won the presidency and they will represent more of the populace in the future, not less. So until I actually see millennials and GenZ vote against Trump-like policies, I'm not really sure how things get better?


The Canadian social safety net has big enough holes that rather than incur the costs as a first resort, the Canadian government has taken to passing out "are you aware of your options regarding MAID?" pamphlets to decidedly non-terminal patients.

There's only one way to AI-proof yourself: become enormously rich and join the Davos class.


In the US, my state had a spate of sending amber alerts at 2am, mostly for old people escaping from old people homes.

I’m sure a ton of people just turned them off. They did ridiculous damage to the system.

I thought about starting an Amber Alert Militia, so any amber alert gets a fully armed response from the kind of people who join militias. That would have probably made the cops think twice about sending stupid alerts for stupid things.

Brenda escaping the memory care center yesterday morning (!) does not mean you should warn us all to watch out for her the next morning at 2am. Unless she’s found an axe and is going door to door chopping people up. That’s the only reason to send that alert.

I bet they killed a few people with heart attacks by setting off sirens in every bedroom in the county.


I live in another part of the world and have never heard of an amber alert being sent, I assumed they were for nuclear/missile events.

Condolences for the subjects, but it’s bonkers they’re used to spam people about vulnerable individuals who aren’t posing an active threat.


There are other emergency alerts; Amber alerts are for missing people. At least AFAIK, but TBH I don't really care at this point. It's a poorly thought out and implemented system with shitty software that I end up disabling, for better or worse.

It's unfortunate because the world would presumably benefit from a properly standardized and above all globalized way of subscribing to geographically local alerts of various sorts. My local government should be able to advertise their servers via the cell towers and I should be able to add and remove subscriptions from anywhere in the world as I see fit. And above all the messages should be properly authenticated. Last I checked the system was so half baked that it was trivially vulnerable to spoofing.
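As a toy sketch of the authentication piece only (hypothetical; this is not how the real cell-broadcast stack works, and a real broadcast system would use asymmetric signatures rather than a shared key), verified alerts might look like:

```python
import hmac
import hashlib
import json

# Hypothetical scheme: a local authority signs each alert, and the
# subscriber's device verifies the tag before displaying it, so a
# spoofed broadcast from an unknown sender is simply dropped.

def sign_alert(key: bytes, alert: dict) -> str:
    """Produce an HMAC-SHA256 tag over a canonical JSON encoding."""
    payload = json.dumps(alert, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_alert(key: bytes, alert: dict, tag: str) -> bool:
    """Constant-time check that the tag matches the alert."""
    return hmac.compare_digest(sign_alert(key, alert), tag)

key = b"local-authority-key"  # placeholder; real systems would publish public keys
alert = {"region": "county-42", "type": "missing-person", "text": "..."}
tag = sign_alert(key, alert)

assert verify_alert(key, alert, tag)                  # genuine alert accepted
assert not verify_alert(b"attacker-key", alert, tag)  # spoofed sender rejected
```

The subscription side (adding/removing regions from anywhere in the world) would then just be key distribution plus topic filtering on the device.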



I meant no disrespect to victims of awful crimes, but pointing out the capitalization seems awfully pedantic.

Read the linked wikipedia article please. It is a proper name from back in the day, not the color amber. The alert is named after a little girl called Amber. I wasn't being pedantic. I was pointing out the circumstances in which it came to be and what it's used for.

The “AI isn’t as much of a threat as you think” articles that were very clearly written by AI feel extremely dystopian.

Why do we think this emerged “on its own”? Surely this technique has been discussed in research papers that are in the training set.

You probably express very few truly original ideas. Let’s not set the bar quite so high unless we are all just a sad simulacrum of “pure” thought.

But humans are capable of very many original ideas. Look around you, humans were able to remake the entire world because of these original thoughts.

Original ideas are easy if you allow for bad ideas.

Then "on its own" has no meaning, i.e. everything an LLM does is "on its own".

Why surely? Have you never seen an LLM try something new?

Is your assertion that no one has ever written "we tried some stuff on the small inexpensive platform first, then moved to the bigger more expensive platform with the more promising options" in a research paper or literally anywhere else?

No, that's not my assertion. In fact I asserted nothing at all.

You're speaking in riddles; your communication would be more effective if you didn't do that.

You said "surely", and I asked:

> Why surely? Have you never seen an LLM try something new?

I'm afraid I can't make it any simpler than this.

And I still don't know the answer to how you're so sure. To me there are several explanations, and it seems to you there's only one.

I'm pretty happy with my communication style.


Seems to me the commenter was asking: what observations led to the original affirmative statement that "the AI did this entirely on its own"?

Given that this is a common technique and not a novel invention, it’s probably present in the training set.

The “surely” reads like it’s referring to the presence of that information in the training set. But your response casts it as saying “surely the AI has not invented something on its own”.

The original question stands IMO, the burden of proof is on whoever is asserting that the AI has invented something on its own, with or without training data that surely already mentions this approach


There is no burden of proof on me, because I'm not asserting that AI has invented something on its own. I haven't told you what my view is or whether I ever have a view.

The problem with the reasoning of the person I was responding to is that it's assuming "if X is in the training set and LLM outputs X, then it did so because X is in the training set". That does not follow. Conceivably it's possible that X is in the training set and LLM outputs X, but if X hadn't been in the training set the LLM also would've output X.

Let's look at that phrase again:

> Why do we think this emerged “on its own”? Surely this technique has been discussed in research papers that are in the training set.

This phrase implies "if X was in the training set, then LLM couldn't have come up with X on its own". This is false. In fact, my claim that the implication is false is testable, in the following manner: Have two training sets, T and T'. In T, X is present. In T' you've removed X but left X-adjacent things. Train LLM A on T and A' on T'. Find a prompt that requires that A outputs X. If on the same prompt A' also outputs X, that's an example of my claim. To repeat, my claim is "it's possible that X is in the training set and LLM outputs X, but if X hadn't been in the training set the LLM also would've output X."

In fact, I've just realized I even have a method for constructing (T, T') that guarantees what I've described. Not sure if it's worth a paper on its own though.
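The (T, T') experiment above can be illustrated with a toy stand-in for an LLM. Assume a plain bigram model and made-up sentences (both are illustrative, not anything from the thread); the point is only that a model trained on T', which does not contain X, can still emit X by composing X-adjacent data:

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Build a bigram successor table from a list of sentences."""
    succ = defaultdict(set)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            succ[a].add(b)
    return succ

def can_generate(succ, phrase):
    """True if the model can emit `phrase` word by word."""
    words = phrase.split()
    return all(b in succ[a] for a, b in zip(words, words[1:]))

X = "the cat sat"                        # the target output
T = ["the cat sat", "a dog ran"]         # training set containing X
Tp = ["the cat ran", "a cat sat here"]   # X removed, X-adjacent data kept

assert X not in Tp                       # X really is absent from T'
assert can_generate(train_bigrams(T), X)   # model trained on T emits X
assert can_generate(train_bigrams(Tp), X)  # model trained on T' also emits X
```

So "X is in the training set and the model output X" does not by itself establish that X was memorized rather than composed.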


Your pure logic is probably right; I do not have the time or interest to dissect it.

But you’re missing the context and implication: “doing new stuff” is the major achievement we’re looking for next from LLMs. Seeing something that is “new” and is not in the training set is interesting in a way that something contained in the training set is not.

We cannot introspect LLMs meaningfully yet, so the difference between “came up with myself and it’s in the training set incidentally” and “applied a concept in the training set” is not meaningful.


I think the number of new math proofs generated by LLMs over the last few months has conclusively proven that yes - they can "come up with things themselves"

A few examples: Axiom's proof of Fel’s open conjecture on syzygies of numerical semigroups: https://x.com/axiommathai/status/2019449659807219884

Erdos 457: https://www.erdosproblems.com/457

The stronger form of Erdos 650: https://www.erdosproblems.com/650


I honestly don't think I have.

In this case, using a cheap(er) signal or heuristic as an initial filter before spending more resources on cases that pass the filter is a pattern that shows up all over the place, and LLMs are good at picking up on patterns like that and generalizing them. AFAICT.


I'm not sure how people say this so confidently. I have a rather esoteric Haskell library that I've written and published for years. ChatGPT and Claude both know about it and frequently help me improve it, and propose completely novel approaches. I'm really not sure how people are so confident that they can't think of anything new. This seems like wishful confirmation bias.

> I'm not sure how people say this so confidently.

Say what, exactly?


> Don't post generated comments or AI-edited comments. HN is for conversation between humans.


Or, sometimes, you order the factory to _reduce_ output to 50% of what it can do for the last week of Q1 so you don't have excess unsold inventory on the books.

Then in Q2, you panic because you don't have enough inventory, so you order the factory to produce at 150% to catch up. Both 50% and 150% are inefficient factory states; if you weren't thinking about snapshot reporting you'd have just let it run at 100% and your Q1+Q2 results would be overall better.
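A toy cost model (numbers purely illustrative) shows why the whipsaw is worse than steady running, assuming any convex penalty for deviating from rated utilization (idle capacity and overtime both cost extra):

```python
def period_cost(utilization, base=100.0, penalty=80.0):
    """Per-quarter cost: linear in output, plus a convex penalty
    for running the factory away from its rated 100% utilization."""
    return base * utilization + penalty * (utilization - 1.0) ** 2

# Both plans produce the same total output over the two quarters.
steady = 2 * period_cost(1.0)                   # 100% then 100%
whipsaw = period_cost(0.5) + period_cost(1.5)   # 50% then 150%

assert whipsaw > steady  # same output, strictly higher cost
```

With these illustrative numbers the whipsaw plan costs 240 against 200 for steady running; by Jensen's inequality any convex penalty gives the same ordering.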

I have personally seen this happen at a household-name Fortune 50 company. It's insane and causes real damage to the business in many ways.


In two words: being short-sighted.

You'd think that such a personal traumatic experience would make this author want to use their own words, not outsource writing about it to LLMs.
