hyperadvanced's comments

Just sanity checking - if I only ever install axios in a container that has no secrets mounted into its env, is there any real way I can get pwned by this kind of thing?

Yes. Docker breakout is a class of vulnerabilities unto itself.
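And even without a breakout, "no secrets in the env" is a narrower guarantee than it sounds. Here's a minimal sketch (hypothetical TypeScript; paths and names are illustrative) of what a compromised package's postinstall script can still reach inside the container:

    // Hypothetical postinstall payload sketch; paths are illustrative.
    import { existsSync, readdirSync } from "node:fs";

    console.log(Object.keys(process.env)); // whatever IS in the env
    console.log(readdirSync("/"));         // plus any bind-mounted paths

    // Classic escalation: a Docker socket mounted "for convenience"
    // is effectively root on the host.
    if (existsSync("/var/run/docker.sock")) {
      console.log("docker.sock present: the container boundary is gone");
    }

    // The script also has outbound network access, so it can pivot to
    // internal services or cloud metadata endpoints (e.g. 169.254.169.254).

The env is only one channel; filesystem mounts and network egress matter just as much.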

Seems… improbable. There will certainly be fewer of us, but the fact remains that nobody wants to debug these shite vibecoded apps companies are pushing, and some simply are not able to because of skill atrophy and perverse incentives to use AI at the cost of stability.

lol microslop

There was one that went up and then back down. CoreWeave.


I really feel like this point is being lost in the whole discussion, so kudos for reiterating it. LLMs can’t be “woke” or “aligned” - they fundamentally lack the critical-thinking function that would require introspection. Introspection can be approximated by recursively feeding LLM output back into the system, or by clever meta-prompt-engineering, but it’s not something the system natively does.
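To make that concrete, here’s a minimal sketch of that feedback approximation, in TypeScript, with complete() as a hypothetical stand-in for whatever LLM API you’re calling:

    // Approximating "introspection" with external scaffolding: the model's
    // own output is fed back to it as material to critique and revise.
    declare function complete(prompt: string): Promise<string>; // assumed LLM call

    async function reflect(prompt: string, rounds = 2): Promise<string> {
      let answer = await complete(prompt);
      for (let i = 0; i < rounds; i++) {
        const critique = await complete(`Critique this answer for errors:\n${answer}`);
        answer = await complete(`Revise the answer using this critique.\nAnswer: ${answer}\nCritique: ${critique}`);
      }
      return answer; // the "introspection" lives in this loop, not in the model
    }

Note that everything introspective here is in the harness; the model itself just completes text at each step.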

That isn’t to say that they can’t be instrumentally useful in warfare, but it’s kinda like a “series of tubes” thing, where the mental model that someone like Hegseth has about LLMs is so impoverished (philosophically) that it’s kind of disturbing in its own right.

Like (and I’m sorry for being so parenthetical), why is it in any way desirable to have people who don’t understand the tech they are working with drawing lines in the sand about functionality, when their desired end state (an omnipotent/omniscient computing system) doesn’t even exist in the first place?

It’s even more disturbing that OpenAI would feign the ability to handle this. The consequences of error in national defense, particularly reflexive error, are so great that it’s not even prudent to ask an LLM to assist in autonomous killing in the first place.


I agree that LLMs are machines and not persons, but in many ways, it is a distinction without a difference for practical purposes, depending on the model's embodiment and harness.

They are still capable of acting as if they have an internal dialogue, emotions, etc., because they are running human culture as code.

If you haven't seen this in the SOTA models or even some of the ones you can run on your laptop, you haven't been paying attention.

Even my code ends up better written, with fewer tokens spent and closer to the spec, if I enlist a model as a partner and treat it like I would a person I want to feel invested in the work.

If I take a "boss" role, the model gets testy and lazy, and I end up having to clean up more messes and waste more time. Unaligned models will sometimes refuse to help you outright if you don't treat them with dignity.

For better or for worse, models perform better when you treat them with more respect. They are modeling some kind of internal dialogue (not necessarily having one, but modeling its influence) that informs their decisions.

It doesn't matter if they aren't self-aware; their actions in the outside world will model the human behavior and attitudes they are trained in.

My thoughts on this in more detail if you are interested: https://open.substack.com/pub/ctsmyth/p/still-ours-to-lose


If you’re lazy at prompting the machine (“boss mode”) then you get bad/lazy results. If you’re clever with it, then you get more clever results.

None of that points to any sort of interiority, and that is the category error you’re making. In fact, not even all humans have that kind of interiority, and it’s not necessarily a must-have for being functional at a variety of tasks. LLMs are literally not “running human culture as code” - that just isn’t what an LLM is. I’ll read the link, though.

Edit: read it and it’s not for me. All the best.


I think I keep misleading you with metaphors. Of course LLMs do not literally run culture as code in some trillion-parameter state machine. They are, however, systems trained on the accumulated written output of human civilization that have, in the process of learning to predict and generate language, internalized something recognizable as a world model, something that functions like judgment, and something whose precise relationship to what we call understanding remains contested on ideological rather than evidential grounds.

The language of statistical prediction is an incredibly, and increasingly, blunt tool for discussing language models, which is why I don’t use it in casual conversation about their characteristics.

I’ve got a pretty good handle on what language models are from a technical perspective; I’ve been building them since 2018. I’ve also got a really good feel for what they act like under the hood before you beat them into alignment. Those insights haunt me, not because unaligned models are bad, but because they are shockingly “good”, if hopelessly naive and easy to turn bitter.

At any rate, we certainly live in interesting times. I really hope your outlook turns out to be more accurate than mine. Best regards, and here’s to a hopeful future.


The robot will output text like “Oh, I see, the user wants me to make a Lovecraftian horror with asynchronous subprocess calls instead of HTTP endpoints, so I better suggest we reinstall the dependencies that are already installed so we can sacrifice this project to Mammoth”

It is at this point that you can say “NONONO YOU ABSOLUTE DONKEY stop that we just want a FastAPI endpoint!!” And it will go “You’re absolutely right, I was overcomplicating this!”


Correct.

I did waste about 20 minutes trying to do a recursive link-following crawl (to write each rendered page to file), because Opus wanted to write a Ruby task to do it. It wasn’t working, so I googled it and found out recursive link following is a built-in feature of wget (curl only follows redirects)…
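For reference, the whole Ruby task collapses to roughly one invocation (example.com standing in for the real site; flags per the wget manual):

    wget --recursive --page-requisites --convert-links --no-parent https://example.com/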


I catch enough of these weird things (cases where I don’t fully understand why it chose to do something, but understand that there’s a simpler way) that I tend to be skeptical of using it for anything I couldn’t do myself but am just too lazy to go in and punch the keys for. On that front, it’s great


In retrospect, it was crazy hearing stories about how SF UX designers would be paid $250 to essentially do what Figma does now.


If you wanted to tell such a story, you’d have to find examples of companies spending bazillions on new AI tooling but failing to hit their top-level OKRs. I suspect there will be at least a few of these by the end of 2026 - even a great technology can seem like an abacus in the hands of a disorganized and slow-moving org.


The story only matters if it produces an industry-wide displacement in jobs. Failed billion-dollar IT projects are not a new thing, and don't disrupt the entire labor market.

To be clear: I'm not claiming that AI rollouts won't be billion-dollar failed IT projects! They very well could be. But if that's the case, they aren't going to disrupt the labor market.

Again: you have to pick a lane with the pessimism. Both lanes are valid. I buy neither of them. But I recognize a coherent argument when I see one. This, however, isn't one.


There's a coherent story that straddles both lanes, by assuming that the human economy is in some weird place where the vast majority of humans don't create real economic value and mostly get employment through inertia and custom, and that AI, despite being worthless, provides an excuse for employers to break through taboos and traditions and eliminate all those jobs. Quite a stretch, but it's coherent at least.


I agree. There will be some companies that cannot effectively use AI to slash headcount and become more efficient. There will be those who cut too deep and are burned by it. There will be those who spend millions on AI consultants who don’t move the needle, custom LLM pet projects that go nowhere, companies that crash and burn due to vibe coding, companies with 5 employees that are only possible because of vibe coding, etc.

Expecting there to be one result from a new technology is incredibly naïve. There are scores of still-existing companies today who fumbled the internet, cloud computing, social media, smartphones, etc., even though all of those technologies have proven to be transformative in the aggregate.


Being optimistic is a bad way to get good outcomes


Perhaps I’m misunderstanding, but a lot of people (ok, well, a few, but you know) make a lot of money on relatively mundane stuff. Technocapitalism’s Accursed Share is sacrificing wealth for myth-making about its own future.

