Hacker News | codingdave's comments

> Distillation is the technique at the centre of the dispute. It does not require stealing model weights or breaking into servers. A distiller feeds thousands or millions of carefully constructed queries to a frontier AI model, collects the responses, and uses those responses to train a cheaper rival model that approximates the original’s capabilities at a fraction of the cost.

Just so I'm sure I understand this correctly... The USA is ticked at China for training new LLMs on pre-existing content/data held by private corporations, which they freely exposed to the internet. But not ticked at those corporations for having trained LLMs in the first place on the content created by private citizens?


Yes, and it has been said since day one of LLMs that all we need to do is keep things that way - no action without human intervention. Just like it was said that you should never grant AI direct access to change your production systems. But the stories of people who have done exactly that and had their systems damaged and deleted show that people aren't even trying to keep such basic safety nets in place.

AI is getting strong enough that if people give it some general direction as well as access to production systems of any kind, things can go badly. It is not true that all implementations of agentic AI require human intervention for every action.


My cynical rule of thumb: by default, treat LLMs like JavaScript logic offloaded into a stranger's web browser.

The risks are similar: no prompts or data that go in can reliably be kept secret; a sufficiently motivated stranger can have it send back completely arbitrary results; and some of those results may trigger very bad things depending on how you use, or even just display, them on your own end.

P.S. This conceptual shortcut doesn't quite capture the dangers of poisoned data, which could sabotage all instances even when they happen to be hosted by honorable strangers.
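To make the analogy concrete: it suggests handling LLM output exactly the way a server handles input from an untrusted browser - validate against an allowlist, and escape before display. A minimal sketch in Python (function and schema names are hypothetical, just for illustration):

```python
# Treat LLM output like data from a stranger's browser: never trust it.
# Hypothetical sketch -- the "action"/"text" reply schema is illustrative.
import html
import json

ALLOWED_ACTIONS = {"summarize", "translate"}  # allowlist, like server-side validation

def handle_model_output(raw: str) -> dict:
    """Validate an LLM's JSON reply the way you'd validate a form POST."""
    try:
        reply = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("model returned non-JSON; reject, don't guess")

    action = reply.get("action")
    if action not in ALLOWED_ACTIONS:
        # arbitrary results can come back; only act on what you allowlisted
        raise ValueError(f"disallowed action: {action!r}")

    # escape before display -- "even just displaying" results can be dangerous
    return {"action": action, "text": html.escape(str(reply.get("text", "")))}

safe = handle_model_output('{"action": "summarize", "text": "<script>x</script>"}')
print(safe["text"])  # &lt;script&gt;x&lt;/script&gt;
```

The same pattern covers the "trigger very bad things" case above: anything the model emits that flows into a shell, a query, or a rendered page gets the untrusted-input treatment first.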


Eh, these same people will attach OpenClaw to production systems soon and destroy their own companies.

One does not even need OpenClaw to achieve this outcome: https://x.com/lifeof_jer/status/2048103471019434248

Yeeeehaaaaa, the vibes shall never end!

On a more serious note, they were mostly f*cked by their PaaS provider imo. Claude will always do dumb shit. Especially if you tell it not to do something... by doing so you generally increase the likelihood of it doing it.

It's even obvious why if you think about it: the pattern of "you had one job, but you failed" or "the one thing that couldn't happen, happened!" and all its other forms is all over literature, online content, etc.

But their PaaS provider not scoping permissions properly is the root cause, all things considered. While Claude did cause this issue there, something else would've happened eventually otherwise.


I absolutely agree with you.

Also, some folks seem to be forgetting the virtues of boring, time-tested platforms & technologies in their rush to embrace the new & shiny & vibe-***ed. & also forgetting to thoroughly read documentation. It’s not terribly surprising to me that an “AI-first” infrastructure company might make these sorts of questionable design decisions.


The problem is, out of ten companies who take this approach, nine will indeed destroy themselves and one will end up with a trillion-dollar market cap. It will outcompete hundreds of companies who stuck with more conservative approaches. Everybody will want to emulate company #10, because "it obviously works."

I don't see any stabilizing influences on the horizon, given how much cash is sloshing around in the economy looking for a place to land. Things are going to get weird, stupid, and chaotic, not necessarily in that order.


Sounds like a pretty efficient self-correcting mechanism.

I'm not sure what the problem is there.


The problem is that destruction isn't contained to the company. If an AI agent exposes all company data and that includes PII or health information, that could have an impact on a large number of people.

PII breaches have been pretty consistently a problem for the last several decades, predating modern LLMs.

So that is a structural problem with their data and security management and operations, totally independent of the architecture for doing large scale token inference.


Normalisation of deviance is the problem: https://en.wikipedia.org/wiki/Normalization_of_deviance

Remember that these models are getting better; this means they get trusted with increasingly more important things by the time an error explodes in someone's face.

It would be very bad if the thing which explodes is something you value which was handed off to an AI by someone who incorrectly thought it safe.

AI companies which don't openly report that their AI can make mistakes are being dishonest, and that dishonesty makes this normalization of deviance even more prevalent than it already is.


That’s not a technical/AI problem in any sense, that’s a social problem in organizing and coordinating control structures

Further, it’s only a problem to the extent that the downsides or risks are not accounted for, which again… is a social problem, not a technological problem

This isn’t a problem for organizations that have well aligned incentives across their workflows

A well-organized company with solid incentives is not going to diminish its own capacity by prematurely deploying a technology that is not actually capable of improving it

The issue is that 99% of the organizations that people deal with have entirely orthogonal incentives to them. They are then attributing the pain in dealing with that organization to the technology rather than the misaligned incentives


> That’s not a technical/AI problem in any sense, that’s a social problem in organizing and coordinating control structures

As @TeMPOraL here likes to point out, it can be genuinely fruitful to anthropomorphise AI. I only partially agree: it's true for *some* of the failure modes.

> A well organized company that has solid incentives is not going to diminish their own capacity by prematurely deploying a technology that is not capable of actually improving

Sure, but society as a whole doesn't have the right incentives to make sure that companies have the right incentives to do this. We can tell this quite easily by all the stupid things that get done.

> The issue is that 99% of the organizations that people deal with have entirely orthogonal incentives to them.

This is also fundamentally the AI alignment problem, that all AI are trained on some fitness function which is a proxy for what the trainer wanted, which is a proxy for what incentives their boss gave them, which is a proxy that repeats up to the owners in a capitalist society, which is a proxy for economic growth, which is a proxy for votes in a democracy, which is a proxy for good in a democracy.


Yes, AI encodes latent intent.

I wrote a whole-ass paper at the end of 2022 demonstrating that unless we fix society, we will deterministically create anti-social AGI, because humans do not generate pro-social data.

https://kemendo.com/Myth-of-Scarcity.html


If you had made a tool that gave GPT-3 the ability to run arbitrary commands on your production systems, you could have seen things go badly.

Good news! Today's SOTA models can also make things go badly.

Yep. I don’t see how that metric indicates how… strong(?) a language model is.

To be fair, the author of the post said the same thing. In the other thread on HN, they themselves said: "Nobody should cry over a SaaS, of all things. But GitHub has meant so much more to me than that (all laid out in the post). I have an unhealthy relationship with it."

> can now pay for Claude Code

If your argument is that LLMs have removed money as a gatekeeper to success, that line right there defeats your own argument.


> but she's socially irrelevant now.

I'm pretty sure there was a Black Mirror episode about social scoring dictating people's value/relevance. That was a good place for such a concept, because letting social media sites dictate someone's relevance is just weird. Relevance is a personal opinion, and should remain that way. People are free to stop following others. It works, and isn't dystopian.


Considering how little is actually interesting about the app aside from it using an old domain, my impression is that the entire post is just pseudo-marketing, attempting to encourage people to get back into domain name squatting.

Well, this vibecoded ChatGPT crap is quite clearly an attempt at making Friendster more valuable than 30k

No, you probably need to elaborate on that. Because in my experience, the quality from people in India varies just as much as the quality from any other country, including the USA.

What does make a difference is the company they work for. Large hourly "body shops" give you coders whose quality tends to be lower, regardless of whether we are talking about an Indian firm or an American firm. Direct hires of independent individuals tend to be higher. But there is always individual variation.

You see people from India more, sure. There are more of them. Over a billion of them, to be precise. Anyone who dismisses a billion people as "always the same" is not being clever, they are being racist. And you know that, otherwise you wouldn't have pre-empted this response with "everyone who is ready to accept it."

Say that there are communication gaps to overcome. Say there are cultural differences. Say that those cultural differences change the assumed business expectations and the mechanisms by which people express their thoughts and opinions. Those things are all true. My recommendation to anyone who has an urge to dismiss an entire population is to instead get to know them: Step up and learn how your teammates think and work. It will make for a better team, better communication, and better results.


Okay, since you insist.

I'm not racist. I don't care about race. I do care about culture a lot. By culture I mean a set of "default behaviors" and values that people from said culture are more likely to exhibit. That's where my issues with Indians began, and continue. Of course you are right that generalizing over 1+ billion people is a futile exercise. Intellectually, I agree. And yet, in my personal experience, certain behaviors and attitudes of theirs keep coming up with a frequency that just doesn't match any other group of people I have been interacting with. I live a rather international life. I interact with people from many, many cultures. I currently live in a culture that is completely alien to my own, and I love it. It's not a problem of a closed mind or some kind of supremacy thinking. I am free from that.

Specifically about Indians - I find that a great many of them prefer memorizing over thinking. In the IT consulting days of my career, I noticed that they seemed to have 4-5 solutions that they would apply to all problems. Whether the solution fit the problem or solved it was secondary. If it did, great. If it didn't, well, that was someone else's problem. Half of my job was fixing stuff that an Indian "fixed" before me. The appearance of having fixed something was much more important than the actual fixing. It was all about appearances with them. While people in general seek recognition, I have never met another group of people so eager to lie and cover things up to gain some perceived short-term bump in status.

It's not isolated to the work environment. You see, I suspected myself of perhaps being racist in the end, so I would challenge myself to befriend Indians if I met any - just to see. Maybe I was being judgmental and wrong? The last time I tried it, the Indian man I met kept kissing my ass so much I had to cut him off. Why did he do that? Based on what he was saying, he saw me as someone from an "upper caste" (he projected his ideals of a successful businessman onto me) and desperately wanted me to know how much I had done for him (I hadn't done anything other than have a few conversations about life and business in general). It took me a while to understand that all this excessive praise and ass kissing was an attempt to elevate himself by proximity to something great. Needless to say, I am nowhere near as great as he portrayed me to be. Later I also found that half the stuff he shared with me was made up to impress me.

Another feature of their culture is extreme pride. They will never stop talking about India, Indian culture, Indian food, etc. They expect you to praise it and be in awe. If you aren't, they will pressure you to change your mind. Since working with them was a universally appalling experience, I wasn't impressed, so that came up a lot. You see this pride and attention seeking everywhere online. A normal person will say "Hello" or "Good morning". An Indian will say "Good morning FROM INDIA". It must be mentioned, because it must be noticed and praised. It's just tiring. There is a reason why so many are waiting for country-based filters on Twitter. You wouldn't have to guess which countries are most upset about this.

I am certain that there are reasons and explanations for all of this and that there are many exceptions. As you have mentioned, there are so many of them, they can't all be like that. And fair enough. I just find all of this so tiring, that I don't want to deal with them at all. If 1 out of a 100 is a smart and pleasant person, they are still surrounded by 99 that I don't want to deal with. It might be sad, but it is what it is.


I've been working remotely since 2011, for companies who are 100% remote, and who still accomplish all of those things.

Can you put together a working product that you can put in front of someone and it will accomplish their goals? Sure, absolutely.

Will it be secure, reasonably free of bugs, scale well, and compliant with any regulatory requirements in your industry (if any)? Not a freaking chance.


> Can you come back after it's all resolved? You won't know AI.

In 3-4 years, nothing anyone is doing today will matter. It is rapidly evolving, and I'd rather sit back, do what I know, and let it all fall out one way or the other, and then learn what I need to if I haven't retired by then.

For younger people, being on the bleeding edge of new things matters, but it really doesn't for us old folks. We know how to learn. We'll learn it when it matters. So long as we have work until then, I am not going to waste my energy re-skilling every 6 months on a tech that is nowhere near stable and has an entirely unclear future.


Exactly. I really think a place to sit this out for a while would be a good idea for me. Ironically, part of my job is getting reluctant developers on board with GitHub Copilot, but it's a dance I'd rather watch from the sidelines for a bit.
