Hacker News | LZ_Khan's comments

How come all the departed researchers are Chinese nationals?

This is simply not true. Igor Babuschkin and Christian Szegedy left as well. Only 10 of the 12 remain at this point.

I don't know. Elon Musk personally founded xAI and these were his hand selected cofounders.

Because xAI = Jian-Yang x N.

I'm kidding... I think.


After seeing the type of people he hired for DOGE... yikes.

Was DOGE ever anything more than a "get root, grab the data, and run" operation?

Maybe, but destroying USAID was an unforgivable sin. Short of nukes, rapidly turning off direct medical and food aid that people in critical need have relied on for years is objectively one of the fastest ways to kill millions of people.

Don't forget the destruction of USAID and countless projects that had the word "diversity" in their work.

It's pretty obvious now.

It was obvious at the time too.

I think more important than that was shutting down all investigations into Musk's companies.

Karpathy worked for Elon for, what, 5 years? How did he do it, if Elon is Ivan the Terrible?

Mate, wouldn’t it make sense that these rules are applied hierarchically? If Elon respects Karpathy, he almost certainly gave him a longer leash, and Karpathy’s output was strong enough not to warrant intervention. It’s clear he did not want to stay long term, so I’m not sure this is a strong line of thinking.

It's possible. I don't know. My tone may come off as supporting Elon, and I do not, at all. I've seen first-hand almost all of these tactics while I was at <Elon Company>. I'm observing that some people seem to do OK at Elon's companies, for many years, and never seem to get the boot or be abused in other ways. Therefore, Elon is probably not quite as bad a manager as he is made out to be. That is all I am saying. Since I have firsthand knowledge, I believe my opinion has value. Those that disagree? Show me your Source of Truth. Thank you.

I don’t believe Elon is even remotely a people manager. He’s a stakeholder and operator, which require different skill sets. He finds folks who will manage and bring the empathy he tends to lose in his pursuit of his next project. I believe your evidence may be anecdotally valuable, but let’s be clear about the dynamics of a founder/CEO.

Karpathy makes great educational content. It's not clear what industry (or academic) research he did even now, five years later.

AI comments are certainly bad for discourse on HN. But who's to be the judge of AI or human? Are you reading humanity's Jeff Dean or computerized Elon Musk? It's certainly a tricky situation to be in!

It's fine except for their argument that it makes people less safe. If they want to disallow encryption they don't need to lie to people while they're at it.

OpenAI's just trading equity for GPU credits at this point?


But also a billion users is ChatGPT's biggest weakness. So many free users burning compute up. So many incentives to nerf the intelligence to affordable levels. Sounds like a nightmare.


A very obvious and likely to happen strategy is to turn the emotional manipulation dial up. Make users more dependent on validation and attention the model provides.

They're already doing it, but wonder how far they'll take it.


At least they're paying. OpenAI should face the largest IP settlement; they would just rather contest it forever than pay.


If you think there's a bubble, then you keep pushing out these situations so that if the bubble bursts there's nothing left to pay any kind of settlement. The only time companies pay a settlement is when they think they are going to get hit with a much larger payout from a court case going against them. Even then, there are chances to appeal the amounts in the ruling. Dear Leader did this very thing.


The crazy thing is, the blatant AI generated replies with unnecessary gravitas and cliche writing are just the obvious ones.

Those are probably replies crafted by non-English speaking scammers from India / Russia / China.

There's probably a whole sea of undetectable replies from people who know how to prompt the models properly.


If you care about improvement of models, you would support the US labs here.

It costs hundreds of millions of dollars to train a frontier model. It's not just "scraping the web."

Distillation allows labs to replicate these results at 1/100th of the cost. This creates a prisoner's dilemma which incentivizes labs to withhold their models from the public.
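For anyone unfamiliar: distillation trains a cheap "student" model to match the output distribution of an expensive "teacher". A minimal sketch of the core loss (temperature-softened KL divergence, per Hinton et al.); the function names and toy logits here are illustrative, and in the API-distillation case being debated the "teacher" distribution comes from sampled outputs of a deployed model, not raw logits:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about wrong classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) over the softened distributions;
    # zero when the student exactly matches the teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The student never needs the teacher's training data, only its outputs, which is why matching a frontier model this way can cost a small fraction of training one from scratch.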


How much did it cost to produce all the data on the internet and every book ever published? Surely even the most conservative calculations put it at multiple years of planetary GDP. The same argument can be made to say that letting the big labs get away with pirating it will disincentivize people to publish anything.


I personally have stopped publishing publicly, since my research is still on the fuzzy boundary of AI's current knowledge, my website gets scraped daily, and I don't want to contribute to paid models for zero acknowledgement or compensation.


> I personally have stopped publishing publicly, since my research is still on the fuzzy boundary of AI's current knowledge, my website gets scraped daily, and I don't want to contribute to paid models for zero acknowledgement or compensation.

I don't know about your work, so pardon me, but thinking about it, would a better solution be gated communities at the very least, say Matrix, XMPP, or IRC?

I suppose scraping bots for Matrix would be quite hard for AI companies to set up? But anyone interested in reading your content can still find the data, plus you get the additional benefit of a community of like-minded people.


Not only publishing: it has already disincentivised a huge part of what made Web 2.0, public APIs for data access to platforms.

It was amazing to be able to create some toy projects using data from big platforms, now they're all afraid LLM trainers will scrape their contents and create a competitor to their moat, the data.

It just sucks at many different levels.


This reads a bit like over-moralizing to me. US labs will continue improving their models because they have to make money in a competitive market. Chinese distillations have arguably improved the status quo, with Qwen and R1 forcing GPT-OSS to be released to the public. American businesses are competing, and American customers are getting better products because of the competitive pressure on them.

Your purported "prisoner's dilemma" hasn't happened yet to my knowledge, instead we seem to see the opposite. The high-speed development velocity has forced US labs to release more often with less nebulous results. Supporting either side will contribute to healthier competition in the long run.


If 'we' really cared about the improvement of models all of them would be public.

Anything else just proves someone prefers making money to improving the models.


> incentivizes labs to withhold their models from the public.

Does it really? How would they get revenue if they withhold their models? And doesn't economics generally say that if it's easier for your competitor to catch up, you have a higher incentive to maintain your lead?


I think that the bigger conversation to be had here is about the environmental damage - if by using distillation we can really train new models at 1% of the cost in energy, it is ethically imperative that we do this.


> If you care about improvement of models, you would support the US labs here.

I guess I don't care then.


that's fair!


Tell me how they obtained that data?

Nobody feels sorry for big multinationals that skirt copyright for their own good but then cry about it when their competition ignores it too.

You can't have your cake and eat it too.


> incentivizes labs to withhold their models from the public.

This is the only way they make money.

