
> Moat seems to be shrinking fast.

It's been a moving target for years at this point.

Both open- and closed-source models have been getting better, but I'm not sure the open-source models have really been closing the gap since DeepSeek R1.

But yes: if the top closed-source models were to stop getting better today, it wouldn't take long for open source to catch up.


This is also what's called the beginner's mind, Shoshin. [0] One of the core concepts of Zen Buddhism. Tangentially related would be the concept of no-mind, Mushin. [1]

[0]: https://en.wikipedia.org/wiki/Shoshin [1]: https://en.wikipedia.org/wiki/No-mind


Pretty sure the majority of people who are part of the community work a 9-5. But yes, spending after-work hours making something that's just completely out there, with no monetary purpose at all, is much more nourishing for the soul than attempting to create a passive-income machine.

> copy pasting it's training data

This is a total misrepresentation of how any modern LLM works, and your argument largely hinges upon this definition.


A backdoor is a completely different thing for an AI company than for a social media company. I'm not really even sure what it would mean when it comes to running inference on an LLM. Having access to the weights, training data, and inference engine?

The version of Claude the DoD is asking for more than likely doesn't even exist in production-ready form. The post-training would have to be completely different for the model the DoD is asking for.


> get what they want or makes them feel better about themselves

So... all acts are selfish, because if one looks unselfish, that just means it was selfish in a hidden way?


> unfortunately, there are also people deeply involved who don't think human extinction is a bad thing

You mean at the top labs? Since when isn't that level of misanthropy categorized as having mental health issues?


See e.g. Richard Sutton, who, although not at a top lab, is certainly a very important figure in the field.

Or, if you want someone with concrete influence at a top lab, Larry Page.


It's not like that happened out of the blue. (Which could've also been the case in today's day and age.) Anthropic shouldn't have gotten involved in government contracts to begin with.

They inserted themselves into the supply chain, and then the government told them they'd be classified as a supply-chain risk unless it got unfettered access to the tech. They knew what they were getting into, but didn't want their competitors to get a slice of the pie.

The government didn't pursue them; Anthropic actively pursued government and defense work.

Talk about selling out. Dario's starting to feel more and more like a swindler by the day.


At the very least it captures well how it feels to talk when sufficiently high...


> Because the author of the blog is paid to post daily about nothing but AI and needs to link farm for clicks and engagement on a daily basis.

Care to elaborate? Paid by whom?


It’s at the top of the page:

> Sponsored by: Teleport — Secure, Govern, and Operate AI at Engineering Scale. Learn more

https://simonwillison.net/2026/Feb/19/sponsorship/


Ah, thanks. Somehow missed that.

