I think people are more concerned about the massive deindustrialization and famines which could result from the Strait of Hormuz being chaotically strangled, not the hit to their pocketbooks at the gas pump.
That's very unfortunate. How did it have access to the production DB in the first place?
I'm thinking twice about running Claude in an easily violated Docker sandbox (weak restrictions, because I want to use NVIDIA Nsight with it). At this stage, at least, I'd never give it explicit access to anything I cared about it destroying.
Even if someone gets them to reliably follow instructions, no one's figured out how to secure them against prompt injection, as far as I know.
You need to be more specific. OpenAI's commitment to assist the Trump administration with domestic mass surveillance seems to have been largely memory-holed.
Yeah, it's eerie, same with how everyone seems to have forgotten that OpenAI betrayed democracy by committing to work on unsupervised autonomous weapons and domestic mass surveillance.
Honestly, I find comments like yours much more eerie. By all accounts they never agreed to any of that, yet you state it with such confidence, as if it were fact.
The Trump administration's handling of Anthropic showed that, regardless of what the contract or the law says or means, it will severely penalize any vendor who refuses its demands. And OpenAI stepped right into that relationship immediately afterward. So either they were signing up for a supply-chain-risk designation and whatever other punishments the Trump administration dreams up, or they're complying.
If this sounds crazy to you, though, I'd like to know, and understand why. I miss ChatGPT/Codex.
That is not really established. The Anthropic issue was specifically about DoD use and Anthropic's military-use restrictions. What the Trump admin did was bad and coercive, but it's not proof that contract terms and law are irrelevant. For instance, why not just use eminent domain if they don't care about contracts and want whatever they want?
> either they were signing up for a supply-chain risk designation and whatever other punishments the Trump administration dreams up, or they're complying
Couldn't OpenAI have negotiated different terms, accepted a narrower scope, or drawn different red lines? Their public DoD terms still exclude things like mass domestic surveillance and autonomous weapons outside human control. Do you not believe that, or do you believe it doesn't matter at all? Either position is problematic for the conclusions that follow from it.
I also think the whole argument implies something about Anthropic's position that's not as clean in reality. NSA is already using Mythos despite the Pentagon dispute, and Anthropic is still talking to the administration. Trump even said they were "shaping up" recently.
Isn't it also possible that one company negotiated poorly and took a position of perceived moral authority that Trump et al. threw a hissy fit over and overreacted to? That's happened countless times with this administration, and it seems far more likely to me given that Anthropic hasn't cut all ties and continues to try to work out a contract.
I wholeheartedly agree the current administration is dangerous. I just don't think the conclusion "OpenAI must be complying with the same demands Anthropic refused" follows from what we've seen. And I think there are plenty of other far more plausible conclusions to draw from the events.
> For instance, why not just use eminent domain if they don't care about contracts and want whatever they want?
They were threatening Anthropic with the Defense Production Act[1], which comes to almost the same thing as eminent domain: forcing the provision of goods and services instead of the relinquishment of property.
> Do you not believe that or believe it doesn't matter at all?
I don't think it matters at all. The Trump administration is full of scofflaw bullies. Their threats against Anthropic are actually relatively tame, compared to their bullying of Minnesota and the horrific human-rights violations they've committed against immigrants, despite multiple court orders trying to rein them in. Anyone doing business with them is either enthusiastically complying, has some kind of hold over them beyond law or contract, or is setting themselves up for harsh punishment.
> I also think the whole argument implies something about Anthropic's position that's not as clean in reality.
Anthropic software is embedded in military and intelligence services, and that takes time to wind down. My understanding is that it will take months.[2] So yeah, it's a messy, time-consuming divorce, but the origin of the conflict is actually very clear cut.
The NSA has two sides, defensive and offensive. Given Anthropic's approach to restricted release of Mythos, I assume they're releasing it to the defensive side. Anthropic has always taken the position that they're willing to help secure the US; they're just not willing to help turn it into a tyranny. Apparently someone has convinced Trump and Hegseth that there's more at stake with Mythos than looking tough on a dissident company.
> Isn't it also a possibility that one company negotiated poorly and took a position of perceived moral authority that Trump et al threw a hissy fit over and over reacted to?
Not really. It's the Trump administration which has negotiated poorly, by capriciously pushing its counterparty around, trying to force it into illegal/immoral/dangerous activity.
> Trump even said they were "shaping up" recently.
He's also repeatedly said he has a workable deal with the Iranians. Do you trust his claims about any of his counterparties?
> And I think there are plenty of other far more plausible conclusions to draw from the events.
The K/V cache is just an optimization. But yeah, you would expect the attention for the model producing "OK, I'm doing X" and for you asking "Why did you do X?" to be similar, so I don't see a reason why introspection would be impossible. In fact, while I was trying to adapt a test skill, the agent would write a new test instead of adapting an existing one; I asked it why, and it gave the reasoning it used. We then adapted the skill to specifically reject that reasoning, and it worked: the agent adapted the existing test instead.
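The claim that the K/V cache is purely an optimization (it changes where keys and values get recomputed, not what the model outputs) can be checked with a toy single-head attention layer. This is an illustrative sketch with made-up weights, not any real model's code:

```python
# Sketch (assumed toy model): single-head causal attention computed two
# ways -- all at once, and incrementally with a K/V cache -- to show the
# cache does not change the output.
import numpy as np

rng = np.random.default_rng(0)
d, T = 8, 5                              # embedding dim, sequence length
X = rng.normal(size=(T, d))              # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Full causal attention over the whole sequence at once.
Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)
scores[np.triu(np.ones((T, T), dtype=bool), k=1)] = -np.inf  # causal mask
full_out = softmax(scores) @ V

# Incremental decoding: append each new token's K/V to a cache and attend
# over the cache -- the recomputation that KV caching avoids.
k_cache, v_cache, steps = [], [], []
for t in range(T):
    q = X[t] @ Wq
    k_cache.append(X[t] @ Wk)
    v_cache.append(X[t] @ Wv)
    Kc, Vc = np.stack(k_cache), np.stack(v_cache)
    steps.append(softmax(q @ Kc.T / np.sqrt(d)) @ Vc)
cached_out = np.stack(steps)

assert np.allclose(full_out, cached_out)  # identical results either way
```

Since the two paths are numerically identical, whatever information the attention pattern carries is equally available with or without the cache.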
It's quite likely that OpenAI is running a significant PR campaign to compensate for the bad rep they earned by stepping in to meet the demands of the Trump administration, after Anthropic refused to assist the administration with mass domestic surveillance and development of lethal autonomous weapons. Presumably OpenAI didn't buy the podcast TBPN just because they like the guys.