The US govt is fighting against true immorality in this very hour: the radical Muslim Iranian government, which has been murdering thousands of citizens and holding the population hostage for decades. Ask the Iranian people if they think openai is immoral.
While true, that is not in the top 10 motivations for the US attack. If regime change is successful, the US will be perfectly satisfied with a puppet government that has exactly the same treatment of women, as long as the oil flows.
It's a sad irony that the most privileged and protected people (the hn crowd) attack the people, institutions, and traditions (US govt, military) that made possible the peaceful and abundant world they take for granted.
I'm in the EU, and very grateful to the US, and to the military wing of the US, for the peace we've had - and take for granted.
But, things change.
And the US has shown itself to be unreliable, and vindictive, too.
>Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
I don't get it. Aren't these the same things that Anthropic was trying to negotiate?
The OpenAI-DoW contract says "all lawful uses", and then reiterates the existing statutory limits on DoW operations. So it basically spells out in more detail what "all lawful uses" actually means under existing law. Of course, I expect it leaves interpreting that law up to the government, and Congress may change that law in the future.
Anthropic wanted to go beyond that. They wanted contractual limitations on those use cases that are stronger than the existing statutory limitations.
OpenAI has essentially agreed to a political fudge in which the Pentagon gets "all lawful uses" along with some ineffective language which sounds like what Anthropic wanted but is actually weaker. Anthropic wasn't willing to accept the fudge.
Well, or just the possibility of future-proofing the agreement in favor of the US government, as well as walking back the slippery slope of "no autonomous lethality" and "no mass surveillance".
The former grants Congress the ability to change the definition of "all lawful uses" as democratically mandated (e.g., if war is officially declared, or if martial law is officially declared).
The latter is subtle. There can still be human responsibility for lethal actions taken by fully autonomous AI: the individual who deploys it, for instance, can be held responsible for the consequences even if each individual "pull of the trigger" has no human in the loop (which is unacceptable from Dario's point of view).
Similarly, and less subtly, acceptance of foreign mass surveillance and of domestic surveillance (as long as it's lawful and doesn't cross the unlawful mass-surveillance limits!) seems to be more in the Pentagon's favor.
Whether we like it or not, we're heading into some very unstable times. Anthropic wanted to anchor its conduct to stable (maybe stale) social norms; the Pentagon wanted to be able to rely on its AI provider even as we change those norms.
"All lawful uses" has no meaning when a malignant narcissistic sociopath in power controlled by ruthless rich psychopaths can now rewrite every law at will.
i once ran into someone in london in 2023 who was doing their thesis on AI regulation. they had essentially ended up doing a case-study on sam. their honest non-academic conclusion (which they shared quietly) was that they were absolutely terrified of sam altman.
fear is one of those signals we ought to listen to more often
It's not shady; the systems are not ready for that kind of task, especially autonomous hunting. It's smart negotiation. Plus, Sam would have used the Anthropic situation against them, arguing that you can't designate all the top American AI companies a supply chain risk, etc. It's complete idiocy that they would do that anyway.
Ready at what level, though? The subtleties are what matter.
It's well established that belligerents can use mines, separating the tactical decision to deploy them for area denial from the split-second lethal "decision" (if one can stretch that definition) to detonate in response to a triggering event.
Dario's model prohibits using AI to decide between an enemy combatant and an innocent civilian (even if the AI is bad at it, it is better than just detonating anyway); Sam's model inherits the notion that the "responsible human" is the one who decided to mine that bridge, and the AI can make the kill decision.
How is that fundamentally different in a future war where an officer might make the decision to send a bunch of drones up, but the drones themselves take on the lethal choice of enemy/ally/non-combatant engagement without any human in the loop? ELI5 why we can't view these as smarter mines?
It's different because we are talking about a technology that we might lose control over at some point. Those drones in your example might make an entirely different choice than what you anticipated when you let them take off.
This is actually a government bailout of OpenAI. Investors gave it a bunch of money earlier knowing this was going to happen. Greg Brockman is a major Republican donor for 2026. Nice for OpenAI.
PR spin/lying while behind closed doors agreeing to it. What's hard to understand about OpenAI lying?
Altman publicly claimed he had no financial stake in OpenAI to emphasize his mission-driven focus. In 2024 it was revealed that Altman personally owned the OpenAI Startup Fund.
In May 2024, actress Scarlett Johansson accused Altman of intentionally mimicking her voice for ChatGPT's "Sky" persona after she had explicitly declined to work with them.
When OpenAI’s aggressive non-disparagement agreements were leaked, which threatened to strip departing employees of all their vested equity (potentially millions of dollars) if they criticized the company, Altman claimed he was unaware of the "provision."
My theory is that they both went through normal procurement processes. At some point, one of Palantir's forward-deployed sales agents slapped someone's arm at the golf course and said, yes, we can autonomously kill with our AI agents. Anthropic, whose kind of 'AI' made little sense for that use case, declined.
There's nothing in general about a tweet that makes it any more or less legally binding than any other public communication, and they certainly can be used in legally binding ways. But sure, a simple assertion to the public from the CEO of a privately held company about what a separate contract says is not legally binding - whether through tweet, blog, press release, news interview, or any other method.
I know the reaction to this, if you're a rational observer, is "OpenAI have cut corners or made concessions that Anthropic did not, that's the only thing that makes sense."
However, if you live in the US and pay passing attention to our idiotic politics, you know this is right out of the Trump playbook. It goes like this:
* Make a negotiation personal
* Emotionally lash out and kill the negotiation
* Complete a worse or similar deal, with a worse or similar party
* Celebrate your worse deal as a better deal
Importantly, you must waste enormous time and resources to secure nothing of substance.
That's why I actually believe that OpenAI will meet the same bar Anthropic did, at least for now. Will they continue to, in the same way Anthropic would have? Seems unlikely, but we'll see.
You're missing an important part of the negotiation - Trump must benefit personally in some way. In this case, Greg Brockman gave by far the biggest single donation ($25m) to Trump's MAGA PAC last September.
When I started reading all this news, the thought that came to mind was: how sweet of these companies to try this, but unfortunately I am sure that other countries advancing AI, like China (DeepSeek, GLM, etc.) or Russia, or whoever, WILL have their companies' AI at their disposal.
Unfortunately, this is the new arms race, the new race to the moon, and all of that rolled into one.
This is not about wars or winning contracts. If you know about Sam's strategies - It's just business. This deal ensures Anthropic doesn't have the financial cushion that OpenAI desperately needs (they just raised billions, also trending on HN). Is it ethical? Probably not. But, all is fair in love and war (proverb).
The deal was only possible because Anthropic stood by its convictions. OpenAI didn't have agency in that. You're making it sound like Altman orchestrated the whole thing.
Learn to read. “ Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
I dislike the style of Altman's language about as much as I dislike the bullshit language used in politics or the self-incriminating, overly specific denials used by prominent figures to defend themselves against criminal allegations: “I have never had sexual relations with anyone under the age of 18 outside of my own family.”
The language is so coded that the many places where the core statement must be negated stand out like a sore thumb.
Greg Brockman, who cofounded OpenAI, is the biggest donor to Trump's PAC. Altman claims they kept essentially the same restrictions as Anthropic. So my conclusion is that OpenAI successfully bribed the government into ditching Anthropic and viciously attacking them by abusing its power (the "supply chain risk" designation).
Probably the most corrupt way of killing a competitor I’ve heard of.
What else can they do? Would you recommend they stay silent? It sounds like they are no longer the gatekeepers of this technology or they never were to begin with.
I would recommend they start by understanding the landscape and developing strategies that are more suited for the actual world as it is, not the naive fantasy land they believe it is.
Coming out publicly playing their hand like it's a royal flush when it's a 7 high and their cards are facing their opponent clearly wasn't going to do anything. The cynical take is they aren't that naive and this just gives them plausible deniability within their social circles when they are interrogated as to why they work for these corporations. But I like to give the benefit of the doubt.