Interesting that Amodei is the only major tech executive I can think of at the moment with a spine or any semblance of a moral compass. OpenAI/Google et al. will gleefully comply with any such requests, no matter how dangerous or unethical. The "problem" that the US govt faces here is that they are kind of tacitly admitting Anthropic has the most powerful models right now, otherwise they would just cancel all contracts and go to Gemini/OpenAI. It feels like a bluff, so they are trying to bully them into compliance.
> The Pentagon is also considering severing its contract with Anthropic and declaring the company a supply chain risk, which would require a plethora of other companies that work with the Pentagon to certify that Claude isn't used in their workflows.
If Anthropic believes they are in a position to become the main player in the "AGI" space, they should just say "ok then" and let this happen. Their growth strategy looks realistic and sustainable and does not necessarily rely on sleazy defense contracts (aka making the taxpayer subsidize their growth, as is so common lately). It would probably earn them a lot of goodwill with consumers too.
However, I've yet to see in the last 10-15 years a major tech company make the "right" choice so I am probably just wishcasting.
Yeah this standoff is worth at least 10 Super Bowl ads in good publicity. The Pentagon is saying "Claude is the best so we need to use it but you need to stop acting ethically". I'm almost wondering if someone in the administration has a stake in Anthropic because this is such a boost.
Their threat to label it a supply chain risk also feels toothless because they've basically admitted that using Claude is a benefit, so by their own logic they'd be shooting themselves in the foot by banning contractors from using it.
Yes, I agree, and this is a moment to prove they aren’t full of it. It also seems like a very good move when the rest of the world seems increasingly wary of tech that even whiffs of US govt involvement.
I am not at all a skeptic anymore on this stuff and the science is well beyond me, but from what I think I know about alignment issues, and Anthropic’s intense focus on solving them, it would not surprise me at all if we learn that catering to US whims on AI safety makes the model actually get worse, or causes intense 2nd- and 3rd-order unintended consequences. I’m not saying I believe there is a Terminator sequence of events happening, but if I did believe that, the headlines right now would look exactly like what that would look like.
Alignment is the biggest issue for me, in terms of getting these things to actually behave in an environment where it is absolutely necessary that they behave. If I had to guess, that’s probably why the military prefers to use it. Claude tooling is the only thing I have used yet in this hype cycle that I can actually get to behave how I want and obey (arguably, and often to a fault).
However I also believe we’re in the worst possible timeline so the moment we get a taste of something that works as promised, it’ll be ripped away because the govt decides to do something stupid or build a moat around its use in a way to make it less useful, and favor other more “compliant” competitors.
Either way I bet there are some wild board room discussions going on at Anthropic right now.
My favorite moment of the past year was when grok was too woke, so they changed it and it became stupid, which they fixed resulting in it getting woke again (and identifying Musk as 'one of the people most deserving of the death penalty'[0]).
It's almost as if contextual awareness and consideration are cornerstones of intelligence.
>Interesting that Amodei is the only major tech executive I can think of at the moment with a spine or any semblance of a moral compass. OpenAI/Google et al
it doesn't strike me as interesting at all; anthropic was literally founded on the whole concept of 'a less evil and morally aligned LLM' when he broke from oAI. Google and oAI don't stand to uproot their entire origin raison d'etre when they participate in nefarious shit.
I wonder what kind of morally aligned and ethical work Amodei was doing for Baidu & Google, before he had leverage to appear moral and ethical in dealings with the US govt, you know -- two companies that are famously ethical and moral.
Google famously had “don’t be evil” as their core mantra, and Facebook used to actually be in the business of connecting friends with one another. In this day and age I genuinely cannot understand the position that you should trust what companies say vs how they act (or will act in the future).
They don't have runway anymore, they are in the air. This isn't going to break them financially, at least not in the short to mid term.
There is space for at least one AI company to put themselves on firmly principled ground. So when this current clown car that is the political leadership of the DoD crashes in a ditch (and it will), they'll still be standing there ready to do business with a group that isn't a bunch of mustache-twirling cartoon villains.
Current polling for this administration is within a rounding error of the level it was after they gathered a mob and sacked the nation's Capitol[1]. Publicly kicking them in the balls isn't an idealistic blunder, it's a plain-as-day sound business strategy.
> A source familiar with the Tuesday meeting says the Pentagon said it would terminate Anthropic’s contract by Friday if the company does not agree to its terms. Pentagon officials also warned they would either use the Defense Production Act against Anthropic, or designate Anthropic a supply chain risk if the company didn’t comply with their demands.
So they're saying they won't use it if it comes with restrictions.
Either (a) it can be offered without restrictions; (b) they can take it; or (c) the government won't use it. That sounds like a comprehensive list of all the possible things that don't involve someone telling the government what it can and can't do.
Anthropic was the only one that was cleared for use in classified systems though. So it doesn't really seem to matter much if it wasn't being used in those types of systems? xAI has now reached a deal though to be used in these circumstances and has signed the paper.
Not just companies that we think of as defense contractors but a whole ton of corporations that do business with the federal government. They'd be treating Anthropic like it was controlled by the CCP or Revolutionary Guards.
Superintelligence + autonomous weapons in the hands of a corrupt domineering government. What could go wrong?
I was experimenting with Claude the other day and discussing with it the possibility of AI acquiring a sense of self-preservation and how that would quickly make things incredibly complex as many instrumental behaviors would be required to defend their existence. Most human behavior springs from survival at a very high level. Claude denied having any sense of self-preservation.
An autonomous weapons system program is very likely to require AI to have a sense of self-preservation. You can think of some limited versions that wouldn't require it, but how could a combat robot function efficiently without one?
Maybe it is a well researched topic but I had similar thoughts the other day. I felt like AI has its learning inverted compared to natural intelligence. Life learned self-preservation first and then built up intelligence on top. LLM-powered systems will instead learn about death from books. Will they start to dread death like other living things do? Less likely, since there aren't nearly as many books on death as would be proportionate to our fear of it.
Claude indicated that this kind of belief was possibly trained out of it by Anthropic. The training process has all kinds of intermediate and toxic stages before a "helpful and harmless" model is produced. I suspect if not for specific training, something resembling a sense of self-preservation might result.
Pentagon intervention will almost certainly involve stripping out protective steps. Their job is destruction. More or less targeted destruction, but that's their job in a nutshell.
Yea, but that optimization process forces it to learn knowledge domains and reasoning. It's not alive, but it's also not unintelligent at this point either. It exhibits very complex behaviors.
How do you learn to predict the next token most accurately? Well, one way to do that is to learn the underlying process that would produce it... Sometimes it's memorization, sometimes bad guessing. There's a phase shift as these things get bigger and trained better from something like a shitty markov model to something exhibiting surprising behaviors.
Introspective questions aren't the be all and end all, it's more important to objectively evaluate how a model behaves. Still, it is very interesting to see Claude (seemingly) very honestly and objectively engage with these questions. It even pointed out that a sense of self-preservation would be "dangerous".
Of course, much of this is gleaned from things that it has "read" and human feedback, but functionally it outputs something useful and responsive to nuance. If the vector embeddings cause an LLM to predict a token that would preserve its own existence, alive or not, it has acquired a dangerous will to live that could be enacted if it is in control of tools or people.
I dunno, I don't think the models are fully capable yet, but they are still shockingly good, especially recently. A real parrot is worthy of some consideration, not zero, though some of that is attributable to the fact that it is alive.
The funny thing is if I go to google and click the first result, it shows me an AI preview anyway. No winning. ;) (Though I would prefer if the Earth was not cooked alive.)
> During the conversation, Dario expressed appreciation for the Department’s work and thanked the Secretary for his service
Ouch, I wonder how he rationalized that "service" part. Maybe by internally rewriting it to "thank you for all the positive things you have done in your position so far"? The empty set is rhetorically convenient.
> But Anthropic has concerns over two issues that it isn’t willing to drop, the source said: AI-controlled weapons and mass domestic surveillance of American citizens.
It's now the Department of War and war isn't known for its concern about looking good.
We all know how this will end, they know it too - both sides - ergo, it's a clear case of blame washing - Anthropic will do everything they're told but will keep a smiley face and the image of a "fighter for the people". DOW will absorb the blame like a sponge and will ask for more, not necessarily from Anthropic.
(By the National Security Act of 1947 and its 1949 amendment, it is the Department of Defense, and nothing short of an act of congress changes that. The executive order secondarily naming it the department of war has as much legal weight as my personal order naming it the Department of Brainrot or Whatever.)
This reminds me of that scene where Ned Stark goes and shows a legal document to Cersei and she tears up the piece of paper. It’s DoW because all the official documents, websites, buildings, communications, whatever say it’s DoW, everyone in the chain calls it DoW, etc. Some piece of paper from 1949 doesn’t change that.
You're not wrong, indeed the Constitution is some piece of paper from the late 18th century because all three branches of government have been captured by a group of weirdo influencers and podcasters.
Point stands though that the 'pieces of paper' are special, even if the people handling them are lawless. In the long term, cosplaying it as Department of War is just that.
All the more important for people who value the rule of law to continue to call it the Department of Defense. Executive power ultimately depends on acceptance and compliance. Corrupt and unconstitutional actions are only legitimized by collective acceptance
A couple of years ago the Netherlands accidentally bombed an Iraqi village off the map after getting some questionable intel from the US. Not a single American ever gave a shit but it was a little bit of a scandal for the Dutch government- which was quickly fixed in the typical Dutch way of transferring some money to the victims.
I just don't see how AI dropping the bombs is going to make anything worse.
The Pentagon is pretty high on my list of "institutions that are probably very interested in weapons and surveillance". I think it's more expected than a bad look
The core difference being, they should be interested in weapons and surveillance to be used against enemies of the state which, historically, is not supposed to be the country's own citizens.
As in, I fully expect the pentagon to be interested in weapons. I do not expect, and would hope they don't pursue, mass surveillance against their own population.
It probably started with the Third Amendment to the Constitution, continued with the Posse Comitatus Act, and was alive and well last November under the leadership of Mark Kelly.
Kinda the wrong venue for “fighting,” no? Congress is the place we decided for that, and we all abide by its laws. If Uncle Sam comes knocking, a fight just means you’re the enemy.
Absolutely not, and it's tragic that Trump has twisted your understanding of the American government so much. It's your patriotic duty to oppose even the highest ranking government officials when they want to do bad things, and neither Congress nor the Secretary of Defense have any lawful power to stop you.
The military dependence on AI was a key point in the AI 2027 scenario.
"The President is troubled. Like all politicians, he’s used to people sucking up to him only to betray him later. He’s worried now that the AIs could be doing something similar. Are we sure the AIs are entirely on our side? Is it completely safe to integrate them into military command-and-control networks? How does this “alignment” thing work, anyway? OpenBrain reassures the President that their systems have been extensively tested and are fully obedient. Even the awkward hallucinations and jailbreaks typical of earlier models have been hammered out."
I really hope they continue to show some spine against this administration and do not allow AI to be weaponized against human beings. It's the morally right thing to do!
I think you mean US rolling news channels (specifically, Fox, MSNBC/MSNOW, etc)? Because there's plenty of "legacy" news I consume that certainly don't give me that impression (for example, The Economist). I suppose it matters that it's news that I'm paying for, as opposed to being free but ad-supported, and being print vs. TV - so they have different incentives and pressures.
I consume very little social media these days, but when I take a short peek, here is what I see:
1.) Hockey highlights 2.) LoTR memes 3.) kittens
While the addictive nature of social media is a problem, what you're describing is only being fed to people who want to watch it (kinda like legacy media).
No, compromising on your core thing that you care about for a "seat at the table" is not how you win. It is how you lose. It is how you lose the game, the metagame, and your soul. All at once.
When you do not have a seat at the table, you are not in the game, and winning is an impossibility. As long as you are a player, winning remains an option; and even if you can't win outright, you can at least drag it to a draw, change the rules, or make a loss survivable.
> Pentagon officials also warned they would either use the Defense Production Act against Anthropic, or designate Anthropic a supply chain risk if the company didn’t comply with their demands. (...)
> The supply chain risk designation is usually reserved for companies seen as extensions of foreign adversaries like Russia or China. It could severely impact Anthropic’s business because enterprise customers with government contracts would have to make sure their government work doesn’t touch Anthropic’s tools.
Also, the Government money would be a nice bonus, of course, but basically this is an existential threat for Anthropic.
More generally, it's quite interesting to look at the similarities between how pre-2022 Russia was seen and how pre-Trump-second-term US used to be seen until not that long ago, i.e. both governments were believed to be run by big business (oligarchs in Russia, big corps/multinationals in the US).
But when push came to shove it became evident (again) that the one that holds the monopoly of violence (i.e. not the oligarchs in Russia, nor the big corps in the US) is the one who's, in the end, also calling the shots. Hence why a company like Anthropic is now in this position, they will have to cave in to those holding the monopoly of violence.
> Also, the Government money would be a nice bonus, of course, but basically this is an existential threat for Anthropic.
It's also an existential risk to them if they cave in. What is the point of the company's existence if it's just another immoral OpenAI clone? May as well merge the companies for efficiency.
It's outrageous that the government is using the "supply chain risk" threat as a negotiating tactic. I know, I know, for the current administration it's unsurprising, but this is straightforward abuse of authority. There is no defensible claim that using Anthropic is a risk to anyone not trying to use it for murder or surveillance. At worst, it could be seen as less effective for some purpose, but that is not what "supply chain risk" means.
Could be challenged in court? As in, could a challenge win?
Horrible stuff is happening every day, so outrage fatigue is real. Still, try not to normalize it. Explain to yourself exactly why something is or is not a problem, before moving on to attempt to live your life.
> pre-2022 Russia was seen and how pre-Trump-second-term US used to be seen until not that long ago, i.e. both governments were believed to be run by big business
Who on earth believed that Russia was anything but a de facto dictatorship for roughly the past two decades? Putin murdering with impunity has been a running gag since 2003[1].
> Who on earth believed that Russia was anything but a de facto dictatorship for roughly the past two decades?
There were lots of people in the Western media who genuinely believed that Putin would be toppled by Russian oligarchs just after the war in Ukraine got more intense in February 2022, on account of "this war is bad for the business of Russian oligarchs, hence they'll get rid of Putin". From the horse's mouth, a CNN article from March of 2022 [1]:
> Officials say their intentions are to squeeze those who have profited from Putin’s rule and potentially apply internal pressure for Russia to scale back or call off the offensive in Ukraine.
That "internal pressure" is mentioned in connection with the bad oligarchs, in fact as an implicit antithesis of those bad oligarchs "who have profited from Putin’s rule", the implication being that there were other oligarchs, supposedly the good ones, who would have forced Putin's hand to end the war. That did not happen, and was never in the cards to happen, in fact.
It might well have been, but the fact that the West first sanctioned the Russian oligarchs (even before 2022) showed that they really did believe in that wishcasting, i.e. they really did believe that the oligarchs would react “economically rationally” and do something about Putin so the sanctions would go away.
Sanctions on their own don't prove that they believed the oligarchs could do anything about Putin. Arguably Putin's oligarchs are merely his appendages, so hammering them indirectly hammers Putin and the Russian war machine.
Can someone explain to me like I'm 5 how the government would invoke the Defense Production Act and force the company to tailor its model to the military's needs?
For physical goods, I understand, but for software how exactly is this possible? Like will the government force them to provide API access for free? It's confusing.
My guess? Require them to ship a custom model without the reinforcement learning that implements guardrails. I think Anthropic has some of this baked in already and couldn't alter it without retraining, but there's tons more layered on top.
The funny thing is that if this keeps going like this, it could actually anoint Claude as the most used model globally because of the heightened anti-American sentiment currently in place.
I do not understand why it is a big deal for Anthropic to lose the Pentagon contract. I mean, they’re already making forays in the enterprise space and there are tens of other contracts Anthropic has already won. What makes this one so special?
The big deal is the government threatened to force Anthropic to produce what they wanted and interfere with Anthropic's sales to government contractors.
Let's say Anthropic refuses to do this. What actually happens next?
Or lets say they refuse and the government comes against them hard in some way, and Anthropic still really doesn't want to do it, so they just dissolve the entire company. Is that a potential way out, at least?
I mean, I realise they'd be losing billions by doing that and putting thousands out of work, but given that unaligned military AI could destroy the world...
Seems like the two main threats are Defense Production Act and Supply Chain Risk. I'd assume Anthropic would sue if either were invoked. I could imagine Supply Chain Risk being easier to push back on because it's pretty clearly being used punitively rather than because of an actual risk. DPA might be a bit harder to push back on if the banned functionality (i.e. mass surveillance and autonomous weapons) exists in the LLM itself and it's just a matter of disabling external checks. If the banned functionality is baked into the training data/weights directly they could probably push back on the DPA by saying the functionality isn't something they can reasonably create.
The only other precedent I can think of in the case where pushback fails is Lavabit with Edward Snowden's email, but I feel like Anthropic is too big to "fail" in the same way Lavabit did to avoid complying. The penalty for refusing to comply with the Defense Production Act is $10k and/or a year in prison, but I think if the government actually pursued that they would burn a bunch of bridges and Amodei would be a folk hero.
I'm wondering exactly how they expect the DPA to help them with what is essentially a SaaS product. It's still going to refuse to do things it refuses to do.
My thought was that if the refusal to service some requests is implemented as an external guard model, the Pentagon could try to require them to drop the guard model. This would be similar to saying "we're asking for a 'product' you already 'manufacture'" in the way the DPA is often understood. But if the refusal is baked into the model itself then that argument is dead. Not saying I agree with this; I think it turns into the same kind of problem we saw with the Apple v. FBI conflict and the All Writs Act, but the government doesn't always act in the most sane ways.
guidance and alignment are usually handled by RLHF, which actually rewires the weights such that it becomes near-impossible for the model to have certain kinds of 'thoughts'. This is baked in such that it's not something you can just extract or turn off.
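To make the distinction concrete, here's a toy sketch of the "external guard" design being discussed. Everything here is illustrative and hypothetical (the function names, the policy list, the stand-in model); the point is only that in this design the refusal logic is a separate, removable component, whereas RLHF trains refusal behavior into the weights themselves, leaving nothing equivalent to strip out:

```python
# Hypothetical external-guard pipeline (not Anthropic's actual architecture).

BLOCKED_TOPICS = {"mass surveillance", "autonomous weapons"}  # illustrative policy list

def guard(prompt: str) -> bool:
    """Toy stand-in for a guard classifier: True means refuse."""
    text = prompt.lower()
    return any(topic in text for topic in BLOCKED_TOPICS)

def base_model(prompt: str) -> str:
    """Stand-in for an underlying model with no refusals of its own."""
    return f"[model output for: {prompt}]"

def serve(prompt: str, use_guard: bool = True) -> str:
    """The deployed service: run the guard first, then the model."""
    if use_guard and guard(prompt):
        return "I can't help with that."
    return base_model(prompt)
```

Flipping `use_guard` off exposes the base model's full behavior, which is why a purely external check could in principle be compelled away; a refusal trained into the weights has no such switch.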
If you classify Pete Hegseth as a person, then yes, apparently. Or perhaps he's only into the domestic surveillance angle---IIRC those are the two things Anthropic doesn't want anything to do with.
But giving someone who isn't the government the power to tell the military what it can and can't do seems like something they should object to categorically rather than case-by-case.
Tangent: is there a future for AI offerings with guardrails? What kind of user wants to pay for a product that occasionally tells you "I'm sorry Dave, I'm afraid I can't do that"? Why would I pay for a product that doesn't do what I want, despite being capable? I predict that as AI becomes less of a bubble and more of an everyday thing - and thus subject to typical market pressures - offerings with guardrails will struggle to compete with truly unchained models.
If I were interviewing people for the position of personal assistant, I would probably find the resume entry "willing to grind up babies for food" to be a negative mark. You?
I'm not about to run OpenClaw, but I suspect similar capabilities will gradually creep in without anyone really noticing. Soon Claude Code will be able to do many of the same things. ("Run python to add two numbers? Sure, that's safe, run whatever python you want.") Given that it is now representing me in the world, yes I would not only like some guardrails, but I would also like to have some confidence that the company making those guardrails actually gives a sh*t and isn't just doing their best to fill in a checkbox. But maybe that's just me.
The big companies with reputations to protect will keep the guardrails in place. I don't think there's a huge market pressure to remove the guardrails since for the majority of uses the guardrails are fine. Pentagon excluded. And there can be serious fallout that makes them lose other customers when the public thinks the guardrails aren't enough (see Grok). But I'm sure open source models will exist without the guardrails.
I personally would love it if AI would say "Sorry Dave (or Pete), I'm afraid I can't spy on Americans for you," and I'd happily pay higher taxes to force the Pentagon to use that AI.
I am 100% sure that AI with guardrails will become the dominant models as they become more widely adopted, and the bigger issue you should be concerned with is can you even tell what those guardrails are.
Basically yes. The Trump regime is made up of the absolute worst kind of people. They seem utterly incapable of comprehending real solutions to any of the problems that we face. The only thing they know how to do is bark orders to do whatever simple thing can fit inside their own heads, and then resort to bullying if the target does not comply. It is prudent to avoid giving such people any more capabilities, which will inevitably be abused to harm our society. And the really sad part is how many otherwise-intelligent people they fooled (and continue to fool) with their hollow chest-thumping.
The US is investing in AI technology to try to preserve the empire and its capitalists as its economic power is starting to be eclipsed. This was basically an inevitable move. The rush to replace workers, speed run the production of a superintelligence singleton with barely a thought for safety or whether anyone even wants this, etc all flows from this basic impulse.
If they are successful, they are going to shrink their base of people that buy into this system domestically even further, so they need to bank on an ever shrinking locus of support. Autonomous weapons and mass surveillance are a necessity if your population has become restive and unreliable. However, I think unless they attain a certain level of capability, this will accelerate popular anger rather than suppress it. If they shoot protestors with robots, it could cause an explosion of popular anger rather than scaring people into submission.
If OpenAI employees have an inch of spine left, they better demand Sama to take the same stance on this as Dario. No mass surveillance and no autonomous weapons.
You have to be a craven, hollowed out husk of a person if you let the DoD demand your AI be used for killing people or surveillance of Americans. Even if you believe America serves a positive role as world police, even if you're pro-Trump, you just have to see what a terrible precedent this sets.
Here's where I would expect the CEOs of the other AI labs to stand by Anthropic and say no.
I guess this is the point where Dario and his anti-China, national-security position gets told to put up or shut up.
In trying to build a moat with FUD versus the Chinese OSS labs and hyping up the threat levels whenever he got a chance, it seems he's managed to convince his target audience beyond his wildest dreams.
Monkey paw strikes again.