> “When the first bomb hit the school, one of the teachers and the principal moved a group of students to the prayer hall to protect them,” one of the Red Crescent medics said, citing conversations he had at the time with survivors.
> “The principal called the parents and told them to come and pick up their children. But the second bomb hit that area as well. Only a small number of those who had taken shelter survived.”
Two strikes on the same target are often characterised as “double-tap” strikes, particularly if there is a brief pause between them and medics and other civilians arriving at the scene are killed in the follow-up attack.
First, one could just smuggle Tails ISOs.
That would be a real underground operation, and leave no trace on the "government approved computer"
"Psssst, mister, want a distro?"
Alternatively, compile the OS and apps on the spot (average Joes might need help) because "source code ain't an operating system" and "source code ain't apps"
And assemble computers from parts (like people used to do in order to buy personal computers when the mainframe priesthood banned them), because "parts ain't a computer"
Or just get Claude to kick a distro (?)
Ideas? This is a time when we should be protecting individuals instead of mandating what could be a backdoor doxxing tool.
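For what it's worth, a smuggled image still needs verifying before anyone boots it. A minimal sketch in Python (the filename and demo bytes here are made up; real Tails releases also ship an OpenPGP signature, which is the stronger check):

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 so multi-GB ISOs never sit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Stand-in for a real image; compare the digest against one passed out of band.
with open("demo.img", "wb") as f:
    f.write(b"definitely not a real distro")

print(sha256_of("demo.img"))
```

The checksum only proves the copy wasn't corrupted in transit; provenance still depends on the signature and on trusting whoever handed you the drive.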
> MAGA didn’t invent apocalyptic thinking—it industrialized it. America is described as fallen, occupied, corrupt beyond recovery. Elections are fake. Courts are compromised. Journalists are enemies. Neighbors are suspect. If everything is already illegitimate, then anything done in response becomes justified.
> Apocalypse becomes permission.
Rules are over. Party like it's the end of time. Jesus will forgive... maybe.
I read the terms regarding mass surveillance and autonomous weapons. Both OpenAI and Anthropic had similar holdbacks AFAICT. [0]
> The Pentagon has agreed to OpenAI's rules for deploying its technology safely in classified settings, though no contract has been signed, a source familiar with the talks tells Axios.
> Why it matters: The Pentagon has blasted OpenAI rival Anthropic for days, contending its red lines for AI use in the military -- mass surveillance and autonomous weapons -- are philosophical and "woke."
> Now, the department, which did not immediately respond to a request for comment, appears to have accepted OpenAI's similar conditions.
One big difference
OpenAI exec becomes top Trump donor with $25 million gift [1]
I guess that makes it "less woke". This is reprehensible political bullshit.
This is hardly surprising given that OpenAI already signed a military contract. [0]
Where was the open letter then? No-one cared.
Recently, Claude was used by the administration in the Venezuela operation [1], alongside Palantir. Anthropic did nothing at the time, and again...
No-one cared.
Now everyone cares when Anthropic finally said No? The contract decision was already predetermined in OpenAI's favour, open letter or not.
So the question is, why wasn't an open letter against OpenAI written last year when they signed that first military contract?
Either way, it seems both OpenAI and Anthropic were OK with the US government using their models for warfare, so there is really no point in defending either of them, or the employees who knew beforehand.
>Now everyone cares when Anthropic finally said No?
DoD started asking for the ability to do more stuff. That's the issue here.
"Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: Mass domestic surveillance... Fully autonomous weapons." https://www.anthropic.com/news/statement-department-of-war
>So the question is, why wasn't an open letter against OpenAI written last year when they signed that first military contract?
again, this story isn't about people that are against any military contract.
> DoD started asking for the ability to do more stuff. That's the issue here.
The DoD (via the NSA) has crossed those lines with big tech before, when it conducted mass domestic surveillance on its own citizens (PRISM). They are willing to break laws to get the job done; in ten years' time those actions may well be found illegal, but by then it will be too late.
Companies should expect that governments may try to bend or even break rules or laws under veiled pretenses.
So why expect that they would be any different today? It is the nature of governments.
The scorpion (DoD) asks the frog (Anthropic) to carry it across the river, promising not to sting. If you know the scorpion always breaches the contract first, why work with it in the first place?
> again, this story isn't about people that are against any military contract.
Gen-AI mass surveillance for the administration (or any other) is already conducted by Google, Microsoft, AWS, Oracle and Palantir, and soon xAI (via X). So again, no point is being made here.
Given all the above, the clear indication was that OpenAI had already signed a military contract last year. Why didn't employees and insiders write an open letter back then pledging that AI should not be used for mass surveillance?
>mass surveillance conducted by the administration (or any) is already done by Google, Microsoft, AWS, Oracle and Palantir and soon xAI (via X). So again, no point is being made here.
some companies do it, so that means Anthropic has to do it? this seems like nihilism
> some companies do it, so that means Anthropic has to do it?
Then I would have expected open letters from anonymous OpenAI employees back in 2025, when those military contracts were first signed, as a reassurance and a pledge around those boundaries, since they ultimately knew. But of course, they came only after Anthropic rejected the DoW in 2026. Very late for that.
Anthropic should've known about PRISM and the nature of governments, and should not have extended the benefit of the doubt in the first place, since doing so is directly incompatible with their 'principles'. Otherwise none of this would have happened.
Their first mistake was trusting the government (especially this one) and like almost all of them, they are ready to test what they can get away with and breach the contract first.
Anthropic naively expected that this administration (or any other) would change for the better. Their lesson is that they should never trust governments to honour their contracts.
I think people who aren't objecting to AI mass surveillance of populations: haven't recognized how thorough and invasive these technologies will become; think the current government shares their values and lists of enemies; or naively think government priorities will never change and that scopes will never increase.
How sure are we that something fishy isn't going on with the models, the alignment research teams, and the answers the model is giving? Like maybe Claude's alignment made it worse than GPT at masking as Allied Magacomputer, and that's why they're up in arms?
> like. the only one.
Oh?