Anthropic's policy is full of contradictions. They are against mass-surveillance of Americans but they happily deal with Palantir. They talk about humanity as a whole but only care about what American companies use their models to do to Americans; everybody else is fair game for AI-driven surveillance. They warn of the dangers of AI-driven warfare by demonstrating a mass-scale cyberattack perpetrated using their model, Claude, as the main operation engine and immediately release a new, more powerful version of Claude. You just need to use Claude to protect yourself from Claude, see.
When you really start digging into it, it appears schizophrenic at first, and then you remember market incentives are a thing and everything falls into place.
>Anthropic's policy is full of contradictions. They are against mass-surveillance of Americans but they happily deal with Palantir.
Palantir will also be subject to the same contractual limitations as the DoD.
>They talk about humanity as a whole but only care about what American companies use their models to do to Americans; everybody else is fair game for AI-driven surveillance.
The stated red lines are about mass domestic surveillance and fully autonomous lethal weapons - and those are the kinds of restrictions you’d expect to apply to any government using the tech on its own population, not just the US.
For American agencies to use Anthropic's models against other sovereign states requires access to the raw data from that state, which is something of a practical firebreak. Pragmatically, Amodei is an American citizen heading an American company in America; why give the current regime additional reasons to persecute them and risk it seizing control of the technology for its friends?
> They warn of the dangers of AI-driven warfare by demonstrating a mass-scale cyberattack perpetrated using their model, Claude, as the main operation engine and immediately release a new, more powerful version of Claude. You just need to use Claude to protect yourself from Claude, see.
What is the realistic alternative? Sit quietly and pretend scaling isn't a thing and dual use doesn't exist? Try to pause or stop unilaterally while money floods into their arguably less scrupulous competitors?
Nobody knows if Anthropic's efforts will make much difference, but at least it is refreshing to see a technology company and its leader try to stand up for some principles.
> Palantir will also be subject to the same contractual limitations as the DoD.
Well, first of all, we don't actually know that. Second, I'm going to question the commitment of any company to the principles of democracy and AI safety if one of their bigger partnerships is with a literal mass-surveillance, Minority-Report-crap company. It's the most confusing business partner to see when you're positioning your company as THE ethical one. If you're dealing with Palantir, you're helping mass surveillance, full stop, because that's what this company does. Which country's citizens get the short end of it is completely irrelevant (though in all likelihood it's still Americans, because that's Palantir's home turf).
> Pragmatically, Amodei is an American citizen heading an American company in America; why give the current regime additional reasons to persecute them and risk seizing control of the technology for their friends?
If that's how we characterize the current regime (which I actually agree with), then how come he's proactively trying to help it, deal with it, and insist it's a democracy that needs to be "empowered"? Sounds backwards to me. When you're about to be persecuted by your own government for not allowing it to use your models to do some heinous shit, this sounds like exactly the kind of government you shouldn't be helping at all (and ideally not do business where it can reach you). This is not normal.
> What is the realistic alternative? [...] Try and pause/stop unilaterally while money floods into their arguably less scrupulous competitors?
If you notice that you're doing harm and you're concerned about doing harm, stop doing harm! Don't make it worse! "If I hadn't pulled the trigger, somebody else would" is a phrase you wouldn't expect to hold up in court. Similarly, racing to the bottom to be the most compassionate, self-conscious, and financially successful scumbag is the least convincing motivation imaginable. We will kill you quickly and painlessly unlike those other, less scrupulous guys! Logic like this absolves bad actors from any responsibility. The amount of harm stays the same but some of it gets whitewashed and virtue-signalled, and at the very minimum I'd expect the onlookers like ourselves not to engage in that.
> Nobody knows if Anthropic's efforts will make much difference, but at least it is refreshing to see a technology company and its leader try to stand up for some principles.
These aren't principles. What he's doing here is a free opportunity for incredible PR and industry support that he's successfully taken advantage of. The actual policy backslides, caveats, and all the lines that had been crossed prior will not receive as much press as the heroic grandstanding of a humble Valley nerd against Pentagon warmongers. Nobody will actually take the time to read the statement and realize how the entire text is full of lawyer-approved non-committal phrasing that leaves outs for any number of future revisions without technically contradicting it. I've already pointed some of it out earlier in the thread. The technology for autonomous weapons isn't reliable enough for use, gee, thanks! I feel so much safer now knowing that Dario will have no qualms engaging with it as soon as he deems it reliable enough.
I'd assumed the situation was the other way around, largely based on Apple's continued omission of desktop-class functionality from SwiftUI on macOS (e.g. tree views, multi-window support, sensible undo support) and the addition of overlapping capabilities to iPadOS (e.g. Stage Manager, mouse support).
It looked very much like Apple was on a reasonable path to replacing macOS's aging NeXTSTEP UI underpinnings by evolving the iPadOS UI into a replacement.
I hope the article is right; it would be good to see Apple give their desktop platform a bit of the love they've focused on their other platforms.
Anthropic has drawn two red lines: no mass surveillance of Americans and no fully autonomous weapons ... OpenAI, Google and xAI—have agreed to loosen safeguards ... Pentagon has demanded that AI be available for “all lawful purposes.”
I predict (Kalshi?) that Anthropic will ultimately be ejected from the Pentagon running. Morals and ethics be damned, all the others will likely tell their workers: anyone who doesn't agree will be escorted out the door. Corporate America. Just wait for the next genocidal operation where AI is found contributing to the mass murdering. Cuba?
There is another, more interesting outcome where AI tells the Pentagon that everything the Pentagon does is mostly pointless and can be shut down. This is when the fun starts.
> ... and yet project velocity does not go much faster
1) The models, like us, have finite context windows and intelligence; even with good engineering practices, system complexity will eventually slow them down.
2) At the moment, at least, the code still needs to be reviewed and signed off by us, and reading someone else's code is usually harder than writing it.
After the automated PR agents have all passed a PR, I tend to let Claude Code and Codex give me a summary, with an MCP skill to read the requirement story. I trust their ability to catch edge cases and typos more than me. I just check the general structure of the PR.
Anthropic, OpenAI, Google et al. have EULAs and the best lawyers money can buy, ready to argue that any damage done by publicly releasing bad or malicious code produced or reviewed with their systems is the developer's responsibility for not checking properly.
> I trust their ability to catch edge cases and typos more than me.
Given the vendors' EULAs etc., if the poop really hits the fan with released code, how is that likely to sound if the lawyers get involved?
> Are people still reading PRs in detail manually?
Ultimately it all depends on circumstance and appetite for risk, but yes, many/most places are still manually checking releases.
The study was on those over 65, so in terms of London smog and India: life expectancies there are lower, so even if the data were available, similar signals might be very difficult or impossible to detect.
> Or into the future with high EV cities and a drop in PM2.5?
We can hope, but tire wear produces similarly sized microplastic pollution [0], and electric cars are heavier and likely to produce more of it [1]. Add into the mix their chemically different makeup, and who knows [2].
And what happens when the model vendors take the obvious next step, Waymo things up, and start delivering development systems that they are willing to guarantee don't need a local agent herder?