I'm a complete Apple ecosystem user-- I have a Mac, an iPhone, an Apple Watch, Apple earbuds, and an Apple TV, and I also pay reasonably close attention to their announcements and developments-- and I couldn't tell you a single Apple Intelligence feature. Nor do I ever use Siri except for setting kitchen timers.
What do people even expect from these intelligence services? Apple is always said to have failed, yet I've seen nothing in Windows that I'd actually want to use WRT intelligence services.
Siri being better at free form requests for actions and doing internet/knowledge searches is about all I can think of. But also, I use Kagi for that, and unless Siri has a pluggable backend for search I'm not sure being forced to use only Apple's search, if it ever exists, is a great design.
It's pick-your-poison. iPhone setup is eight hundred screens, half of which are upsells for Apple services, but at least it's only Apple services. Android setup, if you're not on a Pixel, is an invitation for the vendor's dozens of "partners" to all get your money and all your data.
“I think there's no decision ever that everyone at OpenAI agrees with,” Brockman says when I ask what his team thinks about the donations. “Even when we were 10 people. We’ve always been a truth-seeking culture. We have this scientific mission of discovery, and reality kind of doesn't care for your own opinion. It cares about what's true.”
After our interview, Brockman declined WIRED’s request for comment on the ICE shootings. Separately, he offered a more general statement clarifying his thoughts on the conversation with WIRED. "AI is a uniting technology, and can be so much bigger than what divides us today,” he said.
His justifications are just an ever-changing mess of word salad that never comes close to addressing the MAGA Inc donation specifically. Who is this even for?
We're talking about a pretty straightforward donation to the incumbent President's Super PAC, not ASI solving world hunger or whatever.
It's probably a combination of "Altman is simply lying" (as he has been repeatedly known to do) and "the redlines in OpenAI's contract are 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor". Which, of course, effectively means they don't exist.
It depends. Normies don't care, but a bunch of them are free tier users anyway. The people who care are disproportionately on the $200/month moneymaking plan; losing a bunch of them could hurt, especially if it snowballs the consensus that Claude Code is the serious choice for software engineering.
For one small data point, my Signal GC of software buddies had four people switch their subscriptions from Codex to Claude Max last night.
How many of those $200/month subscriptions does the US government cover, though? I'd say probably a lot. Especially with how much extra the DoD will pay to get OpenAI to cross its "red lines" - on day two.
> But e.g. with open code it doesn't really matter if I use antigravity or gemini-cli the usage should be about the same.
This is not at all true. What is prompting this behavior from Google and Anthropic is that people are using their oauth creds/API keys to run OpenClaw bots that use orders of magnitude more tokens than the IDEs. The official clients also can use a lot more prompt caching because they have expected workflows.
And like, if you want to run OpenClaw, they’re not saying you can’t do that: use the API pricing, that’s what it’s for. But people are getting mad that they’re not allowed to roll their pickup truck up to the all-you-can-eat buffet table and fill it.
Manifold actually explicitly encourages insider trading, arguing that it leads to more accurate pricing. This was possibly defensible back when it was a cute funtime project run by a Bay Area polycule, but it’s probably going to get them in deep shit sooner or later, even though they don’t even use real-money betting.
The vast majority of insider trading schemes are not prosecuted; many leave no evidence trail at all without going deep into black-op classified territory.
Thanks for making me aware of another federal agency :)
Seems to me prosecuting or regulating this sort of activity is futile and pretty much serves only the interests of the mob. These markets open-source additional data that might otherwise belong exclusively to the mob, so that's pretty cool. We democratized buying airstrikes.
You may know how bad things really are, but if you don't: the lawboys are pretty much just playing pretend at this point, and have been for a while.
Mob wants me to add: if you try to buy an airstrike with our very based and functional cryptocurrency systems, you will probably just find mob. We have mob priced in, anybody with a significant amount of cryptocurrency knows this too.
It's not as simple as "buy an airstrike" comrade (we are referencing the person writing this post)
As people have repeatedly mentioned, if the War Department was unhappy with Anthropic's terms, they could have refused to sign the contract. But they didn't: they were fine with it for over a year. And if they changed their mind, they could've ended the contract and both sides could've walked away. Anthropic said that would've been fine. But that's not what happened either: they threatened Anthropic with both SCR designation and a DPA takeover if Anthropic didn't agree to unilateral renegotiation of terms that the War Department had already agreed were fine.
It's absurd, and doubly so if OAI's deal includes the same or even similar redlines to what Anthropic had.
it seems like oai deal does include the same red lines, plus some more, and the ability for oai to deploy safety systems to limit the use cases of the model via technical means
this seems strictly better than what anthropic had. anthropic has ruined their relationship with the US govt, giving oai a good negotiating hand
the oai folks are good at making deals, just look at all the complex funding arrangements they have
"OAI wins by playing the government's game" is such a catastrophically bad take.
> anthropic has ruined their relationship with the US govt, giving oai a good negotiating hand
You want to try defending this ridiculous statement a bit more thoroughly?
For a start, the designation by the government of a company as a supply chain risk is not a negotiating tool. It may well be found to be arbitrary and capricious once the courts look at it. Businesses have rights too.
For another, why do you think OAI was able to make what looks like the same deal? Anthropic was willing to say yes to anything lawful up to their red lines, and it was still a no. Why turn around and give OAI exactly the same thing, unless it's not really what it looks like?
And Altman is always looking for the next buck.
All these supposedly impressive complex funding arrangements have OAI on the hook to firms like Oracle in the hundreds of billions of dollars. No indication at all how this unprofitable business will become a trillion dollar juggernaut.
you're right, supply chain risk is not a negotiating tool. it's spite after talks have ended. it indicates a ruined relationship
the oai deal is similar, but it includes technical safeguards. I think anthropic would have wanted the oai deal
the deal was not only successful because the govt is rebounding. the military prefers boundaries to be technical, not contractual
they can try using it, and trust that it will only operate within its designed limits, where the output is reliable
technical barriers to misuse help prevent both accidental and bad-faith misuse. a contract allows both kinds of misuse, enforced only by lawsuits. filing in court to dispute the terms is not always allowed
> supply chain risk is not a negotiating tool. it's spite after talks have ended.
No. It's unlawful abuse of power.
> the military prefers boundaries to be technical, not contractual
That's nice for the military. Meanwhile, Anthropic has the right to refuse the use of its IP without being subject to punishment by the government.
You seem to me to be irretrievably "deal-brained", and not at all concerned about the obvious abuse of power by the government here, or the constant display of bad faith by gov't officials.
Adding more to this, IIRC the US Govt threatened to invoke laws which have never been used against an American company in the entire history of the US, over two conditions:
1. No global surveillance on citizens
2. No autonomous killing machines (essentially)
That was it. Anthropic was fine with everything else, but they couldn't (in good conscience?) agree to these two things, and just these two very reasonable demands caused the govt. to spiral so badly.
I think you have too much pessimism. It's not guaranteed to work, but as I mentioned in another thread, since around December, Claude (and Gemini to a lesser extent) has had all the buzz in tech circles, while ChatGPT has seemed like the also-ran. And that matters: decision-makers in companies notice these things, and momentum becomes self-reinforcing (you use Claude Code because everyone else uses Claude Code). If a large enough group of developers visibly defects from OpenAI because of this, it definitely could have consequences. It's not a sure thing, but it's far from hopeless.
I was not a ChatGPT user even before this, but I'm bumping my Claude Code subscription to the next tier up. Fuck OpenAI.
For sure, he's been pissed that OpenAI no longer has the Mandate of Heaven and Claude is all anyone has been talking about since December. (And it's not just an ego thing: because OAI isn't profitable yet, they need the hype to keep going to raise money on favorable terms, so loss of buzz is an existential threat). I absolutely believe that he started making calls to try and get buddies in the White House to take Anthropic down.
Just a total failure of execution.