That's not how it works in many countries. You can have regional governments that raise their own taxes and aren't beholden to the central government organizationally, just legally.
No idea how the military analogy works, but: large companies scale up by "in-sourcing" their suppliers' functions. Facebook collects their own metrics instead of using Datadog. Their own logs instead of Splunk. Facebook's own high-cardinality traces instead of Honeycomb. Own datacenters instead of buying from AWS. Own database(s) instead of Oracle.
And then, since you have all these integrated functions, you can spend headcount optimizing datacenter spend down. Hiring a team to rewrite PHP to make it faster literally pays for itself. Or kernel engineers. Or even hardware engineers and power generation. And on the product side, you can do lots of experiments where a 1% improvement in ad revenue pays for something like the entire department's wages for the year. So you do a lot of them, and the winners cover the cost of the losers. And you hire teams to build software to run more experiments, faster and more correctly.
The brakes on this "flywheel of success" are the diseconomies of scale outweighing the economies. When the costs of communicating and negotiating are higher internally than those of the external contracts you previously subsumed. When you have two teams writing their own database engines, competing (with suppliers!) for the same hires. When your datacenter plans outpace industrial power-generation plans. When your management spins up secret teams to launch virtual reality products with no legs.
There is only one problem with Meta: Facebook itself is like a TV show that has run its course. He's riding on what he purchased, Instagram and WhatsApp, but being a product thief he cannot create anything new.
I've never been in the military but I'm told they work this way. You often have interactions with people across the org chart (which is a massive tree with >100,000 nodes on it). If there's a dispute over resources or requirements that can't be resolved, you need to find the lowest person who is above both of you to settle it. The depth of the org chart is a key similarity here as well. I think I was ~10 degrees from Sundar when I worked at Google. A soldier in the US military is a similar distance from the president. Also, the financial numbers that are thrown around are larger than what most governments deal with and on par with even large nations. The US military might get a $100B influx for some war. Google/Amazon/Meta/etc. spend similarly on AI initiatives.
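That "lowest person above both of you" is just the lowest common ancestor in the org tree. A minimal sketch (the `manager` mapping and all names here are made up for illustration, not any real org chart):

```python
# Resolve a dispute by walking up a hypothetical org chart to the
# lowest common ancestor of two employees.

def chain_to_top(manager, person):
    """Return the path from `person` up to the root (whoever has no manager)."""
    chain = [person]
    while person in manager:
        person = manager[person]
        chain.append(person)
    return chain

def lowest_common_boss(manager, a, b):
    """Lowest person in the hierarchy who is above (or is) both `a` and `b`."""
    ancestors_of_a = set(chain_to_top(manager, a))
    for boss in chain_to_top(manager, b):
        if boss in ancestors_of_a:
            return boss
    return None

# Toy org chart: employee -> their direct manager (entirely hypothetical).
manager = {
    "soldier": "sergeant",
    "sergeant": "captain",
    "captain": "general",
    "engineer": "team_lead",
    "team_lead": "director",
    "director": "general",
}

print(lowest_common_boss(manager, "soldier", "engineer"))  # -> general
```

In a tree ~10 levels deep, the two chains are short, which is why escalation up the chart is tractable even with >100,000 nodes.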
> On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6.
Is it just me or does this seem kind of shocking? Such a severe bug, affecting millions of users, with a non-trivial effect on the context window that should have been readily evident to anyone looking at the analytics. Makes me wonder whether this is the result of Anthropic's vibe-coding culture. Is no one actually looking at the product, its code, or its outputs?
It's really hard to understand. There need to be really loud Batman-signal-in-the-sky type signals from some hero third party calling out objective product degradation. Do they use CC internally? If so, do they use a different version? This should've been almost as loud a break as the service going down altogether, yet it took two weeks to fix?!
> ... we’ll ensure that a larger share of internal staff use the exact public build of Claude Code (as opposed to the version we use to test new features) ...
Apparently they are using another version internally.
On API pricing you still pay 10% of the input-token price on cache reads. Not sure if the subscription limits count this, though.
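As a rough sketch of that math (the per-million-token price below is a placeholder, not Anthropic's actual rate; only the 10% cache-read multiplier comes from the comment above):

```python
# Sketch of API cost when cache reads are billed at 10% of the
# input-token price. INPUT_PRICE_PER_MTOK is a placeholder figure.

INPUT_PRICE_PER_MTOK = 3.00                               # placeholder $/1M input tokens
CACHE_READ_PRICE_PER_MTOK = INPUT_PRICE_PER_MTOK * 0.10   # 10% of input price

def request_cost(fresh_input_tokens, cached_tokens):
    """Dollar cost of one request, splitting fresh vs cache-read input."""
    return (fresh_input_tokens / 1e6) * INPUT_PRICE_PER_MTOK \
         + (cached_tokens / 1e6) * CACHE_READ_PRICE_PER_MTOK

# e.g. a long session where 90k of 100k input tokens hit the cache:
print(round(request_cost(10_000, 90_000), 4))  # -> 0.057
```

So even "cheap" cache reads add up on long sessions, which is why carrying extra stale thinking in context isn't free.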
And of course all conversations now have to compact 80 tokens earlier, and are marginally worse (since results get worse the more stuff is in the context)
Just started messing around with this, but I like it. It produces better results than just using Claude Code on its own. The initial output has a lot of junk that needs to be removed (just like anything LLMs generate). I suspect it's only good at reproducing content that is relatively cookie-cutter and prominent in the training data. But still, as a non-designer this produces better results than I can, and in line with the quality of many paid templates.