Google still somehow has 190,000 employees though. I'd be interested in seeing if the total lines of code output is actually any different (acknowledging that this is not a good metric in itself, still just curious).
Is Anthropic really worth 40x Atlassian? If so, is it worth them spending just 2.5% of their valuation to acquire Atlassian's 300,000 enterprise customers?
Exactly my thoughts. It also raises major questions about organizational and executive leadership; it seems crazy to put the reins of such a massive ship - integral to the business of huge swaths of the economy - into the hands of an ambitious flash-in-the-pan startup.
They don't operate in my country AFAIK. However, that reinforces my idea that the endgame will be a pristine Android phone in a drawer at home with the banking apps required for accessing their sites with 2FA, and another phone in my pocket for daily use.
The majority of the code currently running in production at my company was written 5+ years ago. It was all "hand-written" and much lower quality than the AI-generated code I'm producing and deploying these days.
Yet I feel much more connected with my old code. I really enjoyed actually writing all that code even though it wasn't the best.
If AI tools had existed 5 years ago when I first started working on this codebase, obviously the code quality would've been much higher. However, I really loved writing my old code, and if given the same opportunity to start over, I would want to rewrite this code myself all over again.
I don't understand the hate for Meta's attempt at the metaverse.
Since when do we criticize companies for actually doing R&D? We should be striving to build novel technologies, not just maintaining the status quo.
Even if the project 'failed' for now, the engineering lessons learned are valuable assets that will transfer to whatever problem those teams solve next (whether at Meta or elsewhere).
Were these cannibalized companies working on an implementation that is not a terrible idea? I'd go as far as saying that it was a decent enough idea that we can applaud the efforts while also rooting for the 'underdogs'.
I read your comment as a joke, but in case it was a defense, or is taken as a defense by others, let me punch up your writing for you:
"[Person who is financially incentivized to make unverifiable claims about the utility of the tool they helped build] said [tool] [did an unverified and unverifiable thing] last month"
Which could mean that code was refactored and then built on top of. Or it could just mean that Claude had to correct itself multiple times over those 459 commits.
Does correcting your mistakes from yesterday’s ChatGPT binge episode count as progress…maybe?
If it doesn't revert the corrections, maybe it is progress?
I can easily imagine constant churn in the code because it switches between five different implementations when run five times, going back to the first one on the sixth run and repeating the process.
I gotta ask, though, why exactly is that much code needed for what CC does?
How many lines of code are they allowed to use for it, and why have we put you in charge of deciding how much code they're allowed to use? There's probably a bit more to it than just:
> How many lines of code are they allowed to use for it, and why have we put you in charge of deciding how much code they're allowed to use?
That's an awfully presumptuous tone to take :-)
I'm not deciding "this is how many lines they're allowed"; I'm trying to get an idea of exactly what functionality CC provides that requires that sort of volume.
I mean, it's a high-level language being used, it's pulling in a lot of dependencies, etc. It literally is glue code.
Bearing in mind that it appears to be (at this point anyway) purely vibe-coded, I am wondering just how much of the code is dead weight - generated by the LLM and never removed.
The premise of the steps you've listed is flawed in two ways.
The first is the workflow itself. This is more what agentic-assisted dev looks like:
1. Get a feature request / bug
2. Enrich the request / bug description with additional details
3. Send AI agents to handle request
4a. In some situations, manually QA results, possibly return to 2.
4b. Otherwise, agents will babysit the code through merge.
The second is that the above steps are performed in parallel across X worktrees. So, the stats are based on the above steps proceeding a handful of times per hour--in some cases completely unassisted.
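The fan-out described above can be sketched as a simple dispatcher: tasks run in parallel (each agent would work in its own worktree), and the engineer is only pulled back in for step 4a. Everything here - the agent call, the QA flag - is a hypothetical stand-in, not any real agent API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task):
    # Placeholder for "send AI agents to handle request" (step 3).
    # A real agent would run in its own git worktree; here we just
    # flag tasks that would need manual QA (step 4a).
    return {"task": task, "needs_human_qa": task.endswith("?")}

def process(tasks):
    flagged = []  # tasks that bounce back to the engineer (step 4a)
    with ThreadPoolExecutor(max_workers=4) as pool:
        for result in pool.map(run_agent, tasks):
            if result["needs_human_qa"]:
                flagged.append(result["task"])  # notify the engineer
            # otherwise the agent babysits the merge itself (step 4b)
    return flagged

print(process(["fix-login", "is-this-right?", "add-export"]))
```

The point is the shape, not the code: the human sits outside the loop and only sees the flagged subset.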
---
With enough automation, the engineer is only dealing with steps 2 and 4a. You get notified when you are needed, so your attention can focus on finding the next todo or enriching a current todo as per step 2.
---
Babysitting the code through merge means it handles review comments and CI failures automatically.
---
I find communication / consensus with stakeholders, and retooling take the most time.
One can think of a lot of obvious improvements to an MVP product that don't require much in the way of "get a feature request/bug - understand the problem - think on a solution".
You know in advance the features you'd like to have, or you can see the changes you want to make as you build it.
And a lot of the "deliver the solution - test - submit to code review, including sufficient explanation" can be handled by AI.
I'd love to see Claude Code remove more lines than it added TBH.
There's a ton of cruft in code that humans are less inclined to remove because it just works, but imagine having LLM doing the clean up work instead of the generation work.
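Whether a tool is a net remover of code is actually measurable: total the added/deleted columns of `git log --numstat`. A minimal parser (the sample diff stats below are made up for illustration):

```python
def net_lines(numstat_output):
    """Sum the added/deleted columns from `git log --numstat` text."""
    added = deleted = 0
    for line in numstat_output.splitlines():
        parts = line.split("\t")
        # Binary files show "-" in both columns; isdigit() skips them.
        if len(parts) == 3 and parts[0].isdigit():
            added += int(parts[0])
            deleted += int(parts[1])
    return added - deleted  # negative means the history shrank the codebase

sample = "12\t40\tsrc/cruft.py\n3\t1\tREADME.md\n"
print(net_lines(sample))  # -26: more removed than added
```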
Is it possible for humans to review that amount of code?
My understanding of the current state of AI in software engineering is that humans are allowed (and encouraged) to use LLMs to write code. BUT the person opening a PR must read and understand that code. And the code must be read and reviewed by other humans before being approved.
I could easily generate that amount of code and make it write and pass tests. But I don't think I could have it reviewed by the rest of my team - while I am also taking part in reviewing code written by other people on my team at that pace.
Perhaps they just aren't human-reviewing the code? Then it's feasible to me. But that would go against all of the rules I have personally encountered at my companies and that peers have told me they have at theirs.
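A rough back-of-envelope shows why review becomes the bottleneck. Every number here is an assumption for illustration, not a measurement:

```python
def review_deficit(agent_lines_per_day, review_lines_per_hour, review_hours_per_day):
    """Lines/day of unreviewed backlog under the assumed rates."""
    reviewable = review_lines_per_hour * review_hours_per_day
    return agent_lines_per_day - reviewable

# Assumed: agents emit 5000 lines/day; a careful reviewer manages
# ~300 lines/hour for ~4 hours/day (they have their own work too).
print(review_deficit(5000, 300, 4))  # 3800 lines/day pile up unreviewed
```

Under those assumptions the backlog grows faster than it can be read, so either review standards drop or the agents slow down.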
I'm appalled this isn't talked about more. Understanding code, let alone code written by others, is where the real complexity lies. I fail to see how more code written by some dumbass AI that gets things wrong half the time is going to make the job less draining for me. I can only conclude that half the devs of the world, or more, don't really do code reviews, or just rubber-stamp crap.
> Gas Town is also expensive as hell. You won’t like Gas Town if you ever have to think, even for a moment, about where money comes from. I had to get my second Claude Code account, finally; they don’t let you siphon unlimited dollars from a single account, so you need multiple emails and siphons, it’s all very silly. My calculations show that now that Gas Town has finally achieved liftoff, I will need a third Claude Code account by the end of next week. It is a cash guzzler.
Read that as "speed of lines of code", which is very VERY very different from "speed of delivery."
Lines of code never correlated with quality or even progress. Now they do even less.
I've been working a lot more with coding agents, but my convictions around the core principles of software development have not changed. Just the iteration speed of certain parts of the process.
> It’s also 100% vibe coded. I’ve never seen the code, and I never care to, which might give you pause. ‘Course, I’ve never looked at Beads either, and it’s 225k lines of Go code that tens of thousands of people are using every day. I just created it in October. If that makes you uncomfortable, get out now.
You're counting wheel revolutions, not miles travelled. Not an accurate proxy measurement unless you can verify the wheels are on the road for the entire duration.
There are like hundreds if not thousands of users making similar mistakes with AI daily, but only a small fraction would post or complain about it.