Quote from the CEO of Anthropic in March 2025:
"I think we'll be there in three to six months where AI is writing 90% of the code and then in 12 months we may be in a world where AI is writing essentially all of the code"
From the article, Claude Code is being used extensively to develop Bun already.
> Over the last several months, the GitHub username with the most merged PRs in Bun's repo is now a Claude Code bot. We have it set up in our internal Discord and we mostly use it to help fix bugs. It opens PRs with tests that fail in the earlier system-installed version of Bun before the fix and pass in the fixed debug build of Bun. It responds to review comments. It does the whole thing.
You do still need people to make all the decisions about how Bun is developed, and to use Claude Code.
> You do still need people to make all the decisions about how Bun is developed, and to use Claude Code.
Yeah but do you really need external hires to do that? Surely Anthropic has enough experienced JavaScript developers internally they could decide how their JS toolchain should work.
Actually, this is thinking too small. There's no reason that each developer shouldn't be able to customize their own developer tools however they want. No need for any one individual to control this, just have devs use AI to spin up their own npm-compatible package management tooling locally. A good day one onboarding task!
"Wasting" is doing a lot of work in that sentence.
They're effectively bringing on a team that's been focused on building a runtime for years. The models they could throw at the problem can't be tapped on the shoulder, and there's no guarantee they'd do a better job at building something like Bun.
Let me refer you back to the GP, where the CEO of Anthropic says AI will be writing most code in 12 months. I think the parent comment you replied to was being somewhat facetious.
Same. I don’t understand how people aren’t getting this yet. I’m spending all day thinking, planning and engineering while spending very little time typing code. My productivity is through the roof. All the code in my commits is of equal quality to what I would produce myself, why wouldn’t it be? Sure one can just ask AI to do stuff and not review it and iterate, but why on earth would one do that? I’m starting to feel that anyone who’s not getting this positive experience simply isn’t good at development to begin with.
There's a real schism, isn't there? I don't even type anymore. I've got voice transcription using whisper (which Claude built). I have like three or four Claude instances open in i3wm. I have head tracking so the mouse and therefore focus moves with my head (which Claude built). So I move my head from one to the other and speak prompts!
It's amazing!
My boss has dubbed it "programming at the speed of thought" which I'm sure he's picked up from somewhere. I've seen other people say that.
I think this wound up being close enough to true, it's just that it actually says less than what people assumed at the time.
It's basically the Jevons paradox for code. The price of lines of code (in human engineer-hours) has decreased a lot, so there is a bunch of code that is now economically justifiable which wouldn't have been written before. For example, I can prompt several ad-hoc benchmarking scripts in 1-2 minutes to troubleshoot an issue which might have taken 10-20 minutes each by myself, allowing me to investigate many performance angles. Not everything gets committed to source control.
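The kind of throwaway benchmark script described above is cheap to generate; a minimal hand-written equivalent (the two candidate functions are hypothetical stand-ins for whatever is being investigated) might look like:

```python
import timeit

# Two hypothetical implementations being compared -- stand-ins for
# whatever code path is under investigation.
def join_concat(parts):
    return "".join(parts)

def plus_concat(parts):
    out = ""
    for p in parts:
        out += p
    return out

parts = ["x"] * 10_000

for fn in (join_concat, plus_concat):
    # Time 100 runs of each candidate and print the total wall time.
    elapsed = timeit.timeit(lambda: fn(parts), number=100)
    print(f"{fn.__name__}: {elapsed:.4f}s")
```

Nothing here needs to be production quality; the point is that a disposable script like this now costs a couple of minutes instead of twenty.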
Put another way, at least in my workflow and at my workplace, the volume of code has increased, and most of that increase comes from new code that would not have been written if not for AI, and a smaller portion is code that I would have written before AI but now let the AI write so I can focus on harder tasks. Of course, it's uneven penetration, AI helps more with tasks that are well-described in the training set (webapps, data science, Linux admin...) compared to e.g. issues arising from quirky internal architecture, Rust, etc.
At an individual level, I think it is for some people. Opus/Sonnet 4.5 can tackle pretty much any ticket I throw at it on a system I've worked on for nearly a decade. Struggles quite a bit with design, but I'm shit at that anyway.
It's much faster for me to just start with an agent, and I often don't have to write a line of code. YMMV.
Sonnet 3.7 wasn't quite at this level, but we are now. You still have to know what you're doing, mind you, and there's a lot of ceremony in tweaking workflows, much like there was for editors. It's not much different from instructing juniors.
Maybe he was correct in the extremely literal sense of AI producing more new lines of code than humans, because AI is no doubt very good at producing huge volumes of Stuff very quickly, but how much of that Stuff actually justifies its existence is another question entirely.
Why do people always stop this quote at the breath? The rest of it says that he still thinks they need tech employees.
> .... and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced
(He then said it would continue improving, but this was not in the 12 month prediction.)
I actually like Claude Code, but that was always a risky thing to say (actually, I recall him saying their software is 90% AI-produced), considering their CLI tool is literally infested with bugs. (Or at least it was last time I used it heavily. Maybe they've improved it since.)
Is this why everyone only seems to know the first half of Dario's quote? The guy in that video is commenting on a 40 second clip from twitter, not the original interview.
I'm curious what people think of quotes like these. Obviously it makes an explicit, falsifiable prediction. That prediction is false. There are so many reasons why someone could predict that it would be false. Is it just optimistic marketing speech, or do they really believe it themselves?
Everybody knows that marketing speech is optimistic. Which means if you give realistic estimates, then people are going to assume those are also optimistic.
What languages and frameworks? What is the domain space you're operating in? I use Cursor to help with some tasks, but mainly only use the autocomplete. It's great; no complaints. I just don't ever see being able to turn over anywhere close to 90% with the stuff we work on.
Hah. It can’t be “I need to spend more time to figure out how to use these tools better.” It is always “I’m just smarter than other people and have a higher standard.”
My stack is React/Express/Drizzle/Postgres/Node/Tailwind. It's built on Hetzner/AWS, which I terraformed with AI. Probably 90-95% of it is AI driven.
It's a private repo, and I won't make it open source just to prove it was written with AI, but I'd be happy to share the prompts. You can also visit the site, if you'd like: https://chipscompo.com/
The tools produce mediocre code that usually works only in the most technical sense of the word, and most developers are pretty shit at writing code that doesn't suck (myself included).
I think it's safe to say that people singularly focused on the business value of software are going to produce acceptable slop with AI.
I don't remember saying I worked with nextjs, shadcn, clerk (I don't even know what that one is), vercel, or even JS/TS, so I'm not sure how you can be right, but I should know better than to feed the trolls.
I suspect you do not know how to use AI for writing code. No offence intended - it is a journey for everyone.
You have to be set up with the right agentic coding tool, agent rules, agent tools (MCP servers), dynamic context acquisition, and a workflow (working with the agent from a plan rather than simply prompting and hoping for the best).
But if you're lazy and don't put in the effort to understand what you're working with and how to approach it with an engineering mindset, you'll be left on the outside complaining and telling people it's all hype.
Always the same answer: it's the user, not the AI being blown out of proportion. Tell me, where are all those great, amazing applications that were coded 95-100% by AI? Where are the great progress, the great new algorithms, the great new innovations hiding?
> For now, I’ll go dogfood my shiny new vibe-coded black box of a programming language on the Advent of Code problem (and as many of the 2025 puzzles as I can), and see what rough edges I can find. I expect them to be equal parts “not implemented yet” and “unexpected interactions of new PL features with the old ones”.
> If you’re willing to jump through some Python project dependency hoops, you can try to use FAWK too at your own risk, at Janiczek/fawk on GitHub.
That doesn't sound like some great success. It mostly compiles and doesn't explode. Also I wouldn't call a toy "innovation" or "revolution".
Thanks for this! I've been looking for a good guide to an LLM based workflow, but the modern style of YouTube coding videos really grates on me. I think I might even like this :D
This one is a bit old now so a number of things have changed (I mostly use Claude Code now, Dynamic context (Skills) etc...) but here's a brief TLDR I did early this year https://www.youtube.com/watch?v=dDSLw-6vR4o
How much time do you think you saved versus writing it yourself if you factored in the time you spent setting up your AI tooling, writing prompts, contexts etc?
1. I didn't say it was the best example; I replied to a comment asking me to "Post a repo", so I posted a repo. 2. Straw man argument. I was asked for a repo, I posted a repo, and clearly you didn't look at the code, as it's not an "AI code generator".
1. I didn’t ask for a repo.
2. Still wasn’t me. Maybe an AI agent can help you check usernames?
3. Sorry, a plugin for an AI code generator, which is even worse of an example.
When building complex multi-agent systems where each agent has its own tools, prompt, persona, etc., I've found LangGraph to be better (and easier) than AWS Bedrock and OpenAI's Agent framework.
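The pattern being described (each agent bundling its own prompt, tools, and persona, with a router dispatching tasks between them) can be sketched framework-free. All names below are hypothetical illustrations, not LangGraph's or any vendor's API:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical, framework-free sketch of the multi-agent pattern:
# each agent carries its own persona, system prompt, and tool set.

@dataclass
class Agent:
    name: str
    persona: str
    system_prompt: str
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def handle(self, task: str) -> str:
        # A real agent would call an LLM here; this stub just
        # dispatches to the first tool whose name appears in the task.
        for tool_name, tool in self.tools.items():
            if tool_name in task:
                return tool(task)
        return f"[{self.name}] no tool matched: {task}"

def route(agents: list[Agent], task: str) -> Agent:
    # Trivial router: pick the agent whose persona keyword appears
    # in the task; a real system would route with an LLM or a graph.
    for agent in agents:
        if agent.persona in task:
            return agent
    return agents[0]  # fall back to the first agent

researcher = Agent(
    name="researcher",
    persona="research",
    system_prompt="You find facts.",
    tools={"search": lambda t: f"search results for: {t}"},
)
coder = Agent(
    name="coder",
    persona="code",
    system_prompt="You write code.",
    tools={"run": lambda t: f"ran: {t}"},
)

task = "research and search the docs"
print(route([researcher, coder], task).handle(task))
```

Frameworks like LangGraph add state, cycles, and persistence on top of this basic shape, which is where the comparison between them starts to matter.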
Looks really cool. Probably want to add a .gitignore and get the node_modules folder out of the repo. Also, as far as the claim of being way smaller than other similar frameworks, those other frameworks are doing a lot more. It's a bit of apples to oranges. But yeah lightweight Agent frameworks have a time and place too.
At minimum, root volumes for the VMs. Theoretically, you could load immutable machine images from the network and run entirely off of in-memory filesystems if you persist nothing past instance shutdown (similar to how extremely cautious people might run Tails booted off USB on a laptop with no hard drive), but that won't actually save cost since memory is more expensive than disk anyway.
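As an illustration of the in-memory-filesystem approach, a size-capped tmpfs mount is a one-liner in `/etc/fstab` (the mount point and size here are arbitrary examples):

```
# /etc/fstab: mount a 2 GB in-memory tmpfs as scratch space (illustrative)
tmpfs  /srv/scratch  tmpfs  size=2G,mode=0755  0  0
```

Everything written there lives in RAM and disappears on shutdown, which is exactly the persist-nothing property described, and also exactly why it trades cheap disk for expensive memory.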
I don't think you can even technically do that in AWS. I don't think there is any way to detach the root volume from a running instance, or to boot from an immutable network image. However, for many server workloads, operating entirely from memory would be reasonable. Often you just need the operating system kernel, your server software, and maybe a monitoring agent, and all of that will be loaded into memory anyway.
Well, not to answer your question with a question, but what would you imagine backs all of those database services? Or, said another way, I'm not sure Corey Quinn is mapping the cost dependency graph correctly by giving this breakdown as mutually exclusive (from the standpoint of AWS internally).
I'm looking for 100 acres with a new-construction house that can't hear the road, for about 400k. Any takers? This was possible 10 years ago, lol. We all missed out.
I've got 3 kids, so no bungalows, and nothing that's in the ghetto, cuz SE DC almost certainly has cheap real estate -- and the highest murder rates in the US.
Pittsburgh, PA has plenty of houses under 100k. Here’s one that’s less than 250 miles away from you. Although this particular one is not a countryside house [1].
I'm not sure if you noticed the amount of content made in the past decade, but it's limitless. Each of the streaming services has countless new shows, and they all need, or needed, writers. With those streaming services came lower pay and more workers at minimum pay. We've been living through the "golden age" of TV in part due to the work of these writers.
You really think ChatGPT is capable of just plug-and-play? It'll be interesting to see the results. I'm rooting for its failure if it's tried, for many reasons, but especially because workers continue to be treated as trash in this country, and it'll get worse with yet another unregulated and half-baked technological alternative.
> We've been living through the "golden age" of TV in part due to the work of these writers.
I feel like the ratio of good to bad shows is heavily skewed towards bad.
Yes, there are more good shows out lately than there were in the past, but I think it's basically because of the volume of shows that are being pushed out at all times nowadays.
Looking at it from the perspective of Netflix, Disney, etc., they are being critically panned these days. Netflix in particular is a joke with how many shows they are producing that they cancel after one season.
Now, it's unfair to lay that all at the feet of the writers, but arguably only stuff written by some top percentage of writers is taking off.
So why would they want to raise the pay of all writers when a large majority of them are writing stuff that gets cancelled?
There will probably always be more shows you don’t like than shows you do. They aren’t all for you / me / that guy.
Every show gets canceled eventually, except Jeopardy and The Simpsons, I guess. The studio still owns the IP and can re-release or reboot it for as long as human life exists.
Industry never wants to raise wages for anyone! A handful of companies own all TV & film media, meaning it’s even harder for folks to negotiate. That’s why unions and strikes are necessary.
Don't get me wrong, I'm generally pro union as long as it is securing higher salaries and more benefits for workers and not just benefitting the union reps. I wish the writers well in this.
But this isn't just "I don't like these shows", it's "no one likes these shows, sometimes they are cancelled before the first season even finishes airing"
I think they would be in a better bargaining position if more shows weren't getting absolutely trashed by both critics and audiences over their writing, that's all.
Not every show is getting trashed. Some shows fail immediately and some don’t. This has been the case for a long time! Plenty of shows haven’t got 2 seasons going back many years.
What it seems like you are really asking for is that no (or not many) shows get canceled, or that all or most shows appeal to you / everyone. I don’t think that has been possible since the invention of cable, or certainly since the invention of YouTube.
There is too much content out there to waste time on content you don’t like. Honestly it’s a huge win for basically everyone.
Additionally every new show is competing against every show you’ve ever liked (including canceled shows) since they’re no longer relegated to reruns and syndication. You can probably get the whole series on DVD for a way lower price (I remember when each season of X Files was like $80 on DVD) or on a streaming service.
There are still new “mass appeal” shows and movies; they are often panned as being broad, formulaic slop, but they still do crazy numbers. Maybe it’s not actually hard to make content that appeals to most people, but maybe that’s not most content creators’ intention.
The 2008 strike led to way more reality TV. I don't think studios care about quality as long as it draws monetizable eyeballs. They're certainly going to try to replace writers with LLMs! It might even work. Just...not with any semblance of quality.
On the other hand, it seems a brief renaissance of quality sci-fi TV followed that strike. Though that could be the tax incentives Vancouver offered and the abundance of sci-fi writers who cut their teeth on Stargate.
ChatGPT is pretty bad at humor and wit. You might be able to randomly generate something that gets a chuckle, or copy something previously done, but it still takes a human to validate and approve.
> Regulate use of artificial intelligence on MBA-covered projects: AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI.
One of the union's demands going in was a prohibition on using AI in writing, which the studios denied, offering instead annual "meetings to discuss technological advancements".
> That such scripts would be ineligible for copyright protection seems like it would be a significant issue.
Only if there is zero human interaction beyond prompting. And even that is just the Copyright Office’s opinion of the law, not necessarily the actual law. [0]
[0] Which would normally be significant, because of Chevron deference, but with the increasing expectation that the Supreme Court will toss Chevron deference in a current case...