If coding goes away, decades of experience become worthless instantly. Not all of it, but the vast majority, enough to justify starting over in another career.
In that world, it will have become more cost-effective for most companies to spend most of their budget on inference vendors and employ a few low-paid LLM wranglers, even if the final output is of terrible quality. No point in competing for that kind of employment experience with that kind of pay.
I really don't get this point of view at all. I acknowledge that two years into my quarter century of experience, most of what I knew was easily replaceable by the AI of today. After two decades of experience, however, syntax and specific algorithm and language knowledge was perhaps 10% of my value, nowhere near the vast majority.
The idea that low-paid LLM wranglers are going to push out the experienced engineers just doesn't wash. What I think is much more likely to happen is that the number of software engineers greatly reduces, but the remaining ones actually get paid more, because writing code is no longer the long pole, and having fewer minds designing the system at a high level will allow for more cohesive higher-level design and less focus on local artisanal code quality.
To be honest, AI is just the catalyst and excuse for unwinding the overhiring that happened during the gold rush of the last 20 years: the internet and smartphone revolutions, zero interest rates, and the pandemic effect.
> What I think is much more likely to happen is that the number of software engineers greatly reduces, but the remaining ones actually get paid more.
You realize that this is contradictory, right? If the number of competitors stays the same yet there are far fewer jobs, it's a buyer's market: companies only have to offer very little to find someone desperate enough.
> It will allow for more cohesive higher-level design and less focus on local artisanal code quality.
I don't buy this; LLM code is extremely bloated. It never reuses abstractions or comes up with novel designs to simplify systems. It can't say no; it just keeps bolting on code. In a very, very abstract sense you might be right, but that's outside the realm of engineering; that's product design.
You raise some good points about the economics; that's where I feel the least confident. But let me explain my reasoning.
Software has eaten the world, and thus the value of maintaining software has never been higher. Engineers are the people who understand how software works. Therefore, unless we move away from software, the value of software engineering remains high.
AI does not reduce software; it increases the amount of software, makes messier software, and generally increases the surface area of what needs to be maintained. I could be wrong, but as impressive as LLMs' language and code processing capabilities are, I believe there is a huge chasm that will likely never be crossed between the human intent of systems and their implementation that only human engineers can actually bridge. And even if I'm wrong, there's another headwind: as Simon Willison has pointed out, you can't hold an LLM accountable, and therefore corporate leaders are very unlikely to put AI in any position of power, because all the experience and levers they have for control are based on millennia of evolution and a shared understanding of human experience; in short they want a throat to choke.
The other factor is that while AI can clearly replace rote coding today, I think the demos oversell the utility of that software. Sure, it's fine to get started, but you quickly paint yourself into a corner if you attempt to run a business on that code over time, where UX cohesion, operational stability, and data integrity are paramount and not something that can be solved for without a lot of knowledge and guardrails.
So net of all this, where I think we land is that a lot of jobs based purely on knowledge of one slow-changing system and specific code syntax will go away, but there will be engineers who maintain all the same code; they'll just cover more scope with LLM-assisted tools. You put your finger on something: I do believe this moves engineering closer to product design. But I still think there's a huge amount on the engineering side that LLMs won't be able to do any time soon (for both the technical and social reasons stated above), and ultimately I don't see the boundary the same way you do; as software engineers we have always had to justify our systems by their real-world interaction.
> Software is everywhere and thus the value of maintaining software and the value of software engineering remains high.
This is an unfinished argument. What if we get coding agents to maintain software? What if frequent rewriting becomes cheap enough? Something that costs a tenth or a hundredth of your salary doesn't have to be good to make for a good business decision. Why do you think every native application has been replaced by slop made up of 10 layers of JS frameworks on top of Electron? Nothing matters as long as the product is cheap and fast to pump out, barely works on modern hardware, and makes dough.
> AI does not reduce software, it increases the amount of software.
There's not infinite demand for software. If AI inference costs take 50% of the prior payroll expenses while making a company twice as efficient, that means we need four times as much demand for software engineering for everyone to keep their job at the same salary. What new or improved subscription, app, website, device, or other software product does the world need right now? 99.9% of people use the same 5 apps. Most of their free time, attention, and disposable income has already been captured by trash that is unbeatable due to network effects. Are we all going to sell shitty LLM frontends to businesses until they notice they could have done the same thing themselves? There might be an explosion in new software, but no one there who cares to use it.
> I believe there is a huge chasm that will likely never be crossed between the human intent of systems and their implementation that only human engineers can actually bridge.
Maybe, or the AI might just be missing context. Think of all the unwritten culture, practices, and conversations the LLM hasn't been made aware of.
> In short they want a throat to choke.
You're responsible for those under you anyway, this doesn't help. Banking on those in charge being irrational forever in a way that is bad for business, and without ever noticing, is a bad gamble.
> The other factor is that while AI can clearly replace rote coding today [...], X is not something that can be solved for without a lot of knowledge and guardrails.
I'm talking about the world the AI maximalists predict is rapidly approaching, not where we are today. None of that knowledge and none of those guardrails are hard to grasp intellectually, compared to advanced mathematics, for example. Put your institutional knowledge in a .md file and add another agent that enforces guardrails in a loop. The only way out I see is a situation where there are complex patterns that we intuitively grasp but can't articulate; patterns that somehow span too much data or don't have enough examples for LLMs to pick up on.
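To be concrete about what I mean by "an agent that enforces guardrails in a loop", here's a minimal sketch. Everything in it is hypothetical (the file name, the prompts, the model choice); it just assumes an OpenAI-style chat completions API:

# Hypothetical sketch: a reviewer agent checks a coding agent's output
# against institutional rules from a .md file, looping until it passes.
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()
GUARDRAILS = open("institutional_knowledge.md").read()  # the .md file above

def ask(prompt: str) -> str:
    # One chat completion round-trip; the model name is just an example.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def agent_loop(task: str, max_rounds: int = 5) -> str | None:
    feedback = ""
    for _ in range(max_rounds):
        patch = ask(f"Write a patch for: {task}\n{feedback}")
        verdict = ask(
            f"Rules:\n{GUARDRAILS}\n\nPatch:\n{patch}\n\n"
            "Reply PASS if no rule is broken, otherwise list the violations."
        )
        if verdict.strip().startswith("PASS"):
            return patch
        feedback = f"Your previous attempt violated these rules:\n{verdict}"
    return None  # still failing after max_rounds; escalate to a human

Nothing in there requires a senior engineer to operate; that's exactly my point.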
> There will be engineers who maintain all the same code, they'll just cover more scope with LLM assisted tools.
So fewer jobs with lesser qualifications?
> Ultimately I don't see the boundary the same way you do, as software engineers we have always had to justify our systems by their real world interaction.
I've seen the way engineers design products, and I like products designed by engineers, but no layperson does. Laypeople don't want power, privacy, or agency. They don't care how things work, and they lie to themselves and others about what they really want. They don't want a native desktop app that streams high-quality audio from a self-hosted collection; they want a subscription that autoplays algorithmic slop through a React Native app on their iPhone. Do you really think you're better at appealing to/fleecing customers than people with actual UX, marketing, and behavioral psychology experience? This example only applies to mass-market software, but I'm sure it's not much different in other fields. Engineers keep thinking they could do everyone else's job, but they don't do so well in practice.
I'm sort of shocked at how little of my argument seemed to land with you in any way. I'm wondering how many cycles of software hype you have been through. Were you here for the PC revolution, the dot-com era, smartphone mass adoption?
There are a lot of what-ifs and worst-case scenarios in your reply that I simply don't find likely. I am not drinking the Kool-Aid from the AI maximalists or the doomers. I could be wrong, of course; no one can predict the future. But to me, the very real, novel, and broad utility of LLMs that we are just learning to harness, combined with the investment outlook, is leading to a mania that has people overestimating where things will land when the dust settles. If I'm wrong, then I guess I'll join the disenfranchised masses picking up pitchforks, but I'm not going to waste time worrying about that until I see more evidence that it's actually going that badly.
So far, what I see is that software engineers are the ones getting the most actual utility out of AI tooling. The reason is that it still requires precision of thought and specificity to get anything sustainable out of AI coding tools. Note this doesn't mean that engineers can design better apps than proper designers; rather, my point is that designers and other disciplines still cannot go much further than prototypes. They still need engineers to write the prompts, test the output, maintain the system, and debug things when they go wrong. I have worked long enough with large cross-functional teams to know that the vast majority of folks in non-engineering functions simply cannot get enough specificity and clarity into their requests to allow an LLM to turn them into a working system that will hold up over time. They will hit a wall very quickly where new features add bugs faster than they improve things, and the whole thing collapses under its own weight like a mansion of popsicle sticks.

And by the way, I don't consider AI-assisted coding to require less qualification than regular coding. Sure, you don't need to know as much syntax or as many algorithms, but you absolutely need data modeling, performance, reliability, debugging, consistency, and migration knowledge in order to use AI to contribute to any software that powers a real business. And yeah, you might need to develop your product and business sensibilities, but to me that's what's been happening throughout the history of computing. Wiring up ENIAC certainly required qualifications that were not needed for assembly programming, which in turn required certain things that C programmers did not need, and so forth; harnessing the increasing compute power and complexity required new qualifications. I don't think AI will ultimately be that different: it will change the way we work, but it won't replace what senior engineers do.
It's a well-known fact that LLMs struggle with basic arithmetic on large numbers; that's not what they are made for. Most chatbots will just call a Python interpreter in the background.
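For illustration, here's roughly what that looks like on the backend. This is a hypothetical sketch using the OpenAI-style function-calling API; the run_python tool name and the handling around it are my own invention:

# Hypothetical sketch: expose a Python tool so the model delegates arithmetic
# instead of predicting the digits token by token.
import json
from openai import OpenAI

client = OpenAI()
tools = [{
    "type": "function",
    "function": {
        "name": "run_python",
        "description": "Execute Python code and return the final expression's value.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": "What is 63157997633 * 6311490009?"}],
    tools=tools,
)
call = resp.choices[0].message.tool_calls[0]  # the model chose to call the tool
code = json.loads(call.function.arguments)["code"]
print(eval(code))  # a real backend runs this in a sandbox, not bare eval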
How do you know GPT-5 does not call a Python interpreter remotely on OpenAI's servers when you ask it to do arithmetic? Your prompt goes to their servers; you have no way to know what happens there.
The only way to be sure a model calls no tool is to run it locally and control the network.
I chose Mistral because it's a large SOTA model that allows me to disable tool calling.
If you try using ChatGPT, you'll see it goes into reasoning mode (it flashes the text "Working") and takes a while when asked to do arithmetic. Chatbots that don't forbid the user from viewing the reasoning chain of thought, such as DeepSeek or Mistral, reveal that they work exactly as I described:
====
Thought for 1s
Thinking
Okay, the user wants to multiply two large numbers: 63,157,997,633 and 6,311,490,009. Since these numbers are quite large, I should use the code interpreter to handle this calculation accurately. Let's write a simple piece of code to compute this product.
Code (python)
# Calculating the product of the two numbers
number1 = 63157997633
number2 = 6311490009
product = number1 * number2
product
Result
398621071049125148697
Thinking
Great, the code interpreter has given us the product of the two numbers. Now, let's present the result to the user.
====
Falsifying my claim is simple: Find a model that verifiably does basic arithmetic correctly without relying on any external tools.
I run Qwen3-32B locally without any tools (just llama.cpp) and it can do basic arithmetic for smaller numbers (like 134566), but I didn't check it for much larger numbers. I'm not at the PC right now, but trying it via OpenRouter on much larger numbers overflows the context and it stops without giving a result :)
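When I'm back at the PC, the test is easy to reproduce with the llama-cpp-python bindings. A rough sketch; the model path, context size, and token budget here are just examples, not what I actually ran:

# Rough sketch: query a local GGUF model with no tools and no network, so any
# correct answer has to come out of the weights alone.
from llama_cpp import Llama

llm = Llama(model_path="Qwen3-32B-Q4_K_M.gguf", n_ctx=8192)  # example path
out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "Compute 63157997633 * 6311490009. Digits only.",
    }],
    max_tokens=1024,
)
print(out["choices"][0]["message"]["content"])
print("expected:", 63157997633 * 6311490009)  # ground truth from Python itself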
I think the point of the line of questioning is to illustrate that "tools" like a code interpreter act as scratch space for models to do work in, because the reasoning/thinking process has limitations much like our own.
This policy has existed for (non-3D) printers and image editing programs for decades now. They will refuse to print currency or anything with a specific watermark.
Their business is making money. If they can build money printing machines, they're not going to refuse to use them because that's "not their business".
Do you really think they would be out donating trillions of dollars to other companies out of the goodness of their hearts, instead of just bankrupting everyone in the software industry if they could?
Huh? What kind of question is that? Who would waste the opportunity to win the AI race to become another Jira vendor? Everything has an opportunity cost. Didn't you already learn that?
Isn't that point kind of the counterpoint to the AI-first narrative?
With standard, human-driven operations, that's true about opportunity costs. But what we are told is that AI will replace humans, essentially saying that the opportunity cost becomes cash only. Then the question of why no AI lab starts a SaaS fully managed by AI becomes ever more interesting. Maybe because it's not that simple. Hence, it's not that easy in other companies either to just replace devs, engineers, and so on with AI.
Waste? They could become both an AI race winner AND a disruptive Jira vendor. Yet they don't. Why?
Being a successful Jira vendor would prove their point that software engineers are obsolete now. Why don't they do it already?
Swahili is a subcontinental lingua franca spoken by 200M people and growing quickly. Polish is spoken by a shrinking population in one country where English is understood anyway.
Yes, it's the same tech. There have been products on the market for a while, even though this press release tries to spin it as new and linked to heat pumps.
IIRC, BMW used to have a form of this in their cars about 25-30 years back, so that the HVAC could blow heat before the engine coolant was up to temperature after sitting overnight.
Integrated circuits don't "start with C". What does that even mean? C is just an interchangeable language the compiler frontend parses.
A microprocessor starts by executing the machine code at the reset vector. That machine code is generated by an assembler or a compiler backend; the processor has no idea what programming languages are.