> Anyone who has been doing this professionally will tell you that the "last step" is what takes the majority of time and effort.
This is true, and I bet there are thousands of people who are in this stage right now - having gotten there far faster than they would have without Claude Code - which makes me predict that the point made in the article will not age well. I think it’s just a matter of a bit more time before the deluge starts, something on the order of six more months.
I'd argue that LLMs are not yet capable of the last step, and because most sufficiently large AI-generated codebases are an unmaintainable mess, it's also very hard for a human developer to take over and go the last mile.
So what is the “last step”? I have one shotted a complete AWS CDK app to create infrastructure on an empty AWS account and deploy everything - networking, VPC endpoints, Docker based lambdas, databases, logging, monitoring alerts etc.
Yes, I know AWS well and was detailed about the requirements.
This lesson will not be learned more robustly in the US until one or both of the only two (edit: major) gas turbine manufacturers in the world (GE Vernova, Siemens Energy) suffer a tail risk event causing their failure. Backlog for new gas turbines is ~7 years, as of this comment. Continued production capacity is a function of how fragile those two companies are.
Yes, but their production volume is limited (imho) compared to the two companies I mentioned. Good callout regardless. I'll have a post put together to share here enumerating and comparing.
(I track global fossil generation production capacity as a component of tracking the overall rate of the global energy transition to clean energy and electrification, but some of my resources are simply an Excel spreadsheet)
People laugh at this, but anthracite genuinely is cleaner than other coal in every regard save CO2 emissions. People just think it's a joke because they've come to believe that CO2 is the only coal emission worth caring about, which definitely isn't true.
The oxymoronic term "clean coal" refers to carbon-capture-and-storage (CCS) technology [0], touted by the fossil fuel industry as a way to reduce greenhouse gas emissions and continue employing coal workers.
Thus far, it is incredibly expensive, at a time when solar and wind generation is cost-competitive with fossil-fuel plants which don't employ CCS. It is simply a dead end. You can generate more renewable energy, and store it, for far less than it takes to equip and operate CCS in conjunction with a fossil-fuel-fired plant. Only direct government subsidy makes it viable for a vanishingly small amount of GHG emissions.
"Clean coal" is like saying "a fast snail". Sure it can be faster than other snails, but even if it's twice as fast as the second fastest snail, it's still a snail and I'll still laugh when an ant runs circles around it.
No, the criticism isn't because people get caught up about CO2 -- it's because "cleaner than other coal" is a very low bar to meet to be calling something "clean" full stop.
Also "clean coal" is not a type of coal being burnt (although that does matter too) but pollution control systems added to coal plants.
Anthracite burns clean enough to use in a pizza oven. If your neighbor told you he was going to install a new furnace and offered you the choice of it burning wood pellets or anthracite, from a smell standpoint you should absolutely choose the anthracite.
Anthracite, in these regards, is very different from bituminous coal.
Undoubtedly. Doesn't change the fact that one kind of coal burns smokeless with a clean blue flame while the other will cover everything for miles in a film of soot and tar.
The smell of wood might be nice for flavor, but that's beside the point of anthracite being clean. The particulate pollution from wood burning is severe compared to the smoke you'll get off anthracite, which is virtually nonexistent.
Regardless of how good it might be at being the cleanest dirty thing, it's not what the US trope of "clean coal" refers to anyway. Anthracite is not used in the US to generate power because it is too expensive.
I do find the slow Sovietization of America funny, both mentally and economically. The year is 2050, autarky on energy has been established, the markets cut off, politics in the hands of erratic and geriatric leaders. Americans proudly drive 30 year old Fords the way people used to drive Ladas, while China exports green energy, cars and infrastructure to the world.
> The US (with Canada and Mexico) is self-sufficient with fossil fuel energy.
Oh boy, can't wait for the reenactment of the Third Reich intervening peacefully in Czechoslovakia, for their own safety and wellbeing of course, and not at all for the resources they're hoarding, the filthy hoarders.
It's awesome the US hasn't destabilized one of those neighbors and alienated the other one by declaring it the prospective 51st state. Soft power really is America's super power.
Imports into the US will experience inflation regardless. Semiconductor imports from East Asia are one example, since they depend on helium and energy from the Gulf.
tbh I’m kind of surprised the admin hasn’t enacted export tariffs on oil and gas already to take the pressure off car owners.
Wouldn’t do anything to the prices of imported products since the entire intl supply chain would be subject to even higher prices, but would reduce pressure at the pump
Sure, if we build out refining capacity for the next ten years. Then we're golden until we run out of the finite well of combustible dead algae. So if you think we can revitalize American manufacturing and resource processing starting now, and you're okay with those investments being worthless in a few decades, and you don't give a shit about rendering the planet significantly less habitable to human life, then yeah, we're totally self-sufficient with fossil fuels.
Or we could, you know, pull energy out of the air and sun, a strategy which will be viable until our star dies.
Alberta tar sands have hundreds of years worth of reserves. They're also expensive and incredibly dirty to extract and emit significantly more CO2 during processing than a light oil well will. (The tar is usually melted by heating with natural gas).
I'm quite confident cheap renewable alternatives will make the tar sands inviable far before they run out.
Some good news though, with the war in Iran the spiking oil price means that Albertan executives can ramp up operations and stay quite profitable! Push the price to 200/barrel and we'll just strip mine the entire province after airlifting out Calgary and Edmonton.
This assumes that there isn't profound demand destruction caused by the stratospheric energy prices.
Fossil fuels were already an inferior energy source when oil was $60/barrel. Electrification has been moving fast and accelerating, even at the pre-energy crisis prices.
Now? Current events are likely to take fossil fuels out back and give 'em the Old Yeller treatment with surprising speed.
I absolutely agree that, _in market-driven economies_, fossil fuels are slowly pricing themselves out of relevancy. The issue is that for some reason the US specifically subsidizes their usage, keeping them artificially cheap.
So, how many billions of newly printed debt is Trump willing to throw at the problem to keep those subsidies up so that he can be sheltered from the scary windmills?
I don't agree with redirecting towards fossil fuels instead of wind power, but it's not really paying TotalEnergies "for not building wind capacity"; it's more like changing what was ordered on behalf of the population: first the wind power capacity was ordered, then it was stalled and blocked, and now this president and TotalEnergies have agreed to change the order to another type of meal (investing in fossil fuel facilities within the US).
The US is unable to implement export controls, so consuming less than it creates doesn't mean there's enough, since producers will export if international prices are better.
Ignoring the part where just running everything off fossil fuel is suicidal for the planet, the US actually isn't self-sufficient with just fossil fuels.
Renewables are cheaper to build out, and we're facing a massive energy shortage. We need to be building renewable production as quickly as possible just to keep up with demand.
Insisting that we use obsolete, expensive and dirty technologies while the rest of the planet modernizes is just dumb.
That's a difficult question to answer. It shouldn't be, but it is. The reality is, SOC2 is a sales-enablement tool. You should:
* Run a SOC2/compliance program that is entirely disjoint from your security practice.
* Defer SOC2 until the work required to sell into customers demanding it (phone calls, questionnaires) exceeds the cost of obtaining SOC2.
* Prepare for SOC2 by making simple best-practices engineering decisions, in particular single-signon for virtually everything and protected branches for all your repositories.
* Do not allow SOC2 to force any engineering decisions that you would not have intuitively made yourself (this is a big risk with the evidence-gathering platforms like Drata, Delve, and Vanta).
* Assume your SOC2 Type I report will suffice as a first attestation (ie: buy you 1 year of time) with all your customers, and understand that you cannot fail to obtain a Type I; your Type I is guaranteed.
Over 5-6 years of discussing SOC2 with other security practitioners pretty intensively, the overwhelming weight of the evidence is that ~practically nobody actually reads SOC2 reports; they just check the box for each vendor and move on. Plan accordingly.
Since you know a lot about SOC: is SOC2 Type I (point in time) enough to close enterprise sales? Is it worth getting for a new startup (seems super simple)?
It's complicated. In theory, SOC2 forces you to do some important stuff, like define your threat model and say "I can mitigate against the threats and prove that my mitigations are in place". The problem is always that the companies that care don't need it but are burdened with it while the companies that don't care will just checkbox their way through it. It sort of enforces a very baseline security posture, in theory, but the major win of "We've thought our security through" is more of a choice - SOC2 can't actually force you to care.
A ton of these SOC2 vendors take all of the potential good parts of SOC2 out of the equation, building the threat models for you and then you just hook up your gsuite/ github and they check boxes for you or tell you to flip a policy here or there. Delve took this to the extreme by not even asking you to flip the checkboxes.
That said, it doesn't matter if it's legit. Everyone is SOC2, and part of being SOC2 is that the vendors whose products you purchase are SOC2, so it's not a choice - you have to be SOC2 if you want to sell (industry/ product specific, but at some point it'll be clear if it applies). If your goal is security, well, SOC2 is irrelevant.
Ultimately, you'll end up having a separate compliance team to manage SOC2 and you'll actively try to keep "real security" from it because real security has to change over time. You'll encode the absolute minimum possible into your compliance for that reason so that you can easily pass every year and then, if you care about security, you'll invest in that separately.
You can get a long, long way without SOC2; virtually every prospective customer you run into that asks for a SOC2 will have an alternate on-ramp for vendors without it, and the ones that don't will sign a contingent PO on your Type I, which (again) you are guaranteed to get.
The idea that SOC2 forces you to do important stuff gets it backwards; SOC2 documents your existing practice, and demands only extremely high-level controls that you can deliver in any number of ways. Your security practice should (minimally) inform your SOC2, not the other way around.
Yes, that's true. I edited my post to be a bit clearer about this. When you need a SOC2 is going to depend a lot on your business. Lots of companies can make exceptions very easily. Type I is easy; I would highly recommend starting there pretty much no matter what, since it'll be good practice before your Type II.
> The idea that SOC2 forces you to do important stuff gets it backwards;
It's the goal behind SOC2. You're assuming a company has a security practice that informs the SOC2 but I think the idea is that companies have no security practice and the SOC2 is what forces them to sit down and build one. What you're describing is more like what happens when a company that actually cares about security goes through SOC2 - you take what you have, put it into a NIST format, and map minimal controls from your practices to the CCs. Most companies have nothing to start with.
In my mind getting a clean report required three kinds of work:
1. Work that actively improved our security posture.
2. Work that didn't change much, but made our security posture easier to understand.
3. Busy work.
I think for most companies all three kinds of work will be required, but you can also make decisions that will push the percentages around. SOC 2 required us to start doing an annual security table top exercise. You could sit down, run a scenario, run it as fast as you can, and come up with a few pre-determined "improvements" that would help if you actually had that problem in the future. Or you could sit down and really put work into it, and see what works well and what doesn't.
As an example in our last tabletop I "exfiltrated" some data from one of our servers, and challenged the team to figure out what I'd done. The easy way out would have been for someone to say "We'll look at the logs and figure it out", but instead I asked them to actually try and find it. We discovered that the sheer volume of logs for that system made them hard to work with. So we made some changes to make them easier to work with and repeated the exercise later.
It could have been busy work, but instead we got real value from it.
Tangential to this but do ISO certifications make sense or are they security theater as well?
And another question: as a consumer, is there any certification that can meaningfully show whether a business takes its security seriously? Or is everything security theater in that respect, such that at some point we just have to trust the enterprise and look for other signals of security (blog posts that deep-dive into their security practices come to mind)?
Not really. As long as the current system persists, where the companies being audited are the auditors' paying clients, the conflict of interest is too high.
Also, in many countries the cost of getting breached is negligible, so many companies are willing to just hope for the best and pay out in case of the worst.
For enterprise sales you can get a SOC 2 Type I faster than any enterprise sale goes through. Typically, most enterprises are okay if you show them proof that you are "in the process" of getting the certification by showing them that you have signed up with one of those platforms (Delve, Vanta, etc.), so you would be okay to start only when you are about to close one of those enterprise deals.
Yeah, we got a signed letter of engagement from our auditor, which was enough to unlock a customer without having to go through any sidestepping process.
It’s fine for what it is: some light guardrails that attempt to nudge you towards answering “is this all just a house of cards that will obviously collapse under a light breeze”.
Getting a SOC2 doesn’t mean you’re amazing or secure or stable. If a customer says they’ll write you a fat check but they need you to have a SOC2, tell them you’ll get it within a year if they start paying. Otherwise don’t bother.
It basically shows clients that you are not doing wildly incompetent things with their data, or if you are, they can more easily sue you, since you probably lied to your auditor about it.
But it’s ultimately not up to you if you do it or not. If all of your potential clients demand it, it’s generally easier to get it than it is to get on the phone with all of your potential clients’ IT departments and explain why you don’t have it.
It's so wild to me that the world invests in US treasuries to fund a country that spends like a drunken sailor on wars and stock buybacks, with no plan to ever pay down the debt, nor to invest in its domestic future via infrastructure or state capacity. "You need another $200B for a conflict with no purpose or need? Sure, here you go."
I think working with the technology gives you powerful intuitions that improve your skill and lead to better outcomes, but you don't really notice that that's what's happening. Personally speaking - and I suspect this is true of most people in general - I have very poor recollections of what it was like to be really bad/new at things that I am now very skilled at.
If you ever try teaching someone something from the absolute ground up, you will quickly realize that a huge number of things you now believe are "standard assumptions" or "obvious" or "intuitive" are actually the result of a lot of learning you forgot you did.
I was about to comment that there was no amount of money I would take in return for spending time in prison but then I realized that of course that’s not true. It would be fun to create a survey that would show a visualization of where people tend to fall on the time/money axis for this.
It logically should track closely to the person's age and life expectancy and "legit job" earning potential. I would spend my years 20-29 in jail for $400M, wealth that I'd enjoy for the rest of my life, without hesitation. Heck, I'd have been willing to spend my twenties in prison for $40M. That's still life-changing never-have-to-work-again money. 30-39? I'd probably do it for $400M. 40-49? Hmm, now that's getting kind of tough. Maybe I'd do it for $1B. 50-59? I don't think I could physically do it, and given the number of years I had left, I probably wouldn't even be able to enjoy whatever sum we are talking about.
This is kind of why I want to make this survey now because there’s no way I’d spend a decade of my life in prison for any amount of money. I would do six months for $3M. I’d maybe do 12 for $10M. But beyond that…I don’t know, even a year seems like too long to be behind bars.
Would a guarantee of a different kind of prison environment change your mind? For example, prison conditions in the Netherlands versus the US? If you were allowed 6+ hours of positive, structured activities a day? Less than if you weren't in prison of course, but as we're talking about 'How much is it worth to you...'
Sure - I think it would decrease the amount of money I’d insist on, and/or increase the amount of time I’d tolerate, but only by a factor of 1.5 or so. Conversely, if I had to stay on an American supermax facility, the calculus would swing way in the other direction.
I disagree that it’s “just a text generator” but you are so right about how primed people are to think they’re talking to a person. One of my clients has gone all-in on openclaw: my god, the misunderstanding is profound. When I pointed out a particularly serious risk he’d opened up, he said, “it won’t do that, because I programmed it not to”. No, you tried to persuade it not to with a single instruction buried in a swamp of markdown files that the agent is itself changing!
I insist on the text generator nature of the thing. It’s just that we built harnesses to activate on certain sequences of text.
Think of it as three people in a room. One (the director), says: you, with the red shirt, you are now a plane copilot. You, with the blue shirt, you are now the captain. You are about to take off from New York to Honolulu. Action.
Red: Fuel checked, captain. Want me to start the engines?
Blue: yes please, let’s follow the procedure. Engines at 80%.
Red: I’m executing: raise the levers to 80%
Director: levers raised.
Red: I’m executing: read engine stats meters.
Director: Stats read engine ok, thrust ok, accelerating to V0.
Now pretend that when the director hears "I'm executing: raise the levers to 80%", instead of roleplaying, she actually issues a command to raise the engine levers of a plane to 80%. When she hears "I'm executing: read engine stats", she actually gets data from the plane and provides it to the actor.
See how text generation for a role play can actually be used to act on the world?
In this mind experiment, the human is the blue shirt, Opus 4-6 is the red and Claude code is the director.
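The director's role above is essentially what a tool-calling harness does: scan the generated text for "I'm executing: ..." patterns and run a real action instead of roleplaying. A minimal sketch in Python (all tool names and patterns here are hypothetical, not any real agent framework's API):

```python
import re

# Hypothetical "tools" the director can actually run.
def raise_levers(percent: int) -> str:
    return f"levers raised to {percent}%"

def read_engine_stats() -> str:
    return "engine ok, thrust ok"

# Map a pattern in the generated text to a real action.
TOOLS = [
    (re.compile(r"raise the levers to (\d+)%"),
     lambda m: raise_levers(int(m.group(1)))),
    (re.compile(r"read engine stats"),
     lambda m: read_engine_stats()),
]

def director(model_output: str) -> str:
    """If the text generator emits a recognized action, perform it
    and return the result so it can be fed back into the context."""
    for pattern, action in TOOLS:
        m = pattern.search(model_output)
        if m:
            return action(m)
    return ""  # plain roleplay, nothing to execute

print(director("I'm executing: raise the levers to 80%"))
```

The model never "does" anything; the harness acts whenever the generated text happens to match a trigger, then feeds the result back as more text.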
For context I've been an AI skeptic and am trying as hard as I can to continue to be.
I honestly think we've moved the goalposts. I'm saying this because, for the longest time, I thought that the chasm that AI couldn't cross was generality. By which I mean that you'd train a system, and it would work in that specific setting, and then you'd tweak just about anything at all, and it would fall over. Basically no AI technique truly generalized for the longest time. The new LLM techniques fall over in their own particular ways too, but it's increasingly difficult for even skeptics like me to deny that they provide meaningful value at least some of the time. And largely that's because they generalize so much better than previous systems (though not perfectly).
I've been playing with various models, as well as watching other team members do so. And I've seen Claude identify data races that have sat in our code base for nearly a decade, given a combination of a stack trace, access to the code, and a handful of human-written paragraphs about what the code is doing overall.
This isn't just a matter of adding harnesses. The fields of program analysis and program synthesis are old as dirt, and probably thousands of CS PhDs have cut their teeth trying to solve them. All of those systems had harnesses, but they weren't nearly as effective, as general, or as broad as what current frontier LLMs can do. And on top of it all, we're driving LLMs with inherently fuzzy natural language, which by definition requires high generality to avoid falling over simply due to the stochastic nature of how humans write prompts.
Now, I agree vehemently with the superficial point that LLMs are "just" text generators. But I think it's also increasingly missing the point given the empirical capabilities that the models clearly have. The real lesson of LLMs is not that they're somehow not text generators, it's that we as a species have somehow encoded intelligence into human language. And along with the new training regimes we've only just discovered how to unlock that.
> I thought that the chasm that AI couldn't cross was generality. By which I mean that you'd train a system, and it would work in that specific setting, and then you'd tweak just about anything at all, and it would fall over. Basically no AI technique truly generalized for the longest time.
That is still true, though: transformers didn't cross into generality, they just made the problems you can train an AI on much bigger.
So instead of making a general AI, you make an AI that has trained on basically everything. As soon as you move far enough away from everything that is on the internet, or close enough to something it's overtrained on, like memes, it fails spectacularly. But of course most things exist in some form on the internet, so it can do quite a lot.
The difference between this and a general intelligence like humans is that humans were trained primarily for jungles and woodlands thousands of years ago, yet we can still navigate modern society with those genes, using our general ability to adapt to and understand new systems. An AI trained on jungle and woodland survival wouldn't generalize to modern society the way the human model does.
And this still makes LLMs fundamentally different from how human intelligence works.
Iteration is inherent to how computers work. There's nothing new or interesting about this.
The question is who prunes the space of possible answers. If the LLM spews things at you until it gets one right, then sure, you're in the scenario you outlined (and much less interesting). If it ultimately presents one option to the human, and that option is correct, then that's much more interesting. Even if the process is "monkeys on keyboards", does it matter?
There are plenty of optimization and verification algorithms that rely on "try things at random until you find one that works", but before modern LLMs no one accused these things of being monkeys on keyboards, despite it being literally what these things are.
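Generate-and-verify is a standard pattern in those algorithms. A toy sketch of "draw candidates at random, keep the first one the verifier accepts" (the problem here is made up for illustration):

```python
import random

def random_search(is_valid, sample, max_tries=10_000, seed=0):
    """Draw candidates at random and return the first one that
    passes the verifier -- 'monkeys on keyboards' plus a check."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        candidate = sample(rng)
        if is_valid(candidate):
            return candidate
    return None  # give up after the budget is spent

# Toy problem: find an integer whose square ends in 21.
result = random_search(
    is_valid=lambda n: (n * n) % 100 == 21,
    sample=lambda rng: rng.randrange(1, 1000),
)
print(result)
```

The generator is dumb; all the pruning lives in `is_valid`. Whether the final answer came from insight or from blind sampling is invisible to whoever only sees the one surviving candidate, which is the point being made above.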
Of course it doesn't matter indeed. What I was hinting at is if you forget all the times the LLM was wrong and just remember that one time it was right it makes it seem much more magical than it actually might be.
Also, how were the data races significant if nobody noticed them for a decade? Were you all just coming to work and saying "jeez, I don't know why this keeps happening" until the LLM found them for you?
I agree with your points. Answering your one question for posterity:
> Also how were the data races significant if nobody noticed them for a decade ?
They only replicated in our CI, so it was mainly an annoyance for those of us doing release engineering (because when you run ~150 jobs you'll inevitably get ~2-4 failures). So it's not that no one noticed, but it was always a matter of prioritization vs other things we were working on at the time.
But that doesn't mean they got zero effort put into them. We tried multiple times to replicate, perhaps a total of 10-20 human hours over a decade or so (spread out between maybe 3 people, all CS PhDs), and never got close enough to a smoking gun to develop a theory of the bug (and therefore, not able to develop a fix).
To be clear, I don't think this "proves" anything one way or another, as it's only one data point. But given this is a team of CS PhDs intimately familiar with tools for race detection and debugging, it's notable that the tools meaningfully helped us debug this.
> No, you tried to persuade it not to with a single instruction
Even "persuade" is too strong a word. These things don't have the motivation needed for persuasion to be a thing. What your client did was put one data point into the context that the model uses to generate the next tokens. If that one data point doesn't shift the context enough to make it produce an output that corresponds to it, then it won't. That's it, no sentience involved.
> The engineering confidence this gives for actual planetary defense is massive.
Is it? Isn’t it the case that we can’t even detect the vast majority of objects on a potentially problematic intersection path with earth? I feel like the most likely scenario is that by the time we realize we’re about to get slammed by an asteroid, it’s way too late.
Yes? Rubin is supposed to contribute, and more broadly we have more and better "eyes" on the night sky than ever before. There's always the opportunity for more tracking, but tracking without being able to do anything about it would've been pointless.
Detection is still the weak link, that part is true. But the equation is shifting. Surveys like NASA’s NEOWISE mission and the upcoming NEO Surveyor mission are specifically aimed at finding those missing near-Earth objects earlier.
The point of DART mission wasn’t that we can deflect every asteroid tomorrow. It was to prove that physics and guidance actually work in space. Now the playbook is clearer: detect earlier, then nudge early.
If you get even a few years of warning, a tiny velocity change compounds into a huge miss distance. That’s the real takeaway.
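The compounding is roughly Δv times warning time. A back-of-the-envelope sketch with illustrative numbers (a ~1 mm/s nudge and five years of lead time; this ignores the orbital-period effects that tend to amplify an along-track Δv even further):

```python
# First-order estimate: a small velocity change, left to act
# over the warning time, becomes the miss distance.
delta_v = 0.001                    # m/s, roughly a DART-scale nudge
warning = 5 * 365.25 * 86400       # seconds in five years
miss = delta_v * warning           # metres

print(f"{miss / 1000:.0f} km")     # ~158 km of displacement
```

Against Earth's ~12,700 km diameter that alone isn't a guaranteed miss, which is exactly why the "detect earlier" half of the playbook matters: double the warning time (or the nudge) and the displacement doubles with it.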
I feel zero sense of sadness about how things used to be. I feel like the change that sucked the most was when software engineering went from something that nerds did because they were passionate about programming, to techbros who were just in it for the money. We lost the idealism of the web a long time ago and the current swamp with apex reptiles like Zuckerberg is what we have now. It became all about the bottom line a long time ago.
The two emotions I personally feel are fear and excitement. Fear that the machines will soon replace me. Excitement about the things I can build now and the opportunities I’m racing towards. I can’t say it’s the most enjoyable experience. The combo is hellish on sleep. But the excitement balances things out a bit.
Maybe I’d feel a sense of sadness if I didn’t feel such urgency to try and ride this tsunami instead of being totally swept away by it.
I see developers talking about this idea of intense and unimaginable excitement about AI. It seems orgasmic for them, like something even the hardest drugs couldn't deliver. I find it very strange. What exactly is so exciting? I'm not disagreeing, but when you say "opportunities I'm racing towards," what does that mean? This idea of "racing towards" sounds so frenetic that I struggle to know what it could mean. What I see people doing with AI is making slop and CRUD apps and maybe some employee-replacement systems, but I don't see the transcendental experience that people are describing. I could see a mortgage collapse or something like that, maybe that's what is so exciting? I don't know.
> What exactly is so exciting? I'm not disagreeing but when you say "opportunities I'm racing towards," what does that mean? This idea of "racing towards" sounds so frenetic
For me specifically it means two products, one that is something I have been working on for a long time, well before the Claude Code era, and another that is more of a passion project in the music space. Both have been vastly accelerated by these tools. The reason I say “racing” is because I suspect there are competitors in both spaces who are also making great progress because of these tools, so I feel this intense pressure to get to launch day, especially for the first project.
And yes it is very frenetic, and it’s certainly taking a toll on me. I’m self-employed, with a family to support, and I’m deeply worried about where this is all going, which is also fuelling this intense drive.
A few years ago I felt secure in my expertise and confident of my economic future. Not any more. In all honesty, I would happily trade the fear and excitement I feel now for the confidence and contentment I felt then. I certainly slept better. But that’s not the world we live in. I don’t know if my attempts to create a more secure future will work, but at least I will be able to say I tried as hard as I was able.
Maybe because it's a non-issue. I saw that those improvements are on the order of microseconds, while the transfer time of a page is measured in tenths of a second or even several seconds. Even a game engine has something like 15 ms to get a frame ready (60 Hz).
> the total performance improvement is 53%. That's significant.
This percentage is meaningless on its own. It's 4 ms shaved off a 7 ms process. You would need to time a whole flow (and I believe databases would add a lot to it, especially with network latency) to figure out how significant the performance improvement actually is. And that's without considering whether the code change conflicts with some architectural change that is being planned.
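This is Amdahl's law in miniature: the end-to-end win is capped by the fraction of total time the optimized step occupies. A quick sketch with assumed numbers (a 7 ms step cut to 3 ms inside a hypothetical 200 ms request; both totals are made up for illustration):

```python
# Amdahl's law with illustrative numbers: a big local win on one
# step can be a tiny win for the whole flow.
step_before, step_after = 7.0, 3.0   # ms, the optimized step
request_total = 200.0                # ms, hypothetical whole request
                                     # (network, database, rendering...)

local_gain = 1 - step_after / step_before
overall_gain = (step_before - step_after) / request_total
print(f"local: {local_gain:.0%}, end-to-end: {overall_gain:.0%}")
```

A "57% faster" headline for the step becomes a ~2% improvement for the request, which is why timing the whole flow is the only honest way to report the gain.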
Well, I have a backlog of at least 20 graveyard game projects that I stopped working on from one frustration or another over the past 20 years, or getting excited by a new exciting idea and leaving it alone, that I wouldn't mind resurrecting and finally putting some of them out there. Even if not a ton of people play them.
In fact, with it being easier to get them out there, I might care less that they should be marketable and have a chance to make serious money, as opposed to when I was sinking hundreds of hours into them and second-guessing what direction I should take the games in to make them better.
The art wasn't the problem (the art wasn't great, but I could make functional art at least), it was finding the time and energy and focus to see them through to completion (focus has always been a problem for me, but it's been even worse now that I'm an adult with other responsibilities).
And that hasn't always been the issue, I did release about a dozen games back in the day (although I haven't in quite a few years at this point).
Of course someone may say 'well that's slop then', and yeah, maybe by your standards, sure. These games aren't and never were going to be the next Slay The Spire or Balatro. But people can and do enjoy playing them, and not every game needs to be the next big hit to be worth putting out into the world, just like not every book needs to be the next 1984 or Great Gatsby.
Money, opportunity, status. It is all status games. Think of it as a nuclear war on the old order, with new players trying to take the niche. Or maybe the Reds killing the Whites and taking over Russia?
> Excitement about the things I can build now and the opportunities I’m racing towards.
What opportunities? Anything you spend effort on, like PMF, discovery, etc., I can now clone with a few bucks of Claude Code and charge less than you for the same product, at the same quality level :-/
Where is the opportunity here? Technology and knowledge used to be the moat a startup or bootstrapped individual could use to produce a sustainable business.
Why exactly are you excited about producing something that can be cloned for less cost than it took you? Especially as the quality will be almost exactly the same?
If I say "doing $FOO is a losing proposition", and I believe what I say, why on earth would I then move on to actually doing $FOO?
I am pointing out that there is nothing to be gained by joining this recursive race - anything you produce using LLMs I can clone using LLMs, but anything I produce using LLMs can be cloned by someone else, using LLMs.
Why would you assume that I want to insert myself into this recursively descending race to the bottom?
If cloning a product with LLMs is a losing proposition, why would anyone do it, and if nobody does it, isn't your original assertion false? Any argument relying on everyone cloning things with LLMs doesn't work if everyone doesn't clone things with LLMs.
> If cloning a product with LLMs is a losing proposition, why would anyone do it,
Because they don't yet know that it is a losing proposition?
> and if nobody does it, isn't your original assertion false?
False dichotomy: it isn't the all-or-nothing scenario you present. It's that there are enough cloners to make every race a race to the bottom in a matter of days.
> Any argument relying on everyone cloning things with LLMs doesn't work if everyone doesn't clone things with LLMs.
I think the rise of Facebook was possibly my first sense that our victory for "open" on the web was going to be short-lived, e.g. our (well, not mine, I never used it) comms were moving to proprietary platforms.
Then with AWS our infra was moving to proprietary platforms. Now our dev tools are moving to expensive proprietary platforms.
Combined with widespread enshittification, we've handed nearly everything to the tech bros now.
According to Ryan Peterson, the CEO of Flexport, there was a large increase in the number of foreign companies registered as the "importer of record" in the US as a result of the tariffs. On the Odd Lots podcast, he stated this was due to fraud: companies set up subsidiary corps in the US, which then imported goods from their parent/sibling/related companies at much lower prices than market value. Because tariffs are a percentage of the value, this made them lower. Then the subsidiary could turn around and sell it in the US at market rates.