> But time passes and situations evolve. Ed Zitron, though, clearly does not.
> Over the last two years, he has called the top repeatedly: The AI bubble was definitely about to burst here, and here, and here, and here, and here, and here. His conclusion hasn’t changed, but his arguments have.
> The 2024 and 2025 articles make, basically, the business case against AI: that companies aren’t really using it, it isn’t adding value, and AI investors are betting that will change before they run out of cash. In 2026, the focus is much more on alleging widespread, Enron- or FTX-tier outright fraud.
> This is basically an admission that he can’t make the case in terms of the economics anymore. And in deciding how seriously to take his case in 2026, I think it’s valuable to read it in parallel with his case from 2024 and 2025.
Say what? This is exactly the progression that you'd expect if there was, in fact, outright fraud going on.
* Someone claims to be able to do <impossible thing>
* Critics call them on it
* Rather than folding, the hype machine grows and they start claiming to be doing the thing
* The critics start accusing them of fraud
Also, I note, it's a cute trick to start off by claiming "time passes and situations evolve. Ed Zitron, though, clearly does not" and then in the next paragraph object that "his conclusion hasn’t changed, but his arguments have".
I don't have a pony in this race and don't know who Ed Zitron is, but this article makes me suspect he's correct. Acting as if going from "they are wrong" to "they are wrong and lying" is "losing the plot" is anti-convincing.
[edit]
The ending is much stronger:
> I don’t actually think we need less skepticism in AI world. These companies are, indeed, run by people who are not very trustworthy, who often contradict each other or oversell their products.
> And the things they say they’re trying to do are outrageous; people have every right to object to it. Skepticism is more than warranted.
> But we desperately need better skepticism.
In that spirit, I would like to offer this observation. The one substantive difference the author highlights is the claim that generative AI is now offering value that renders the claims that it's all fraud questionable. I would argue that the value it offers is effectively plagiarism-as-a-service, and, just as with the infinite energy machines that secretly harvest power from the wiring of the building, compatible with the notion of fraud.
The main claim that TFA says the LLM-hype critic incorrectly called impossible is roughly "current generative AI business models can be successful".
The critic initially argued that there's no way to make money the way they were going, and then has subsequently concluded that any reports that they are making money are therefore fraudulent.
Not adjusting for inflation and quality really damages the integrity of the comparisons, as does cherry picking your base examples.
Taking your first example, the $47K 3-bedroom starter home with a yard: in 2026 that would be $200K (cumulative inflation is a little over 4x[1]). Picking a random US city[2] and looking on Zillow[3], I find that... yeah, you can get a comparable home today.
There are certainly arguments to be made about tradeoffs, quality issues (though those aren't as obvious as you might initially suppose[4]) and so on. But just listing unadjusted price comparisons like this is disingenuous.
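The adjustment being done above is just a CPI ratio applied to the old price. A minimal Python sketch, where the multiplier is the "a little over 4x" figure assumed from the comment, not an official BLS number:

```python
# Hypothetical sketch of CPI-based inflation adjustment.
# The multiplier is assumed for illustration, not an official figure.
def adjust_for_inflation(price_then, cpi_then, cpi_now):
    """Scale a historical price into today's dollars via the CPI ratio."""
    return price_then * (cpi_now / cpi_then)

home_price_then = 47_000     # the $47K starter home
multiplier = 4.25            # assumed cumulative inflation, ~4x per the comment
print(adjust_for_inflation(home_price_then, 1.0, multiplier))  # 199750.0, roughly $200K
```

This is the entire computation behind "in 2026 that would be $200K"; everything else in the debate is about whether the multiplier and the basket behind it are the right ones.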
Just "adjusting for inflation" isn't good enough. Minimum wage was $4/hr. Now it's $8. An elementary school custodian could afford a mortgage, a car, support a family of 4, and go on vacation on just that single income. They had healthcare and a pension. You could work over the summer and pay for a year of college at a state school.
Yes, the house now is more energy efficient. The car is safer. But if the price of everything went up 4x-10x, and the median income only went up 2x, AND you have to pay for more things that used to be included, then everything is more unaffordable, inflation be damned.
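The underlying arithmetic here is a ratio of multipliers: what matters for affordability is how the price multiple compares to the wage multiple. A quick sketch using the comment's own hypothetical numbers (4x-10x prices, 2x wages), not measured data:

```python
def affordability_change(price_multiplier, wage_multiplier):
    """How many times more work-hours a good costs than before.
    A result > 1 means the good became less affordable."""
    return price_multiplier / wage_multiplier

# Comment's hypotheticals: prices up 4x to 10x, median income up 2x.
print(affordability_change(4, 2))   # 2.0 -> twice as many work-hours needed
print(affordability_change(10, 2))  # 5.0 -> five times as many
```

On these numbers, "inflation-adjusted" comparisons that use a single headline deflator can show a good holding steady in real terms while it still doubles or quintuples in work-hours for the median earner.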
"An elementary school custodian could afford a mortgage, a car, support a family of 4 and go on vacation on just that single income. " Can I ask what you are basing this off of? I'm fairly skeptical of this claim.
The head custodian at my elementary school (in the 80s) was a friendly guy that loved talking with the kids. He'd talk about his life, ask us if we did anything fun over the summer, and tell us where he went with his wife and kids. The school was in a small town in the rural midwest, not particularly affluent, but not poor either.
The elementary school where my kids went (in a much more wealthy district) doesn't even have a custodian that I'm aware of, just a 3rd-party cleaning service that hires immigrants for as cheap as possible, and I'm sure doesn't offer healthcare or even full-time work. They have too many highly-paid administrators to afford a custodian.
Ok so you're moving the goalposts here. What you said was a custodian (who I'm assuming did a lot more than just clean), and now you've switched to a head custodian and are comparing him to contract workers. So you're comparing two different jobs in two different school districts.

Now is it possible that they've eliminated or reduced those positions? Sure, but you haven't actually shown that at all. I could easily counter with the fact that the school my wife teaches at still has a head custodian, but I don't know what his family situation is, and we'd just be trading anecdotes.

So do you have any actual evidence for your initial claim? Because the overall stats on wages and prices are the opposite of what you claim.
> Not adjusting for inflation and quality really damages the integrity of the comparisons, as does cherry picking your base examples.
But then they also need to make sure to match salaries to inflation too, because wages have not kept up with inflation, which is the reason for most of this.
Thanks for the feedback. The whole experience/site is mainly satirical/humorous. It's not inflation adjusted because I didn't want it to turn into a finance piece, tbh. But you're definitely right: if inflation adjusted, most cards would need to be rebuilt to better judge what we're getting today vs what we used to get.
Not adjusting for inflation makes it look completely stupid.
There's one good effort - comparing a car to the salary of a car-worker. But it only has half the comparison (what are today's car workers earning?). That's the comparison that Marx would recognize: how long do the people making something have to work to buy the thing they made?
The reference to Marx and (implicitly) the labor theory of value renders the GP unserious. Just looking at the people doing the assembly (and not all the people in the supply chain), ignoring the time aspect (I doubt there's any product that costs more than one of the people assembling it would expect to make over the time it would take them to single-handedly create the product from scratch), and so on.
It's a nonsensical position, meant to invoke a certain sort of feels, and nothing more.
> It's a nonsensical position, meant to invoke a certain sort of feels, and nothing more.
It may be so, but those feelings are part of the disconnect and they themselves cause all sorts of problems - or benefits, if directed appropriately, e.g. the IKEA effect is part of the same thing: we put in effort so we think the result is worth that effort.
Marx being wrong doesn't mean trade unions didn't rise for much of the 20th century on the basis of similar feelings, and being in error didn't stop the USSR from being one of the world's two superpowers for about half a century.
If the employees are satisfied their labours will bear fruits then they historically have not minded much that billionaires skim the cream; but when they are not satisfied, revolutions come. Those are messy and unpleasant, but they come anyway.
Sure. Lying to people and telling them they're somehow oppressed just because other people are more successful than they are can be used to stir up a revolution and get a lot of the useful idiots killed.
That's why it's important to call the nonsense out.
When someone's making billions, and the goods and services they get you to pay for are functionally mandatory even if theoretically avoidable, with costs going up because someone somewhere has apparently cornered the market, is it even a lie?
People asking "Why can't I afford a car from 9 months salary when my dad could? Why is a house 10x my salary not 3x like my dad's? Why can't my partner and I afford kids even on two incomes when our parents managed it on one?" don't want a series of ten hour-long degree-level lectures on economic theory to be able to understand the real answer, especially not when there's clearly a bunch of very rich people who keep loudly telling them that they ought to be happy because some stock index has gone up (when they don't own stocks) or that they've moved up the value chain (which if true doesn't answer why cars and houses are less affordable).
Give them an answer about Baumol's cost disease (easy to understand without much economics study), and suddenly you're back to Marxism but with different language, where Marx's "means of production" happen to be inalienable to the human form (or at least have been so far, plumbers are not yet quaking in terror at the videos of androids falling over while attempting to open a dishwasher).
Counterpoint for [1], which claims $15k in 1947 is $76,801.50 today. Here's a quote from Wikipedia about a 1949 luxury-brand car model:
The Cadillac Series 62 Coupe de Ville was introduced late in the 1949 model year.[4][9] Along with the Buick Roadmaster Riviera, and the Oldsmobile 98 Holiday, it was among the first pillarless hardtop coupes ever produced.[4][9] At $3,496 ($47,306 in 2025 dollars [5]) it was only a dollar less than the Series 62 convertible, and like the convertible, it came with power windows standard. It was luxuriously trimmed, with leather upholstery and chrome 'bows' in the headliner to simulate the ribs of a convertible top.[4][9]
And from the same source, the 1946 Crosley sedan was $905, three digits. (Page 813).
From your own [1]: "Cars priced at $3,496 in 1949 → $15,003.75 in 2026" and "Cars priced at $905 in 1947 → $4,633.69 in 2026". The 2026 prices your [1] link gives sound like the prices of new Chinese cars before tariffs, which Americans can't buy. And yes, I also noticed that $3,496 (1949) becoming $15k (2026) according to your link is completely out of sync with the number Wikipedia gave. Wikipedia agrees with https://www.bls.gov/data/inflation_calculator.htm, which says the $905 (1946) Crosley sedan would cost $13,899.66 in 2026. Where is 2026's new car for $14k? Oh, right, they're in China and just about here in Europe. Which brings me to my next counterpoint:
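The disagreement between the two calculators becomes obvious if you compute the cumulative multiplier each one implies from its own then/now price pair (figures are copied from the links as the comment reports them, not independently verified):

```python
def implied_multiplier(price_then, price_now):
    """Cumulative inflation factor implied by a then/now price pair."""
    return price_now / price_then

# The site's link [1]: $905 (1946/47) -> $4,633.69 (2026)
site_mult = implied_multiplier(905, 4633.69)
# BLS calculator, per the comment: $905 (1946) -> $13,899.66 (2026)
bls_mult = implied_multiplier(905, 13899.66)
print(round(site_mult, 2))  # ~5.12
print(round(bls_mult, 2))   # ~15.36, roughly 3x the site's factor
```

Two calculators that purport to measure the same 80-year interval differing by a factor of ~3 is exactly the basket-of-goods problem raised below.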
This matters because of all the debate about which goods and services go into the basket of goods that is used to measure inflation, and that the headline number is generally wrong in both directions at the same time once you look at subgroups who are more and less dependent on different goods and services.
This also means that as groups stop buying certain things, they move out of those baskets fractionally or entirely. Like, fewer kids being born means less spending on childcare means it's weighted less in the basket, even if the reason for fewer kids is more expensive childcare.
> A person transported from 1926 to 1976 would find the world nearly unrecognizable.
I'm not one of these, but I met some. One thing I recall is their comparing the early 70s to the late 20s (there were a lot of parallels), and expecting another great depression. The "how" of life had changed, but the "what" and "why" were largely the same.
To get some sense of this, read books written in the 1920s. Do you find the world they describe unrecognizable?
> A person transported from 1976 to 2026 would find it, after some orientation, quite familiar.
I am one of these, and I both agree and disagree. A lot has changed, but the core has remained the same.
The disappearance of cash is one of the biggest changes. Likewise the disappearance of tolerance for differences of opinion, privacy (showing ID to travel or buy things was something the bad guys did), distance (when you left somewhere you were gone, rather than "remote"), and third places.
There was no analog of LLMs except in fiction, but that was the case in 1926 as well.
We have to stop acting like these things "think"; it leads to really weird misinterpretations of the output as "meaning" things.
For example, they will occasionally replace "colour" with "color". Why? Because both occur in the training data in the "same role" but "color" is, apparently, more common[1]. You can also trick them into replacing things like "sardines" with "anchovies" (on pizza) and "head of lettuce" with "cabbage" in the context of rowboats.
They are lossy text compressing parrots and we are all suffering from a massive madness-of-crowds scale Eliza Effect.
This feels very different because there is no powerful political force trying to squelch discussion of colour or sardines. But there are lots of powerful folks trying to avoid discussions about Gaza or Palestine and related things. It's to their advantage to have tools hide those words.
It feels different because it's a political matter, but this is just probability doing what it does. Considering "Ukraine" is likely far more common in the training data, this isn't a terribly surprising outcome.
There's always two sides, but there's a power imbalance. One side includes two of the most powerful nation states (Israel and the US), plus the oligarch billionaires in the US on Israel's side. The people wanting to talk about Palestinians are much weaker nations, states, or forces, plus US-based people. I hesitate to say US popular opinion because we aren't all on the same page... but it feels like it is trending that way.
When a company packages this tool up and makes it part of their product they are taking some of that responsibility. The end user isn't supposed to need to know what an LLM is or how it works, that's what they're paying Canva for.
There are trillions of dollars riding on the claim that they do in fact think, and a bunch of people here have their lottery tickets tied up in that, so good luck with that.
Don’t worry, goalpost shifting will ensure that no matter how useful LLMs get, there will always be a large contingent of people who insist that anything non-human is not thinking, just sparkling cognition.
LLMs are not and will never be thinking, though, no matter how good they get. You could potentially argue that there is some level of cognition during the training phases (as long as that isn't being outsourced to humans, anyway), but generation of output is stochastic selection of the most common (or most highly ranked, if tuned) following patterns. They cannot learn things outside of training, nor do they actually "know" things. To use the parrot example from above, a parrot doesn't "know" what the words it's been taught to mimic are, nor does an LLM "know" what the concept of love is; it's just been trained to regurgitate the words that humans use to describe such a thing. This isn't a criticism of LLMs, that's what they're supposed to do, but it's certainly not cognition.
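For what it's worth, the stochastic next-token selection being described here boils down to softmax-then-sample over the model's output scores. A toy sketch (the two-word vocabulary and the logits are invented; real models do this over tens of thousands of tokens):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Softmax the logits and stochastically sample one token index."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                         # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                        # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Toy vocabulary: "color" gets a higher logit than "colour" because it is
# more common in the training data, so it wins most of the time.
vocab = ["color", "colour"]
counts = {w: 0 for w in vocab}
random.seed(0)
for _ in range(1000):
    counts[vocab[sample_next_token([2.0, 1.0])]] += 1
print(counts)  # "color" dominates, but "colour" still appears sometimes
```

This is why the "colour"/"color" swaps mentioned upthread happen occasionally rather than always: the less frequent variant still carries nonzero probability.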
You’re assuming that thinking requires learning, which I don’t necessarily agree with. Humans can have brain damage which inhibits the formation of long term memories, but such people can still function in the world. Would you say the thing such a person’s brain is doing is something other than thinking?
At any rate, just because the architecture of current LLMs doesn’t support learning at inference time does not constitute a fundamental limit that can never be changed, just a local maximum that has worked well to productize the approach.
And I’m quite certain that once systems that include post-training learning exist people like you will find a way to distinguish that from human learning, moving the goalposts again. You’re not arguing in good faith, you have an essentially religious opinion and you will stick to it as long as you are able.
> but generation of output is stachostic selection of most common (/highly ranked if tuned) following patterns
This is not an accurate description of the transformer architecture. I’m not surprised that you are misinformed about this.
For every "well of course, just...X, that's what everybody does" group-think argument there's a cogent case to be made for at least considering the alternatives. Even if you ultimately reject the alternatives and go with the crowd, you will be better off knowing the landscape.
Completely disagree; IMO we build far too many frameworks and alternatives (probably because it's fun) instead of a) enhancing the things that already exists to have the thing we want or b) just getting on with the actual work. The whole field would be much better off if we had half as many languages, half as many libraries, half as many build tools...
Every time you go off the beaten path, you're locking yourself into less documentation, more bugs (since there's less exploration of the dark corners), and fewer people you can go to for help. If you've got 20+ choices to make, picking the standard option is the right choice on average, so you can just do it and move on. You have finite attention, so doing a research report on every dependency means you're never actually working on the core problem.
The exceptions to this are when a) it becomes apparent that the standard tool doesn't actually fit your use case, or b) the standard tool significantly overlaps the core problem you're trying to solve.
> You have finite attention, so doing a research report on every dependency means you're never actually working on the core problem.
Reading that took five minutes and gave a good intro to the counter argument to Curry-Howard-all-the-things monomania. If having invested those five minutes, Lean still seems like the way to go (for whatever reason) fine. You are making a (closer to) informed choice, and likely better off than if you'd spent those five minutes doubling down on the conventional solution.
Most deviations from the group consensus are mistakes, but all progress comes from seeing past the group consensus. So making frequent small bets on peeking around your blinders is a good strategy.
Which shows the lie of the common engineering trope "use the right tool for the job."
It really should be "use the same tool that everyone else is using so you don't have decide which tool is the right one -- the herd made that decision for you!"
This would be a sort of convergence? They were both right in part (Chomsky that there was structure there, Norvig that it could be sussed out using brute force statistics). As is often the case, when two smart people who have thought a lot about something complicated disagree, the truth comes out when their unstated assumptions are finally exposed to the light.
In this case, Chomsky's LAD almost certainly relies on Baldwin-effect structure to get around the paucity of stimuli, and the LLMs are just getting to "the same place" through sheer masses of data.
Compared to the processing already done on astronomical data, yeah, it's essentially free.