It has always baffled me how quickly, and how voraciously, people started to rely on privately owned AI systems.
AI is not something discovered by scientists and plucked out of the ether. It's engineered and controlled, for profit, by corporations which have demographics and KPIs. These companies don't owe you anything, and they make no promises.
If you're running a business that deeply relies on AI, you might as well add Sam Altman to your board of directors--because he has just as much control over your company as you do. If they have a bad quarter and need to increase rates by 1000%, your choices are to pay up or shut down.
This Mythos situation is just the beginning. Not only do they have everyone hooked, but they've actively stalled the personal skill growth of millions of people who fell into vibe-coding rather than genuinely learning. And now they have that choice: Pay up, or shut down.
The same corporations that insist upon private Maven repositories to control all code dependencies are nevertheless fine with establishing a massive dependency on a privately-held corporation in order to write software that hardly anyone in the organization understands. When I really think about this and how it plays out in the long run, I feel like I’m taking crazy pills.
People use private AI because it's hard work and expensive to provide yourself. But you're not that locked in, as xAI/OpenAI/Anthropic etc. seem pretty interchangeable for most purposes.
I can't run my business without electricity. Yet we don't fear its access being revoked. Sam makes the comparison of intelligence to electricity a lot. So we are on the path to these systems becoming utilities.
I don't know but likely not. Factories were powered by steam then, and had a "power plant" on site. So they didn't convert to electricity until it was reliable and guaranteed.
Was anything regulated in those times? You could legally buy humans at that time.
But that doesn't mean we live with the same standards. The lack of regulation in electricity led to a lot of deaths and disasters, which is why it was regulated.
But we don't live at the start of the 20th century; we live in 2026, and we must learn from the past instead of being hellbent on repeating it.
Comparing AI to electricity by focusing on just one particular aspect ("hey, it's like fuel, guys!") while completely ignoring all the structural differences between actual energy industries and big tech is really stupid.
I actually might want to subscribe to your newsletter, provided I read & enjoy your article. So why does the pop-up always interrupt me before the page has even finished loading?
If you insert an unobtrusive newsletter button 60% of the way through the article, perhaps I'll actually click it (or, more realistically, follow your RSS feed).
I work in a creative field, and we've started to get a lot of clients using AI to generate initial concepts for us to build upon. The problem is, they're not actually thinking about these concepts, they're just generating until they see something they like.
Then, we have meetings where we will ask a basic but specific question about what they want us to make, and we're just met with blank stares. They have no answers, because they've never actually thought about it.
And then everyone else needs to do the thinking for them.
This reminds me of what happened back in the early days of Google Translate. Lots of folks would bring very low quality automatic translations "for correction" only. For many it was a way to get a lower price, since in their minds it was cheaper to correct something that is "largely done" than to do the work from scratch. Oh how wrong they were, haha.
Precisely. I'm not an artist but have worked with some, and I do so with the basic assumption that the artist knows their shit and knows better than me. This client basically made a draft (or thinks they did) and asked you to fill the gaps, then went blank wondering how it is you're such a noob you can't even do your job. I'd honestly tell them to piss off and find better people to work with/for.
Going ahead without asking is a sure recipe for having the client tell you "Sorry, that's not at all what I want" and then having to start over again. Your creatives ask questions for a reason. What is it that made you pick this specific draft out of the slop pile as a good match for your brand? The color scheme? The composition? The atmosphere? The line art style? If you expect your creatives to just magically guess, and then get frustrated when the output is not what you had in mind, then it's hardly your creatives' fault.
Yup, people aren't mind-readers. And it can be very hard to predict what bits the client cares about and what they don't, so it's worth biasing towards asking (though I think it's worth emphasizing that 'I don't care, you choose' is a valid response). The worst clients are the ones who can't express what they want in the first place and then reject output without explaining what it is they did or didn't like about the result.
That said, it can be very hard to be a good client. Writing requirements (whether for art or engineering) is something that on average, people are very bad at. And often you will only find out you cared about something after you see it (oh god I am so bad at this, especially because it's often delayed, so I will go 'looks good, no notes', then like a day later go 'oh wait, actually...'), which is why having a healthy dialogue and rapid feedback loop is so valuable to any project.
Except, calling it a "tool" is exactly why OP feels bad. Simply phrasing it another way, i.e., "OP paid for a service to implement a feature he wanted," would completely remove the guilt and be more technically accurate.
IMO, the way we talk about using AI leads to a lot of confusion and needs to change.
Are there any with backlit keypads (that are truly purpose-built calculators)?
I see this as a nice feature but not a must-have. Not sure if the industry agrees, or if the industry just doesn't know how many people would spring for backlit keypads because none have existed so far.
This is a great point, and I agree with you. If a weight loss supplement brand were to use an AI influencer to market their product, it does raise questions about whether their supplement does in fact work on real people.
Nevertheless, things are trending more in this direction, and AI influencers will soon become the norm. Brands should be required to disclose when their marketing is AI-generated.
It's worth mentioning that AI videos on Prism (and on any platform) do not have to be purely prompt-to-creative. For example, a brand designer can take an existing billboard creative and then use AI to generate images of that creative at a train station, in the Louvre, at a bus stop, etc. (without actually going there and shooting images).
I understand the frustration but I don't understand the logic. The businesses who paid the tariffs (who were literally sent an invoice that they paid) should be the ones refunded.
How would the government even be able to determine if a business increased product prices due to tariffs vs other factors, or even if the business increased prices at all? What if the product is a loss leader and the company was fine just eating the expense?
Or what about a nefarious company that manufactures its stuff in Canada but used "tariffs" as an excuse to increase prices? What would they be refunded from?
Yes, you're almost there, just go one step further. Now you've got a big pile of money and no clear rules on where it should go. Who gets to decide where it will go? Given how this administration operates, where do you think it will go?
> I understand the frustration but I don't understand the logic. The businesses who paid the tariffs (who were literally sent an invoice that they paid) should be the ones refunded.
So say I'm the owner of Uncle Billy Bob's Autoparts and I ship from Madeupcountry. I billed you $500 extra for some new car part. The US government refunds me the tariffs they charged me to import my product to you, and now your taxes are going into my refund. Who wins in this scenario? They're effectively giving every country a free bonus. I wouldn't be surprised if some people got scammed by the tariffs by being overcharged.
There's no serious paper trail for any of this to meaningfully return lost revenue to the American consumer; I would rather not waste tax dollars on refunds.
I guess the only "winners" are maybe businesses that didn't pass the revenue loss on to the consumer? But how do you even correctly refund those businesses?
I'm okay with that, though I don't think most of my receipts highlight how much went into a tariff. Maybe for very specific purchases it did, but for most things I've bought over the past year there's no real way to gauge this.
At what point does this just wrap all the way back around to being genetic algorithms?
I'm also reminded of the old software called Formulize, which could take in a set of arbitrary data and find a function that described it. http://nutonian.wikidot.com/
The genetic algorithm comparison is actually pretty apt. Generate variations, evaluate fitness, keep the survivors. The main difference is that LLMs have a much richer prior about what "good" looks like, so the search space is dramatically smaller than random mutation.
But it raises an interesting question about where the fitness function comes from. In traditional GAs you define it explicitly. With LLM-generated code, the fitness function is often just "does it pass the tests" - which means the quality of your tests becomes the actual bottleneck, not the quality of the code generation.
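That loop can be sketched in a few lines. This is a hypothetical toy (the "programs" are just integers and `mutate` adds random noise; a real setup would have an LLM rewrite code), but the shape is the same: generate variations, score them against the tests, keep the survivors:

```python
import random

def fitness(candidate, tests):
    # "Does it pass the tests": fraction of tests the candidate satisfies.
    return sum(bool(t(candidate)) for t in tests) / len(tests)

def evolve(population, tests, mutate, generations=300, survivors=4):
    # Generate variations, evaluate fitness, keep the survivors.
    for _ in range(generations):
        offspring = [mutate(random.choice(population)) for _ in population]
        # Offspring listed first so that, on fitness ties, new variants
        # can displace their parents (allows neutral drift).
        population = sorted(offspring + population,
                            key=lambda c: fitness(c, tests),
                            reverse=True)[:survivors]
    return population[0]

# Toy search: the "spec" is the test suite; the only value passing all
# four tests is 12.
random.seed(42)
tests = [lambda x: x > 5, lambda x: x > 10,
         lambda x: x < 14, lambda x: x % 2 == 0]
best = evolve([0, 0, 0, 0], tests,
              mutate=lambda x: x + random.randint(-3, 3))
```

Note that the search is only as good as the test suite's ability to discriminate: collapse those four tests into one coarse pass/fail check and the population has no gradient to climb, which is exactly the "tests become the bottleneck" point.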
I wonder if that shifts the core skill of programming from "write correct code" to "write correct specifications." And if so, is that actually a new problem, or is it the same problem formal methods people have been working on for decades, just wearing a different hat?
Taking the metaphor further, the traditional way of programming was to manually encode the logic, and the new way is to program the environment and context to let the correct program emerge through the constraints. The stricter and more precise the constraints, the closer the result is to what you want.
So then, as you say, being able to specify exactly what you want becomes the central skill of programming - I mean, describing the behavior not in terms of the final code, which is an implementation detail, but in terms of how it interacts with a given environment. That was always the case, since in higher-level languages, including C, what we write is not the final code; the final code is technically the compiled result.
A difference I notice is that, now, even junior devs are expected to be the "mentor" to language models - teaching and guiding them to generate well-written code with plenty of tests, asserts, and other guardrails. In another comment someone said that breaking down a large program into smaller modules is useful - which is common sense, but we now have to guide an LLM to know and apply best practices, design patterns, useful tricks to improve code organization or performance, etc.
That means it would be valuable to codify best practices, as documentation in Markdown as well as described in code, as specs and tests. Programming is becoming meta-programming. We're shifting emphasis from assembling genetic code manually to preparing the environment for such code to evolve.