Thank you for saying this. It’s hard, but I’ve learned it’s a lot better to approach new information (and thus, articles) with curiosity, rather than skepticism.
I'm bullish on Claude. It will see a surge in users, and will likely surpass Perplexity this year. However, I don't think it will catch up to even Meta AI (which has 10x the number of users) this year.
I use Claude. I use Codex. I've never heard of or used Meta AI. Nor do I have a Facebook account. Never have, never will.
I am also a software developer. So while the number of "people" that use one AI or another may be higher than either of these, it's not a useful metric for me.
That's fine. I'm not making a value judgement about which LLMs you should use, if any.
I'm only pushing back against someone thinking "oh HN talks about Claude a lot, therefore Claude must be extremely popular". The information bubble is a real problem.
It's probably true that Anthropic's revenue is booming. But we need massive grains of salt:
a) they are private and revenue numbers for private companies are hopelessly unreliable, and
b) they are planning an IPO, so there's an extra incentive to big up the numbers. Anthropic always brings up ARR, which is very gameable when the year hasn't ended yet
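To make the ARR point concrete, here's a toy sketch with made-up numbers (not Anthropic's actual figures): ARR is typically the latest month's revenue annualized, so for a company growing fast mid-year, the headline number lands far above what was actually booked.

```python
# Toy illustration of why "ARR" flatters a fast-growing company.
# Numbers are invented; 20% month-over-month growth for one year.
monthly = [10 * 1.2**i for i in range(12)]

arr = monthly[-1] * 12   # headline ARR: last month's revenue x 12
actual = sum(monthly)    # revenue actually booked over those 12 months

print(round(arr), round(actual))  # ARR comes out more than double actual
```

Same company, same year, but the annualized figure is over 2x the trailing revenue, which is exactly the gap a mid-year press release can exploit.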
LLMs can build anything. The real question is what is worth building, and how it’s delivered. That is what is still human. LLMs, by nature of not being human, cannot understand humans as well as other humans can. (See every attempt at using an LLM as a therapist)
In short: LLMs will eventually be able to architect software. But it's still just a tool.
This is only possibly true if one of two things is true:
1. All new software can be made up of preexisting patterns of software that can be composed. I.e., there is no such thing as "novel" software; it's all just composition of existing software.
2. LLMs are capable of emergent intelligence, allowing them to express patterns that they were not trained on.
I am extremely skeptical that either of these is true.
> And a horse breeder was important to transportation until the 1920s, but it doesn't mean their job was transportation. They didn't magically become great truck drivers.
Again: unrelated and pointless analogy. The horse breeder would be analogous to chipmakers or companies that make computers. Turns out they have more of a job than ever. They don’t need to “become truck drivers.”
> Programmers do not deliver products, they deliver code to make products.
That’s not even a little bit true. Programmers deliver product every day: see every single startup on the planet, and most companies.
Moreover, you said programmer. I didn’t.
I said software engineer/architect, as that was what the parent comment asked.
I chose my words intentionally. I am referring to people who engage in the act of engineering or architecting software, which is definitely not limited to writing code.
Yes, a pure programmer (aka a researcher or a junior programmer) may not fare as well, for the reasons you mentioned.
But that was never who we were discussing.
If you still think the code is the point, I’m not sure we’re going to see eye to eye, and I’m going to just agree to disagree. And if that’s the case, then you’re right: you may be left behind, keyboard in hand.
It's well understood that programming interviews are a pretty shitty tool. They're a proxy for checking whether you have the basic skills required to work with a computer. Notably, most companies don't rely on these alone; they have behavioral questions, architecture questions, etc. Have you ever done an interview at the companies you're talking about? They're 8 hours, lol; maybe 1 is spent programming.
But it's just very obvious to any software engineer worth anything that code is just one part of the job, and it's usually somewhere in the middle of a process. Understanding customer requirements, making technical decisions, maintaining the codebase, reviewing code changes and providing feedback, responding to incidents, deciding what work to do or not to do, deciding when a constraint has to be broken, etc. There are a billion things that aren't "typing code" that an engineer does every day. To deny this is absurd to anyone who lives every day doing those things.
And what do you derive from that fact? The position is that coding is only one portion of the job. "But there's a coding interview" was used to rebut this position. I have pointed out that the coding interview is a fraction of the process, once again indicating that the job involves much more than coding.
So you saying "but there's a coding interview" again... who cares? Why is that relevant?
Everybody who works for a salary cares. You can lament that coding is just 1% of the work, but it's irrelevant what percentage is "real" coding work when you can't pass that coding round and don't get hired.
I have literally no clue what point you're trying to string together. I tried to refocus things to the topic at hand but you're just saying completely irrelevant things. What is your point?
I'm genuinely blown away at the attitude lately that developers spend their time programming, that our primary value is code. I guess because we tend to be organizationally isolated, people just have no idea? But like... it's so absurd to anyone who does the job. It's like thinking that a PM's primary role is assigning tickets: just so obviously false.
I think there's some resentment. I've seen repeatedly now people essentially celebrating that "tech bros" are finally going to see their salaries crash or whatever. It's pretty sick, but I've noticed it quite a lot.
> Is that why most prestigious jobs grilled you like a devil on algos/system design?
No. That’s because interviews have always sucked, and have always been terrible predictors of how you do on the job. We just never had a better way of deciding except paying for a project.
> That’s just nonsense. It’s like saying “delivering product was always the most important thing, not drinking water”.
That’s… not an argument? It’s not even a strawman, it’s just unrelated.
The thing a customer has always paid for was the end product. Not the code. This is absolutely trivial to see, since a customer has never asked to read the code.
> No. That’s because interviews have always sucked, and have always been terrible predictors of how you do on the job. We just never had a better way of deciding except paying for a project.
Who cares? They're here, and they will stay here for the foreseeable future.
> That’s… not an argument? It’s not even a strawman, it’s just unrelated.
> The thing a customer has always paid for was the end product. Not the code. This is absolutely trivial to see, since a customer has never asked to read the code.
Yeah, and they didn’t pay for the water that you drank. Without which, you know, you’ll fucking die. Code is part of the package, just like you eating and shitting in the process.
A software engineer will be a person who inspects the AI's work, same as a building inspector today. A software architect will co-sign on someone's printed-up AI plans, same as a building architect today. Some will be in-house, some will do contract work, and some will be artists trying to create something special, same as today. The brute labor is automated away, and the creativity (and liability) is captured by humans.
> It's a tool, but one that product or C levels can use directly as I see it?
Wait, I thought product and C-level people are so busy all the time that they can't fart without a calendar invite, but now you say they have time to completely replace a whole org of engineers?
The commercial solutions probably don't work because they don't use the best SOTA models and/or sully the context with all kinds of guardrails and role-playing nonsense, but if you just open a new chat window in your LLM of choice (set to the highest thinking paid-tier model), it gives you truly excellent therapist advice.
In fact in many ways the LLM therapist is actually better than the human, because e.g. you can dump a huge, detailed rant in the chat and it will actually listen to (read) every word you said.
Please, please, please don’t make this mistake. It is not a therapist. At best, it might be a facsimile of a life coach, but it does not have your best interests in mind.
It is easy to convince and trivial to make obsequious.
That is not what a therapist does. There’s a reason they spend thousands of hours in training; that is not an exaggeration.
Humans are complex. An LLM cannot parse that level of complexity.
You seem to think therapists are only for those in dire straits. Yes, if you're at that point, definitely speak to a human. But there are many ordinary things for which "drop-in" therapist advice is also useful. For me: mild road rage, social anxiety, processing embarrassment from past events, etc.
The tools and reframing that LLMs have given me (Gemini 3.0/3.1 Pro) have been extremely effective and have genuinely improved my life. These things don't even cross the threshold to be worth the effort to find and speak to an actual therapist.
I never said therapists were only for those in crisis; that is a misreading of my argument entirely.
An LLM cannot parse the complexity of your situation. Period. It is literally incapable of doing that, because it does not have any idea what it is like to be human.
Therapy is not an objective science; it is, in many ways, subjective, and the therapeutic relationship is by far the most important part.
I am not saying LLMs are not useful for helping people parse their emotions or understand themselves better. But that is not therapy, in the same way that using an app built for CBT is not, in and of itself, therapy. It is one tool in a therapist’s toolbox, and will not be the right tool for all patients.
That doesn’t mean it isn’t helpful.
But an LLM is not a therapist. The fact that you can trivially convince it to believe things that are absolutely untrue is precisely why, for one simple example.
As you said earlier, therapists are (thoroughly) trained on how to best handle situations. Just 'being human' (and thus empathizing) may not be such a big part of the job as you seem to believe.
Training LLMs we can do.
Though it might be important for the patient to believe that the therapist is empathizing, so that may give AI therapy an inherent disadvantage (depending on the patient's view of AI).
Socialization with other humans has so many benefits for happiness, mental health, and longevity. Conversely, interaction with LLMs often leads to AI psychosis and harms mental health. IMO, this is pretty strong evidence that interaction with LLMs is not similar to socialization with real humans, and a pretty good indicator that LLM “therapy” is significantly less helpful or even harmful than human-driven therapy.
While I agree with you, I also find that an LLM can help me organize my thoughts and come to realizations that I just didn't get to, because I hadn't explained verbally what I am thinking and feeling. Definitely not a substitute for human interaction and relationships, which can be fulfilling in many, many ways LLMs are not, but LLMs can still be helpful as long as you exercise your critical thinking skills. My preference remains always to talk to a friend, though.
EDIT: seems like you made the same point in a child comment.
Came here to say this. The right solution to this is still the same as it always was - teach the juniors what good code looks like, and how to write it. Over time, they will learn to clean up the LLM’s messes on their own, improving both jobs.
Someone bedridden is not the focus of the article or conversation; once you are no longer capable of being active, it is obviously true that you’ll partake in more sedentary activities.
If you let it, sure. But I don't go into a session asking "what should I write." Rather, I ask it to push back on my ideas, so that I can stress-test the logic behind them, which is precisely what I do with humans too.
Only with humans, it's admittedly way more fun. :)
I've worked corporate jobs all my life, and I was never one misunderstood message away from being fired. Instead they would've talked to me and, even if they figured it was my fault, they would've given me a warning since it was the first time. No worthwhile employer is firing people for the first offense, corporate or otherwise.
> I've worked corporate jobs all my life, and I was never one misunderstood message away from being fired.
100% you have been; you just didn't realize it, and never sent the wrong message.
As a sidenote, working for a corporation is not by itself what people mean when they say working for Corporate. "Corporate" implies a larger organization that promotes policies, developed under circumstances different from your work environment, which minimize liability and promote homogeneity in every aspect of the working experience.
> 100% you have been; you just didn't realize it, and never sent the wrong message.
Sure, there are thousands of messages I can come up with that would be immediately fireable; but that’s true anywhere, not just in corporate life, and is thus a strawman.
I have worked plenty of corporate jobs; Morgan Stanley, KBC Financial Products, Apple, Synopsys, the intelligence community (not corporate, but just as bad).
Never once was I "one misunderstood message" away from getting canned. I would have quit immediately if that were true. I understand not everyone can quit, but more people can than do.
Nobody deserves to work under that kind of lack of psychological safety, and certainly anyone on Slack and not in a factory has more of a choice.