
Exactly that. Plus I need to be able to make adjustments here and there without the whole thing collapsing on me.

If you know Rust inside and out (if, as one example in TFA, you co-wrote The Rust Programming Language!) then sure, why not Rust?

But if not, it would be unwise.

That said, I use AI to write small C utilities that compile and run on any Windows version starting with Vista (which neither Go nor Rust can do). I'm not a C programmer, but I can read and adjust the code when needed, and the whole thing does work.


> right now

It's always been like this. I used to build websites in the 90s and it was exactly like that. It was also horrible. People with no tech background whatsoever were making decisions about which tech to use (PHP vs ASP vs ColdFusion, remember those?); agencies were overpaid to make HTML "templates" that had to have rounded corners everywhere. Etc.

Not everything's great today, but it's a little less bad I think.


I don't know. I think back to my first dialup connection and getting internet for the first time. In no way do I remember fear being a driver. I remember people being curious. Nobody ran around saying you need to get on the internet or you will be left in the dust. I'd be curious to see examples if I'm wrong: YouTube links to old news broadcasts, magazine print ad archives, or something.

> by solving hard problems you get an insight into the problem-solving process itself, at least in your area of expertise, in a way that you simply don’t if all you do is read other people’s solutions. One consequence of this is that people who have themselves solved difficult problems are likely to be significantly better at solving problems with the help of AI, just as very good coders are better at vibe coding than not such good coders

Yes but it's not just that if you solved a problem yourself, you're better at solving other problems; it's also that you actually understand the problem that you solved, much better than if you simply read a proof made by somebody (or something) else.

I see this happening in the enterprise. People delegate work to some LLM; the work isn't always bad, sometimes it's even acceptable. But it's not their work, and as a result, the author doesn't know or understand it better than anyone else! They don't own it, they can't explain it. They literally have no value whatsoever; they're a passthrough; they're invisible.


Are you a cutting edge research scientist or something? Everyone I know works in the same domain every day. The problems are the same. People aren't solving brand new problems to humanity every day. We make budgets and look at ticket counts. Roll out patches. Replace hardware. Upgrade software packages. Make a new dashboard to track a project. I guess if every day is a completely novel thing for you, ok. I feel like the goalposts have moved to an absolutely ridiculous place. Oh no, I won't have a bunch of random error log numbers memorized anymore? Who gives a shit. I just want to afford a place to live so I can play my guitar and make something good for dinner. Maybe I'm just old, but I don't see why the average person needs to be a fuckin genius problem solver.

I think that's fine, but 1) that mentality leaves you extremely vulnerable to being disrupted by LLMs, and 2) IMO, if you are solving the same problems every day, it means you are not making progress on solving the root causes of those problems. What you are describing is toil, not knowledge work.

I don't think it matters much what kind of problem it is. If it is challenging enough to benefit from assistance and you end up playing a minor role in the solution, it seems like you are putting yourself in the worst position possible. You lose your edge for functioning within the problem space, and it raises the question of why you are even in the loop at all. If it's job security you want, transforming your role into LLM babysitter seems like the worst way to ensure it.

It's an adversarial economy. Using an LLM at work doesn't mean the work is challenging. A lot of jobs are "bullshit jobs". People are using LLMs because it gives them back time. If they don't use it, their colleague will, and that makes them look bad.

The company might fire you tomorrow. Fundamentally, if an LLM can do the job, it's not just employees at risk; it's also the company. There is actually a lot of symmetry between how companies delegate to employees and how employees delegate to LLMs. You can follow the logic to conclude that a lot of companies are then bullshit companies. This is not a problem for the individual to solve. Your job at work is akin to the company's: earn the best return while you still can. Wasting your time for essentially the same output at a slower pace is a bad return.

When people get laid off en masse this incentive structure will have to be altered. But telling an individual to ignore their basic economic incentives until then is unlikely to work.


I have also independently come to the conclusion that a lot of companies are bullshit companies; maybe that is closer to the core issue. For the individuals who do have some choice in the matter, I think it is important to hold on to their skills by continuing to use them. It sucks that our work culture is so competitive, but from that angle I believe they will eventually stand out as more competent.

Most companies are real; it's just that a good fraction of the work is mostly unnecessary. Partly because of the overhead of business activities that are unneeded most of the time, partly because we don't know in advance what work will be useful, and partly for silly social reasons.

I keep coming back to the idea that all the upheaval combined with all the new tools at our disposal will empower and motivate people to start businesses that challenge the status quo. I've lived long enough to see that play out at scale; it's basically how we got Google. That might not sound encouraging, but Google was once a really inspiring company and one of the best places to work.

Ok let's make math illegal and burn down the data centers I guess. Idk what to tell you, but we will adapt and new roles will be created. Just like every single tool and piece of tech that came before. LLM manager? Fine.

The difference so far is that these LLMs are owned by corporations, and very aggressive American corporations at that.

So now you are essentially reliant on them.

Not saying that this is something new, but times they are a changin


Don't use your laptop. Or your phone. It's owned by a corporation.

Do you hear yourself? If you don't want to rely on corporations, go live in the woods.


The parent said American corporations. No one with any sense wants a dependency critical to their state or private company sitting under the direct control of America any more.

I think it's intellectually dishonest to use false equivalencies to dismiss the accumulation of humanity's knowledge under very specific brands for profitability. When I build something using ChatGPT, especially if I was unable to build it before, I arrive at a result that I could previously have arrived at only through "hard work", by skipping the "hard work" part.

Now, many will argue that you wouldn't have poured time and energy into that endeavour anyway, so it's fine. But the crucial part missing here is the effort. We're about to witness the side effects of society-wide reliance on LLMs, the same way we're still paying the price for the social media boom: misinformation, propaganda, echo chambers, and algorithmic bubbles.

Notice that none of the above actually invented misinformation, etc.; they just magnified an existing problem. LLMs magnify the need to "get it done, fast", but I don't see the engineering excellence everyone promised I'd see, at any level.


In the US, much of the woods are owned by corporations too. Those that aren't are, in theory, owned by the public, but the oligarchs work hard to hollow that out, so that in practice public lands are owned by them too.

>Just like every single tool and piece of tech that came before.

The thing about relying on the past to predict the future is that it works ... until it doesn't.

We've yet to see a technology with utility as diverse as that of LLMs. What happens when not just the tech sector starts downsizing, but the whole white-collar workforce?


> new roles will be created.

In the past, one such "new role" was that of slave. In fact, slavery is thought to be less than 10,000 years old! Yes, new roles will be created. But there's nothing to say they'll be pleasant for us to take on.


It doesn't seem like you're responding to my post, more to the quote? But my point isn't that everybody should be a genius problem solver, although that would help, while being stuck in the same routine doesn't.

My point is, if you delegate your job to AI, and it works, then 1/ you don't know the result of the work in more detail than any other person, and 2/ the people you're reporting to can probably write a prompt as good as yours, if not better.

Which means: you've made yourself dispensable. Nothing very good for dinner; no nice place to live. But lots of time to practice guitar I guess.


So how would an LLM being able to do your job help you afford a place to live?

>Who gives a shit. I just want to afford a place to live so I can play my guitar and make something good for dinner. Maybe I'm just old, but I don't see why the average person needs to be a fuckin genius problem solver.

I enjoy programming and want to be engaged for the 40 hours a week where I sell my labor.

I also care about my profession and technology, and I don't want the world to become an idiocracy where nobody understands any of the technology we're overly dependent upon.


I’m sorry, but who cares if this doesn’t apply to you?

> I see this happening in the enterprise. People delegate work to some LLM; work isn't always bad, sometimes it's even acceptable. But it's not their work, and as a result, the author doesn't know or understand it better than anyone else! They don't own it, they can't explain it. They literally have no value whatsoever; they're a passthrough; they're invisible.

According to the blog post linked in the OP, the LLM-generated results were read, understood, and confirmed by the mathematician whose work they built on.

I notice a dichotomy here between people who care about results and people who care about process. The former group wants to use LLMs insofar as they can contribute to getting results. The latter group is wary of LLMs because they're more interested in the process and less interested in the results themselves. Needless to say, I think the former group is right, and I'm happy to see that mathematicians (or some of them) agree.


I think you are misunderstanding the parent's comment.

>the LLM-generated results were read, understood, and confirmed by the mathematician whose work they built on.

The mathematician and the blog author are not the same person (as you seem to understand). Nathanson (the mathematician) is the one who is the expert verifier. He is the person who has the higher value and won't be fired in some hypothetical.

>>They don't own it, they can't explain it. They literally have no value whatsoever; they're a passthrough; they're invisible.

This is the blog author in the parent's description. If their boss asks them what they need to prove that the AI is more than capable in this domain and the author tells their boss they need Nathanson (the mathematician) to verify the results, his boss will thank him for demonstrating the AI's capability in this domain, fire him, pass his prompt history to Nathanson, and keep Nathanson on the job (the expert verifier).

Which is the parent's point after all, because he's referring to the hypothetical job security of the blog author not the mathematician.


  > The mathematician and the blog author are not the same person
  > (as you seem to understand). Nathanson (the mathematician) is
  > the one who is the expert verifier. He is the person who has
  > the higher value and won't be fired in some hypothetical.
The article's author is https://en.wikipedia.org/wiki/Timothy_Gowers

> They literally have no value whatsoever; they're a passthrough; they're invisible.

Then middle managers also have no value, since they're also a passthrough between upper management and ICs, yet they never went extinct.


Working on it!

I delegate writing a binary executable to the compiler and the linker.

I don't know or understand the binary executable.

I don't own the binary executable, I don't understand it, I can't explain it, it's not my work.

I'm a passthrough; I'm invisible.

I have literally no value whatsoever.


> quite a lot of perfectly good human mathematics consists in putting together existing knowledge and proof techniques

Creativity is connecting ideas from different domains and seeing if something from one field applies to another. I do think AI is generally overhyped; but a major benefit of AI could be that after ingesting all existing human knowledge (something no single human can ever hope to achieve), it would "mix and connect" it and come up with novel insights.

Most published research sits ignored and unread; AI can uncover and use everything.


> Creativity is connecting ideas from different domains and seeing if something from one field applies to another.

That's true. The question is whether the produced pattern has any value. LLMs are incapable of determining this; they still often hallucinate and make random baseless claims that can convince anyone except human domain experts. And that's a difficult challenge: a domain expert is still needed to verify the output, which in some fields is very labor intensive, especially if the subject is at the edge of human knowledge.

The second related issue is the lack of reproducibility. The same LLM given the same prompt and context can produce different results. This probability increases with more input and output tokens, and with more obscure subjects.

The tools are certainly improving, but these two issues remain major hurdles that don't get nearly as much attention as "agents", "skills", and whatever adjacent trend influencers are pushing today.

And can we please stop calling pattern matching and generation "intelligence"? This farce has gone on long enough.


> And can we please stop calling pattern matching and generation "intelligence"

That's literally what an IQ test tests: abstract pattern matching. But I guess you don't like IQ tests either.


Some IQ tests, like the WAIS, test retained common facts. They are not all just pattern matching.

Also, I do not like IQ tests either (having taken one myself). They are unbelievably boring, pointless, and measure more than just "intelligence."


It may not be a major achievement by the mathematician (although that's debatable), but it would still be a major result.

OpenRouter lets you pay by the token only (no subscription), has all the frontier models (including Opus 4.7, GPT-5.5) and most of the others, and if you use it sparingly it usually turns out to be quite cheap.

API pricing for Claude is about an order of magnitude more expensive than subscriptions (numbers: https://she-llac.com/claude-limits). But it may be worth it with DeepSeek V4 Pro, which is currently on discount.

Depends very much on usage! If you connect it to tools like Cursor, etc., then yes, a subscription is probably cheaper -- although you'd have to subscribe to each provider if you want to use them all.

But if you only ask questions occasionally (and don't resend, for example, your whole codebase with each request), then the API feels really cheap, even for the frontier models.
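
As a rough illustration (the rates here are made up for the example, not any provider's actual pricing): a question that sends 2,000 tokens and gets 1,000 back, at $3 per million input tokens and $15 per million output tokens, costs 2,000 × $3/1M + 1,000 × $15/1M ≈ $0.02. At that rate you'd need on the order of a thousand such questions a month before a $20 subscription wins.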


My problem with pay-by-the-token is that it discourages me from using the thing ("oh, this prompt will cost me $0.10"), so I pay for a subscription which I'm pretty sure costs me about two to three times what I'd pay in API costs, but which encourages me to use it more ("oh, I already have a subscription, better make use of it").

Yeah, but mouse jigglers 1/ have to be plugged in and occupy a USB port, 2/ usually don't turn off at logoff, resulting in battery depletion, and 3/ don't work on remote servers where you'd want an RDP session to stay open but group policies prevent it.

I wrote a small C utility that avoids all 3 problems and now I couldn't live without it!
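
For reference, the core of such a utility is tiny. This is a minimal sketch of the approach, not my actual code: inject a do-nothing mouse move with SendInput once a minute. No USB device, the process dies with the session at logoff (so no battery drain), and because it runs inside the RDP session itself, the injected input resets that session's idle timer.

  /* Build: cl jiggle.c user32.lib  (or: gcc jiggle.c -o jiggle.exe)
     Plain Win32, so it runs on anything from Vista up. */
  #include <windows.h>

  /* Inject one relative mouse move; injected input resets the
     session's idle timer just like real input does. */
  static void nudge(LONG dx)
  {
      INPUT in;
      ZeroMemory(&in, sizeof(in));
      in.type = INPUT_MOUSE;
      in.mi.dx = dx;
      in.mi.dwFlags = MOUSEEVENTF_MOVE;
      SendInput(1, &in, sizeof(INPUT));
  }

  int main(void)
  {
      for (;;) {
          nudge(1);            /* one pixel right...   */
          nudge(-1);           /* ...and back again    */
          Sleep(60 * 1000);    /* repeat once a minute */
      }
  }

Whether injected input is enough to hold off a particular idle-lock group policy depends on the environment, so treat this as a starting point to verify, not a guarantee.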


I intensely agree with everything that's being said in TFA; this, however, could be nuanced:

> Never ask a model for confirmation; the tool agrees with everyone

If asked properly, LLMs can be used to poke holes in an existing line of reasoning or to come up with new ideas and things to explore. So yes, never ask a model for confirmation or encouragement; but you can absolutely ask it to critique something, and that's often of value.


There is always a chance that the LLM will hallucinate something wrong. It's all probabilities, quite possibly the closest thing to quantum mechanics in action that we have at the macro level. The act of receiving information from an LLM collapses its state, which was heretofore unknown.

However, your actions can certainly influence those probabilities.

> If asked properly, LLMs can be used to poke holes in an existing line of reasoning or to come up with new ideas and things to explore.

At the most basic level, LLMs are prediction engines, and one of the things they really, really want (OK, they don't "want", but one of the things they are primed to do) is to respond with what they have predicted you want to see.

Embedding assertions in your prompt is either the worst thing you can do, or the best thing you can do, depending on the assertions. The engine will typically work really hard to generate a response that makes your assertion true.

This is one reason why lawyers keep getting dinged by judges for citations made up from whole cloth. "Find citations that show X" is a command with an embedded assertion. Not knowing any better, the LLM believes (to the extent such a thing is possible) that the assertion you made is true, and attempts to comply, making up shit as it goes if necessary.


While I’m not disagreeing, if you ask the LLM to critique something, it will try very hard to find something to critique, regardless of how little it might be warranted. The important thing is that you have to remain the competent judge of its output.

One of the best uses of AI I've found is code-reviewing stuff I've written entirely myself, or even code generated in a previous session.

Yes, or boilerplate! I usually go in and tweak it anyway because it's not good. But it does help. This agentic coding thing is madness to me.

I switched over to small local models. I do not need the expensive vibe-coder models at all.


But those giant models get the boilerplate correct on the first try! You're totally right, though. My favorite thing to do these days is to hand-craft the code in the middle of the app, then tell AI to make me a REST endpoint and a test. I do the fun/important part. :D

Though that's coming from someone who can't justify spending thousands on personal hardware and is instead paying $20/month to OpenAI. Might as well use the best.


I hear you on the local model upfront cost. I lucked out: I like to play video games and took my GPU a little too seriously. The buyer's remorse is gone now, I guess.

You can get pretty good results even with smaller models. Can't prompt-and-pray with them as much, though. So I get it.

DeepSeek is like pennies. I might sign up with them one day.


> never ask a model for confirmation or encouragement; but you can absolutely ask it to critique something, and that's often of value.

What's the difference? The end result is equally unreliable.

In either case, the value is determined by a human domain expert who can judge whether the output is correct or not, in the right direction or not, if it's worth iterating upon or if it's going to be a giant waste of time, and so on. And the human must remain vigilant at every step of the way, since the tool can quickly derail.

People who are using these tools entirely autonomously, and give them access to sensitive data and services, scare the shit out of me. Not because the tool can wipe their database or whatnot, but because this behavior is being popularized, normalized, and even celebrated. It's only a matter of time until some moron lets it loose on highly critical systems and infrastructure, and we read something far worse than an angry tweet.


I think this is exactly correct.

Yes, but I don't think having LLMs only write functions while doing the architecture yourself qualifies as "vibe coding"; rather, it's "AI-assisted engineering" (which is what I do).

Vibe coding, to me, means having an LLM, with or without agents, do everything after an initial vague prompt. Which is why "anyone" can vibe code (because anyone can write general hand-waving imprecise instructions). This inevitably results in pointless demos and/or unmaintainable monsters.

