
Lovely piece of digital archaeology. Reverse-engineering a Classic Mac era device is equal parts patience and respect for old constraints. What stood out to me is how much implicit knowledge was baked into drivers back then: timing assumptions, undocumented commands, “it just works on System 7”. Also a good reminder that long-term hardware usability often survives only because someone is curious enough to poke at it with a hex editor instead of letting it die in a landfill.


Nice idea and clean execution. One question, though: where does the “AI” add durable value beyond a rules engine plus a decent questionnaire? For workout planning, constraints and progression logic often matter more than free-form generation. Curious how you validate routines over time (injury risk, overload, adaptation) and whether users can audit or override the model’s decisions. Open-sourcing it is a big plus.
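
To make the progression-logic point concrete, here is the kind of deterministic rule I mean, as a minimal Python sketch (the rep ranges, increment, and deload factor are illustrative assumptions, not validated programming advice):

  def next_session(last_weight: float, last_reps: int,
                   completed_all_reps: bool, pain_reported: bool) -> dict:
      # Hypothetical double-progression rule: add reps up to a cap,
      # then add weight and reset reps. Deload on reported pain.
      if pain_reported:
          return {"weight": round(last_weight * 0.9, 1), "reps": last_reps}  # 10% deload
      if not completed_all_reps:
          return {"weight": last_weight, "reps": last_reps}  # repeat until target is hit
      if last_reps < 12:
          return {"weight": last_weight, "reps": last_reps + 1}
      return {"weight": last_weight + 2.5, "reps": 8}  # e.g. +2.5 kg, back to 8 reps

No model needed, fully auditable, and the user can see exactly why the plan changed.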


Excellent feedback. I coded this proto based on a friend's idea, and I didn't think about progression logic or routine validation. I'll keep it in mind for a future version.


My initial thought: "Why do you need A.I. for this?"

But then again, I haven't checked it out yet.


SmarterMail Build 9406 and earlier is vulnerable to arbitrary file upload. An unauthenticated attacker can upload arbitrary files to any location on the mail server, potentially enabling remote code execution.
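
For anyone wondering what the fix for this bug class looks like: canonicalize the destination and pin it under a fixed root before writing anything. A sketch in Python (not SmarterMail's actual code; UPLOAD_ROOT is a made-up path):

  import os

  UPLOAD_ROOT = os.path.realpath("/var/mail/uploads")  # hypothetical storage root

  def safe_upload_path(filename: str) -> str:
      # Resolve "..", symlinks, etc., then refuse anything outside UPLOAD_ROOT.
      candidate = os.path.realpath(os.path.join(UPLOAD_ROOT, filename))
      if os.path.commonpath([candidate, UPLOAD_ROOT]) != UPLOAD_ROOT:
          raise ValueError("rejected: path escapes upload root")
      return candidate

The authentication gap is the other half of the bug, but even authenticated uploads should never get to pick arbitrary server paths.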


“IDEsaster” is a good term, because most of the risk isn’t in the model but in the ambient authority we casually hand to agents. An AI that can read repos, write code, run tests, hit package managers, and access secrets is effectively a junior engineer with prod keys and zero fear. The interesting vulnerabilities aren’t prompt injections in isolation, but cross-boundary ones: repo → CI → secrets → cloud. Until IDEs treat agents like untrusted plugins with strict sandboxing, least privilege, and auditable actions, we’re just automating foot-guns at scale.
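
What "treat agents like untrusted plugins" could look like in practice, as a minimal sketch (the capability names and sandbox are assumptions, not any particular IDE's API):

  # Default-deny capability gate for agent tool calls, with an audit trail.
  ALLOWED_TOOLS = {"read_file", "run_tests"}  # no shell, no network, no secrets

  audit_log: list[tuple[str, dict]] = []

  def dispatch(tool: str, args: dict):
      audit_log.append((tool, args))  # record every attempt, allowed or not
      if tool not in ALLOWED_TOOLS:
          raise PermissionError(f"agent lacks capability: {tool}")
      # ...perform the action inside a real sandbox (container, seccomp, etc.)

The hard part isn't this gate; it's that current IDE integrations hand the agent the union of the user's privileges by default.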


The title is doing a lot of work here. What resonated with me is the shift from “writing code” to “steering systems” rather than the hype framing. Senior devs already spend more time constraining, reviewing, and shaping outcomes than typing syntax. AI just makes that explicit. The real skill gap isn’t prompt cleverness, it’s knowing when the agent is confidently wrong and how to fence it in with tests, architecture, and invariants. That part doesn’t scale magically.


Is anyone else getting more mentally exhausted by this? I get more done, but I also miss the relaxing code typing in the middle of the process.


I think there are two groups of people emerging: deep / fast / craft-and-decomposition-loving vs. black box / outcome-only.

I've seen people who were unable to work at average speed on small features suddenly reach above-average output through an LLM CLI, and I could sense the pride in them. Which is at odds with my experience of work: I love to dig down, know a lot, model and find abstractions on my own. There an LLM will 1) not understand how my brain works, 2) produce something workable but that requires me to stretch mentally... and most of the time I leave numb. In the last month I've seen many people expressing similar views.

ps: thanks everybody for the answers, interesting to read your pov


I get what you're saying, but this does not match my own experience. For me, prior to the agentic coding era, the problem was always that I had way more ideas for features, tools, or projects than I had the capacity to build, since I had to confront the work of building everything by hand, along with the inevitable difficulties of procrastination and getting started.

I am well above average as an engineer when it comes to completing work quickly and well, whether that's typing speed or comprehension speed, and still these tools have felt like giving me a jetpack for my mind. I can get things done in weeks that would have taken me months before, and that opens up space to consider new areas I wouldn't even have bothered exploring before, because I would not have had the time to execute on them well.


I think the comprehension part is very important.

When I write my own code without an LLM, it is an extension of my own thinking, my own mental model.

But when I use an LLM, it produces code that I need to comprehend and understand. It's like I'm continually reading another developer's code, having to absorb their mental model and way of thinking to truly understand it.

For me, this is very tiring. It simply costs me more energy to review and read other people's code than to write it myself.


The sibling comments (from remich and sanufar) match my experience.

1. I do love getting into the details of code, but I don't mind having an LLM handle boilerplate.

2. There isn't a binary between having an LLM generate all the code and writing it all myself.

3. I still do most of the design work because LLMs often make questionable design decisions.

4. Sometimes I simply want a program to solve a problem (outcome-focused) over a project to work on (craft-focused). Sometimes I need a small program in order to focus on the larger project, and being able to delegate that work has made it more enjoyable.


> I do love getting into the details of code, but I don't mind having an LLM handle boilerplate.

My usual thought is that boilerplate tells me, by existing, where the system is most flawed.

I do like the idea of having a tool that quickly patches the problem while also forcing me to think about its presence.

> There isn't a binary between having an LLM generate all the code and writing it all myself. I still do most of the design work because LLMs often make questionable design decisions.

One workflow that makes sense to me is to have the LLM commit on a branch; fix simple issues instead of trying to make it work (with all the worry of context poisoning); refactor on the same branch; merge; and then repeat for the next feature — starting more or less from scratch except for the agent config (CLAUDE.md etc.). Does that sound about right? Maybe you do something less formal?

> Sometimes I simply want a program to solve a problem (outcome-focused) over a project to work on (craft-focused). Sometimes I need a small program in order to focus on the larger project, and being able to delegate that work has made it more enjoyable.

Yeah, that sounds about right.


I think for me, the difference really comes down to how much ownership I want to take in regards to the project. If it’s something like a custom kernel that I’m building, the real fun is in reading through docs, learning about systems, and trying to craft the perfect abstractions; but if it’s wiring up a simple pipeline that sends me a text whenever my bus arrives, I’m happy to let an LLM crank that out for me.

I’ve realized that a lot of my coding sits on this personal-satisfaction-vs-utility matrix, and LLMs let me focus a lot more energy on high-satisfaction projects.


> deep / fast / craft-and-decomposition-loving vs black box / outcome-only

As a (self-reported) craft-and-decomposition lover, I wouldn't call the process "fast".

Certainly it's much faster than if I were trying to take the same approach without the same skills; and certainly I could slow it down with over-engineering. (And "deep" absolutely fits.) But the people I've known whom I'd characterize as strongly "outcome-only" were certainly capable of sustaining some pretty high delta-LoC per day.


That's kind of the point here. Once a dev reached a certain level, they often weren't doing much "relaxing code typing" anyway, even before the AI movement. I don't find it much different from being a tech lead, architect, or similar role.


As a former tech lead and now staff engineer, I definitely agree with this. I read a blog post a couple of months ago that theorized that the people who would adopt these technologies best were people in the exact roles you describe. I think it's because we were already used to relying on other people to execute on our plans and ideas, because they were simply too big to accomplish by ourselves. Now that we have agents to do these things, it's not really all that different - although it is a different management style, working around their limitations.


Exactly. I've been a tech lead, have led large cross-org projects, been an engineering manager, and held similar roles. For years, when mentoring upcoming developers, what I always found to be the most challenging transition was the inflection point between "I deliver most of my value by coding" and "I deliver most of my value by empowering other people to deliver". I think that's what we're seeing here. People who have made this transition are already used to working this way. Both versions have their own quirks and challenges, but at a high level the skill transfers.


LLMs are just a programming language/compiler/REPL, though, so there is nothing out of the ordinary here for developers. What is different is the painfully slow compile-time-to-coding-time ratio. You write code for a few minutes... and then wait. Then you spend a few more minutes writing code... and then wait. That is where the exhaustion comes from.

At least in the olden days[1] you could write code for days before compiling, which reduced the pain. Long compilation times have always been awful, but they are less frustrating when you can defer them until the next blue moon. LLMs don't (yet) seem able to handle that. If you feed them more than small amounts of code at a time, they quickly go off the rails.

With that said, while you could write large amounts of code and defer compilation until the next blue moon, being able to do that is a skill. Even in C++, juniors seem to like writing a few lines of code and then compiling to make sure they are on the right track. I expect that is the group of people who feel most at home with LLMs. Spending a few minutes writing code and then waiting on compilation isn't abnormal for them.

But presumably the tooling will improve with time.

[1] https://xkcd.com/303/


Programming languages are structured and have specifications. It is possible to know what code will do just by reading it.


Well-designed ones do, at least. LLMs, in their infancy, still bring a lot of undefined behaviour, which is why you end up stuck in the write-code-for-a-few-minutes -> compile -> wait -> repeat cycle. But that is not a desirable property, and it won't remain acceptable as the technology matures.


I don't see any way this is changing, acceptable or not.


It is quite possible the tools will never improve beyond where they sit today, sure, but then usage will naturally drift away from that fatiguing use (not all use, obviously). The constant compile/wait cycle is exhausting exactly because it is not productive.

Businesses are currently willing to accept that lack of productivity as an investment into figuring out how to tame the tools. There is a lot of hope that all the problems can be solved if we keep trying to solve them. And, in fairness, we have gotten a lot closer than we were just a year or so ago towards that end, so the optimism currently remains strong. However, that cannot go on forever. At some point the investment has to prove itself, else the plug will be pulled.

And yes, it may ultimately be a dead end. Absolutely. It wouldn't be the first failure in software development.


Ya know, I have to admit feeling something like this. Normally, the amount of stuff I put together in a work day offers a sense of completion or even a bit of a dopamine bump because of a "job well done". With this recent work I've been doing, it's instead felt like I've been spending a multiplier more energy communicating intent instead of doing the work myself; that communication seems to be making me more tired than the work itself. Similar?


It feels like we all signed up to be ICs, but now we’re middle managers and our reports are bots.


I forget where I saw this (a Medium post, somewhere) but someone summed this up as "I didn't sign up for this just to be a tech priest for the machine god".


Someone commented yesterday that managers and other higher-ups are "already ok with non-deterministic outputs", because that's what engineers give them.

As a manager/tech-lead, I've kind of been a tech priest for some time.


Which is why it's so funny to hear seasoned engineers lament the probabilistic nature of AI systems and insist that you have to be hand-writing code to really think about the problem domain.

They all seem to be ICs who forget that there are abstraction layers above them where all of that happens (and more).


> and our reports are bots.

With no gossip, rivalry or backstabbing. Super polite and patient, which is very inspiring.

We also brutally churn them by “laying off” the previous latest model once the new latest is available.


You’re possibly not entering into the flow state anymore.

Flow is effortless, and it is rejuvenating.

I believe:

While communication can be satisfying, it’s not as rejuvenating as resting in our own Being and simply allowing the action to unfold without mental contraction.

Flow states.

When the right level of challenge and capability align, you become intimate with the problem. The boundaries of me and the problem dissolve, and creativity springs forth. Emerging satisfied. Nourished.


Flow state can happen at various levels of abstraction, not just when hand writing code in a gen 3 language.


This is why I think LLMs will make us all a LOT smarter. Typing raw code meant we stopped thinking hard in between; now it's just the most intense thought processes, 100% of the day.


It seems pretty obvious that the opposite is true. I know I’ve experienced some serious skill atrophy that I’m now having to actively resist. There’s a lot lost by no longer having to interact with the raw materials of your craft.

Thinking is a skill that is reinforced by reading, designing and writing code. When you outsource your thinking to an LLM your ability to think doesn’t magically improve…it degrades.


Sure, my raw coding ability has degraded, but my architecting, debugging, and planning have all skyrocketed.

Those always required significantly more thought, just like before, and that's all I do now.

I mean, old staff engineers and managers are often not coding at all, but they are significantly better designers than a fresh grad who grinded LeetCode for four years. Isn't that the same argument?


Nah, I don’t miss at all typing all the tests, CLIs, and APIs I’ve created hundreds of times before. I dunno if it’s because I do ML stuff, but it’s almost all “think a lot about something, do some math, and then type thousands of lines of the same stuff around the interesting work.”


Yes, absolutely, I can be mentally wiped out by lunch.


Yes, it's taxing and mentally draining; reading code and connecting the dots is always harder than writing it.

And if you let the AI too loose, as when you try to vibe-code an entirely new program, you end up in the situation where in one day you have a good prototype, and then you can easily spend five times as long sorting out the many issues and refactoring so it can scale to the next features.


I think it's the serial waiting game and inevitable context switching while you wait.

Long iteration cycles are taxing


So far what I've been doing is, I look for the parts that seem like they'd be rewarding to code and I do them myself with no input from the machine whatsoever. It's hard to really understand a codebase without spending time with the code, and when you're using a model, I think there's a risk of things changing more quickly than you can internalize them. Also, I worry I'll get too comfortable bossing chatbots around & I'll become reluctant to get my hands dirty and produce code directly. People talk about ruining their attention spans by spending all their time on TikTok until they can no longer read novels; I think it'd be a real mistake to let that happen to my professional skill set.


For me it's the opposite: I'm wasting less energy debugging silly bugs and fighting or figuring out some annoying config.

But it does feel less fulfilling I suppose.


I like to alternate focusing on AI wrangling and writing code the old fashioned way.


It's difficult to steer complex systems correctly, because no one has a complete picture of the end goal at the outset. That's why waterfall fails. Writing code agentically means you have to go out of your way to think deeply about what you're building, because it won't be forced on you by the act of writing code. If your requirements are complex, agents might actually be a hindrance, because you're going to have to learn those lessons from failed iterations instead of avoiding them preemptively.


The stereotype that writing code is for junior developers needs to die. Some devs are hired with lofty titles specifically for their programming aptitude and esoteric systems knowledge, not to play implementation telephone with inexperienced devs.


I don't think anyone actually believes that writing code is only for junior developers. That seems a significant exaggeration, at the very least. However, it is definitely true that most organizations of this size hiring people into technical lead, staff engineer, or principal engineer roles are hiring those people not only for their individual expertise, or their ability to apply that expertise themselves, but also for their ability to use that expertise as a force multiplier that makes other, less experienced people better at the craft.


In my world there are Hard Problems that need to be solved for bu$ine$$ rea$on$, with no "force multiplier" required (whatever that really means).


> I don't think that anyone actually believes that writing code is only for junior developers.

That is, unquestionably, how it ought to be. However, the mainstream – regrettably – has devolved into a well-worn and intellectually stagnant trajectory, wherein senior developers are not merely encouraged but expected to abandon coding altogether, ascending instead into roles such as engineering managers (no offence – good engineering managers are important; it is the quality that has been diluted across the board), platform overseers (a new term for stage gatekeepers), or so-called solution architects (the ones imbued with compliance and governance who do not venture past that).

In this model, none of these roles is expected – and in some lamentable cases is explicitly forbidden[0] – to engage directly with code. The result is a sterile detachment from the very systems they are charged with overseeing.

Worse still, the industry actively incentivises ill-considered career leaps – for instance, elevating a developer with limited engineering depth into the position of a solution designer or architect. The outcome is as predictable as it is corrosive: individuals who can neither design nor architect.

The number of organisations in which expert-level coding proficiency remains the norm at senior or very senior levels has dwindled substantially over the past couple of decades; job ads explicitly call out management experience and knowledge of architectural frameworks of vacuous or limited usefulness (TOGAF and the like). There do remain rare islands in an ever-expanding ocean of managerial abstraction where architects who write code – not incessantly, but when need be – are still recognised as invaluable. Yet their presence is scarce.

This lamentable state of affairs has led to a piquant situation in the job market. In recent years, headhunters have started complaining about being unable to find an actually highly proficient, experienced, and, most importantly, technical architect. One's loss is another's gain, or at least an opportunity, of course.

[0] Speaking from firsthand experience of watching a solution architect quit their job to run a bakery (yes) after the head of architecture they reported to explicitly demanded that the architect quit coding. The architect did quit, albeit in a different way.


"it’s knowing when the agent is confidently wrong and how to fence it in with tests, architecture, and invariants."

Strongly suspect this is simply less efficient than doing it yourself if you have enough expertise.


Does using an LLM to craft Hackernews comments count as "steering systems"?


You're totally right! It's not steering systems -- it's cooking, apparently


This is the kind of curiosity that leads to the most interesting findings. Hotels are a perfect storm of shared networks, opaque vendor integrations, and “it just works” assumptions. A mysterious UDP stream could be anything from Chromecast-style discovery to IPTV control or some half-documented vendor heartbeat. What’s usually more revealing than the payload is the pattern: broadcast vs unicast, frequency, and who responds. Also a good reminder of how much ambient network noise we’re all swimming in without noticing.
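
If anyone wants to look at the pattern on their own hotel Wi-Fi, here's a rough sketch with scapy (the broadcast heuristic is a simplification):

  from collections import Counter
  from scapy.all import sniff, IP, UDP  # pip install scapy; needs capture privileges

  flows = Counter()

  def tally(pkt):
      if IP in pkt and UDP in pkt:
          flows[(pkt[IP].src, pkt[IP].dst, pkt[UDP].dport)] += 1

  sniff(filter="udp", prn=tally, timeout=60)  # watch for a minute
  for (src, dst, dport), n in flows.most_common(10):
      kind = "broadcast" if dst == "255.255.255.255" or dst.endswith(".255") else "unicast/multicast"
      print(f"{src} -> {dst}:{dport}  {n} pkts  ({kind})")

Frequency and fan-out usually identify the protocol family faster than staring at payload bytes.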


Subtle but important bug. This is a good example of how “is valid” checks in crypto are rarely as simple as they sound. Accepting points outside the prime-order subgroup can quietly undermine higher-level assumptions, even if no immediate exploit is obvious. Also a reminder that low-level primitives tend to be reused far more widely than intended, so small validation gaps can have surprisingly large blast radii.


Do note though that X25519 and Ed25519 were designed so they wouldn’t need those checks at all. It’s only when you’re trying to design fancier protocols on top of Curve25519 or Edwards25519 that you can run into subgroup issues.

And for those use cases, I personally try my best to just reproject everything back into the prime order subgroup whenever possible. Monocypher has a number of such fancy functions:

  crypto_x25519_dirty_fast()
  crypto_x25519_dirty_small()
  crypto_elligator_map()
  crypto_elligator_rev()
  crypto_elligator_key_pair()
The dirty functions explicitly produce public keys that cover the entire curve, so that random such keys are truly indistinguishable from random when converted with `crypto_elligator_rev()`. But instead of just removing the clamp operation, I add a random low-order point, so that when we later use the point in an X25519 key exchange, the shared secret is exactly the same as it would have been for a genuine X25519 key.

That’s where I thank DJB for designing a key exchange protocol that projects the shared secret onto the prime-order subgroup, even when the public key it processes is not in it. The original intent may have been to make checks easier (low-order keys all end up yielding zero), but a nice side effect is how it enabled a nice API for Mike Hamburg’s Elligator2.
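
To make the “low-order keys all end up yielding zero” point concrete: this is why the usual contributory-behaviour check is just an all-zero test on the output. A sketch in Python, assuming a 32-byte X25519 shared secret:

  import hmac

  def is_contributory(shared_secret: bytes) -> bool:
      # X25519 maps every low-order peer public key to an all-zero output,
      # so rejecting all-zero output rejects all low-order keys.
      # Constant-time compare, to avoid leaking anything via timing.
      return not hmac.compare_digest(shared_secret, bytes(32))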

> Accepting points outside the prime-order subgroup can quietly undermine higher-level assumptions, even if no immediate exploit is obvious.

If on the other hand we can prove that all computed results are low-order-component-independent (as is the case for X25519), then we know for sure we’re safe. In the end, Ristretto is only really needed when we can’t tweak the protocol to safely reproject to the prime order subgroup.

Don’t get me wrong, having a prime order group abstraction does help. But if someone is qualified to design a protocol that may require this, they’re qualified to try and make it work with a non-trivial cofactor as well — that, or prove it cannot be done.


Nice write-up. Bluetooth on Kindle always felt like a black box, and this does a great job of peeling it open layer by layer. The mix of firmware analysis, protocol reverse engineering, and practical constraints of embedded devices is especially useful. Also a good reminder of how much attack surface quietly lives in “secondary” features like Bluetooth that most users never think about.


This is the nightmare scenario for live-service games: once the integrity of progression and bans is compromised, trust evaporates fast. Rolling back “billions of credits” is easy compared to undoing random bans.


Clean, readable, and refreshingly boring in the best possible way. Sometimes a calendar should just… be a calendar. Bookmarking this for 2026.

