monkpit's comments

Because of the way training works, is this possible/feasible? Not saying it shouldn’t be done, just wondering _how_ it could be done?

It’s hard to give up, but likely necessary. That doesn’t mean quality has to suffer; we can still gate with deterministic quality tooling where it matters. But yeah, at some scale it stops mattering how human-readable the code is, as long as AI can effectively and efficiently (token-wise) make edits or add features.

The point is not human readability, but good structure. Spaghetti code is as bad for an LLM as for a human, because structural complexity and the amount of coupling are fundamental limits, not human-specific.

Amazing tweet.

https://x.com/stevesi/status/2050325415793951124

Here's how history rhymes with this logic. The development of compilers vs. writing assembly language was not without a very similar "controversy": are the new tools more efficient or less efficient?

The first compilers were measured relative to hand-tuned assembly language efficiency. The existing world of compute was very much "compute bound" and inefficient code was being chased out of every system.

The introduction of the first compilers generally delivered code "within 10-30%" of the efficiency of standard professional assembly. This "benchmark" was enough for almost a generation of Fortran programmers to dismiss the capabilities of compilers.

Also worth noting, early compilers (all through the 1980s) routinely had bugs that generated incorrect code. Debugging a compiler is a nightmare (personal experience). This only provided more "ammo."

With the arrival of COBOL the debate started to shift. COBOL generated decidedly "bloated" code, so there was no way to win the efficiency argument. But what people started to realize was that a "modern" programming language made it possible to deliver vastly more software and for many more people to work on the same code (ASM was notorious for being challenging for multiple engineers to work on the same portion of code). So the metric slowly started to move from "as good as hand-tuned assembler" to "able to write bigger, more sophisticated code in less time with more people." Computers gained timesharing, more memory, and faster CPUs, which made the efficiency argument far less compelling (only to repeat with the first 8K or 64K PCs).

This entire transition is capped off with a description in Fred Brooks' "The Mythical Man-Month," one of the seminal books in the field of programming and a standard-issue book that was sitting in my office waiting for me on my first day at Microsoft. (See the full book free here: https://web.eecs.umich.edu/~weimerw/2018-481/readings/mythic...)

It is very early. I was not a programmer when the above happened, though I did join the professional ranks while many still held these beliefs. For example, I interned writing COBOL on mainframes while PCs were being programmed in C and Pascal, which were buggy and viewed as inefficient on processor- and space-constrained machines.

The debate would continue with C++, garbage collection, interpreted vs. compiled (Visual Basic), and more. As a fairly consistent observation over decades, every new tool is at first viewed by experienced programmers through a lens of what got worse, while new programmers use the tool and operate in a new context (e.g., "more software" or "bigger projects"). The excerpt below shows this debate as captured in 1972.


> Also worth noting, early compilers (all through the 1980s) routinely had bugs that generated incorrect code.

Incorrect. They had bugs that generated incorrect code. They didn't routinely have bugs that generated incorrect code :-/

And the bugs they had were reproducible.


Didn't someone say LLMs memorize Harry Potter books? You can't have it both ways.

> Didn't someone say LLMs memorize Harry Potter books? You can't have it both ways.

What both ways? You can't consistently get it to output Harry Potter verbatim. That's my point - not reproducible.


That’s where the tooling comes in!

3.25% is whole milk; they absolutely sell it in Canada.

> You can pay $20 a month and use $10k in api tokens.

Do you have a source? I would be interested to read more about any hard figures that have been posted like this.


I suddenly feel compelled to post about openclaw

You’re very defensive in these comments - are you the author?

Isn’t the biggest rule to have working backups with a 3-2-1 strategy?

Ah, I was thinking this created the webpage itself, which I always thought was an interesting concept. Some future where the application is crafted in real time to fulfill the needs of the user. Has anyone made something like this?

Anthropic has experimented with this: https://www.youtube.com/watch?v=dGiqrsv530Y

My needs as a user are for a UI to be consistent and predictable.

Good luck.


To be fair, I didn’t say it was a _good_ idea…

And, no need for OpenClaw either

You’ve homed in specifically on VO2, but what about cardio health in general? Like a light treadmill session, not a demanding marathon.

Cardio of course is short for "cardiovascular system," which consists of a whole lot of moving parts. Saunas improve some parts of it but certainly not all of it.

Will fixing up your radiator fix your car? Maybe, if the radiator was the problem, but there's a lot of other stuff inside a car to work on, too.

Your body evolved under the expectation that it would be stressed in numerous different ways, but those stressors can all be avoided in the modern era. If you want to most reliably recreate those stresses, you need to do cardio and resistance training.


Without exercise, you won't burn enough ATP to drive an increase in mitochondrial count.

A light treadmill session won't do much to improve your cardiovascular health either. I mean, it's better than nothing, but don't expect too much.
