
I’m convinced the comment section hates multi-paradigm languages because you can misuse them. A multi-paradigm language also has features that some users may not need, which triggers this weird purist mentality of, “gee, it would be so much better if it didn’t have feature X.” But oftentimes that’s just pontification for its own sake; they aren’t really interested in trying the feature out. Feature X remains something they won’t use, so it should go.

Yep, up there with ActiveRecord as the finest ORM I’ve ever used. What seals it for me is the low coupling it imposes on entities.

Perhaps we're in an AI summer and a tech winter. Winter is always the time when people hole up, dream, and work on whatever big thing is next.

We're about due for some new computing abstractions to shake things up I think. Those won't be conceived by LLMs, though they may aid in implementing them.


We have 2 decades of abstraction.

The stacks of turtles that we use to run everything are starting to show their bloat.

The other day someone was lamenting dealing with an onslaught of bot traffic and having to block it. Maybe we need to get back to good old-fashioned engineering and optimization. There was a thread on here the other day about PC Gamer recommending RSS readers while serving a 36 GB webpage ( https://news.ycombinator.com/item?id=47480507 )


~36 MB.

(though it sounds like if you left it for long enough, you'd get 36 GB of ads downloaded eventually)


Yeah. I oftentimes get crickets here when I talk along those lines. Can’t tell if it’s apathy, learned helplessness, or obliviousness. Regardless, devs seem like an extremely docile labor group based on how they react to this and other economic pressures.


We will all be shocked at the rug pull after it has finished training on all our high-quality feedback for code it has written.


The defensiveness is almost as interesting as the meeting itself.


Way too many people have tied their egos to the success of AI.


And too many people have their egos tied to its failure, too.

I’m a massive AI skeptic. If anyone were to be jumping up and down on the corpse of AI and this incessant drive to use it everywhere, it’d be me. But I also work at Amazon. I got the email. I attended the meeting. I can personally attest that there are no new requirements for AI-generated code. The articles about what was said in the meeting are extremely misleading, if not outright wrong. But instead of believing the person who was actually there in the room, this thread is full of people dismissing my first-hand account of the situation because it doesn’t align with the “haha AI failed” viewpoint.


Not just their egos, but their paychecks. This place is either going to get very quiet or really weird when the hype train derails and the AI bubble bursts.


So much projection these days in so many areas of life.


It’s not about perfectly architected code. It’s more about code that is factored in such a way that you can extend/tweak it without needing to keep the whole of the system in your head at all times.

It’s fascinating watching the sudden resurgence of interest in software architecture now that people are finding it helps LLMs move quickly. It has been similarly beneficial for humans all along. It’s not rocket science. It got maligned because it couldn’t be reduced to an npm package or a discrete process that anyone could follow.


Very well put.

I've always been interested in software architecture and upon graduating from university, I was shocked to see the 'Software Architect' title disappear. Software devs have been treating software architecture like phrenology or reading tea leaves.

But those who kept learning and refining their architecture skills during this time look at software very differently.

It's not like the industry has been making small, non-obvious mistakes; it's been making massive, glaringly obvious ones! Anticipating a reasonable range of future requirements in your code and adhering to the basic principles of high cohesion and loose coupling is really not that hard.

I'm taken aback whenever I hear someone treating software architecture as some elusive quest akin to 'finding Bigfoot'.


Which makes me wonder: how is serving static content at all nondeterministic?


Yet it still fumbles even when limiting context.

Asked it to spot-check a simple rate limiter I wrote in TS. Super basic algorithm: let at most one action through every 250 ms, sleeping if necessary. It found bogus errors in my code three times because it failed to see that I was using a mutex to prevent reentrancy. This was about 12 lines of code in total.
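
Roughly the shape of it, from memory (not the exact code; the inline promise-chain mutex here stands in for the actual lock I used):

    // A promise chain serves as the mutex so concurrent callers are
    // serialized; each caller then sleeps until at least 250 ms have
    // passed since the previous action went through.
    class RateLimiter {
      private last = 0;
      private lock: Promise<void> = Promise.resolve();

      async acquire(): Promise<void> {
        const prev = this.lock;
        let release!: () => void;
        this.lock = new Promise<void>((r) => (release = r));
        await prev; // wait for the previous caller to release
        const wait = this.last + 250 - Date.now();
        if (wait > 0) await new Promise((r) => setTimeout(r, wait));
        this.last = Date.now();
        release();
      }
    }

Callers just await acquire() before each action. Without the lock, two concurrent callers could both read a stale last and slip through together, which is exactly the reentrancy the mutex prevents.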

My rubber duck debugging session was insightful only because I had to reason through the lack of understanding on its part and argue with it.


Once you've gone through that, you might want to ask it to codify what it learned from you so you don't have to repeat it next time.


I would love to see that code.


Try again with gpt-5.3-codex xhigh.


Try again with Opus 4.5

Try again with Sonnet 4

Try again with GPT-4.1

Here I thought these things were supposed to be able to handle twelve lines of code, but they just get worse.


The goalposts have been moved so many times that they’re not even on the playing field.


Nahh, just trying to make it concrete. I could just ask which model they used instead.


If you think AGI is at hand why are you trying to sway a bunch of internet randos who don’t get it? :) Use those god-like powers to make the life you want while it’s still under the radar.


How do you take over the world if you have access to 1000 normal people? AGI, by the original definition (long forgotten by now), means surpassing the MEDIAN human at almost all tasks. How the rebranding of ASI into AGI happened without anyone noticing is kind of insane.


AGI is not a "godlike power" under this definition.

