Hacker News | danielbln's comments

There were no "dark ages"; that's the same kind of common-wisdom blunder as "in the Middle Ages everybody dressed in drab grey clothing, ate gruel, and waded through mountains of poop everywhere". It was a time of transition away from the slave-powered empire to decentralized kingdoms and ultimately the Europe of today. It was by no means a time of standstill.

As far as I can tell, the dark ages were called the dark ages because there wasn't much evidence to be found: writing was less prominent during that time.

> It was a time of transition away from the slave powered empire to decentralized kingdoms and ultimately the Europe of today.

You are viewing the fall of the western part of the Roman Empire through slightly rose-tinted glasses. Compare and contrast https://acoup.blog/2022/01/14/collections-rome-decline-and-f...


Yes, Europe did not have dark ages, it only had a period of population decline, fewer emissions, less building, fewer inventions, fewer records, and severed trade networks.

Population decline? Less emissions? Haven't we reached consensus that those would be welcome today? Is it time for a pro-dark-age movement?

The world is projected to hit population decline already sometime between 2060 and 2080, so I guess the younger ones of us will find out definitively whether it's a good or bad thing.

I am very sorry, but you are wrong. Between the fall of Rome (476 AD) and the Carolingian Empire (~800 AD) there was a period of not only standstill but regression, devolution, and forgetfulness. Compared with what came before, it can rightly be called the dark ages.

Open weight models that run under your desk are not frontier model level, but they are getting closer. Improvements in agentic post training and things like TurboQuant mean that even if all frontier labs pull the plug tomorrow, we will still have agents to work with.

TurboQuant is not a step change, it's more of a smaller incremental improvement to KV quantization, and possibly (unsure) to quantization more generally. I'm actually more positive about SSD weights offload, which opens up very large local models for slow inference (good enough for slow chat) to virtually any hardware or amount of RAM.
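To make the KV-quantization idea concrete, here is a minimal sketch of the general principle (this is plain symmetric int8 rounding, not TurboQuant's actual algorithm, whose details are not given here): each cached key/value tensor is stored as int8 plus a float scale, cutting memory roughly 4x versus float32 at the cost of a small rounding error.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization: x ≈ q * scale."""
    amax = float(np.abs(x).max())
    scale = amax / 127.0 if amax > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float tensor."""
    return q.astype(np.float32) * scale

# A toy stand-in for one KV-cache slice (heads x dims).
kv = np.random.randn(8, 64).astype(np.float32)
q, scale = quantize_int8(kv)
kv_hat = dequantize_int8(q, scale)
# Worst-case rounding error for in-range values is scale / 2.
err = float(np.abs(kv - kv_hat).max())
```

Real schemes refine this with per-channel or per-group scales and lower bit widths, but the memory/precision trade-off is the same.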

I'm definitely looking forward to that, as I really want people to control their own tools.

Every AI/agentic thread on HN follows the same tension: builders want to build and solve problems. Code or task completion are implementation details to be handled on the path to the actual prize: solving the problem. And then there are the coders, who have honed their mechanical skill of implementation and derive their intellectual fulfillment from it. The latter crowd has a rough time because much of it can be automated now; the former camp is happy because look at all the stuff that can now be built!

How often do humans make mistakes? That's the better comparison.

That's why I drive with 10 pounds of paper maps in my car. I won't have any of this new fangled GPS tech atrophy my map reading skills that I've honed so much.

If you're carrying ten pounds of paper maps, you're doing no GPS / no digital maps navigation wrong.

Next up: scientists are creating the everything vaccine for dogs and cats.

What nepobaby are you talking about?

AI is the world's biggest nepo_hire_ (sorry, not nepobaby).

You don't have to continually tell it, you tell it once, persist it as convention and move on with your life.
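As an illustration of "persist it as convention": a common pattern is a project-level instructions file that the agent reads on every run, so the correction is stated once and applies from then on. This is a hypothetical example of such a file (the exact filename and sections depend on the tool you use):

```markdown
# Project conventions (read by the coding agent on every run)

- Use the project's existing logger; never add print statements.
- All new endpoints need a test in tests/ before the change is done.
- Prefer small, focused diffs; do not reformat unrelated files.
```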

It also went real fast from "GPT hallucinated a library, literally useless" to "this agent has created this entire service up to spec, no notes".

It seems that we are getting bitten by the law that says things that can be measured trump things that cannot.

How quickly an initial version of a piece of software was created can be easily measured.

But how efficient it is, how easy it is to make changes to it, how easy it is to debug, how easy it is to extend in the direction the domain requires: none of these can be easily measured or quantified, yet they are ten times more important than that initial creation time. For software that has to run and be maintained for decades, delivering value all that while, it does not really matter whether the initial version was created in five minutes or one month, provided the five-minute version does not detract from all those non-measurable, non-marketable traits of the software.

It is like how camera marketing mostly revolved around the megapixel count instead of something vastly more important like low-light performance, dynamic range, or fast autofocus, because the lowest common denominator of the market would not grasp the relevance and would not act on it. So it was all about megapixels. At least that did not have many negative consequences, unlike the marketing around AI...


I said nothing about speed, I said to spec. Speed is a welcome side effect.

LLMs also make the cynicism go up among the HN crowd.

Hm. Is HN starting to become more skeptical of LLMs? For the past couple of years, HN has seemed worryingly enthusiastic about LLMs.

How so? Half the people here have LLM delusion in every thread posted here; more than half of what reaches the frontpage is AI. Just look at the hours when Americans are awake.

Fucking Americans. Only 4% of the world population, yet somehow disproportionately dominating the global news headlines that make their way here.

It’s impressive, honestly.

