Wobbles42's comments | Hacker News

I don't really see that happening here.

Microsoft doesn't have any trust to lose, and they won't be gaining any by this move.

That is the one advantage they have in all of this. Their public image is as bad as it can get.


> they won't be gaining any by this move.

Then why even do it?


I keep hearing "this is the worst it's going to be" as if we can expect a monotonic increase in quality and value generation.

Meanwhile, search was better in the past and is at this point the best it's going to be.

Enshittification comes for all things.


This only remains true so long as open weight models lack significant utility.

Access to compilers was almost as controlled as access to LLMs is now, prior to the GNU toolchain and Linux putting a C compiler and a Unix(ish) machine in the hands of anyone who cared to have one.


The problem is compute and memory. I think OpenAI bought RAM supply mainly to choke the ability of consumer hardware to run open weight models (that hit the memory bottleneck before other constraints). Now there's a shortage in other components as well. I don't see how local AI can compete in usefulness.


Perhaps there will be a lot more people who can write software as well as I can weld metal.

Welding is peculiar in that becoming a professional welder takes a great deal of time and effort (and probably some talent that I don't have), but becoming a terrible welder can be accomplished by anyone in a couple of weekends, and there is great utility in being a terrible welder. Well worth the investment of a couple hundred dollars and a couple weekends.

With LLMs there is now much more utility in being a terrible programmer too. A couple of weekends yields real return on the effort now.


It's an extension of pretending that developer productivity can be measured in lines of code per day, as well as the managerial blindness to the fact that code can have negative value.


This will be a hard argument to make.

The decision makers who are the target audience for these metrics value "objective" data. They value the appearance of being quantitative, but lack the intellectual tools to distinguish between quantitative science and pseudoscience with numbers bolted on.

That's modern bureaucracy in a nutshell.


All of academic publishing has fallen victim to Goodhart's law.

Our metrics for judging the quality of academic information are also the metrics for deciding the success of an academic's career. They are destined to be gamed.

We either need to turn peer review into an adversarial system where the reviewer has explicit incentives to find flaws and can advance their career by doing it well, or else we need totally different metrics for judging publications (which will probably need to evolve continuously).

We assume far too much good faith in this space.


I wonder if "published," as a binary distinction applied to a piece of writing, is a concept that is reaching the end of its useful life.

"Peer reviewed" as a binary concept might be as well, given that incentives have aligned to greatly reduce its filtering power.

They might both be examples of metrics that became useless as a result of incentives getting attached to them.


Both metrics are supposedly binary but in reality have always depended heavily on surrounding context. Archival journals have existed all along. Publication is useful as an immutable entry in the public record made via a third party. Blog posts have a tendency to disappear over time.


"Steam" is very definitely the gas phase of water. Water vapor is too. If we are talking about chemistry they are essentially synonyms.

If we are talking engineering, the term steam generally implies water vapor that is at or above the saturation temperature.
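As a rough sketch of that engineering distinction, the Antoine equation (a standard empirical fit, not anything from the comment itself) gives water's saturation temperature at a given pressure; vapor at or above that temperature would count as "steam" in this sense. The constants below are the commonly tabulated values for water between roughly 1 and 100 °C, and the function names are just illustrative:

```python
import math

# Antoine equation constants for water (P in mmHg, T in degC),
# valid roughly over the 1-100 degC range.
A, B, C = 8.07131, 1730.63, 233.426

def saturation_temp_c(p_mmhg: float) -> float:
    """Saturation (boiling) temperature of water at the given pressure,
    from the Antoine fit: log10(P) = A - B / (C + T)."""
    return B / (A - math.log10(p_mmhg)) - C

def is_steam(temp_c: float, p_mmhg: float = 760.0) -> bool:
    """Engineering-sense 'steam': water vapor at or above saturation
    temperature for the ambient pressure."""
    return temp_c >= saturation_temp_c(p_mmhg)
```

At atmospheric pressure (760 mmHg) the fit recovers a saturation temperature of about 100 °C, so 120 °C vapor qualifies as steam while 25 °C water vapor does not.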

In everyday usage, people are usually drawing a distinction between visible and invisible water vapor, where visibility comes from suspended liquid droplets, with "steam" being essentially "fog," but hotter.


Do you think this comes from a gradual internalization of a real linguistic concept? Or is it more a familiarity with the common (if unspoken) conventions of the puzzle makers?

I suspect the answer isn't binary, but it's interesting to think about.

This "sixth sense" phenomenon seems to pop up a lot. Crosswords are a great example. The sense some people are getting for detecting LLM output might be another.

