
Too annoying; didn’t read.

It's already out of date because it makes no sense. If it's true that superficial signals of quality were once somehow good enough to keep the entire economy on the rails (it's not true), then surely you can have an LLM look at a given piece of work and extract comparably useful signals of quality or effort.

> If it's true that the superficial signals of quality were once somehow good enough to keep the entire economy on the rails (it's not true)

It was true. The negative signals (we called them "code smells") weren't the be-all and end-all of reviews; they indicated to the reviewer where to spend more effort. That got us 90% of the benefit of an in-depth review with 10% of the effort. But with LLMs eliminating those signals, we now have to spend full effort on everything, which takes a lot more time and energy overall.


I think it’s true that we were able to establish trust and produce good work without verifying every detail — what I’m suggesting is that signals of that kind were not a very important factor. And code smells still work!

Why is AGI required to make the investments work out?

With AGI we expect a huge return on investment and GDP growth accelerating at a rate we can't even comprehend. Imagine an algorithm that improves itself with each iteration and finds ways to increase its capacity every day. Robots suddenly capable of doing the dishes, grocery shopping, and picking produce in the field. Imagine all your ailments handled... age becomes just a number.

Also with AGI we expect a winner-take-all situation. The first AGI system would protect itself against any other AGI system, which is why it's go time for all these AI companies and why they have stopped sharing their research.


This does not answer my question.

It does. If AGI is achieved, OpenAI or whichever frontier AI lab gets there first will effectively become a mega-company with the ability to do anything.

A chatbot is not the long term goal for any of these companies.


My question is why AGI is required for these companies to be viable, i.e., why these companies cannot be viable in the case where AGI is not achieved. A response about what happens when AGI is achieved does not address that.

SOTA models on medium are probably still better than free or cheap models, but you should experiment.

I recommend people look at the actual study and think about how representative the subjects are, the tasks involved (SAT essay writing), and the way the LLMs are being used.

https://arxiv.org/abs/2506.08872

To be concrete, this is taking a task in isolation that LLMs can do much better than humans (writing garbage essays) and using LLMs to do that task. In the real world, tasks have parts and they exist in a larger context. When we use LLMs for one part of a task, there are other things we're doing that the LLM is not helping with. If you compared people doing arithmetic by hand and with a calculator, you would also see very big differences in how active their brains are. But it's not anyone's job to add up numbers. Adding up numbers is a subtask of a subtask in someone's job.


He wasn’t predicting slop; he was describing mass culture, which already existed when he was writing.

LLMs can tell you exactly how to acquire the materials and how to carry out the manufacturing. They might even come up with novel formulations that rely on substances that are easier to get. There might be information about this stuff online, but LLMs are much better than random idiots at adapting that information to their actual situation.

On top of LLMs reducing the cost/difficulty, the other reason biological and chemical weapons are such a worry is their asymmetric character — they are much much easier and cheaper to produce and deploy than they are to defend against.


I've never seen an argument like this that, if true, wouldn't also apply to the cognitive offloading we do by relying on culture, by working with others, or by relying on artifacts built by others.

Cognitive offloading via culture has many forms and many of them are not sustainable at all.

Sure! I don't mean they're all good. I just mean that it can't be cognitive offloading itself that is the problem, but the particular character of it.

It seems like all of your comments are like this. Consider stopping that!

Wonderful post and I will be taking inspiration from it. Surprised not to see TypeSpec https://typespec.io/ mentioned, which is a TypeScript-like schema language that I like to describe as "what if OpenAPI was good". I'm guessing they considered it and decided building their own would be both simpler and more flexible. The cost of BYO has come down a lot thanks to agents.


Love TypeSpec, agree it makes writing OpenAPI really easy.

But I’ve moved to using https://aep.dev style APIs as much as possible (sometimes written with TypeSpec), because the consistency lets you use the prebaked aepcli, or very easily write your own, since everything behaves like known “resources” with a consistent pattern.

Also, Terraform works out of the box, with no need to write a provider.
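The appeal, roughly (a hypothetical TypeSpec sketch of the resource-plus-standard-methods shape; this is my illustration, not the actual AEP spec, and the Book resource is made up):

    import "@typespec/http";
    using TypeSpec.Http;

    model Book {
      id: string;
      title: string;
    }

    // Every resource exposes the same standard methods at the same
    // paths, so a generic client like aepcli can drive all of them.
    @route("/books")
    interface Books {
      @get list(): Book[];
      @get read(@path id: string): Book;
      @post create(@body book: Book): Book;
      @patch update(@path id: string, @body book: Book): Book;
      @delete remove(@path id: string): void;
    }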


Which parts of OpenAPI does it fix?


Looking at the first example, it's far less verbose. Although it seems suspiciously minimal: I can't even tell from a single .tsp route definition what response content type to expect (most likely application/json is the default).


I would guess that is defined by the `@route` decorator.


It’s actually human-readable, it has generics, and it supports sum and product types in a much more natural way. There’s a lot more; that’s just off the top of my head.
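A minimal sketch of what that looks like (loosely modeled on the getting-started examples in the TypeSpec docs; Widget and Page are invented here):

    import "@typespec/http";
    using TypeSpec.Http;

    // A product type (record) with an inline sum type (union).
    model Widget {
      id: string;
      color: "red" | "blue";
    }

    // Generics: a reusable wrapper for paginated list responses.
    model Page<Item> {
      items: Item[];
      nextPageToken?: string;
    }

    // @route sets the path prefix; @path parameters extend it.
    @route("/widgets")
    interface Widgets {
      @get list(): Page<Widget>;
      @get read(@path id: string): Widget;
    }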


OpenAPI is an exchange format; it quickly becomes too verbose and repetitive, and it's only OK when auto-generated and consumed by tooling.
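For contrast, a sketch of what a single trivial endpoint costs in handwritten OpenAPI (the widget schema here is invented):

    openapi: 3.0.3
    info:
      title: Widget Service
      version: 1.0.0
    paths:
      /widgets/{id}:
        get:
          parameters:
            - name: id
              in: path
              required: true
              schema: { type: string }
          responses:
            "200":
              description: A single widget
              content:
                application/json:
                  schema:
                    $ref: "#/components/schemas/Widget"
    components:
      schemas:
        Widget:
          type: object
          properties:
            id: { type: string }
            color: { type: string, enum: [red, blue] }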
