Hacker News | dehsge's comments

Compilers can never verify that non-trivial programs are error free. This follows from Rice's theorem: every non-trivial semantic property of programs is undecidable. It's one of the reasons we have observability/telemetry as well as tests.
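A minimal Python sketch of the standard reduction behind Rice's theorem (the names `build_q` and `f_with_property` are illustrative, not from any library): if you could decide any non-trivial semantic property of programs, you could solve the halting problem.

```python
# Rice's theorem reduction sketch: given a program p and input x, build a
# program q that first runs p(x), then behaves like a function known to
# have the property in question.
def build_q(p, x, f_with_property):
    def q(y):
        p(x)                      # if p(x) never halts, q computes nothing
        return f_with_property(y)
    return q

# If p(x) halts, q is extensionally equal to f_with_property, so q has the
# property; if p(x) diverges, q computes the empty function and does not.
# A decider for the property would therefore decide whether p halts on x.

# Concrete check of the halting case only:
halting_p = lambda x: x + 1       # halts on every input
doubles = lambda y: 2 * y         # stand-in for "a function with the property"
q = build_q(halting_p, 0, doubles)
print(q(21))  # 42 — q behaves exactly like `doubles` because p(0) halted
```

Only the halting branch can ever be demonstrated by running code; the diverging branch is exactly what no decider can handle.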


That's fine, but this also applies to human-written code, and human-written code has even more variance depending on skill and experience.


There are some numbers that are uncomputable in Lean. You can approximate them in Lean; however, those approximations may still be wrong. Lean's `noncomputable` machinery is very interesting.
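A small Lean 4 sketch of the point, assuming Mathlib is available for the classical reals (the `import Mathlib` line and the name `someReal` are assumptions for illustration):

```lean
import Mathlib  -- assumed: Mathlib provides the classical real numbers

-- Most reals have no algorithm that produces their digits, and Lean
-- reflects this: a definition built from classical choice must be marked
-- `noncomputable`, and `#eval` refuses to run it.
noncomputable def someReal : ℝ := Classical.choice inferInstance

-- Approximations are computable but can be wrong in the low bits:
#eval Float.sqrt 2.0  -- a floating-point rounding, not √2 itself
```

The kernel happily reasons *about* `someReal`; it just can't execute it, which is the gap between a number existing and being computable.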


Most math books do not provide solutions. Outside of calculus, advanced mathematics solutions are often left as an exercise for the reader.


The ones I used for the first couple of years of my math PhD had solutions. That's a sufficient level of "advanced" to be applicable in this analogy. It doesn't really matter though - the point still stands that _if_ solutions are available you don't have to use them and doing so will hurt your learning of foundational knowledge.


There are other bounds at play here that are often not talked about.

AI runs on computers. Consider the undecidability from Rice's theorem: whether compiled code satisfies any non-trivial semantic property is undecidable, so even an AI can't guarantee its compiled code is error free. Not because it couldn't write code that solves a problem, but because the code it writes is bounded by these externalities. Undecidability in general makes the dream of generative AI considerably more challenging than how it's being sold.


LLMs are bounded by the same limits computers are. They run on computers, so a prime example of a limitation is Rice's theorem: any 'AI' that writes code is unable (just like humans) to determine, in general, whether its output is or is not error free.

This means a multi-agent workflow that writes code without a human in the loop may or may not produce error-free output.

LLMs are also bounded by runtime complexity. Could an LLM find the shortest Hamiltonian path between two fixed cities in polynomial time?
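For context on why that question is loaded: the only generally known exact approaches to shortest Hamiltonian path take super-polynomial time. A brute-force sketch (function name and the example distance matrix are mine, for illustration) that tries all (n-2)! orderings of the intermediate cities:

```python
from itertools import permutations

def shortest_hamiltonian_path(dist, start, end):
    """Exact shortest path visiting every city once, from start to end.

    Tries every ordering of the intermediate cities, so the running time
    grows factorially with the number of cities — fine for n = 10,
    hopeless for n = 100.
    """
    n = len(dist)
    middle = [v for v in range(n) if v not in (start, end)]
    best = float("inf")
    for perm in permutations(middle):
        route = [start, *perm, end]
        cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
        best = min(best, cost)
    return best

# Symmetric distance matrix for 4 cities:
dist = [
    [0, 2, 9, 1],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [1, 4, 3, 0],
]
print(shortest_hamiltonian_path(dist, 0, 2))  # 9, via the route 0 → 1 → 3 → 2
```

An LLM can of course emit this code, but emitting it doesn't make the underlying problem polynomial; the model inherits the same complexity bounds as any other program.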

LLMs are also bounded by their model context: could an LLM create and use a new language with no context for it in its model?


There may still be some variance at temperature 0, and the output code could still have errors. LLMs are still bounded by the undecidable problems of computability theory, like Rice's theorem.


LLMs and their output are bounded by Rice's theorem. This is not going to ensure correctness; it's just going to validate that the model can produce a result whose correctness is undecidable.


Errr, checking correctness of proofs is decidable.
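Right: in a system like Lean, verifying a finished proof is just type checking, which terminates. A minimal example (the theorem name is mine):

```lean
-- The kernel checks this proof term mechanically and in finite time.
-- The undecidable part is *finding* proofs in general, not checking them.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

So Rice's theorem bounds what can be decided about arbitrary programs, but it doesn't stop a checker from accepting or rejecting a given proof object.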

