Hacker News | measurablefunc's comments

This is similar to AWS & their Graviton VMs.

The author does not exist & the paper is pure nonsense: https://scholar.google.com/citations?user=G97KxEYAAAAJ&hl=en. It might even be a psyop by some three-letter agency. So the obvious question: why did you post this?

Sorry for the confusion; the author's name may not have an active record on Scholar, but I wanted to share it here because I read the paper and found it interesting.

All traffic is monitored, all signal sources are eventually incorporated into the training set in one way or another. The person you're responding to is correct, even a single API call to any AI provider is sufficient to discount future results from the same provider.

You live in a conspiracy world. Those AI providers don't update their models that fast. You can try asking them to solve ARC-AGI-3 without a harness yourself and watch them struggle just as they did yesterday.

Which part is the conspiracy? Be as concrete as possible.

ok! So if someone uses an existing, checkpointed, open-source model, then the answer is yes: the results are valid, and it doesn't matter that the tests are public.

Yes, assuming the checkpoint was before the announcement & public availability of the test set.

That's great but how about UltraAgents: Meta-referential meta-improving self-referential hyperagents?

AGI-MegaAgent 5.7 Pro Ultra

Somehow still financed w/ ads & ubiquitous surveillance.

It's "vibe" research. Most of it is basically pure nonsense.

Care to elaborate?

The headline theorem, "every sigmoid transformer is a Bayesian network," is proved by `rfl` [1]. For non-Lean people: `rfl` means "both sides are the same expression." He defines a transformer forward pass, then defines a belief-propagation (BP) forward pass with the same operations, wraps the weights in a struct called `implicitGraph`, and Lean confirms they match. They match because he wrote them to match.
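For readers without Lean, here is a hypothetical minimal analogue of that pattern (my own toy code, not the paper's — the names are made up):

```lean
-- Two functions defined by the literally identical expression:
def transformerStep (w x : Nat) : Nat := w * x + 1
def bpStep (w x : Nat) : Nat := w * x + 1

-- "Proved" by `rfl` only because both sides unfold to the same term.
theorem transformer_is_bp (w x : Nat) :
    transformerStep w x = bpStep w x := rfl
```

Lean accepts this, but the theorem carries no information beyond "I wrote the same thing twice."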

The repo with a real transformer model (transformer-bp-lean) has 22 axioms and 7 theorems. In Lean, an axiom is something you state without proving. The system takes your word for it. Here the axioms aren't background math, they're the paper's claims:

- "The FFN computes the Bayesian update" [2]. Axiom.

- "Attention routes neighbors correctly" [3]. Axiom.

- "BP converges" [4]. Axiom, with a comment saying it's "not provable in general."

- The no-hallucination corollary [5]. Axiom.

The paper says "formally verified against standard mathematical axioms" about all of these. They are not verified. They are assumed.

The README suggests running `grep -r "sorry"` and finding nothing as proof the code is complete. In Lean, `sorry` means "I haven't proved this" and throws a compiler warning. `axiom` also means "I haven't proved this" but doesn't warn. So the grep returns clean while 22 claims sit unproved. Meanwhile the godel repo has 4 actual sorries [6] anyway, including "logit and sigmoid are inverses," which the paper treats as proven. That same fact appears as an axiom in the other repo [7]. Same hole, two repos, two different ways to leave it open.
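A hypothetical two-line illustration of the difference (my own toy code, not from either repo; the claim names are invented):

```lean
-- Both leave the claim unproved. Only the second produces a compiler
-- warning and shows up in `grep -r "sorry"`.
axiom bp_converges : ∀ n : Nat, n ≤ n + 1                -- silent: taken on faith
theorem bp_converges' : ∀ n : Nat, n ≤ n + 1 := sorry    -- warns: declared unproved
```

So a clean `grep` for `sorry` says nothing about how much is actually proved; you'd have to grep for `axiom` too.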

Final count across all five repos: 65 axioms, 5 sorries, 149 theorems.

Claude (credited on page 1) turned it into "Not an approximation of it. Not an analogy to it. The computation is belief propagation." Building to a 2-variable toy experiment on 5K parameters presented as the fulfillment of Leibniz's 350-year-old dream. Ending signed by "Calculemus."

[1] https://github.com/gregorycoppola/sigmoid-transformer-lean/b...

[2] https://github.com/gregorycoppola/transformer-bp-lean/blob/7...

[3] https://github.com/gregorycoppola/transformer-bp-lean/blob/7...

[4] https://github.com/gregorycoppola/transformer-bp-lean/blob/7...

[5] https://github.com/gregorycoppola/transformer-bp-lean/blob/7...

[6] https://github.com/gregorycoppola/godel/blob/bc1d138/Godel/O...

[7] https://github.com/gregorycoppola/sigmoid-transformer-lean/b...


Thanks for writing such an elaborate reply! I wish I was familiar with Lean, so I could follow. But if you're right, it would put the claims of the paper in a totally different light.

Perhaps others with knowledge in Lean could also chime in?


Doubtful:

- the article is two days old

- I gave links right to the code

- it's clearly a waste of time if you know Lean; I went way above and beyond already

Maybe if you were able to show "no, actually, > 0 of this is well founded," someone might be tempted. But you'd need someone who shows up days later, knows enough Lean to validate it for you, yet not enough Lean to know it's a joke just from looking at the links.

You're welcome! Don't mean to be mean (pun intended); hope you don't read it that way. I just figured it'd give you some food for thought re: exactly how much work you can expect from other people, and that you might need to set more constraints on an "idk, can someone else tell me more?" reaction than "one person said something, but someone else said they're wrong, so the score is 1 to 1."


Thanks again - this time I have to admit I really don't get what you're trying to say?!

Sorry, I was unclear!

You said you wished someone with Lean knowledge could check my work. I'm saying: you can check it yourself, right now, without knowing Lean.

Click any of links [2] through [5]. You'll see the word `axiom` in the code. In Lean, `axiom` means "assume this is true without proof." That's it. That's the whole critique. The paper says "formally verified," but the key claims — FFN computes Bayesian update, attention routes correctly, BP converges, no hallucination — are all just assumed.

You don't need to take my word for it, and you don't need a Lean expert to break a tie. The evidence is right there in the links. It'd be like a paper claiming "we formally proved this bridge is safe" and the proof says "Axiom: this bridge is safe." You don't need a civil engineer to tell you that's not a proof.


I suspect it means it's LLM-generated without being checked.

There is nothing continuous on a computer; it's all bit strings & Boolean arithmetic. The semantics imposed on the bit strings does not exist anywhere in the arithmetic operations, i.e. there is no arithmetic operation corresponding to something as simple as the color red.

It sounds like you're saying that if a computer had infinite precision then hallucinations would not occur?

The way neural networks work is that the base neural network is embedded in a sampling loop, i.e. a query is fed into the network & the driver samples output tokens to append to the query so that it can be re-fed back into the network (q → nn → [a, b, c, ...] → q + sample([a, b, c, ...])). There is no way to avoid hallucinations b/c hallucinations are how the entire network works at the implementation level. The precision makes no difference b/c the arithmetic operations are semantically void & only become meaningful after they are interpreted by someone who knows to associate 1 w/ red, 2 w/ blue, 3 w/ clouds, & so on & so forth. The mapping between the numbers & concepts does not exist in the arithmetic.
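The loop described there can be sketched in a few lines (a toy stand-in, not any provider's actual code; `toy_model` and its weights are invented for illustration):

```python
import random

def sample_loop(model, query, steps, rng):
    # q -> nn -> [a, b, c, ...] -> q + sample([a, b, c, ...]), repeated.
    # `model` is any function mapping a token sequence to per-token weights.
    seq = list(query)
    for _ in range(steps):
        weights = model(seq)  # forward pass: sequence -> score per token
        token = rng.choices(range(len(weights)), weights=weights)[0]
        seq.append(token)     # append the sampled token & re-feed
    return seq

# Hypothetical stand-in "network": prefers the successor of the last token mod 4.
def toy_model(seq):
    return [1.0 if t == (seq[-1] + 1) % 4 else 0.1 for t in range(4)]

out = sample_loop(toy_model, [0], steps=3, rng=random.Random(0))
print(out)
```

Every output token here is produced the same way: by sampling from a distribution. There is no separate "hallucination mode" in the loop.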

Oh, I thought that the embedding space of the residual stream was precisely that.

The arithmetic is meaningless, it doesn't matter what you call it b/c on the computer it's all bit strings & boolean arithmetic. You can call some sequence of operations residual & others embeddings but that is all imposed top-down. There is nothing in the arithmetic that indicates it is somehow special & corresponds to embeddings or residuals.

Ah ok, so if we had such a mapping then models wouldn't hallucinate?

Maybe it's better if you define the terms, b/c what I mean by hallucination is that the arithmetic operations + sampling mean that it's all hallucinations. The output is a trajectory of a probabilistic computation over some set of symbols (0s & 1s). Those symbols are meaningless; the only reason they have meaning is b/c everyone has agreed that the number 97 is the ASCII code for "a" & every conformant text processor w/ a conformant video adapter will convert 97 (0b1100001) into the display pattern for the letter "a".
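To make the 97 ↔ "a" point concrete (a trivial sketch, nothing here is from the paper):

```python
# 0b1100001 is just the integer 97. Nothing in the bits says "letter";
# "a" only appears once the ASCII convention is applied at decode time.
code = 0b1100001
print(code)                           # -> 97
print(chr(code))                      # -> a  (Unicode/ASCII mapping applied)
print(bytes([code]).decode("ascii"))  # -> a  (same convention, via bytes)
```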

So kind of like if you flip a coin, the sampling means the heads or tails you get isn't real?

It's when you define heads or tails however you want & then tell me you have objective semantics for each side of the coin when all you've really done is established a convention about which side is which. The coin is real, what you call each side is a convention & what semantics you attach to a sequence of flips is also a convention that has nothing to do with the reality of the coin.

I'm struggling to differentiate that from how we use coinflips normally. We can pretty easily create arbitrary mappings and then sample from the binomial in a way that has meaning far beyond just heads or tails. Maybe I'm not quite understanding.

Which part are you confused about? Symbols are meaningless until someone imposes semantics on them. There is nothing meaningful about arithmetic in a neural network other than whatever conventions are imposed on the binary sequences, the same way 97 has no meaning other than the conventional agreement that it is the ASCII code point for "a".

I guess I don't get the main idea. Chemical reactions in our brains are semantically void and yet we're able to use it as substrate for thinking.

This has nothing to do with chemical reactions. The discussion was about symbols and arithmetic. But in any event, this discussion has run its course so good luck: https://chatgpt.com/s/t_69c473e1f71c8191a4ed1e3e2dbdef83

Yes! Excellent example of an ungrounded response, a hallucination.

Also a demonstration of your rhetoric.

> The semantics imposed on the bit strings does not exist anywhere in the arithmetic operations,

Correct, the semantics actually lives in the network of relations between the nodes. That has been one of the major lessons of LLMs, and it validates the systems reply to Searle's Chinese Room.


I guess this means AI researchers should be out of jobs very soon.

The coming super el nino cycle is going to be very interesting.

All of them. It's not like AI companies have managed to fix the security issues since last time they promised they had fixed all the hallucinations & accidental database deletions.

You know it’s open source code, right?

It's literally a loop that wraps APIs from AI providers. Go ahead & explain how an open source AI wrapper fixes security holes inherent in existing AI.

do you think anybody has actually read all 700k lines of the AI-generated code?

Not even LLMs can read that.

I asked various models to list the configuration options of OpenClaw and none of them could make heads or tails of it.


This looks like another avant-garde "art" project.
