Hacker News

>his whole job

Possibly akin to a roofer taking a shortcut up there, then taking a spill? You knew better, but unfortunately let the fact that you could probably get away with it decide for you.

IIRC the hallucinations were essentially kicked off by user error. Or, to put it more charitably: a journalist using the best available tooling should have been able to reduce the chance of an issue this big to near zero, even with language models in the loop and without human review.

(e.g. imagine Karpathy’s llm-council with extra harnessing/scripting, so even MORE expensive, but still. Or some RegEx!)
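The regex idea really could be that simple: extract every quoted string from the draft and check that it appears verbatim in the source transcript. A minimal sketch (the function name and sample strings are mine, not from the thread):

```python
import re

def unverified_quotes(draft: str, transcript: str) -> list[str]:
    # Pull out everything between double quotes in the draft.
    quotes = re.findall(r'"([^"]+)"', draft)
    # Flag any quote that does not appear verbatim in the transcript.
    return [q for q in quotes if q not in transcript]

transcript = 'He said "the project is on schedule" and left.'
draft = ('The manager claimed "the project is doomed" '
         'and "the project is on schedule".')
print(unverified_quotes(draft, transcript))
# flags only "the project is doomed"
```

A verbatim check like this wouldn't catch paraphrased misquotes, but it would have flagged fully fabricated quotes for free.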




Alternatively… there was no AI error, the reporter made up the quotes, and lied when they were challenged.

The chance that the very first time AI was used it screwed up and was caught is pretty low.

It’s likely been used before but nobody got caught.


Are you familiar with the reporter's work & reputation?

Are we talking about the same guy who purportedly never even read an article that carried his name and contained multiple insulting false quotes, and was summarily dismissed?

Mind you, I’m not alleging malice, but they chose to blame AI rather than brain fog, bad notes, an accidental transcription error, or any other human error that would make us look down on them more.

Right now, AI errors like this are excusable. Soon they won’t be.



