Possibly akin to a roofer taking a shortcut up there, then taking a spill? You knew better, but you let the fact that you could probably get away with it, with zero impact, make the decision for you.
IIRC the hallucinations were essentially kicked off by user error in the first place. Or, to put it more carefully: a journalist using the best available technology should have been able to reduce the chance of an issue this big to near zero, even with language models in the loop and without human review.
(e.g. imagine Karpathy’s llm-council with extra harnessing/scripting, so even MORE expensive, but still. Or some RegEx!)
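To make the RegEx suggestion concrete, here's a minimal sketch (the function name, the length threshold, and the example strings are all my own invention) of flagging any quoted passage in a draft that doesn't appear verbatim in the source material:

```python
import re

def unverified_quotes(article: str, sources: list[str]) -> list[str]:
    # Pull out everything inside double quotes in the draft
    # (ignoring very short fragments like "no" or "yes").
    quotes = re.findall(r'"([^"]{10,})"', article)
    # Normalize whitespace so line wrapping doesn't cause false flags.
    corpus = " ".join(" ".join(s.split()) for s in sources)
    # Keep only the quotes that do NOT appear verbatim in the sources.
    return [q for q in quotes if " ".join(q.split()) not in corpus]

transcript = 'He said "we never agreed to those terms" during the call.'
draft = ('The CEO stated "we never agreed to those terms" '
         'and "this deal is dead on arrival".')
print(unverified_quotes(draft, [transcript]))
# The second quote is flagged: it appears nowhere in the transcript.
```

Obviously this only catches fabricated direct quotes, not paraphrased claims, but it's the kind of near-zero-cost check that makes this class of error avoidable.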
Are we talking about the same guy who purportedly never even read an article published under his own byline, one containing multiple insulting false quotes, and was summarily dismissed for it?
Mind you, I’m not alleging malice, but they chose to blame AI rather than brain fog, bad notes, an accidental mistranscription, or any other human error that would make us look down on them more.
Right now, AI errors like this are excusable. Soon they won’t be.