Ha! Totally agree. Used it in my emails to OpenAI in the early days of GPT-3, hoping they’d adopt the term. I knew it from psychology, where it refers to people giving rational-sounding explanations for something that isn’t actually there (specifically in split-brain patients).
I guess “hallucinate” stuck because it works across all disciplines: text, audio, vision…
I feel like it had something to do with DeepDream: in the popular consciousness, tripping on acid / hallucinating became something that computers are surprisingly good at, and maybe that association transferred to text models.