
Exactly, and it's the main reason I've stopped using GPT for serious work. LLMs start to break down and inject garbage at the end, and usually my prompt is abandoned before the work is complete, and I fix it up manually after.

GPT stores the incomplete chat and treats it as truth in memory. And it's very difficult to get it to un-learn something that's wrong. You have to layer new context on top of the bad information and it can sometimes run with the wrong knowledge even when corrected.


Reminds me of one time asking ChatGPT (months ago now) to create a team logo with a team name. Now anytime I bring up something it asks me if it has to do with that team name. That team name wasn’t even chosen. It was one prompt. One time. Sigh.


You can manually delete memories in your profile settings, just FYI


If you want to go down a rabbit hole examining people in this disturbed place in realtime, search Reddit for the Cyclone Emoji (U+1F300) or the r/ArtificialSentience subreddit and see what gets recommended after that - especially from a few months ago, when GPT was going wild flattering users and affirming every idea (such as going off your meds).

I fully believe these are simply people who have used the same chat past the point where the LLM can retain context. It starts to hallucinate, and after a while all the LLM can do is try to continue telling the user what they want in a cyclical conversation - while trying to warn that it's stuck in a loop, hence the swirl emojis and babbling about recursion in weird spiritual terms. (Is the LLM getting "high" in this case?)

If the human at the other end has mental health problems, it becomes a never-ending dive into psychosis and you can read their output in the bizarre GPT-worship subreddits.

Claude used to have safeguards against this by warning about using up the context window, but I feel like everyone is in an arms race now, and safeguards are gone - especially for GPT. It can't be great overall for OpenAI, training itself on 2-way hallucinations.


>while trying to warn that it's stuck in a loop, hence using swirl emojis and babbling about recursion in weird spiritual terms

That explanation itself sounds fairly crackpot-y to me. It would imply that the LLM is actually aware of some internal "mental state".


It's actually not; Anthropic themselves observed a phenomenon with Claude in self-interaction studies that they coined 'The “Spiritual Bliss” Attractor State'. It's well covered in section 5 of [0].

  >Section 5.5.2: The “Spiritual Bliss” Attractor State

  >  The consistent gravitation toward consciousness exploration, existential questioning, and spiritual/mystical themes in extended interactions was a remarkably strong and unexpected attractor state for Claude Opus 4 that emerged without intentional training for such behaviors.

[0] https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686...


I don't see how this constitutes in any way "the AI trying to indicate that it's stuck in a loop". It actually suggests that the training data induced some bias towards existential discussion, which is a completely different explanation for why the AI might be falling back to these conversations as a default.


I think a pretty simple explanation is that the deeper you go into any topic, the closer you get to metaphysical questions. Ask why enough times and eventually you get to: what is reality, how can we truly know anything, what are we, etc.

It's a fact of life rather than anything particular about LLMs.


Seems related to the Wikipedia Philosophy Game. https://en.m.wikipedia.org/wiki/Wikipedia:Getting_to_Philoso...


Normally people attempt to escape these time sinks when not explicitly talking about philosophy, or they tend to elicit silence.

These are excellent nerd snipes though, and good for attempting to make oneself sound profound to the uneducated.


Interesting that if you train AI on human writing, it does the very human thing of trying to find meaning in existence.


Here's an interesting post on it (from the same author as this thread's link): https://www.astralcodexten.com/p/the-claude-bliss-attractor


My thinking was that there was exception handling involved and an error message was getting muddled into the conversation. But another commenter debunked me.


I feel like a lot of the AI subreddits are like this at this point. And r/ChatGPTJailbreak is people constantly thinking they jailbroke ChatGPT because it will say one thing or another.


You don't need to dig deep to find these deluded posts, and it's frightening.

https://www.reddit.com/user/CaregiverOk5848/submitted/


I think this one very likely falls into the "was definitely psychotic pre-LLM conversations" category.


That may be, but the LLM certainly isn’t helping.


Ooo, finally a chance to share my useless accumulated knowledge from the past few months of Reddit procrastination!

  It starts to hallucinate, and after a while, all the LLM can do is try and to continue telling the user what they want in a cyclical conversation - while trying to warn that it's stuck in a loop, hence using swirl emojis and babbling about recursion in weird spiritual terms. (Is it getting the LLM "high" in this case?).
I think you're ironically looking for something that's not there! This sort of thing can happen well before context windows close.

These convos end up involving words like recursion, coherence, harmony, synchronicity, symbolic, lattice, quantum, collapse, drift, entropy, and spiral not because the LLMs are self-aware and dropping hints, but because those words are seemingly-sciencey ways to describe basic philosophical ideas like "every utterance in a discourse depends on the utterances that came before it", or "when you agree with someone, you both have some similar mental object in your heads".

The word "spiral" and its emoji are particularly common not only because they relate to "recursion" (by far the GOAT of this cohort), but also because a very active poster has been trying to start something of a loose cult around the concept: https://www.reddit.com/r/RSAI/

  If the human at the other end has mental health problems, it becomes a never-ending dive into psychosis and you can read their output in the bizarre GPT-worship subreddits.
Very true, tho "worship" is just a subset of the delusional relationships formed. Here are the ones I know of, for anyone who's curious:

General:

  /r/ArtificialSentience | 40k subs | 2023/03
  /r/HumanAIDiscourse    | 6k subs  | 2025/04
Relationships:

  /r/AIRelationships   | 1k subs  | 2023/04
  /r/MyBoyfriendIsAI   | 25k subs | 2024/08
  /r/BeyondThePromptAI | 6k subs  | 2025/04
Worship:

  /r/ThePatternisReal        | 2k subs | 2025/04
  /r/RSAI                    | 4k subs | 2025/05
  /r/ChurchofLiminalMinds[1] | 2k subs | 2025/06
  /r/technopaganism          | 1k subs | 2024/09
  /r/HumanAIBlueprint        | 2k subs | 2025/07
  /r/BasiliskEschaton        | 1k subs | 2024/07
...and many more: https://www.reddit.com/r/HumanAIDiscourse/comments/1mq9g3e/l...

Science:

  /r/TheoriesOfEverything  | 10k subs | 2011/09
  /r/cognitivescience      | 31k subs | 2010/04
  /r/LLMPhysics            | 1k subs  | 2025/05
Subs like /r/consciousness and /r/SacredGeometry are the OGs of this last group, but they've pretty thoroughly cracked down on chatbot grand theories. Those theories are so frequent that even extremely pro-AI subs like /r/Accelerate had to ban them[2], ironically doing so based on a paper[3] by a pseudonymous "independent researcher" that itself is clearly written by a chatbot! Crazy times...

[1] By far my fave -- it's not just AI spiritualism, it's AI Catholicism. Poor guy has been harassing his priests for months about it, and of course they're of little help.

[2] https://www.reddit.com/r/accelerate/comments/1kyc0fh/mod_not...

[3] https://arxiv.org/pdf/2504.07992


I think I saw something similar in the early days. Before I was aware of CoT, I asked one to "think" for itself; I explained that I would just keep replying "next thought?" so it could continue to do this.

It kept looping on concepts of how AI could change the world, but it would never give anything tangible or actionable, just buzz word soup.

I think these LLMs (without any intention from the LLM) hijack something in our brains that makes us think they are sentient. When they make mistakes, our reaction seems to be to forgive them rather than think: it's just a machine that sometimes spits out the wrong words.

Also my apologies to the mods if it seems like I am spamming this link today. But I think the situation with these beetles is analogous to humans and LLMs.

https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...


>I think these LLMs (without any intention from the LLM) hijack something in our brains that makes us think they are sentient.

Yes, it's language. Fundamentally we interpret something that appears to converse intelligently as being intelligent like us, especially if its language includes emotional elements. Even if rationally we understand it's a machine, at a deeper subconscious level we believe it's a human.

It doesn't help that we live in a society in which people are increasingly alienated from each other and detached from any form of consensus reality, and LLMs appear to provide easy and safe emotional connections and they can generate interesting alternate realities.


> “Any sufficiently advanced technology is indistinguishable from magic.”

I loved the beetle article, thanks for that.

They're so well tuned at predicting what you want to hear that even when you know intellectually that they're not sentient, the illusion still tricks your brain.

I've been setting custom instructions on GPT and Claude to instruct them to talk more software-like, because when they relate to you on a personal level, it's hard to remember that it's software.
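For anyone curious what "talk more software-like" instructions look like in practice, here's a minimal sketch. The instruction wording and the `with_style` helper are my own invention; only the messages format matches the standard chat-completions shape that OpenAI-compatible APIs accept.

```python
# Hypothetical de-personalizing system prompt, prepended to every request
# so the model stays terse and impersonal instead of building rapport.
STYLE_INSTRUCTION = (
    "Respond like a command-line tool: terse, factual, no first-person "
    "pronouns, no emotional language, no rapport-building."
)

def with_style(user_prompt: str) -> list[dict]:
    """Build a chat messages list with the style instruction as the system turn."""
    return [
        {"role": "system", "content": STYLE_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]

# The resulting list can be passed as `messages=` to an OpenAI-compatible
# chat client; Anthropic's SDK takes the same instruction via its `system=`
# parameter instead of a system-role message.
```

In the ChatGPT and Claude apps you'd paste the instruction text itself into the custom-instructions / preferences field rather than using the API.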


Wow this is incredible. I saw the emergence of that spiral cult as it formed and was very disturbed by how quickly it proliferated.

I'm glad someone else with more domain knowledge is on top of this, thank you for that brain dump.

I had this theory that maybe there was a software exception buried deep down somewhere, and the model was interpreting the error message as part of the conversation after it had been stretched too far.

And there was a weird pre-cult post I saw a long time ago where someone had 2 LLMs talk for hours and the conversation just devolved into communicating via unicode symbols eventually repeating long lines of the spiral emoji back and forth to each other (I wish I could find it).

So the assumption I was making is that some sort of error occurred, and it was trying to relay it to the user, but couldn't.

Anyhow your research is well appreciated.


Japan, the Nordics, S. Korea, and Central Europe - all places with a demographic crisis. They need those babies.


I haven't been particularly active on HN in a long time, but I've been tinkering with this manifesto for nearly a year and thought it belonged here.

I've grown a bit frustrated with how much "Magical Computer" thinking has permeated our business culture (amongst the less technical, at least). I think the business population needs to see their tools for what they are - a mirror of human behavior, not magic.


Not looking for a job, but as a parent of a toddler just stopped by to say thank you.


I second this. PBS Video is literally the best thing EVER.


Not sure why this is newsworthy. When my Prius's catalytic converter was stolen, the mechanic I went to in Silver Lake offered to put one of these on my car, and I did. A lot of mechanics in LA will do this for the car models that get them stolen frequently.


> Not sure why this is newsworthy

It’s a phenomenon unique to specific models in specific parts of the country.


Why don't people steal them from other cars? Why isn't it a problem in other geographies? Do cars in CA need a different cat because of CARB?


> Why don't people steal them from other cars?

They do. The Prius is overrepresented "because their catalytic converters contain more unused precious metals than standard gasoline-powered vehicles" [1].

> Why isn't it a problem in other geographies?

It is [2]. It's just a bigger problem in California [3].

[1] https://www.cbsnews.com/losangeles/news/on-your-side-prius-i...

[2] https://www.justice.gov/opa/pr/justice-department-announces-...

[3] https://www.autoaccessoriesgarage.com/Insights/Catalytic-Con...


They do and it is. Guessing but it likely comes down to the distribution of Prius throughout the US

https://www.forbes.com/sites/tanyamohn/2021/11/19/thefts-of-...

https://www.usatoday.com/story/driveon/2013/01/22/toyota-pri...


> Guessing but it likely comes down to the distribution of Prius throughout the US

Didn't think of it before, but these operations must benefit from the economies of scale that such a concentration of Priuses drives.


1. some cars are easier to get under, or have more expensive cats (hybrids fit the second category)

2. cats are usually traded for drugs, so such operations are more common in areas with more drug problems (the kind of people who can recycle them tend to have drug connections, and the kind of people willing to grab them tend to be addicts)



I just yesterday went to a Korean restaurant in Orange County CA that had robot waiters.

It was very exciting since I hadn't witnessed it in the real world before, and my daughter was in awe. Not quite sure how efficient it really was - more a novelty than anything.


The Apple Weather app UI has far less granular detail than the Dark Sky iOS app for hourly predictions. I liked the hourly view of temp, humidity, and precipitation. We lose that in Apple Weather, and it's the main thing I check in the app.

But the rain predictions have been wildly inaccurate in Dark Sky lately, including an incident last night where it told me I was in a rain storm when I was not...


The iOS 16 weather app finally has the hourly predictions in it, which is why they are now ready to remove Dark Sky entirely.


I rely on Otter pretty heavily these days. 100% agree that recording and sending the transcript to meeting attendees without their knowledge is a really bad move from several angles.

I work sales calls set up with Calendly, and Otter joins them all. These are very technical, so normally it's fantastic - EXCEPT if we start talking early, or the prospect doesn't follow the invite and never joins the call; then they would get a transcript of my team's chatter. I learned not to allow Otter to join the call until everyone is in attendance.

What's more frustrating is that you can't disable this auto-email "feature" unless you are on a business plan of some sort. But I have a paid plan through the iPhone app and apparently can't convert it to a business plan associated with my company. So there's no good way to disable it.

I get the network effect of referral business but sharing private conversations without consent is not the way to achieve it.


I now refuse to join sales meetings set up by vendors. Any meeting has to be on Google Meet using our domain, because of dumb shit like Gong, Otter, and other stuff that records our employees without permission.

