Hacker Newsnew | past | comments | ask | show | jobs | submit | nosuchthing's commentslogin

This paper is an incredible read: TESCREAL hallucinations: Psychedelic and AI hype as inequality engines

https://akjournals.com/view/journals/2054/7/S1/article-p22.x...

  > "Researchers have called attention to the ways that the hype promoting psychedelics as miracle cures 
  replicates preceding claims about the efficacy of SSRIs and other antidepressants in prior decades. 
  As the drug historian David Herzberg articulated in conversation with UC Berkeley's The Microdose:

    There’s been an enormous amount of money invested in psychedelics as people hope that they 
    can be the real Prozac in the same way that Prozac hoped it would be the real Valium and 
    Valium would be the real barbiturates, which would be the real morphine. 
    There’s a long history of hoping that maybe this time, it’s not so complicated; 
    maybe there is a simple switch to change people without having to change any [other] aspect of their [lives].

  While others have noted similarities between the earlier SSRI hype and the ongoing hype for psychedelic medications,
   the rhetoric of psychedelic hype is tinged with utopian and magico-religious aspirations that have no parallel 
   in the discourse surrounding SSRIs or other antidepressants. I argue that this utopian discourse provides insight 
   into the ways that global financial and tech elites are instrumentalizing psychedelics as one tool 
   in a broader world-building project that justifies increasing material inequality. 
   This elite project reveals how medicalized psychedelics can potentially undermine the very prosocial and 
   pro-environmental outcomes that the field's funders insist psychedelics will promote. 
   To understand the envisioned role of psychedelics within this elite project, this paper analyzes a different 
   parallel hype, revealing correspondences between the psychedelic industry hype and the concurrent 
   hype surrounding artificial intelligence (AI), including the Large Language Models (LLMs) that power ChatGPT. 
   The presence of these parallels is understandable when one considers their underlying affinities, 
   like two blooms from one plant: the same Silicon Valley and venture capital forces are investing 
   enormous amounts of capital to develop both as cultivars in their own image, 
   selecting for desired traits that further the existing socioeconomic order.


”maybe there is a simple switch to change people without having to change any [other] aspect of their [lives]”

The difference with psychedelics is that they enable and manifest those behavioral changes.


LLMs can't access the training data that's less than the statistically most common token, so they use a random jitter.

With that randomness comes statistically irrelevant results.


It's a type of cognitive bias not much different than an addict or indoctrinated cult follower. A subset of them might actually genuinely fear Roko's basilisk the exact same way colonial religion leveraged the fear of eternal damnation in hell as a reason to be subservient to the church leaders.

hyperstitions from TESCREAL https://www.dair-institute.org/tescreal/


It looks like most of Peter's projects are just simple API wrappers.

Peter's been running agents overnight 24/7 for almost a year using free tokens from his influencer payments to promote AI startups and multiple subscription accounts.

  Hi, my name is Peter and I’m a Claudoholic. I’m addicted to agentic engineering. And sometimes I just vibe-code. ...  I currently have 4 OpenAI subs and 1 Anthropic sub, so my overall costs are around 1k/month for basically unlimited tokens. If I’d use API calls, that’d cost my around 10x more. Don’t nail me on this math, I used some token counting tools like ccusage and it’s all somewhat imprecise, but even if it’s just 5x it’s a damn good deal.

  ...  Sometimes [GPT-5-Codex] refactors for half an hour and then panics and reverts everything, and you need to re-run and soothen it like a child to tell it that it has enough time. Sometimes it forgets that it can do bash commands and it requires some encouragement. Sometimes it replies in russian or korean. Sometimes the monster slips and sends raw thinking to bash.


Peter has been running agents overnight for months using free tokens from his influencer payments to promote AI startups and multiple subscription accounts:

  Hi, my name is Peter and I’m a Claudoholic. I’m addicted to agentic engineering. And sometimes I just vibe-code. ...  I currently have 4 OpenAI subs and 1 Anthropic sub, so my overall costs are around 1k/month for basically unlimited tokens. If I’d use API calls, that’d cost my around 10x more. Don’t nail me on this math, I used some token counting tools like ccusage and it’s all somewhat imprecise, but even if it’s just 5x it’s a damn good deal.

  ...  Sometimes [GPT-5-Codex] refactors for half an hour and then panics and reverts everything, and you need to re-run and soothen it like a child to tell it that it has enough time. Sometimes it forgets that it can do bash commands and it requires some encouragement. Sometimes it replies in russian or korean. Sometimes the monster slips and sends raw thinking to bash.

https://github.com/steipete/steipete.me/commit/725a3cb372bc2...


The long list of domain names that vercel deployed to is interesting


you're telling me the guy isn't committing 1000 times a day manually?!


Where is your agent committing that many times?


  OpenAI has deleted the word 'safely' from its mission (November 2025)
https://theconversation.com/openai-has-deleted-the-word-safe...

Thread: https://news.ycombinator.com/item?id=47008560

Other words removed:

   responsibly
   unconstrained
   safe
   positive


The headline implies they selectively removed the word "safely," but that doesn't seem to be the case.

From the thread you linked, there's a diff of mission statements over the years[0], which reveals that "safely" (which was only added 2 years prior) was removed only because they completely rewrote the statement into a single, terse sentence.

There could be stronger evidence to prove if OpenAI is deemphasizing safety, but this isn't one.

[0]: https://gist.github.com/simonw/e36f0e5ef4a86881d145083f759bc...


They also removed the words build, develop, deploy, and technology, indicating that they're no longer a tech company and don't make products anymore. Wonder what they're all gonna do now?

/s


Can you elaborate on this more or point a link for some context?


Some crypto bros wanted to squat on the various names of the project (Clawdbot, Moltbot, etc). The author repeatedly disavowed them and I fully believe them, but in retrospect I wonder if those scammers trying to pump their scam coins unwittingly helped the author by raising the hype around the original project.


either way there's a lot of money pumping the agentic hype train with not much to show for it other than Peter's blog edit history showing he's a paid influencer and even the little obscure AI startups are trying to pay ( https://github.com/steipete/steipete.me/commit/725a3cb372bc2... ) for these sorts of promotional pump and dump style marketing efforts on social media.

In Peter's blog he mentions paying upwards of $1000's a month in subscription fees to run agentic tasks non-stop for months and it seems like no real software is coming out of it aside from pretty basic web gui interfaces for API plugins. is that what people are genuinely excited about?


What is your point exactly. He seemed very concerned about the issue, he said he did not tolerate the coin talks.

What else would he or anyone do if someone is tokenizing your product and you have no control over it?


I just made the observation that whoever was behind it, it ultimately benefited the author in reaching this outcome.


That's not actually research though. The LLM API is only requesting a few of the top search results from a search engine and then adding those web pages to its context window.

That might work for simple tasks, but it's easily susceptible to prompt injection attacks and there's no way to validate the quality if it's statistically novel enough and outside of the core training data.


"Is curing patients a sustainable busines model?" - Goldman 2018

https://www.investmentwatchblog.com/goldman-sachs-asks-in-bi...

Many of the biggest medical innovations have come from publicly funded university researchers, which then license or give away their findings to private businesses.


LTO9 is only 18TB.

The LTO compression ratio is theoretical and most peoples data will be incompatible with native LTO compression method used.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: