I think that we are going to see more and more of this, to the point where most interactions you have online will likely be with bots. So I started building something that actually has a chance of fixing it: a social network only for humans.
I’m really curious how this will go. I have a suspicion that we will see more and more accounts all over the internet being controlled by AI agents and no amount of moderation will be able to stop it.
I am pretty sure that through daily exposure to LLM output, most people's writing styles will evolve and will soon be indistinguishable from LLM output.
I assume we’ll end up with proof-of-identity attestation as a part of public posting (e.g. Worldcoin), which doesn’t necessarily solve the issue but will at least identify patterns more likely to be LLMs (e.g. a firehose of posts at all hours of the day from one identity). Then we’ll enter the dystopia of mandated real identity on the internet.
Even prior to LLMs, a single comment was rarely enough to identify a bot. Even if nonsensical, there's too little information to separate machine from confused human (plenty of people posting drunk on their phones).
On reddit people sometimes go through the comment history and see that it seems to be a bot, but that's fairly high effort.
The key is to accuse everyone of being an LLM. Those who don't react are bots. Those who fight the charge no matter how often it's levied are also bots, but with better programming. Those who complain at first but give up when too much effort is required are the real humans. Any bot able to feel frustration is cool.
Maybe a reasonable approach would be that people could flag posts with a "probably AI" button to eventually trigger a "bot test" for that account (currently, the "score 5 in this mini game" type seem pretty clanker proof). If they pass, their posts for the hour, week, whatever result in a "not AI" indicator when someone clicks the "probably AI" button.
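A minimal sketch of that flag-and-test flow (the threshold of 5 flags and the one-week pass window are made-up numbers, just for illustration):

```python
import time

FLAG_THRESHOLD = 5           # flags needed before a bot test fires (arbitrary)
PASS_WINDOW = 7 * 24 * 3600  # how long a passed test counts, e.g. one week

class Account:
    def __init__(self):
        self.flags = 0
        self.passed_at = None  # timestamp of last passed bot test

    def flag_probably_ai(self, now=None):
        """Called when someone clicks the 'probably AI' button.
        Returns True if this flag should trigger a bot test."""
        now = now if now is not None else time.time()
        if self.passed_at and now - self.passed_at < PASS_WINDOW:
            return False  # recently verified: show 'not AI' instead of testing
        self.flags += 1
        return self.flags >= FLAG_THRESHOLD

    def record_test_passed(self, now=None):
        self.passed_at = now if now is not None else time.time()
        self.flags = 0

    def badge(self, now=None):
        """What a flagger sees: 'not AI' while a passed test is still fresh."""
        now = now if now is not None else time.time()
        if self.passed_at and now - self.passed_at < PASS_WINDOW:
            return "not AI"
        return None
```

The nice property is that verified accounts can't be harassed with repeated tests: flags during the pass window just surface the "not AI" badge instead.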
In the text, we did share one hallucination benchmark: Claim-level errors fell by 33% and responses with an error fell by 18%, on a set of error-prone ChatGPT prompts we collected (though of course the rate will vary a lot across different types of prompts).
Hallucinations are the #1 problem with language models and we are working hard to keep bringing the rate down.
It's funny how easy it seems to be to tell that articles like this have that AI-generated whiff to them. The first bit that raised my suspicion was the "The Identity Crisis Nobody Talks About" headline. This "The X Nobody Talks About" framing feels like such a GenAI thing.
I think we’re on the precipice of this being a requirement to have any faith you’re talking to another human. As a side effect it also helps avoid state actors from influencing others.
It adds enough of a barrier to be worth it. In the way I have implemented it, you can only have one account per ID (for example passport). Yes, you can buy fake passports, but it's prohibitively expensive. Read my blog post for more info.
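One way to enforce one-account-per-ID without storing the passport number itself is to keep only a salted hash of it and put a uniqueness constraint on that. This is just a sketch of the idea, not necessarily how the linked project does it; `SERVER_SALT` and the in-memory set are stand-ins:

```python
import hashlib

SERVER_SALT = b"replace-with-a-secret-salt"  # assumption: one server-side secret salt

def id_fingerprint(passport_number: str) -> str:
    """Derive a stable fingerprint of an ID document without storing the
    number itself. The same passport always maps to the same fingerprint,
    so a uniqueness check on it enforces one account per ID."""
    return hashlib.sha256(SERVER_SALT + passport_number.encode()).hexdigest()

seen_fingerprints = set()  # stand-in for a DB column with a UNIQUE index

def register(passport_number: str) -> bool:
    fp = id_fingerprint(passport_number)
    if fp in seen_fingerprints:
        return False  # this ID already backs an account
    seen_fingerprints.add(fp)
    return True
```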
This is not a technical issue - it's a societal one. Do we want online ID verification? Are the trade-offs worth it? Do we want to make the internet a place that requires an ID everywhere for age verification or to prove that you're human? What would the implications be?
Regarding your implementation: Most people don't have a passport, so it's a non-starter - but again, this topic is not a technical issue.
I think that it is a technical issue to a certain extent. Governments could make it very easy to prove humanity (and age) in a secure manner that doesn't leak your personal details to the third party that wants to perform the verification.
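The claim-minimization part of that can be shown in a toy sketch: the issuer sees the full citizen record, but the token it hands out carries only "is human / is over 18", so the verifying site never learns who the person is. This uses HMAC purely for brevity; a real deployment would use public-key or anonymous-credential schemes so the verifier holds no secret and tokens are unlinkable:

```python
import hmac, hashlib, json

GOV_KEY = b"demo-key"  # assumption: stands in for a government signing key

def gov_issue_attestation(citizen_record: dict) -> dict:
    """Issuer side: checks the citizen's full record, but the token it
    issues carries only the minimal claims, not a name or ID number."""
    claims = {"is_human": True, "over_18": citizen_record["age"] >= 18}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(GOV_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def site_verify(token: dict) -> bool:
    """Third-party site: verifies the signature and reads the claims;
    it never sees the underlying identity."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(GOV_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claims"]["is_human"]
```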
I don't see that as "requiring ID".
I think the real question is how much do we care that our online spaces are composed of not just AI bots, but also sock puppet accounts controlled by various people (from governments, rich people, all the way to harassers that use alt accounts) wanting to trick us.
You're still arguing from a technical perspective while not addressing the societal issues that online ID verification leads to. Do we as society really want an internet that resembles a gated community where you can only enter with an ID? What about the people we exclude? Should we abandon the free internet just because of bots and sock puppet accounts? What about other ways to address the issue?
I mean, reddit accounts are valued based on the identity they have built. It's not far-fetched to imagine uninterested users making and selling a single account each.
I wrote about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...