Hacker News | dom96's comments

I think that we are going to see more and more of this, to the point where most interactions you have online will likely be with bots. So I started building something that actually has a chance of fixing it: a social network for humans only.

I wrote about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...


I’m really curious how this will go. I have a suspicion that we will see more and more accounts all over the internet being controlled by AI agents and no amount of moderation will be able to stop it.

I am pretty sure that through daily exposure to LLM output, most people's writing styles will evolve to the point of soon being indistinguishable from LLM output.

I assume we’ll end up with proof-of-identity attestation as a part of public posting (e.g. Worldcoin) which doesn’t necessarily solve the issue but will at least identify patterns more likely to be LLMs (e.g. a firehose of posts at all hours of the day from one identity). Then we’ll enter the dystopia of mandated real identity on the internet

I agree. I think that ultimately it will be governments providing services to attest humanity.

They already do to a certain extent via passports. I built a little human verifier using those at https://onlyhumanhub.com


Because they've long ago passed the Turing test. Moderation won't be able to stop it because humans increasingly can't detect it.

I see people who write well being called "LLM" here all the time, em-dash or not.


Even prior to LLMs, a single comment was rarely enough to identify a bot. Even if nonsensical, there's too little information to separate machine from confused human (plenty of people posting drunk on their phones).

On reddit people sometimes go through the comment history and see that it seems to be a bot, but that's fairly high effort.


The key is to accuse everyone of being an LLM. Those who don't react are bots. Those that fight the charge no matter how often it's levied are also bots, but with better programming. Those that complain at first but give up when too much effort is required are the real humans. Any bot able to feel frustration is cool.

Maybe a reasonable approach would be letting people flag posts with a "probably AI" button, which would eventually trigger a "bot test" for that account (currently, the "score 5 in this mini game" type seem pretty clanker proof). If they pass, their posts for the hour, week, whatever show a "not AI" indicator when someone clicks the "probably AI" button.

My prediction is that nothing short of human verification is going to solve this.

Why would an artificial intelligence want to do what you tell it to?

I don’t think AI in its current form and current track wants much of anything.

Why would it have any desire at all?

This is really cool, I wonder if it would be doable in other countries? In particular UK?

See my comment elsewhere in this thread...

As many are saying, yes, this can easily be AI generated.

I am actually trying to build ways to prove you are human properly. I wrote about it on my blog: https://blog.picheta.me/post/the-future-of-social-media-is-h...


Why do none of the benchmarks test for hallucinations?

In the text, we did share one hallucination benchmark: Claim-level errors fell by 33% and responses with an error fell by 18%, on a set of error-prone ChatGPT prompts we collected (though of course the rate will vary a lot across different types of prompts).

Hallucinations are the #1 problem with language models and we are working hard to keep bringing the rate down.

(I work at OpenAI.)


It's funny how seemingly easy it is to tell that articles like this have that AI-generated whiff to them. The first bit that raised my suspicion was the "The Identity Crisis Nobody Talks About" headline. This "The X Nobody Talks About" framing feels like such a GenAI thing.

I hate it. I couldn't read much more after that.


YouTube didn't have an ad-free tier for a very long time.

YouTube Red was released over a decade ago. I haven't watched ads on YouTube for about that long.

If YouTube was forced to offer an ad-free tier due to competition, that supports the claim above.

> Requiring proof of identity is the only solution I can think of, despite how unappealing it is

Same. I agree that it is unappealing but it can be done in a way that respects anonymity.

I built this and talk about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...

I think we’re on the precipice of this being a requirement to have any faith you’re talking to another human. As a side effect it also helps prevent state actors from influencing others.


> I think we’re on the precipice of this being a requirement to have any faith you’re talking to another human.

Except that it doesn't prove you're talking to a human - it just increases the hurdles for bot operators (buy or steal verified accounts).


It adds enough of a barrier to be worth it. In the way I have implemented it, you can only have one account per ID (for example passport). Yes, you can buy fake passports, but it's prohibitively expensive. Read my blog post for more info.
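
The "one account per ID" constraint described above could be enforced without retaining the passport data itself. As a toy sketch (not necessarily how the commenter's system works), the server might store only a keyed hash of the document number and check new registrations against it; `SERVER_KEY` and the function names here are hypothetical:

```python
import hashlib
import hmac

# Hypothetical design: keep only an HMAC of the passport's document
# number, keyed with a server-side secret, never the number itself.
SERVER_KEY = b"server-side secret, stored separately from the database"

def id_fingerprint(document_number: str) -> str:
    """Deterministic, non-reversible fingerprint of a document number."""
    return hmac.new(SERVER_KEY, document_number.encode(), hashlib.sha256).hexdigest()

registered = set()  # in practice, a database table with a unique index

def register(document_number: str) -> bool:
    """Allow at most one account per passport; return False on duplicates."""
    fp = id_fingerprint(document_number)
    if fp in registered:
        return False  # this passport already backs an account
    registered.add(fp)
    return True

print(register("P1234567"))  # True: first account for this passport
print(register("P1234567"))  # False: duplicate blocked
```

The keyed hash means a database leak alone doesn't expose document numbers, while the unique-index check still blocks a second account from the same passport.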


This is not a technical issue - it's a societal one. Do we want online ID verification? Are the trade-offs worth it? Do we want to make the internet a place that requires an ID everywhere for age verification or to prove that you're human? What would the implications be?

Regarding your implementation: Most people don't have a passport, so it's a non-starter - but again, this topic is not a technical issue.


I think that it is a technical issue to a certain extent. Governments could make it very easy to prove humanity (and age) in a secure manner that doesn't leak your personal details to the third party that wants to perform the verification.
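
One way a government attestation service could avoid leaking details to the verifying third party is a blind signature: the state signs a value it cannot see, so the resulting credential can't be linked back to the ID-check session. A toy sketch with RSA blind signatures follows; the deliberately tiny key and all names here are illustrative, not any real government scheme:

```python
import hashlib
import secrets

# Toy RSA parameters (illustration only; a real deployment would use
# 2048+ bit keys and a standardized blind-signature scheme).
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(msg: bytes) -> int:
    """Hash a message into Z_n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. The user picks an anonymous account token and blinds its hash.
token = b"my-anonymous-account-token"
m = h(token)
while True:
    r = secrets.randbelow(n - 2) + 2
    try:
        r_inv = pow(r, -1, n)  # ValueError if r is not invertible mod n
        break
    except ValueError:
        continue
blinded = (m * pow(r, e, n)) % n

# 2. The government checks the passport, then signs the *blinded* value;
#    it never learns the token it is signing.
blind_sig = pow(blinded, d, n)

# 3. The user unblinds, obtaining a valid signature on the original hash:
#    (m * r^e)^d * r^-1 = m^d * r * r^-1 = m^d (mod n)
sig = (blind_sig * r_inv) % n

# 4. Any site can verify "a human holds this token" without learning
#    which passport check produced it.
print(pow(sig, e, n) == m)  # True
```

The point of the construction is unlinkability: the signer sees only `blinded`, so even with full logs it cannot connect the published signature to a particular identity check.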

I don't see that as "requiring ID".

I think the real question is how much do we care that our online spaces are composed of not just AI bots, but also sock puppet accounts controlled by various people (from governments, rich people, all the way to harassers that use alt accounts) wanting to trick us.


You're still arguing from a technical perspective while not addressing the societal issues that online ID verification leads to. Do we as society really want an internet that resembles a gated community where you can only enter with an ID? What about the people we exclude? Should we abandon the free internet just because of bots and sock puppet accounts? What about other ways to address the issue?


I mean, reddit accounts are valued based on the identity they have built. It's not far-fetched to imagine uninterested users making and selling a single account each.

