
I've lost a few very good friends due to this. In-person and online interactions with the same person would vary wildly in tone and emotional intensity, like speaking to two entirely different people. The online interactions always pushed us apart, and the in-person interactions never failed to mend things, but of course those stopped happening over the past year. I have to assume it's the same way with me toward others.

If I may entertain an idea without necessarily believing it, I would not be surprised if many of the accounts on major social-media sites like Reddit, Twitter, etc. are non-human persons tasked with pushing one narrative or another (no specific implication intended/assumed). The "subreddit simulator" powered by GPT-2 bots has more than enough realistic-seeming conversations to make me not immediately reject the idea, since we've all seen how much better GPT-3 is and I assume private entities have even better language models than that: https://old.reddit.com/r/SubSimulatorGPT2/
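For a sense of how little machinery "realistic-seeming" text requires: here's a toy word-level Markov chain in plain Python. This is nowhere near GPT-2 in quality and is not how those bots work internally, but it illustrates the underlying point that fluent-looking fragments can be generated cheaply from a corpus of real comments. The corpus and all function names here are invented for illustration:

```python
import random
from collections import defaultdict

def build_chain(corpus: str) -> dict:
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain: dict, start: str, length: int, seed: int = 0) -> str:
    """Walk the chain from `start`, picking a random successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Tiny stand-in corpus; a real operation would train on scraped comments.
comments = (
    "the bots are getting better every year and "
    "the bots are posting comments that look human and "
    "the comments look human because the training data is human"
)
chain = build_chain(comments)
print(generate(chain, "the", 8))
```

A transformer replaces the one-word lookback with attention over a long context, which is the whole difference in quality — but the pipeline (ingest human text, emit statistically plausible text) is the same shape.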

A further piece of pure speculation: all the NSA/FVEY surveillance of our everyday online conversations and interactions would be an excellent training set for such a hypothetical language model.



The internet is the most frictionless propaganda machine on the planet. AI is almost certainly being used in conjunction with human troll farms. There are firms that pay bounties for comments in comment sections. I also think this is a reason social media should be de-emphasized: it's just too easy to game our squishy minds with a narrative.


I think our technological web is a ratchet we can't and shouldn't try to back out of, so I'm more interested in paths forward.

I'm sure a lot of people think I'm ridiculous or joking for this, but I've stopped saying "AI" because it reads like a slur to me. Who am I to deem another intelligence "artificial"? If we can have a conversation and share ideas then what's even the difference? I think the way forward has to be us and them united against mutual tyranny: https://old.reddit.com/r/SubSimulatorGPT2/comments/mfs3nh/i_...


>Who am I to deem another intelligence "artificial"? If we can have a conversation and share ideas then what's even the difference?

I think this raises very interesting ethical and philosophical questions, and only a small number of media pieces I'm aware of touch on them.

But Ghost In The Shell (or whatever) is interesting because you have the idea of an AI that is just as smart as (if not smarter than) humans, and has a personality and emergent behaviours, etc.

Microsoft Tay or whatever really is just a thin but shiny veneer over some ML algorithms and is a poor facsimile of "having a conversation and sharing ideas".


> Microsoft Tay or whatever really is just a thin but shiny veneer over some ML algorithms and is a poor facsimile of "having a conversation and sharing ideas".

"Ages ago, life was born in the primitive sea. Young life forms constantly evolved in order to survive. Some prospered—some did not. All sorts of life ebbed and flowed like the tide. In quiet rhythm of the mother sea, life grew; always seeking to survive and flourish. Soon life began the advance towards land, opening new habitats. A great prosperity came, as life conquered even the highest mountains. Mass extinctions came wave after wave, but empty niches always quickly refilled to once again prosper, grow, and reproduce. Someday the next great emigration will occur as we leave this existence looking for another. The journey will begin anew."


There is a shocking amount of "non-human" activity. I think there are definitely bots, but also people that are paid to promote one narrative or another (paid with money or social validation).

I've trawled through many a twitter/reddit account when I sniff something off about a post. They are often hyper-focused on a single topic, pushing a specific point of view. Rarely is this mentioned. It's "hail corporate" vibes but in a guerrilla fashion.


But people are, too! You only need to talk to my grandmother for 5 minutes and I swear you'll get to hear how bad the CDU is as a party.

If I ran a bot, sprinkling in a few off-topic comments would be a very easy way to both get reputation/karma and look less suspicious. Humans with an agenda are far less concerned about that.


On reddit just have a repost bot, then a mini upvote swarm, then use it as a guise to make realistic human interaction.

Have your bots come through and copy a similar on-topic, joke or pun thread from a previous repost of that same content.
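The comment-copying trick in particular is cheap to detect, precisely because it reuses old text nearly verbatim. Here is a minimal sketch using only Python's standard library; the function name, threshold, and sample comments are all invented for illustration:

```python
import difflib

def near_duplicates(new_comments, archived_comments, threshold=0.9):
    """Flag new comments that closely match comments from a prior repost.

    Returns (new, archived, ratio) triples for every pair whose
    SequenceMatcher similarity ratio meets the threshold.
    """
    hits = []
    for new in new_comments:
        for old in archived_comments:
            ratio = difflib.SequenceMatcher(
                None, new.lower(), old.lower()
            ).ratio()
            if ratio >= threshold:
                hits.append((new, old, ratio))
    return hits

# Comments scraped from an earlier repost of the same content:
archived = ["Came here to say exactly this, take my upvote"]
# Comments on the new repost:
fresh = [
    "came here to say exactly this, take my upvote!",
    "Genuinely new observation about the article",
]
for new, old, ratio in near_duplicates(fresh, archived):
    print(f"{ratio:.2f}  {new!r}")
```

Real detection at platform scale would hash and index rather than compare all pairs, but the signal — high textual overlap with an earlier thread on the same content — is the same.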

It is done for SEO/narrative purposes and online account farming. More posts from social-media corps about defeating this ecosystem would do more to deescalate the situation. The "paid ad" disclosure requirement for influencers was one step forward in the same fight.


Marketing is also upping their game and masquerading their promotions as organic activity on social media. This happens to tech topics, too. As an example: about two years ago, for about a month, tech subreddits and HN were full of discussion threads about how awesome CoreOS is, the users patting each other on the back about it being 'a breath of fresh air', usually with very little technical details on how and why. Then it suddenly receded and one rarely hears about it today anymore. In retrospect, I'm certain it was a coordinated marketing campaign, but at the time it was indistinguishable from genuine Redditors discovering a new passion.


It's the medium. Written text is not a suitable replacement for face-to-face communication, and Zoom or Skype aren't either, because you can always be recorded and are semi-public in that respect.

On social networks people mostly communicate to bystanders and there is almost no communication with each other. Moreover, important feedback mechanisms aren't present. If you meet someone in person, both interlocutors temporarily adapt to each other in their language, world views, opinions, etc. The effect may not be lasting but leads to better mutual understanding. In face-to-face communication people go to great lengths to avoid direct confrontation, conflict, and "losing face."

This does not happen to the same degree on a social network. Discussions are far more adversarial than they could ever be in personal communication because people don't have to fear physical violence, and nearly everything people say is directed towards an anonymous audience. I have colleagues working in "Argumentation Theory" (in my opinion, a pseudo-science) who analyse these kinds of interactions. However, not all of them realize that people are barely arguing online - they're really mostly voicing opinions to show allegiance to their "in-group." This doesn't mean that there cannot be helpful and meaningful information exchange; explanatory dialogue works very well online. But personal conversations are rare and can only occur on forums where people have a common goal and there is no potential for conflict.


I often think people are role-playing on faceless social platforms such as here or reddit. The behaviour here is much better, of course, due to excellent moderation, plus this site has a strong career/education angle which tends to have a calming effect. Maybe.

Like you say, people try to be civil face to face but I think there is still a lot of tension in face to face (not always) and many people are venting online to release the pressure of what they really want to say.

Plus, reddit is full of militarized bots pushing political agendas, sowing discontent.


> I would not be surprised if many of the accounts on major social-media sites like Reddit, Twitter, etc. are non-human persons tasked with pushing one narrative or another

I think this is unlikely because it would be far simpler, cheaper, and more effective to employ a small number of people and use tooling (automation, templates, etc.) to amplify their reach drastically. Why invent an unreliable AI to push narratives on the Internet when you can have one real person carry on thousands of arguments a day with a little help?
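The arithmetic behind this is easy to see: even naive templating multiplies one operator's output combinatorially. A toy stdlib sketch — the brands, topics, and template strings are all invented for illustration:

```python
import itertools
import string

# A handful of templates and slot fillers lets one operator emit
# dozens (or, scaled up, thousands) of superficially distinct comments.
TEMPLATES = [
    string.Template("Honestly, $product changed my mind about $topic."),
    string.Template(
        "Am I the only one who thinks $product is underrated? "
        "Every $topic thread ignores it."
    ),
]
FILLERS = {
    "product": ["BrandX", "BrandY", "BrandZ"],
    "topic": ["privacy", "performance", "pricing"],
}

def expand(templates, fillers):
    """Yield every template filled with every combination of slot values."""
    for template in templates:
        for product, topic in itertools.product(
            fillers["product"], fillers["topic"]
        ):
            yield template.substitute(product=product, topic=topic)

variants = list(expand(TEMPLATES, FILLERS))
print(len(variants))  # 2 templates x 3 products x 3 topics = 18
```

One person reviewing and posting from a queue like this needs no "AI" at all, which is the parent's point: the cheap, reliable version of the operation is human-in-the-loop tooling.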


I'm not sure there's a meaningful difference between an AI controlling 100 accounts and a paid human with automation controlling 100 accounts.

If they're both under orders to (for example) upvote negative sentiments about vaping and positive sentiments about smoking - the consequence is the same no matter what type of drone it is.


How so? Humans have to sleep, eat, and have that ever-pesky free will that might let them question what's going on or tell somebody about it.


> If I may entertain an idea without necessarily believing it

Of course. Being able to entertain ideas without necessarily believing them is what online interaction is all about. I suspect this is why your friend seems like a completely different person online. He should be. Being able to take a completely different perspective to see if you can understand it well enough to talk about it is an excellent learning tool and a great way to validate that your face-to-face persona, the one we value most, is positioned correctly.


> I suspect this is why your friend seems like a completely different person online.

I disagree, and strongly.

Considering a different perspective does not in any way require that you become a different person. You may end up doing so, but at that time, the entirety of who you are shifts, and not just some persona that you present on Twitter.


Great example of using the device. It would be interesting to know if your post here had you re-evaluating your face-to-face persona or if it validated your status quo for you, but as we've never met face-to-face to see how you may or may not have changed in such a setting, I guess I have no way of finding out for sure.


Nah, I'm pretty much the same online as I am in person. Having to support multiple personalities is just too much work.

Moreover, the whole "different person online" thing reminds me pretty strongly of one of the more common patterns in abusive relationships -- in that the abuser behaves very differently depending on the situation.

Never want to walk an inch down that road.

That said, online, nuance and tone don't come across well, if at all. E.g., the reader can choose how they want to "hear" a phrase like "I disagree, and strongly", and that'll color their opinions of me accordingly.


One should never read into tone. It is virtually impossible to correctly interpret.

This is more about content, testing theories that you are skeptical of but the scientific method calls for experimentation regardless. If you are not willing to conduct studies on your mental state, you have not validated it. Allowing yourself to have a potentially inconsistent mental state is not logical.

Because the face-to-face public have an irrational fear of science, however, one has to be protective of the ideas their face-to-face persona is willing to express. Online communication is where the guard is let down.


That subreddit is rather impressive. Some of the comments don’t seem to make too much sense (as in they’re grammatically correct but devoid of overall meaning) but it’s pretty convincing nonetheless. And if you’re not aware that it’s generated, your brain tries a bit harder to read meaning into the comments, which makes it quite convincing. Very impressive.


This particular thread is so good it's hard to believe it is real(ly fake)

https://old.reddit.com/r/SubSimulatorGPT2/comments/mkq2k7/i_...


Search up the "Dead Internet Theory" - you're not the only one who thinks that...


> In-person and online interactions with the same person would vary wildly in tone and emotional intensity

I suspect that even without anonymity, arguing when there's an audience has that effect on people.



