One potential outcome here -- social media platforms start _more actively_ banning users to prevent potential liability issues, thus fracturing the market. Highly doubt it will work out that way, but it would be nice (well, maybe...).
Hopefully it will lead to what happened on reddit: the worst go off to a competitor which then fails to gain mainstream attention (e.g. Voat, Gab). While they’ll be in an echo chamber (which isn’t a good thing at all), at least they won’t ruin other communities.
Doesn’t that just lead to an echo chamber on the site they left, too? When Trump supporters were banned from the political subreddits, they formed /r/The_Donald. And it’s an anti-Trump-criticism, anti-anything-left-leaning echo chamber. But that left /r/politics to become a left-leaning echo chamber itself. Now one can’t have a civil discussion about anything, because /r/politics will ban anything saying Trump did something remotely good, and /r/The_Donald will ban anything remotely critical of him.
This is a false equivalence - r/politics is not the left-leaning equivalent of r/The_Donald. Trump supporters weren't broadly banned from political subreddits; only the ones who broke the rules were, like everyone else. r/politics is a left-leaning echo chamber, yes, but you won't get banned for being pro-Trump (just downvoted). On the other hand, you will get banned from r/The_Donald or r/conservative for being anti-Trump.
The problem there is that these sites have become echo chambers, where they breed extremism. These kinds of ideologies (e.g. Nazism) need to be exposed to the light, like bacteria to sunlight, in order to be shamed and debunked.
This is a nice theory but are there any instances of Nazis being converted to normies because someone told them they were wrong on the internet?
Instead what seems to happen is that normies get pulled into Nazi content because the recommendation algorithms on these sites see heated discussions as engagement and try to feed it.
The only study I'm aware of on that is on Youtube's recommendation algorithm, which found the opposite is true - it tends to steer people away from radicalization.
That would be a spectacular outcome, especially for anyone who likes ActivityPub and blogs, and wants to see them get a fresh shot of juice.
I'm just glad the prevailing call isn't to delete Section 230 altogether, which I think would cause a bit of pandemonium (though maybe that would be a social good as well, and maybe Section 230 was the mistake that made the social media monster).
> That would be a spectacular outcome, especially for anyone who likes ActivityPub and blogs, and wants to see them get a fresh shot of juice.
Wouldn't this harm those activities, except where people host their own content directly, since anyone hosting others' content could be held liable for what gets posted?
That's the rub. The fantasy seems great initially for the prospect of breaking up the far too large (in my opinion) social media behemoths, but it might also ruin things for all the potential little guys.