On the other hand, there must be a limit on the amount of "moderation" you're allowed to perform before becoming a publisher. If you start labelling content as misleading (but don't remove it), or even if you privilege certain content over other content (for example, you privilege social media posts advertising a product because it increases your revenue), then you could be considered a publisher.
Let's put it another way: publishers used to produce content (as in select, solicit, remunerate) to further their interests (financial, but sometimes also political or cultural). Now the big social networks don't need to produce the content; they can tap into such an enormous amount of it that they can limit themselves to shaping it. Promote certain posts, reduce the visibility of others. The overall effect is the same. You could probably turn Facebook or Twitter into a great cultural publication just by tweaking the algorithm that determines the visibility of posts.
> On the other hand, there must be a limit on the amount of "moderation" you're allowed to perform before becoming a publisher.
That isn't clear to me at all. The publisher is essentially the author from a liability perspective.
I do agree (see my other post in this thread) that there are serious concerns about the moderation power of the operators of these massive online public forums. But that doesn't make treating them as the author from a liability perspective the right tool for dealing with that.
Particularly because in most specific cases where their moderation power might be misused, there is no question of liability. Imagine a site where everyone was always honest and nice, but the operator ruthlessly censored all posts by Georgists, and S230 didn't exist. Okay, they'd be liable for their users' posts-- but so what? There is nothing there for anyone to sue over. The Georgists would still be totally silenced.
> That isn't clear to me at all. The publisher is essentially the author from a liability perspective.
In the case of a newspaper that selectively chooses to print some letters from readers, would the author be the publisher, or the newspaper? That's an example of extreme moderation ("out of the 1000 letters we received this week, we will print this one"), while Twitter is likely closer to the other extreme ("out of these 1000 Tweets, we'll hide/delete this one").
IMO it's when you start curating and suggesting in complex ways rather than simple "most popular posts with (user-provided) tag X" ways. Throw in other stuff like monetization, but denying it to some creators, and IMO Youtube is way over the line and the rest aren't far behind, because they can't resist using their position to manipulate what the user sees; there's too much money in it.