- The First Amendment is content-neutral and provides a nonpolitical framework for regulating speech. It would behoove social media companies to abide by it.
- Rather than resort to censorship, social media companies can offer users tools that block unwanted content, including content that could be hurtful or offensive.
- Facebook, Twitter, and other social media platforms have become the new public square. Rather than resorting to corporate speech policies, these companies should promote free speech principles in the U.S. and abroad.
- Social media companies are global. The U.S. Constitution is based on American values and, therefore, should not be used to regulate international platforms.
- The spread of hateful digital content dilutes meaningful discourse and, in some cases, causes emotional and physical harm. Social media companies have a duty to offer safe, welcoming platforms for users.
- From election interference to “fake news,” nefarious actors are using social media to undermine democracies and deepen partisan divides. Social media companies must act to prevent this type of conduct.
The Debaters
David French - Senior Writer, National Review
Corynne McSherry - Legal Director, Electronic Frontier Foundation
Nathaniel Persily - Professor, Stanford Law
Marietje Schaake - International Policy Director, Stanford's Cyber Policy Center & Fmr. Member, European Parliament
Is the problem not precisely that there are very apparent and relevant conflicts between these points? Reconciling free speech with "offer[ing] safe, welcoming platforms for users" is exactly what's at play here when it comes to TOS and deleting content.
> "Rather than resort to censorship, social media companies can offer users tools that block unwanted content, including content that could be hurtful or offensive."
I think this is an interesting idea, but it would effectively mean de facto speech censorship, like HN has with flagged/dead comments. Sure, they exist, but most users don't have them turned on, and the larger the forum, the more annoying having them on becomes. Given the amount of spam and hate comments on Twitter, no sane user would ever turn that feature on. So you end up right back where we are now, only on a technicality the content isn't censored. I suppose that's a positive, but then it gets right back into editorialization arguments over what content is flagged or "blocked".
> Social media companies must act to prevent this type of conduct.
Even if most can agree here, this doesn't appear to lay out concrete policies, nor does anyone appear to have proposed any that align with these main points, as far as I know. I didn't watch the debate itself, so if you have any details on those specifics or any concrete recommendations they make, a summary would be very much appreciated.
The main points above are from the debate page: https://www.intelligencesquaredus.org/debates/constitutional....