I don't think it's wrong to discuss protocols, in particular, in the context of a social problem. If we take an info-centric view of the world - bodies as libraries, food as signals, and so on - then digital networks form a boundary of this info-verse that is more manipulable than most, one we can revise and reorder by changing the systems of communication.
However, even though it's "more manipulable", it's still not _easy_. "Social" in its surface forms - the forum, the comments section, the chat room - has persisted through a churn cycle in which the form changes somewhat between iterations, but the characters, goals, and tactics are age-old. "Us" versus "them" manifests in every permutation of every medium, whether it's "women" or "those bastards over the next hill" who are deemed the enemy.
Troll tactics ultimately come down to the old signal-to-noise problem. Experienced forum admins know the pattern: trace the "noise" in a discussion back to the source bad actors, then quietly remove those actors so that innocent people aren't dragged in, and the conversation changes dramatically for the better. No magic algorithm or privacy invasion is needed - just sharp eyes that can spot a problematic train of thought, predict the reactions to it, and prune it before there's a flame war. Yet we persistently end up in spaces where high volume is encouraged, anonymity is encouraged, and moderation is hands-off, hands-tied, or willfully blind. We are, in effect, allowing Sybil attacks on our networks all the time.
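The admin workflow described above - blame the account that started the noise, not the people who got dragged into it - can be sketched in a few lines. The post records and helper names here are hypothetical, purely for illustration; real forum software would have richer data and fuzzier judgment calls.

```python
# Hypothetical post records: (post_id, author, parent_id, is_noise).
# The data shape and names are illustrative, not from any real forum API.
posts = [
    (1, "alice",  None, False),
    (2, "troll9", 1,    True),   # inflammatory reply starts the noise
    (3, "bob",    2,    True),   # innocent user dragged into the flame war
    (4, "carol",  1,    False),
    (5, "troll9", 4,    True),
]

def root_instigators(posts):
    """Trace each noisy post back up its reply chain and blame the
    earliest noisy author on that chain, not the people replying."""
    by_id = {pid: (author, parent, noise) for pid, author, parent, noise in posts}
    instigators = set()
    for pid, (author, parent, noise) in by_id.items():
        if not noise:
            continue
        cur = pid
        while True:
            a, par, n = by_id[cur]
            # Stop walking when the parent is the thread root or is clean:
            # this post is where the noise started.
            if par is None or not by_id[par][2]:
                instigators.add(a)
                break
            cur = par
    return instigators

def prune(posts, banned):
    """Quietly drop every post by a banned account."""
    return [p for p in posts if p[1] not in banned]

banned = root_instigators(posts)   # {"troll9"} - bob is spared
clean = prune(posts, banned)
```

Note that `bob` posted "noise" too, but the trace attributes it to the account that provoked him, which is exactly the distinction hands-off moderation fails to make.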
Security aspects play a role here, but they belong to the longer-term "how do we rebuild the internet" question. Multiplayer games of a certain stripe (usually twitchy FPS games) have had to work around an inherent lack of security, and the resulting cheating, for a long time, and the best guarantees they've come up with are social in nature: as long as you can create identity persistence, the rest of the social profile can be built up on top of it.
Advogato put a ton of thought into the technical aspects of identity and trust metrics; I think many of its results remain applicable to a new network. Blockchains also enable distributed persistent identity, but not unique identity. Perhaps we could incorporate market forces and let identities or their postings exist as cryptoassets, creating a universe where "valuable comments" translate into wealth. Or perhaps that would be our undoing, letting trolls profiteer off their own cultivated following. Perhaps we could force more accountability into the system by explicitly defining top-down fiefdoms and having individuals "represent" those fiefdoms, thereby motivating the leaders to moderate them into the best possible light.
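To make the trust-metric idea concrete: Advogato's actual metric is based on network max-flow from a seed of trusted accounts, which bounds how many bad accounts a single compromised certifier can admit. The sketch below is a much simpler stand-in (even capacity-splitting over a roughly tree-like certification graph, not real max-flow), with an invented graph, but it shows the key property - a troll can get certified, yet cannot mint unlimited certified sockpuppets.

```python
from collections import defaultdict, deque

# Hypothetical certification graph: who vouches for whom.
certs = {
    "seed":    ["alice", "bob"],
    "alice":   ["carol"],
    "bob":     ["carol", "mallory"],
    "carol":   ["dave"],
    "mallory": ["sock1", "sock2", "sock3"],  # a troll certifying sockpuppets
}

def certified(certs, seed, seed_capacity=8):
    """Simplified flow-style trust metric (NOT Advogato's real algorithm):
    each account splits its trust capacity evenly among the accounts it
    certifies, so capacity thins out with distance from the seed and a
    single bad certifier can only admit a bounded number of accounts."""
    cap = defaultdict(int)
    cap[seed] = seed_capacity

    # Visit accounts in breadth-first order from the seed; assumes the
    # cert graph is roughly a DAG (cycles would need real flow handling).
    seen, queue, order = {seed}, deque([seed]), []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in certs.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

    # Push each account's capacity down to the accounts it certifies.
    for node in order:
        children = certs.get(node, [])
        if children:
            share = cap[node] // len(children)
            for child in children:
                cap[child] += share

    # An account is certified if at least one unit of trust reaches it.
    return {n for n in seen if n != seed and cap[n] >= 1}
```

Running this on the graph above certifies `mallory` (she has legitimate vouches) but none of her three sockpuppets, because her small capacity share splits to zero across them - the bounded-damage property that makes flow-based metrics attractive for Sybil resistance.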
As I see it, it's a big open field for protocol designers to come up with the right balance of incentives, privileges, and accessibility.