How will these systems defend against spam, click farms, troll armies, and AI-assisted or AI-powered sock puppeting once they get big and/or influential?
This is usually what kills federated and decentralized communication platforms. They work fine as long as they are too niche for bad actors to target, but as soon as there is money to be made or political influence to be had from targeting them, they are destroyed by abuse.
It's a major threat even for the centralized platforms, and those are easier to defend.
Today's Internet is a battlefield in a global information war and new systems must be designed accordingly. Unfortunately most efforts that I see in these areas still make optimistic assumptions and underestimate the sophistication and determination of bad actors.
Instances that don't police this get silenced or defederated. Much of the fediverse is made up of small invite-only instances that share information on bad actors like this. The bigger ones have mod teams and are generally run by people who don't equivocate about keeping bad actors around to protect ad revenue.
edit: This is, at a minimum, a good experiment to see whether these kinds of propaganda are inherent to social media or only possible because ad-funded silos are loath to ban obvious bad actors and stop enabling them with their tools.
I'm on the side of blaming Facebook and Twitter. They didn't create propaganda, but they sure did make it cheap and easy.
That's a start, but my concern is that the volunteer militia will get swamped and burnt out when the real attacks come.
Right now Mastodon and other ActivityPub platforms are too small for Cambridge Analytica, the Russian FSB, or other actors of similar caliber to bother with. Twitter, Facebook, Instagram, etc. are where most of the users are, so that's where most of the effort will go.
If these platforms ever "tip" into mainstream adoption, prepare to be targeted by organized crime gangs running financial scams, nation states, corporate PR firms, and other organizations with hundred-million to billion-dollar budgets.
What I really wanted to do was to stress the fact that this is a battlefield. One of the trends I see in the early 21st century is the dematerialization of warfare. Wars can now be fought entirely online. Governments can be toppled, economies destroyed, corporations imploded, all with a mixture of cyber attacks and propaganda. As a result we are seeing military budgets redirected toward these things.

The sort of spam and amateur brigading that most volunteers are used to dealing with on social forums and platforms is nothing compared to what the big social media platforms are facing now, and that is nothing compared to what's coming. Billions of dollars are currently being spent by PR firms, advertisers, and governments to develop increasingly advanced AI and big-data powered propaganda platforms to weaponize the Internet. In the future we'll probably see fully automated AI-driven propaganda, what I've started calling "con artistry at scale."
Federated and decentralized platforms are very vulnerable in ways that silos are not, and this has to be thought about. It's easy to create quiet, apparently friendly and normal Sybil nodes that passively suck down data and then use that data to mount active attacks from other directions. Volunteers may fight active attacks, but they may have no way of knowing which apparently normal nodes are actually passive participants in those attacks. Also keep in mind that "attacks only get better." With each attack the attackers learn, and it's generally easier to attack than to defend (in cybersecurity generally, not just here).
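One rough countermeasure an instance admin could deploy against passive Sybil peers is a consumption-versus-contribution heuristic: a purely passive node fetches heavily but posts almost nothing. A minimal sketch, with hypothetical field names and a threshold pulled out of thin air:

```python
from dataclasses import dataclass

@dataclass
class InstanceStats:
    domain: str
    statuses_fetched: int   # objects this peer pulled from us
    statuses_posted: int    # objects this peer delivered to us
    active_accounts: int    # accounts on this peer we've seen interact

def passive_sybil_score(s: InstanceStats) -> float:
    """Ratio of data consumed to data contributed; higher = more suspicious."""
    return s.statuses_fetched / max(1, s.statuses_posted + s.active_accounts)

def flag_suspicious(peers: list[InstanceStats], threshold: float = 50.0) -> list[str]:
    """Return domains whose consumption/contribution ratio exceeds threshold.

    Candidates for human review, not automatic blocks: legitimate archive
    mirrors and read-heavy relays will also score high.
    """
    return [p.domain for p in peers if passive_sybil_score(p) > threshold]
```

This only surfaces the crudest scrapers; a determined attacker would pad the node with plausible chatter, which is exactly why the "which normal-looking nodes are hostile" problem is hard.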
I'm on masto regularly and the community is much more pleasant there than, say, Twitter. A lot of users attribute that pleasantness to decentralization. I call BS: it's because it's a tiny self-selected group that wants the opposite of what Twitter provides.
You're absolutely right: once this tips, and people join because it's what you have to join to talk to people rather than because they're looking for a real change, it'll be unbearable.
The best thing that can happen to the fediverse is continued incremental growth, so that at each step the missing moderation tools can be spotted and built before it all blows up.
Personally I think we'll look back on these massively open networks where everybody can reply to everybody as an anomaly.
> Personally I think we'll look back on these massively open networks where everybody can reply to everybody as an anomaly.
It's cyclical.
- The internet and online communities (BBSes) started out decentralized.
- Then came AOL / Compuserve / etc. Centralized platforms.
- The web broke up those platforms and shuffled everything around.
- Then the current Google / Facebook / Twitter megaplatforms arose and that's the state we're in.
So it's logical that we're ready for a decentralized cycle but over time, users will forget why decentralization is good and another big platform will rise.
I have found this argument silly WRT "there are no viruses for Linux because no one uses Linux" too.
If the target is small, the attack gets much cheaper too. You don't need 24/7 active armies of trolls; one, maybe two part-timers are enough to attack a small Mastodon instance.
My argument is that at all moments the attackers will weigh benefits against costs. Currently, apparently, the cost to infiltrate, spread fake news, or troll is too high.
That is caused partly by the size of the attacked community (the benefits to the attackers are low) but mostly by the attack being more work than it's currently worth.
> My argument is that at all moments the attackers will weigh benefits against costs. Currently, apparently, the cost to infiltrate, spread fake news, or troll is too high.
This is essentially restating the argument you find silly :). Linux didn't have viruses, and Mastodon doesn't (yet) have paid troll armies, because in both cases, why go to all the effort there, when the same effort on the popular platforms can yield you orders of magnitude more gain?
I don't think the important question to ask here is whether it can survive, it's how it can survive. These sorts of tools should be in the commons anyway, just like social media.
Monolithic social media is not sustainable and is oftentimes quite harmful because it optimizes for engagement. The Fediverse will deal with this one way or another; worst case, everyone switches to whitelisting.
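Whitelisting here would mean flipping federation from default-open to default-closed: an instance only accepts inbox deliveries from peers its admin has explicitly listed. A minimal sketch of that check, with hypothetical domain names:

```python
# Hypothetical allowlist for a whitelist-federation instance: everything
# not on the list is dropped before it reaches any local timeline.
ALLOWED_PEERS = {"friends.example", "coop.example"}

def accept_delivery(origin_domain: str, allowed: set[str] = ALLOWED_PEERS) -> bool:
    """Accept an incoming activity only if the sending instance is allowlisted."""
    return origin_domain.lower() in allowed
```

The trade-off is obvious: it kills organic discovery, which is why it's the worst case rather than the default.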
I've been in favor of adding a limit, one you could remove by deleting a few lines of code of course, on the number of users an instance can hold. Purely for symbolism. It worked when Elasticsearch was added to Mastodon: a "viewable_by" property was added so you could only search for things you had engaged with.
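The idea behind that search gating is that full-text search only covers statuses the searching user has already engaged with, which removes the main tool brigaders use to find targets. A minimal sketch of the filter, with hypothetical field names rather than Mastodon's actual schema:

```python
def searchable_statuses(all_statuses, user_id, query):
    """Return statuses matching `query`, limited to those `user_id` engaged with.

    Mirrors the idea behind Mastodon's gated search: a status is only
    searchable by users who authored it, were mentioned in it, or
    favourited/boosted it. Dict keys here are illustrative, not real schema.
    """
    def engaged(status):
        return (status["author"] == user_id
                or user_id in status["mentions"]
                or user_id in status["favourited_by"]
                or user_id in status["boosted_by"])

    q = query.lower()
    return [s for s in all_statuses if engaged(s) and q in s["text"].lower()]
```

Strangers' posts simply never appear in your results, no matter what keywords you try.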
I totally agree. I'm not trying to pooh-pooh this stuff, just trying to ask some questions. I'm asking in part because I'm curious about what has been developed and what's being contemplated for the future.
I think one promising avenue is to start doing our own work in AI / deep learning powered countermeasures against attacks. AI doesn't replace humans (yet?), but it does augment them. A citizen militia facing an AI-augmented propaganda army would be toast unless it too leverages the force-multiplying power of AI.
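Even the simplest version of this augmentation helps: a classifier that pre-scores reported posts so human moderators review the likeliest abuse first, rather than wading through everything. A toy naive Bayes sketch of that triage step (not any existing moderation tool's implementation):

```python
import math
from collections import Counter

def train(spam_texts, ham_texts):
    """Fit word counts for a tiny naive Bayes spam/abuse filter."""
    spam = Counter(w for t in spam_texts for w in t.lower().split())
    ham = Counter(w for t in ham_texts for w in t.lower().split())
    return spam, ham, sum(spam.values()), sum(ham.values())

def spam_log_odds(model, text):
    """Log-odds that `text` is spam; positive leans spam.

    Add-one (Laplace) smoothing keeps unseen words from zeroing the product.
    """
    spam, ham, n_spam, n_ham = model
    vocab = len(set(spam) | set(ham))
    score = 0.0
    for w in text.lower().split():
        p_spam = (spam[w] + 1) / (n_spam + vocab)
        p_ham = (ham[w] + 1) / (n_ham + vocab)
        score += math.log(p_spam / p_ham)
    return score
```

The point isn't the model, which any PR firm can beat; it's that even a crude scorer lets a small mod team spend its scarce attention where it matters.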
Given the agility of small actors at innovation, the citizen militia might actually end up with better AI than the PR flunkies and propaganda mills.
> Federated and decentralized platforms are very vulnerable in ways that silos are not, and this has to be thought about.
It's funny because current FLOSS decentralized platforms are so horrendously designed, hopelessly unscalable, and obviously unmaintainable that the silos like Facebook and Twitter should be decades ahead in their ability to eradicate nation-state trolls and sock puppets.
Yet look at the recent NY Times article on Facebook: their Republican lobbyist said they couldn't publicly release the results of their 2016 election investigation because his own mother had friended one of the Russian trolls revealed by their chief security officer. And we only know any of this because of leaks.
And that's apparently after their security officer ignored advice from the board and legal counsel to stop investigating!
So right now you've got two choices:
1. silos with a business model that prevents effective defense of users from the attacks you describe.
2. poorly designed/conceived decentralized FLOSS thingies that will probably fail in ways similar to Debian's openssl RNG/valgrind debacle.
Debian eventually fixed that bug. How do you predict Twitter/Facebook will fix their business models?
Given the kinds of communities that have flocked to the fediverse over the past year, moderation tools have always been a top priority of apps adopting ActivityPub. Features have been killed due to the potential for abuse, too.
My hope is that as the fediverse grows, ActivityPub support becomes a must-have feature for any new social site/app. A nascent social site doesn't need to attract an initial core community if it can plug you into one that already exists.
The ActivityPub grassroots is slowly growing.