A little while ago I played with the idea that if the masses were convinced that breaking up Facebook (or Google, Amazon, etc.) was a good idea, and it eventually happened, the surviving Big Tech "shrapnel" would be less capable of standing up against a government seeking information from, or control over, them.
A paranoid and overly Orwellian idea for sure, but interesting to play around with!
I think that in the west, we have a different dynamic between tech and government - one that makes a huge difference.
In China, it would appear that the government is deeply involved with WeChat (which is itself deeply entrenched in everybody's lives). This close co-operation gives the government the access and control needed to monitor its citizens - leaving open the possibility of an Orwellian system.
In the West, instead, it's a little more back and forth between tech and government. The big tech companies take advantage of opportunities they discover (for profit), with the government subsequently cracking down on activity it deems abusive and harmful. Conversely, if the government itself seeks to spy on people or inappropriately extend its reach into their privacy, the tech companies are big enough to stand together and object to these types of intrusions. The government seeking to break into an iPhone and Apple refusing to help, or Facebook making WhatsApp E2E encrypted by default (and soon Messenger as well), are a few examples that come to mind.
With the current uproar and battle cry to break these big tech companies up, part of me feels that we might collaterally lose this balance of power between the two.
I know this is a highly unpopular opinion (and hence the throwaway), but I really don't understand this idea of wanting to "break up" Facebook and the underlying motivation behind these types of statements.
What exactly is it about the platform that makes it so "evil"? Do we just think Zuckerberg is innately corrupt and trying to slip some mastermind plan of internet domination past us without our noticing?
I understand and similarly dislike the fact that Facebook aggregates statistics about its users and sells them to advertisers, but at the same time, do we honestly believe that the vast majority of people would rather pay a subscription to get rid of the ads and the data collection?
I don't use Facebook myself (despite having an account), but from my perspective, it just seems that society as a whole has simply decided to scapegoat Facebook and its "ecosystem". We seem to be trying to blame the company for the unwanted secondary effects of the rise of the internet and all the inter-connectivity that has come as a result.
A good example in my mind is WhatsApp. Facebook went ahead and made the whole messaging system end-to-end encrypted (E2EE), much to everybody's applause. From that point on, even among security experts, WhatsApp was well regarded as a messaging platform - essentially second only to Signal. However, a little while ago, following the Brazilian elections, there was a massive uproar that "fake news" was being spread through WhatsApp and that Facebook hadn't done enough to intervene and stop it. Really? So now we are complaining that Facebook isn't reading and censoring messages accordingly?
Do people not remember those annoying e-mail chains back in the day - usually forwarded by some naive friend? Are we really to blame the medium? I feel people will always find ways to use the internet to spread misinformation.
Do we really think we can stem "fake news" by expecting Facebook to bear the responsibility for everything that is communicated on their platform(s)?
Lastly, are we truly confident that hamstringing Facebook with a "break-up" is really going to lead to a better future? Do I really want my kids to be using TikTok/WeChat/Telegram or some other foreign controlled platform over which my government has much less oversight?
I’m glad somebody said this. I could list a thousand companies more evil than Facebook just in energy (3 million people a year die from coal!) and healthcare.
While N companies being more evil doesn’t make X company not evil, employing tens of thousands of people while being significantly less evil is definitely relevant.
We really need to start discouraging the use of throwaway accounts for comments like this. One should be willing to stand by their beliefs and navigate the process of push-back with courage and honesty.
Themselves. People will attack others on the internet for having an unpopular opinion that differs from their own. If your username is attached to your real name, then sometimes it's safer to post risky comments on an alternate account, instead of your real account. This is similar to when you have a public account and private account.
Also you can't edit/delete comments after a couple hours, and
you can't delete them after someone replies to you.
Ah, yes, I see that now. That's a shame - quite annoying really.
I don't get it. I don't really see what it is I wrote that warranted getting "flagged". I just said I wasn't keen on maintaining an account and, with that, a HN reputation.
I'm assuming it's against the rules to copy/paste my comment again in response to someone else, so I'll just leave it - but that kills the desire to participate.
Ironically, I'd bet that had it been a Facebook algorithm moderating the content (instead of just anybody having the power to censor/flag), my comment would still stand - and in this case, that'd be fairer to "free speech".
We are quick to plaster a company as "evil" and yet mob rule worries me more.
You forgot one major thing: the algorithm. Facebook decides what you see and what is recommended to you. If they recommend something to you that has more engagement but is fundamentally false, we have a problem. It's one thing when it's a pure communication platform (WhatsApp); it's another when you push/promote some content in the feed over something else. No, they don't have a master evil plan, but they are careless and don't take responsibility. One example where it becomes a problem: it was proven that violence in Myanmar was triggered by false information that the algorithm pushed to more people.
That's very fair criticism, and you're right: promoting something to others based on the probability of it triggering a reaction has some pretty major downsides. Although I don't know the specifics of Facebook's algorithms, from my experience I would agree it seems to amplify posts that trigger an emotional response, e.g. "oh, that's so CUTE!", "oh, that's so COOL!", "oh, that's so FUNNY!", "oh, that's so TERRIBLE!", etc.
I guess the only thing I can say is that it may take a while before we have AI capable enough to distinguish baseless posts intended to stir people's emotions from posts that have merit but are emotionally engaging and controversial. In the interim, it seems like Facebook is trying to brute-force the problem with human moderators - but again, that's perhaps not the most objective way of handling it, and I wonder how they will tackle this long term.
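The amplification mechanic being discussed can be sketched as a toy ranking function. This is purely hypothetical - Facebook's actual algorithm is not public, and every weight and field name here is invented - but it shows how scoring by predicted engagement differs from a plain chronological ("dumb") feed:

```python
# Toy sketch (hypothetical; not Facebook's actual ranking): score posts by
# engagement, so emotionally charged posts float to the top of the feed.

def engagement_score(post):
    # Signals of a strong reaction (shares, comments) are weighted more
    # heavily than quieter signals like a simple reaction/like.
    return (3.0 * post["shares"]
            + 2.0 * post["comments"]
            + 1.0 * post["reactions"])

def rank_feed(posts):
    # Engagement-ranked feed: highest score first, regardless of age.
    return sorted(posts, key=engagement_score, reverse=True)

def dumb_feed(posts):
    # "Dumb" chronological feed: newest first, content-blind.
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

posts = [
    {"id": "calm-update",  "timestamp": 3, "shares": 1,  "comments": 2,  "reactions": 5},
    {"id": "outrage-bait", "timestamp": 1, "shares": 40, "comments": 80, "reactions": 200},
]

# The older but more provocative post wins under engagement ranking,
# while the chronological feed simply shows the newest post first.
print([p["id"] for p in rank_feed(posts)])   # ['outrage-bait', 'calm-update']
print([p["id"] for p in dumb_feed(posts)])   # ['calm-update', 'outrage-bait']
```

The point of the sketch: once ranking optimizes for reaction probability, provocative content outcompetes calm content by construction, with no "evil plan" required.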
(On a different note, I'm being throttled by HN for posting too fast, so sorry for the delay in response).
> What exactly is it about the platform that makes it so "evil"?
Off the top of my head I'd personally nominate
Ads on WhatsApp
Password breaches affecting both FB and IG
Forced homogenization of the product offering via Stories
Sleaziness around graph growth
An upcoming boondoggle of an integration between the FB, IG, and WhatsApp identity systems, supposedly to align them on strong privacy via end-to-end encryption
And personally I'd add the pursuit of end-to-end encryption at scale, which sounds like it'll lead to a moderation disaster. (I.e., don't count me as a supporter of E2EE in the first place.)
> Do we really think we can stem "fake news" by expecting Facebook to bear the responsibility for everything that is communicated on their platform(s)?
I am not sure what alternative you propose. Bury your head deeper in the sand?
> are we truly confident that hamstringing Facebook with a "break-up" is really going to lead to a better future? Do I really want my kids to be using TikTok/WeChat/Telegram or some other foreign controlled platform over which my government has much less oversight
I am not sure which is your government but it would probably have some power as long as we're talking about commercial operations, which tend to have some substantial presence in the jurisdictions where they're operating at scale. Or, even just blocking...
I don't know about breaking up Facebook, but I'd be happy if Instagram were spun back out (though, the founders are gone).
I'm someone who deleted their Facebook account.
In short, the reach and impact they have, given their proclivities, are not a net positive. When they make mistakes, or when they pursue questionable business practices, it has global impact.
That kind of power should be held by more accountable organizations, and with Zuck controlling more than 50% of the voting rights of the stock in FB, there's no chance.
At this point, they're tripping into a future, commensurate with the ignorance (or unconcern) of their user base, that I can't support.
If you're asking, "What do they do that's so bad?"... well, it's well documented.
Education is no doubt one of the problems underlying credulity but it is not the only one. I have seen very well-educated people repeatedly fall for misinformation on social networks that played to their preconceptions.
E2EE is just a modern way to enforce secrecy of correspondence. The contents of letters have been protected from prying eyes for 300 years now in Germany and France, and to my understanding also in the US for the last 140 years. If we suddenly have an urgent fake-news problem, I would suspect it's caused by some new development, not a centuries-old policy.
Different people complain about different things. When you try to put it all together it's not going to be coherent. And yeah, people attack Facebook as a proxy for broader trends, but that's somewhat fair when you're as dominant as Facebook is. They have the power to set trends.
Any form of communication will be used to spread misinformation, but this isn't email or town gossip. Facebook is actively pushing this stuff onto people as a direct result of choices they have made in designing their algorithms. Facebook has taken a problem that has always existed and actively made it much worse than it would be if Facebook didn't exist. At the very least that's worth thinking about.
Is it really so crazy to think that these companies should bear some responsibility for what happens on their platforms? I know the longstanding tech world answer to this has been a resounding yes, but I don't think it's that simple. It shouldn't be okay to build a product designed to suck people in and then wash your hands of any and all consequences.
I don't think breaking up Facebook (whatever that means) will do anything useful, but I do sympathize with the criticisms.
There's a myth that SV is more moral because its people are "smarter" - they know technology and computer things and are so clever with their novel inventions - making them morally superior to traditional good-ol'-boy companies like big oil or auto, who just inherited big, boring, uninnovative businesses.
I post unpopular opinions from my account all the time. I'm fine with people downvoting them; our society has an Overton window that I know I simply, at times, fall outside of.
That being said, I've seen some powerful arguments against facebook already: the newsfeed algorithm, for example, is my most important one. It should be required, by law, to be a dumb feed.
Outside of that, we have to look at the other "evil" companies you speak of, and ask why we don't speak out against them. I would argue that the reality and rules society lives under are distorted by the existence of social media - which isn't necessarily a bad thing; many changes need to occur. However, the things that make Facebook addictive (getting likes), and thus valuable, are the very things that encourage the worst societal behaviors and extremism, making it impossible for society to compromise enough to do something about those bad actors. You demonstrate the power of getting likes by the fact that you used a throwaway account for your comment.
I don't think that social media can effectively work unless it is stripped of the profit motive.
Regarding "using a throwaway account", please see my response to piffey below. In short, I don't have a main - I'm usually just a reader. I actually wholeheartedly disagree with karma-type systems in general and with the concept of leveraging previously acquired "reputation". Sadly, one requires an account to comment, and new accounts get throttled/rate-limited quite quickly (as I've just realized).
Re: the newsfeed algorithm - as I just replied to jeromegv, I agree that promoting emotionally engaging posts regardless of content (which I think we all sort of agree is the way Facebook operates) is obviously problematic. Whether a dumb feed (a.k.a. "rank by new") is the way to go, I don't know - it might just favor the loudest/most talkative - but perhaps you're right.
Also I never made the claim that TikTok/WeChat/Telegram were evil. My point with that statement was more like that proverb: "Better the devil you know than the devil you don't know".
I'm not against karma systems generally, but they really need to be carefully designed so they don't encourage cheap sensationalist crap. Reward insight, nuance, demonstrated domain expertise, and respectful dialogue.
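As a toy illustration of that design goal - entirely hypothetical, with every signal and weight invented for the sketch - a karma formula could damp raw vote counts so that mass upvotes alone don't dominate:

```python
import math

# Hypothetical karma rule favoring substance over sensationalism:
# raw upvotes see diminishing returns, while scarcer quality signals
# (e.g. endorsements from established domain experts) count linearly.

def comment_karma(upvotes, expert_endorsements, flags):
    # Logarithmic damping blunts the payoff of cheap viral one-liners.
    vote_part = math.log1p(upvotes)
    # Expert endorsements are assumed rare, so they carry fixed weight.
    endorsement_part = 5.0 * expert_endorsements
    # Flags subtract sharply, discouraging bait.
    return vote_part + endorsement_part - 10.0 * flags

# A viral one-liner with 1000 upvotes...
viral = comment_karma(1000, 0, 0)
# ...earns less than a niche, expert-endorsed answer with 10 upvotes.
expert = comment_karma(10, 2, 0)
print(viral < expert)  # True
```

The design choice is the log: doubling an already-large vote count barely moves the score, so the incentive shifts toward earning the rarer, higher-weight signals.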
> I know this is a highly unpopular opinion (and hence the throwaway),
Using a throwaway account makes sense if you're sharing secrets or making yourself vulnerable. Creating one to share an unpopular opinion (presumably, to avoid the terrible risk of... having a few strangers downvote you on the internet?) makes you lose credibility.
> What exactly is it about the platform that makes it so "evil"? Do we just think Zuckerberg is innately corrupt and trying to slip some sort of mastermind plan of internet domination without us noticing?
Um... yes. Have you not noticed a pattern of bad behavior coming from them for the last 15 years or so?
> Lastly, are we truly confident that hamstringing Facebook with a "break-up" is really going to lead to a better future?
Um... yes. The root of this problem is a lack of integrity, amplified by a lack of accountability and by monopolistic power. Break up the monopoly, and things will improve. We're not going to magically fix human nature overnight, but competitors with a different vision will at least have a chance to gain traction. Right now they have none.
I don't know whether one could "stop where the internet is going". I think the internet is what it is. It has its merits and it has its flaws. I'm just saying we seem to be pointing the finger at Facebook as the source of our problems whereas Facebook is really just a reflection of ourselves as a society.