Take a community with AI moderation like Reddit, where I've been a participant for years. With the recent push toward AI autocorrect and moderation, you can see the changes in language: new words, new ways of speaking, unconsciously editing yourself because you don't want to draw the eye of the bot. It doesn't feel subtle. It feels Orwellian.
It's particularly egregious on YouTube, where people frequently use words like "unalived" or "self-deleted" instead of "murder" or "suicide", lest they incur the wrath of the almighty algorithm.
That seems to me to be an example where the language is forced to change but the thoughts remain the same. Sure, people are using the "safe" terms, but they're using them to continue to subvert the rules, not to bow to them.
The problem is when that vernacular extends into regular life. I haven't noticed it yet with unalive, but I'm sure there will come a day. Eventually, if the censors continue suppressing the word suicide, we will end up with unalive taking suicide's place both online and offline. Then the censors will censor unalive, a new word will be coined, and the cycle continues.
> On Friday, a social media user tweeted an image from the Nirvana exhibit at the Museum of Pop Culture in Seattle. A placard dedicated to the “27 Club” read, “Kurt Cobain un-alived himself at 27.”
I'm not fully comfortable with the shift in language either, but my point is that even if the language is changed, the thoughts remain. To use 1984 (is there a Godwin's law equivalent for this now?), the Party taught that 2 + 2 = 5, which is changing thought. Social media is trying to do that, and failing. The danger is if it's one day effective, but to date it hasn't been.
YouTube comments are a separate genre in themselves. Thanks to YouTube's moderation policy, music video comments are all the same: the same tired jokes, the same patterns. Not AI slop per se, but it feels the same.
I recently had a comment removed by Reddit. It wasn't even against the rules; it was anti-establishment, is all. I insulted the billionaire class in that comment. Class-division comments are now banned. Wouldn't want revolution on a for-profit forum now, would we?
I can hear the lawyers huddled around a conference table, rolling the bones and chanting the sacred words to come up with that "get out of trouble free" card. It told your son he had terminal cancer and should kill himself? Sorry, it clearly says "for Entertainment Purposes only."
Considering that they aren't properly separating the two groups, I don't see this "response" as anything but a weak excuse to do what they wanted to do anyway.
Core to the problem is that Roblox’s social media features allow pedophiles to efficiently target hundreds of children, with no up-front screening to prevent them from joining the platform.
For example, in 2018, prior to Roblox going public, a 29-year-old was caught by police with 175 hours of video footage of him grooming and engaging in explicit behavior with 150 minors using online platforms, namely Roblox.
Media and non-profit exposés from 2020 to July 2024 revealed digital strip clubs, red light districts, sex parties and child predators lurking on Roblox. The National Center on Sexual Exploitation in 2024 labeled Roblox "a tool for sexual predators, a threat for children's safety".
Numerous criminal indictments from 2019-2024 allege that sexual predators groomed children aged 8-14 in-game, then kidnapped, raped or traded sexual content with them.
Following years of scandals, we performed our own checks to see if the platform had cleaned up its act. As a test, we attempted to set up an account under the name ‘Jeffrey Epstein’…only to see the name was taken, along with 900+ variations.
Many were Jeffrey Epstein fan accounts, including "JeffEpsteinSupporter", which had earned multiple badges for spending time in kids' games. Other Jeff Epstein accounts had the usernames "@igruum_minors" [I groom minors] and "@RavpeTinyK1dsJE" [rape tiny kids].
We attempted to set up a Roblox account under the name of another notorious pedophile to see if Roblox had any up-front pedophile screening: Earl Brian Bradley was indicted on 471 charges of molesting, raping and exploiting 103 children. The username was taken, along with multiple variants like earlbrianbradley69.
After we found a username, we listed our age as “under 13” to see if children are being exposed to adult content. By merely plugging ‘adult’ into the Roblox search bar, we found a group called “Adult Studios” with 3,334 members openly trading child pornography and soliciting sexual acts from minors.
We tracked some of the members of “Adult Studios” and easily found 38 Roblox groups – one with 103,000 members – openly soliciting sexual favors and trading child pornography.
The chatrooms trading in child pornography had no age restrictions. Roblox reports that 21% of its users are under the age of 9, a number that is likely an underestimate given that Roblox has no age verification except for users seeking 17+ experiences.
Registered as a child, we were also able to access games like “Escape to Epstein Island” and “Diddy Party”. We found over 600 “Diddy” games, including “Survive Diddy” and “Run From Diddy Simulator”.
Since September 2nd, 2024, third-party monitor ‘Moderation For Dummies’ has reported ~12,400 erotic roleplay accounts on Roblox. These include everything from “rape/forceful sex fetishes” to underage users “willing to do anything for Robux”.
Users seeking sexual experiences on Roblox are so pervasive that there are thousands of Roblox sex videos on porn sites, inviting users of unknown ages to make explicit content on the platform.
We tested out Roblox’s experiences to see what else kids were being exposed to. We quickly encountered images of male genitalia and hate speech in Roblox’s “school simulator” game, which had registered 28.9 million visits with no age restrictions.
Knowing how the ad tech industry works, if this data makes its way there, I wouldn't be at all surprised to see a foreign adversarial nation buying the Social Security data, whether from ad tech or from this DOGE person in general, either directly or through multiple layers, and in a secretive manner at that.
Either way this data is definitely going to spread behind closed doors.
I disagree - it's 100% a factor of how much money you have to pay in legal fees.
Zuck would be happy to take that data, and because he's worth a cool $350 billion, he'll do whatever the fuck he wants with that data, and we'll thank him by cutting his taxes.
Nobody wants to fuck with PII; platforms will blackball you in a second if they think you have sensitive data. If you haven't worked in adtech, be quiet and do even the most trivial research before spouting nonsense.
charitably, i think the choices one makes to enter that profession betray a lack of consideration for the broader good of humanity in order to profit a select few - choices that necessarily include misdirection and manipulation of actual people. choices that lead me to regard behavioral advice from such folks as essentially worthless.
> As long as the penalties for data breach are a slap on the wrist and buying everyone one year of credit monitoring, no one will.
And, of course, that one year is totally useless when one is subject to multiple breaches per year. Throw in the fact that so many breaches aren't even with a company that affected individuals have a direct relationship with, and it becomes virtually impossible to fix this.
At this point, I'd be in favor of making any company that handles personal data pay in advance for the monitoring, and get refunded when they prove that they OR THEIR PROVIDERS haven't had a data breach.
> I'd be in favor of making any company that handles personal data pay in advance
How about we start with some strict data privacy and handling laws? Make it so you straight up just can't collect & store personal information without proving that it's required and that your business would not work without it (and no, data harvesting for advertising/marketing doesn't count).
Security is the problem, but it would be less of a problem if everyone weren't trying to hoard as much data as possible from their customers for seemingly no reason at all. Take a scroll through the Play Store/App Store and look at how many really simple apps request permissions for camera, microphone, location, local network, etc. - for something like a metronome app that needs none of that.
There is a reason for hoarding data: it’s an asset on the balance sheet. So long as it is legal to liquidate data for cash, there will be incentives to collect and keep it.
Or at least make it a liability on the balance sheet rather than an asset. Sure, you can store as much user data as you want. Oh, what's that, if it leaks you owe each user $10,000 under the new law?
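Back-of-the-envelope, with purely illustrative numbers: at $10,000 per user, leaking a database of 10 million users is a $100 billion liability; even at $100 per user it's $1 billion. Suddenly "store everything forever" becomes a much harder sell to the CFO.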
What about making them put up a hefty bond, proportional to the sensitivity and scale of the data collected, which is forfeited to any potentially affected users in the event of a breach?
How about paying the user whose data has been collected? It's their data. If we are the product, we should get paid for being used! And we should get paid a whole lot more (multiples) when a leak exposes it.
The real riches are in starting a credit monitoring company. Vibe coded, of course, and if you have a data breach, then it's a perpetual motion machine.
The fact that the average joe can't start their own credit monitoring company as competition, and that the incumbents get away clean every time they screw up, says a lot about "capitalism" as we practice it.
Monitoring is a joke. We need legislation with real teeth. Companies which don't protect the user data they've been entrusted with should go bankrupt, to make way for those who actually care.
I think that's definitely true to a degree, but I think the thing more companies are worried about is the reputational damage from the terrible press. Look at SolarWinds (not a data breach, but similar press around it). It erased hundreds of millions in shareholder value, and the company was taken private at pennies on the dollar in the aftermath. There's real risk there.
> I think the thing more companies are worried about is the reputational damage from the terrible press.
I don't think companies care all that much about reputational damage from terrible press. Some of the most profitable, wealthiest corporations on the planet are also the most hated. We have profitable corporations that have committed serial killings, infanticide, and mass poisonings. There's press about companies whose products and profits come from the use of literal child slaves. There is "terrible press" out there right now explaining how you are currently being hurt by companies that put profit over human life, but they aren't going out of business because of it.
Do you know how many companies have had bad press about data breaches and security issues but are still around and making money? I'm pretty sure it's all of them. Including SolarWinds.
Companies don't care if you like them or not. They care only about money. Until the cost of not securing people's data is likely to be higher than what they save by ignoring security risks, corporations aren't going to bother to give us anything but security theater, promises, and the occasional check for $10 and a year of "identity protection services" after another pointless class action lawsuit.
> Companies don't care if you like them or not. They care only about money.
To put a slightly finer point on it, many only care about whether investors think their stock price will go up, either by acquiring money despite being hated or else because other investors [0] are going to invest.
For every SolarWinds, there are hundreds of breaches that never get more than cursory reporting (if that). And SolarWinds is still in business (and some would call "taken private at pennies on the dollar" a feature, not a bug, but I digress), as are vastly more consequential examples (Equifax, anyone?).
Yes, reputational damage is a thing, but in my experience (sitting in the decision-making meetings, as a participant, many, many times in my career) it's a second-tier player at the end of the day. This is especially true of data breaches: I cannot count the number of times (in the last decade particularly) where the decision point was "What reputation damage? Everyone and their mother has had a data breach. No one cares." I don't think they're wrong.
This, like many issues of security and risk, is the consequence of the vast majority of the customers not caring. How many users dropped Facebook in 2019, or LinkedIn in 2021 (or 2012)? How many swore off Ticketmaster? Marriott? Adobe? eBay? And that's just ungodly massive breaches. So why would the average business give a steaming crap?
In my dark little heart of hearts I sometimes think "what would it take for the average person to actually care", and then I realize what that looks like, and I don't sleep well for a couple of nights. Cheers!
For people to care, it would have to be like healthcare. The Change Healthcare breach cost $2B+ and led to a huge loss in market share. Or like AMCA (Labcorp's billing company), which went bankrupt after its breach. If you're a health tech company, you can no longer insure your way out of the problem once you reach a certain size.
The reality is that we need data breaches to be painful, but maybe not company-ending events unless it really is sensitive data. As patio11 likes to say, the right level of fraud is not zero. There's a middle ground where we can increase company liability or reduce the damage caused by a breach.
Optum360, still in business. HCA Healthcare, still in business. Excellus Healthcare, still in business after paying something like 50 cents per breached user. AMCA went out of business because their biggest customers said "damage control dictates we cut ties with you so we don't look complacent" (that is, like I said, the customers have to care to make a difference). And did anyone stop going to Labcorp (after their own data breach, not AMCA's) or get a different doctor because the healthcare group they're part of got breached? Not likely. I don't think healthcare is ahead of the game here.
But yes, until it becomes actually painful to companies and the people who run them, it won't get better. If a corporate death penalty is off the table (I don't think it should be), I guess the answer would be either or both of proportionate fines (fines equaling a couple of hours of revenue don't cut it) and making some of the leadership personally accountable, a la SOX: fines, asset forfeiture and criminal responsibility for responsible C-level execs. Hate on SOX all you want, it sure made finance executives care about what is going on in their organizations.