
> outside narrow limits such as incitement to violence

Why can’t those narrow limits include “flat out knowingly lying with the specific intent of deceiving people?” Sure it’s a very human definition but it’s one with built-in limits on its scope. You can’t use it to ban wrongthink because it has to be from people who know that they’re lying.

> turning people’s feeds into an ever more extreme echo chamber

So yes but also this is done voluntarily. Those algorithms are keying on to the fact that I do not want specific kinds of content. If given the option I’ll even explicitly make my preferences known — I’ve blocked probably a thousand subreddits just to make my /r/all tolerable; Twitter is only usable if you confine yourself to niches. It’s #general or barrens chat that’s the cesspool of nonstop screaming.



Even if there were a simple way to define "lying" in this context and a simple, 100% effective way to prove it, it would only shift the problem, not solve it. You can already "lie" under oath if you phrase something as an opinion, so long as nothing contradicts your statement; it's that simple. If people can be sentenced for words written online because they intentionally lied, that just puts a target on normal people and makes professional writers team up with lawyers to avoid ever writing anything that could be deemed a lie. That solves no problem at all. People will find a way to tell you that the earth is flat anyway. Moving wrong speech closer to wrongdoing is a very dangerous idea in general. We should want more speech, not less, and we get that if speech is tolerated.

The "inciting violence" exception is already very, very close to breaking the concept of free speech. And it too can be defeated simply by linguistic tricks. "Kill the ...." would incite violence, but "I think we should kill the ..." expresses an opinion. This very example uses the same words as something that could in fact incite violence, yet clearly my post isn't doing so. Now, do we really want an AI to detect the difference? Or a real human? Moderators are almost certainly not qualified to judge, because a content moderator isn't a judge and should not be.


Who determines what is true and what's a lie? Why do you trust them to make the right call?


Precisely -- and let's be clear: the disinformation being discussed here breaks down along partisan lines.

We can barely get Republicans and Democrats to agree on a budget; what makes anyone think that they could reasonably come to an agreement on objective standards of truth in media, let alone a process by which those standards are enforced? This is way, wayyyyy outside the realm of reality.


I agree with your point, though I think your example is a bit flawed: I think it's reasonable to disagree on what should be in a budget; there's no one "correct" budget where all other budgets are wrong.

On the other hand, facts are facts. Assuming you actually have all the facts (which often we don't), there is only a single truth.


What people call a “fact” for these purposes is a lot broader than what epistemologically qualifies as a “fact.” You can see this with a lot of “fact checking” websites. The second item on the fact-check.org website is whether reduced wind power caused the Texas electrical outages: https://www.factcheck.org/2021/02/wind-turbines-didnt-cause-...

The percentage drop in wind power megawatts is a “fact.” What “caused” the Texas power outage is a multi-variable system analysis that produces a conclusion, not a fact, under certain specified assumptions. (This is obvious to an engineer: the NTSB spends months investigating what “caused” a plane crash, and issues a report with conclusions, not facts. The notion that some journalist can, in a day or two, perform a similar analysis on a complex system like a power grid and report the result as a “fact” blinks reality.)


There's not only the issue of incomplete information, there's the issue of salience. There are an infinite number of true statements. Which ones do you focus on? Which ones are the right ones to focus on? You can detect bias in reporting not only based on what is said, but about what is not even mentioned.

The New York Times won't run a story sympathetic to liberal individuals pushing back against the excesses of critical race theory. Fox News won't report on how, even though there were anomalies in the election, none justified stopping the transfer of power to the Biden administration. Both are bullshit.


Yes, the "fact-checkers" we already have should give us a hint at what "lie-checkers" would do.


Then don’t have them. Having lie checkers on the internet is a moronic idea. This rule is to stop organized, coordinated disinformation campaigns. It’s to take down sites whose whole purpose is to literally make up news stories, present them as fact, and spread them on social media.


It was meant sarcastically. In case you haven't figured out how awful the fact-checkers are.

And no, if you ban "disinformation" you ban free speech. There is no way to figure out whether a flat-earther publishes something to spread disinformation or because he really believes what he writes. Disinformation is best fought by debunking it, not by removing it. Most people have heard of flat earth theory, but most don't believe it, because they can inform themselves. That's how it should be. No need to "save" everyone through authoritarian measures. The risk here is far higher than having to deal with some forever flat-earther.


> Disinformation is best fought by debunking it

I used to believe that as well. Then we did real world practical experiments over the last decade. It's clear most people don't give a shit about informing themselves and will readily believe just about anything.

Not saying the solution is regulating what can and cannot be said, but this idea that free speech is the ultimate thing isn't working when you have groups that can spend troves of cash making their disinformation look legitimate enough for the masses.

Both you and I probably believe at least one, maybe more, of those things, by the way. It's not all outlandish nonsense; sometimes it's reasonable enough to believe at first glance and you don't bother looking it up afterwards.


I accept this as unavoidable reality. The only way to fight this problem is education. I'm not much worried about the fact that everyone "believes" some stuff that is actually not true. This has been the case for as long as humans have been alive. In times when people had the opportunity to debate the different "truths", humanity made progress. In times or societies where this was not possible, progress was slow. We don't need, and will never get, the absolute truth, but we need the freedom to search for it. There is no guarantee that we will find it, and even the opinion of the majority can be wrong, and often is, but it will self-correct as long as pointing out the wrong is allowed. There will always come a time when the wrong becomes obvious to the majority. Pointing out the wrong will be labeled disinformation if the people who decide are in the wrong. We cannot take that risk.

So back to the start. You can't have an authority who decides. What's right or wrong has to be proved or debunked, and it cannot be removed afterwards, as this would invalidate the debunking. This is a slow and inefficient process; we should probably focus on making it better, because it works, it just doesn't work as well or as fast as the modern world would require. The shortcut "solutions", however, will most likely not work at all and could potentially cause more harm than good.


I fully agree with you that education is the ultimate solution, but in the meantime, wtf do we do about the entities that have wealth and power and are able to influence millions of people on just about whatever the fuck they want?

What do we do when the things they choose to influence the population on are no longer just "the rich getting richer" but become actual existential threats? When they get dictators elected, make climate change worse, endanger lives by producing healthcare misinformation, etc?

Does it matter that, over the long term, there are more idealistic freedom of speech ideals if we don't live long enough to even get to the long term?


I don't know a solution to all this either. But I'm worried we'll make it worse with bad solutions.

Certainly you don't want to give these powerful people the tools to become more powerful by implementing an authoritarian system against disinformation. It's rather obvious that if these powerful people cannot circumvent that system, they will become the system. If they can manipulate millions, they sure can manipulate or replace the few "decision makers" too. Now you have powerful people spreading disinformation who also have the power to remove any critics, simply by labeling their criticism disinformation.


Wait no. That’s not how this works. There’s no determination of fact. It doesn’t matter whether what you said is true or false — this isn’t a rule against being wrong. It’s a rule against someone speaking something they know and believe a priori to be false with the intent to mislead people.

It’s literally the same idea as fraud, but applied to misinformation. If you believe that climate change is a hoax then you’re fine, tell the world. But if you make up a study and data “disproving” climate change and then circulate it on Facebook, then you’re not.


Unless you have a mind-reading device, there is no way to be sure what somebody believes.


> Unless you have a mind-reading device, there is no way to be sure what somebody believes.

In general, we are comfortable doing this in at least some contexts. The legal system in almost every case attempts to ascertain intention to satisfy the mens rea of a criminal act. They don't have mind reading devices, but they do have expert witnesses such as psychologists and doctors, and testimony.


> It’s a rule against someone speaking something they know and believe a priori to be false with the intent to mislead people.

If one doesn’t believe in the Holocaust, yet publishes erudite webpages with the intent to mislead others (at least from his POV) into thinking it really happened, would that be a problem?

If yes, you are consistent, albeit a bit crazy.

If no, your rule reduces, once again, to a focus on the falsity of the communication as opposed to the writer’s intent.


It would be a problem! I don’t think people would care as much, because it’s the same as stealing a balloon on free balloon day, but you still have a guilty mind and had the intent to deceive people, regardless of your success at it.

It becomes a bigger issue if the evidence on the site is made up, but I won’t assume that.


I don’t understand this. Can you give an example where the truth is somehow elusive or difficult to obtain?

Human knowledge is pretty advanced and accessible.




