Zuckerberg’s Holocaust conundrum — and how Facebook can escape from it


Mark Zuckerberg got himself in hot water by explaining why he couldn’t kick Holocaust deniers off of Facebook. That sounds indefensible until you try to figure out how you’d do it. It’s a hard problem, but I have a solution.

Here’s what Mark Zuckerberg told Recode in an interview with Kara Swisher:

Zuckerberg: I’m Jewish, and there’s a set of people who deny that the Holocaust happened.

Swisher: Yes, there’s a lot.

Zuckerberg: I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong, but I think …

Swisher: In the case of the Holocaust deniers, they might be, but go ahead.

Zuckerberg: It’s hard to impugn intent and to understand the intent. I just think, as abhorrent as some of those examples are, I think the reality is also that I get things wrong when I speak publicly. I’m sure you do. I’m sure a lot of leaders and public figures we respect do too, and I just don’t think that it is the right thing to say, “We’re going to take someone off the platform if they get things wrong, even multiple times.” What we will do is we’ll say, “Okay, you have your page, and if you’re not trying to organize harm against someone, or attacking someone, then you can put up that content on your page, even if people might disagree with it or find it offensive.” But that doesn’t mean that we have a responsibility to make it widely distributed in News Feed. I think we, actually, to the contrary …

Later he clarified it with this note to Swisher:

I enjoyed our conversation yesterday, but there’s one thing I want to clear up. I personally find Holocaust denial deeply offensive, and I absolutely didn’t intend to defend the intent of people who deny that.

Our goal with fake news is not to prevent anyone from saying something untrue — but to stop fake news and misinformation spreading across our services. If something is spreading and is rated false by fact checkers, it would lose the vast majority of its distribution in News Feed. And of course if a post crossed line into advocating for violence or hate against a particular group, it would be removed. These issues are very challenging but I believe that often the best way to fight offensive bad speech is with good speech.

Why the problem of liars on Facebook is so hard to solve

Naturally, Zuck’s remarks caused an uproar. And the context is important; people are wondering why conspiracy theorist Alex Jones of Infowars retains a Facebook page.

So let’s do a thought experiment. You are now the CEO of Facebook, with all the powers Zuck has, and you can do whatever you want with the platform. Let’s fix the problem of despicable Facebook pages right now!

OK, first step, we kick the Holocaust deniers off, right now.

Well, wait a second. We want a policy that we can apply across the board. We need to be able to kick off all the hateful liars, not just Holocaust deniers. So let’s craft a policy.

First draft:

If you have a page that is based on lies, we will kick you off Facebook.

Ah, that sounds very satisfying.

Now, we have to be careful. After all, Stephen King makes his living telling lies — that’s what fiction is. The Onion tells lies too — that’s satire. We don’t want to kick them off. So let’s modify things a bit.

If you have a page that is based on lies and is hurtful, we will kick you off Facebook.

Great. Stephen King and The Onion can stay since they’re not hurtful. (Well, you might quibble about The Onion, but let’s keep going.) What about The Daily Show? They say stuff that hurts the feelings of Trump supporters all the time. But of course they’re not based on lies.

Infowars and our Holocaust deniers are hurtful liars. But they don’t think they’re liars. So how will we determine if they are liars?

Should we have a panel of impartial judges? Hmm, that doesn’t scale. There might be thousands of sites every day that need checking. Judge Judy and Alan Dershowitz are too busy to spend all day doing that.

Hey, it’s a social network. Let’s ask the users to flag hateful content!

Well, that might work as a first step, for flagging pages that need checking. But who will check them?

Could we just make it majority rule? We’ll take a vote — if a majority of voters say it’s got to go, then we trash it.

But what happens if Trump riles up a bunch of his followers and has them vote against The New York Times? Would we take that down?

Well, we could exempt major news organizations. (If the Times stays, I guess Fox News has to stay, too.)

So here’s the policy so far:

If you have a page that is based on lies and is hurtful, we will kick you off Facebook. Any user can indicate that your site needs checking, and then we’ll take a poll about it and kick it off if a majority say it’s a violator — unless you’re a major news organization.

That’s still not perfect. Some small but legitimate site (say, withoutbullshit.com) might get kicked off just because a bunch of people gang up on it.

Well, perhaps “majority” isn’t fair. Let’s set the threshold at 90%.

So we kick the site off if 90% of the users polled decide it’s a lying, hateful site.

Of course, lying, hateful sites have fans. They can probably muster enough of their fan base to generate 10% of the vote against removing them. (I bet Infowars could.)
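To see concretely why a fixed threshold fails, here’s a minimal sketch in Python. The numbers are invented for illustration; this is not how Facebook actually polls anyone.

```python
def should_remove(votes_to_remove: int, total_votes: int,
                  threshold: float = 0.90) -> bool:
    """Remove a page only if at least `threshold` of voters condemn it."""
    return votes_to_remove / total_votes >= threshold

# 10,000 users vote on a hateful page. Its fan base only needs to
# supply a bit more than 10% of the votes to keep it alive.
print(should_remove(8_999, 10_000))  # False -- the page survives
print(should_remove(9_000, 10_000))  # True  -- the page is removed
```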

Seems like basing this on a percentage of the vote is not going to work.

Perhaps our approach should be based on a more objective standard. Then we could hire minimum-wage reviewers offshore to check sites against the standard and identify the ones worth kicking off Facebook.

So what should be in our standard?

  • Hate speech. If you advocate violence or racially motivated hatred, we kick you off.
  • Illegal content. If it’s against the law, we kick you off. (For example, pump-and-dump stock promoters in the US, and Holocaust deniers in Germany, where denying the Holocaust is against the law.)
  • Obscenity, as defined in whatever country you’re viewing the content in (this is a subset of illegal content).
  • Nudity. (That wasn’t part of the definition, but hey, let’s be prudes and throw it in.)

If someone complains about your page, we show it to our offshore reviewer corps, and they check for those things. If they find any of them, we kick you off.
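As a sketch, that reviewer checklist might look like the code below. The PageReview structure and its field names are my invention for illustration; Facebook’s actual review taxonomy is more elaborate.

```python
from dataclasses import dataclass

@dataclass
class PageReview:
    advocates_violence_or_hate: bool  # hate speech: violence or racially motivated hatred
    illegal_where_viewed: bool        # e.g., pump-and-dump scams in the US, Holocaust denial in Germany
    obscene_where_viewed: bool        # obscenity as locally defined (a subset of illegal content)
    contains_nudity: bool             # not necessarily illegal, but banned anyway

def violates_standard(review: PageReview) -> bool:
    """Kick a page off if the reviewer found any item on the checklist."""
    return any([
        review.advocates_violence_or_hate,
        review.illegal_where_viewed,
        review.obscene_where_viewed,
        review.contains_nudity,
    ])

# A page flagged by a user, then checked by an offshore reviewer:
review = PageReview(False, True, False, False)  # illegal content found
print(violates_standard(review))  # True -- kicked off
```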

So now the policy looks like this:

If you have a page that is based on lies and is hurtful, we will kick you off Facebook. Any user can indicate that your site needs checking, and then we’ll have our staff check for hate speech, illegal content, or nudity, and if we find any, we kick you off.

What about just kicking people off for lies pretending to be truth? Well, objective truth is very hard to define. And people on Facebook lie all the time. They also make mistakes. The New York Times publishes plenty of retractions. And smaller pages might make statements like “Barack Obama is secretly homosexual” that are very hard to disprove.

Couldn’t we use established fact-checkers like Politifact.com and Snopes? Well, they don’t check every site. And they aren’t infallible; they can be wrong, too. But they’re a pretty good indicator of falsehood. While we can’t give them absolute power, given their fallibility and limited coverage, we can give them influence. So we’ll make it harder to spread things they rate as false.
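Here’s a minimal sketch of that down-ranking idea. The one-tenth penalty is a number I made up to match “lose the vast majority of its distribution”; the real News Feed ranking is proprietary and far more complex.

```python
from typing import Optional

def adjusted_distribution(base_reach: float,
                          fact_check_rating: Optional[str]) -> float:
    """Scale a post's News Feed reach by its external fact-check rating.

    `fact_check_rating` comes from a checker like Snopes or Politifact;
    None means the post was never checked.
    """
    if fact_check_rating == "false":
        return base_reach / 10  # rated false: lose the vast majority of distribution
    return base_reach           # unchecked, or not rated false: unchanged

print(adjusted_distribution(10_000.0, "false"))  # 1000.0 -- throttled
print(adjusted_distribution(10_000.0, None))     # 10000.0 -- untouched
```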

So now we’ve gotten to this:

If you have a page that is based on lies and is hurtful, we will kick you off Facebook. Any user can indicate that your site needs checking, and then we’ll have our staff check for hate speech, illegal content, or nudity, and if we find any, we kick you off. But if you’re a liar according to established fact-checkers, we won’t kick you off; we’ll just tweak the algorithm so your content won’t spread easily.

Well, my friends, you have just pretty nearly duplicated Facebook’s existing policy.

This policy doesn’t kick off Holocaust deniers or Infowars. It also doesn’t kick off slanted sites like Occupy Democrats. It makes their content harder to spread, which is the best it can do.

Why can’t we do better?

The reason is simple. The definition of “hateful liar” depends on who’s doing the judging. And until we have an unbiased entity that can do the judging, hateful liars are here to stay. There’s just no effective way to kick them off the platform without opening Facebook up to bias lawsuits. More importantly, for every Holocaust denier you can come up with, there’s a Mother Jones or a Rolling Stone that other people hate just as much. Who picks what stays?

There is a way out of this conundrum

I think there is a way to solve this problem. But it would be difficult.

Facebook should make it possible for individuals to apply to be “citizen fact-checkers.”

If you applied for this volunteer position, you could mark sites for review. But you would have to do the work: identify which elements of a site were false and which were hateful.

At first, Facebook would not give your flags much weight.

But over time, Facebook would review your selections.

Facebook knows which sites are popular with different groups. For example, they know which sites Trump voters like, and which sites liberals prefer.

Facebook would review your selections for balance. If you tended to mark only conservative sites for deletion, your vote would not count. The same would apply to people who marked only liberal sites.

But if you were balanced in your selections, your vote would have more weight.

Facebook would remove sites that were marked for deletion by large numbers of citizen fact-checkers who had a balanced perspective and were able to identify clearly false and hateful elements.
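Here’s a minimal sketch of how that balance-weighting could work. Everything in it (the lean labels, the weight formula, the removal threshold) is invented for illustration; it sketches the proposal, not any real Facebook system.

```python
from collections import Counter
from typing import List

def checker_weight(flagged_site_leans: List[str]) -> float:
    """Weight a citizen fact-checker by the balance of their flags.

    `flagged_site_leans` lists the audience lean ("left" or "right") of
    each site the checker has marked, which Facebook can infer from who
    reads those sites. A checker who flags only one side scores 0; one
    who flags both sides evenly scores 1.
    """
    counts = Counter(flagged_site_leans)
    left, right = counts["left"], counts["right"]
    if left + right == 0:
        return 1.0  # no partisan flags yet, so no evidence of bias
    return 1.0 - abs(left - right) / (left + right)

def should_take_down(balanced_flag_score: float,
                     threshold: float = 100.0) -> bool:
    """Remove a site once enough balance-weighted flags accumulate.

    `balanced_flag_score` sums checker_weight() over every checker who
    flagged the site and documented its false, hateful elements.
    """
    return balanced_flag_score >= threshold

# A checker who flags 9 conservative sites and 1 liberal site barely counts:
print(checker_weight(["right"] * 9 + ["left"] * 1))   # ~0.2
# A checker who flags both sides evenly counts fully:
print(checker_weight(["right"] * 5 + ["left"] * 5))   # 1.0
```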

If this sounds familiar, it’s because it’s similar to the way that Wikipedia works, where respected volunteer editors apply objective rules to make decisions.

Creating such a system would take time and effort. But it could restore Facebook’s credibility and reputation.

Mark Zuckerberg should do this. It’s the only way out of his current predicament, where Holocaust deniers get access to the platform along with everyone else.


Comments

  1. Let’s divide FB posts into two broad categories: personal (what I ate last night, my friend’s BD party etc.) and political (anything to do with public affairs).

    The first category needs little monitoring (unless the BD party has an Aryan theme for example).
    The second is de facto citizen journalism and needs to be subject to some degree of human editorial scrutiny to minimize issues.

    FB is extremely reluctant to admit this due to cost and the fact that they are not and do not aspire to be a member of the fourth estate. But in effect they are a vox populi sans editor.

    Computers aren’t good BS detectors because the function is illogical and emotionally driven. Algos don’t feel; people do.

    The magnitude of the task is not as intimidating as it might seem. My guess is that the 5% rule applies: 5% of FB members are the pseudo journalistas; the other 95% are forwarders at most and readers only at least. Focus on the frequent posters and you’ll handle almost all the potential troublemakers.

    The good news is that the human editors required will in effect be a huge jobs program.

  2. If Facebook can decide something is “hate speech” and kick out a user, why can’t it also judge whether something is “Holocaust denial” and kick that user out?

    If anything, “Holocaust Denial” is easier to evaluate than “Hate”.