Why it matters that high-profile bigots no longer have a safe harbor on Facebook


Facebook and Instagram have banned these hateful people as “dangerous” individuals: Milo Yiannopoulos, Alex Jones, Louis Farrakhan, Laura Loomer, Paul Nehlen, and Paul Joseph Watson. Opponents of bigotry are cheering. But it’s time to look deeper into what this ban means for the platform.

I will not shed a tear for any of these people. They’re basically out to legitimize hatred based on race or religion. For years, liberal thinkers have been asking why Facebook and other social networks have provided them with platforms on which to build bases of like-minded bigots.

In banning these people, Facebook cited its Policy on Dangerous Individuals and Organizations. That reads, in part:

Policy Rationale

In an effort to prevent and disrupt real-world harm, we do not allow any organizations or individuals that proclaim a violent mission or are engaged in violence, from having a presence on Facebook. This includes organizations or individuals involved in the following:
  • Terrorist activity
  • Organized hate
  • Mass or serial murder
  • Human trafficking
  • Organized violence or criminal activity

We also remove content that expresses support or praise for groups, leaders, or individuals involved in these activities.

We do not allow the following people (living or deceased) or groups to maintain a presence (for example, have an account, Page, Group) on our platform: . . . 

Hate organizations and their leaders and prominent members

A hate organization is defined as:
 
Any association of three or more people that is organized under a name, sign, or symbol and that has an ideology, statements, or physical actions that attack individuals based on characteristics, including race, religious affiliation, nationality, ethnicity, gender, sex, sexual orientation, serious disease or disability.

Who could have a problem with that?

But put that aside for a moment, and let’s talk about the “safe harbor” provision.

Are social networks communications platforms or editorial platforms?

The Communications Decency Act of 1996 created what’s known as the “safe harbor” provision for network platforms. It says “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Basically, this provision means that if someone uses a communications platform to do something illegal or hateful, it’s not the platform’s problem.

Let’s examine what this means. Let’s assume that I wanted to call for a violent race riot (I assure you I don’t, but this is a hypothetical). Here are some ways I could do that. Is it the communications provider’s job to identify it and stop me?

  • I send a group text to friends on my mobile phone, which is on Verizon. Is Verizon liable?
  • I create an email list and email a bunch of my friends from my Gmail account, using MailChimp. Is Gmail liable? Is MailChimp?
  • I create a group of my friends on WhatsApp or Facebook Messenger and call for the riot there. Is Facebook liable?

In all of these cases, it seems clear that we don’t hold the communications provider responsible for private communications among people using its network.

At the other end of the spectrum, consider editorial organizations. Imagine any of the following:

  • I write an op-ed calling for a race riot in The Boston Globe. Is the Globe liable?
  • I write a similar op-ed and post it on my Forbes contributor blog. Is Forbes liable?

In those cases, I think you’d agree that those organizations are (or ought to be) exercising editorial judgment, and should not be complicit in hosting calls for riots and bigotry.

But between organizations exercising editorial judgment (like the Globe) and those that just pass messages through without reviewing their content (like Verizon) is a middle ground. Consider some of these cases:

  • I create a private group on Facebook for race riot enthusiasts. Facebook would ban this. But how is it different from the private WhatsApp group of my friends? It’s not publicly viewable. Only my friends can see it.
  • I create a hateful riot fans blog using WordPress. I host it on BlueHost. I register my domain on GoDaddy. Should WordPress, BlueHost, or GoDaddy block me as a matter of policy? If you have to become a member and register to view the content, does that change anything?
  • I distribute race-riot-inciting merchandise and stream videos from my own site, which is hosted in Russia. I also maintain a fan page on Facebook, but post no incitements to riot there. (However, my fans continually come on and post links to my Russian site.) Should Facebook ban me?

To block incitement to hate, it seems as if these platforms should ban all of these cases. Facebook has come around to this point of view, including for the last case listed here, which is almost exactly what has happened with the banned individuals. Twitter, which is in a nearly analogous position, enforces such bans only inconsistently (for example, it banned an individual who called for killing the long-dead Civil War general Robert E. Lee).

Facebook’s recent push for a more private platform is certainly related to these challenges — they want to seem more like Verizon than The Boston Globe so they’ll have less of an issue hosting hateful stuff. But they still have many years as a public platform ahead of them.

What’s different now

Facebook has had a policy banning people who traffic in hate and violence for a while now. But in banning these high-profile individuals, something has changed.

Before this, Facebook’s enforcement was, for the most part, algorithmic. It set policies and then let algorithms do most of the work . . . along with crowded hives of outsourced, wretched, minimum-wage drones working under abusive conditions.

But no matter what the company says, it’s clear that banning these individuals was an executive editorial decision. No algorithm flagged them and them alone. Somebody high up at Facebook decided that the PR cost of hosting these bigots was no longer worth it, and specifically decided to ban them. I don’t think it’s a coincidence that one is black (Farrakhan), one is gay (Yiannopoulos), one is a woman (Loomer), and one is based outside the US (Watson). Facebook wants to show there is no bias in its bigot bans.

However, given that this is an editorial and policy decision, Facebook looks less like a platform and more like a media company. There are certainly hundreds of other similar bigots on the platform. Will there be some department at Facebook (not a policy, not an algorithm, not a hive of minimum-wage outsourced workers but an actual department of trained people exercising judgment on behalf of the company) charged with finding the worst bigots and booting them?

I’m sure that Jack Dorsey and the management at Twitter are watching this and saying “This is a big mistake — they should have just used algorithms like we do and let most of the bigots stay.”

At this point, I don’t see how Facebook can play it both ways. Despite the shift to “private,” they’re acting like a publisher. This will suck them deeper and deeper into a quagmire of exercising judgment and others judging how they judge bigotry.

That safe harbor isn’t looking so safe any more.



7 Comments

  1. One must parse the statute. “The Communications Decency Act of 1996 created what’s known as the ‘safe harbor’ provision for network platforms. It says ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.’”

    In any statute, the definition of terms is key. Poor term definition makes for poor legislation and cascades into poor regulation, enforcement, and adjudication.

    In this case, the definitions of “provider”, “user” and “interactive computer service” are at issue.

    This sort of ambiguity can be found in many statutes, including the CAN-SPAM Act of 2003. In my email marketing career, I have witnessed and participated in many debates regarding the definition of the “sender” of an email. In my view, the correct interpretation is the entity (generally a corporate person) on whose behalf the message is sent, and who directs the content, audience, and transmission of the message to the receiver.

    Under this view, it is not the company that provides the email list, the email service provider, or the maker of the hardware and software, ad infinitum. The responsibility and liability lie with the instigator and beneficiary of the message, not with the various messengers and providers of instrumentality.

    However, as a businessperson providing services to senders, I reserved the right to limit my clientele to customers based on my own business policies and practices, which were guided only by applicable laws and regulations and my own view of my best interests.

    FB is a private business. It asserts no fourth estate role and claims no First Amendment privileges other than commercial ones. Therefore, I think it is arguable that it can make business decisions about whom to do business with and maintain its liability shield.

    Nonetheless, becoming the arbiter of political correctness is a fateful step. When Pilate asked Jesus “What is Truth?”, even He did not answer.

  2. That’s an interesting way to think about it, Terry. I would add that while FB is a private business, its users’ data are its “product” (advertisers are its customers).

  3. Moron editor! George Orwell’s 1984, read it.

    The real hate organizations are the “elite 0.00000004%”-owned organizations engineering behavior to a liberal and satanic slant. You know that, as you’re probably funded by them. We are on to you and we will not stop listening to Alex Jones or Milo. I don’t follow Farrakhan but he deserves his free speech also. Free speech includes speech some people “hate.” It’s called tolerance. So keep publishing your trash articles that we must scroll past & at times, comment on; we will push forward.

      1. Anger is an addictive drug; hate is a side effect. Keep sucking on that acid and see if it doesn’t erode your soul.

      I’m funded by authors who want my help with business books. If you saw my criticism of Facebook in the Boston Globe you wouldn’t make the mistake of believing they funded me.

      I’m pleased that I can write what I want here and you can comment in any way you see fit (as long as you are civil). I make the rules in my space. Facebook makes the rules in its space. It’s not a public utility. If you don’t like its policies or mine, find somewhere else to post.

  4. He’s allowed his free speech. He can say anything he wants whenever he wants. Free speech, however, does not mean he must be provided with a platform from which to spew and spread his hate. It does not mean that Facebook as a company (however it’s defined) must allow him to participate. It’s that simple.

  5. Josh, you have done a wonderful job of articulating the challenges in the space between the edited commentary in a responsible newspaper and a platform with no editing, like FB.
    However, there is another dimension that could do with your thoughtful analysis.
    Psychologists who understand the mechanics of brain chemistry will argue that there are elements of the digital tools that dig into the drivers of that chemistry in ways that deliver ‘addictive’ outcomes in some people. These outcomes are more powerful psychologically than those that come with a newspaper, book or magazine.
    I suspect we will have to wait for the maturation of another generation before we see the real effects of some of these platforms, both good and bad, by which time Mr Zuckerberg will be wielding a Zimmer frame.

  6. Shanie Dang Doe,
    Gee, I kind of agree with your point, but not the obviously hateful way you make it. I pity you.