Facebook and Instagram have banned these hateful people as “dangerous” individuals: Milo Yiannopoulos, Alex Jones, Louis Farrakhan, Laura Loomer, Paul Nehlen, and Paul Joseph Watson. Opponents of bigotry are cheering. But it’s time to look deeper into what this ban means for the platform.
I will not shed a tear for any of these people. They're basically out to legitimize hatred based on race or religion. For years, liberal thinkers have been asking why Facebook and other social networks have given them platforms on which to build bases of like-minded bigots. Facebook's Community Standards now supply an answer. Under its policy on dangerous individuals and organizations, Facebook bans groups and individuals involved in:
- Terrorist activity
- Organized hate
- Mass or serial murder
- Human trafficking
- Organized violence or criminal activity
We also remove content that expresses support or praise for groups, leaders, or individuals involved in these activities. We do not allow the following people (living or deceased) or groups to maintain a presence (for example, have an account, Page, Group) on our platform: . . .
Hate organizations and their leaders and prominent members
Who could have a problem with that?
But for a moment, put that aside and let’s talk about the “safe harbor provision.”
Are social networks communications platforms or editorial platforms?
Section 230 of the Communications Decency Act of 1996 created what's known as the "safe harbor" provision for network platforms. It says: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
Basically, this provision means that if someone uses a communications platform to do something illegal or hateful, the platform is not treated as the publisher of that content. Legally speaking, it's not the platform's problem.
Let's examine what this means. Suppose I wanted to call for a violent race riot (I assure you I don't, but this is a hypothetical). Here are some ways I could do that. Is it the communications provider's job to identify the incitement and stop me?
- I send a group text to friends on my mobile phone, which is on Verizon. Is Verizon liable?
- I create an email list and email a bunch of my friends from my Gmail account, using MailChimp. Is Gmail liable? Is MailChimp?
- I create a group of my friends on WhatsApp or Facebook Messenger and call for the riot there. Is Facebook liable?
In all of these cases, it seems clear that we don't hold the communications provider responsible for private communications among the people using its network.
At the other end of the spectrum, consider editorial organizations. Imagine any of the following:
- I write an op-ed calling for a race riot in The Boston Globe. Is the Globe liable?
- I write a similar op-ed and post it on my Forbes contributor blog. Is Forbes liable?
In those cases, I think you’d agree that those organizations are (or ought to be) exercising editorial judgment, and should not be complicit in hosting calls for riots and bigotry.
But between organizations exercising editorial judgment (like the Globe) and those that just pass messages through without reviewing their content (like Verizon) is a middle ground. Consider some of these cases:
- I create a private group on Facebook for race riot enthusiasts. Facebook would ban this. But how is it different from the private WhatsApp group of my friends? It’s not publicly viewable. Only my friends can see it.
- I create a hateful riot fans blog using WordPress. I host it on BlueHost. I register my domain on GoDaddy. Should WordPress, BlueHost, or GoDaddy block me as a matter of policy? If you have to become a member and register to view the content, does that change anything?
- I sell race riot-inciting merchandise and stream videos from my own site, hosted in Russia. I also maintain a fan site on Facebook, but don't post any incitement to riot there. (However, my fans continually post links to my Russian site on it.) Should Facebook ban me?
To block incitement to hate, it seems these platforms should ban all of these cases. And Facebook has come around to this point of view, including on the last case listed here, which is almost exactly what happened with the individuals it just banned. And yet Twitter, which is in an analogous position, enforces such bans inconsistently. (At one point it banned an individual for calling for the killing of the long-dead Civil War general Robert E. Lee.)
Facebook's recent push for a more private platform is certainly related to these challenges: the company wants to seem more like Verizon than The Boston Globe, so it will have less of a problem hosting hateful content. But it still has many years ahead of it as a public platform.
What’s different now
Facebook has had a policy banning people who traffic in hate and violence for a while now. But in banning these high-profile individuals, something has changed.
Before this, Facebook’s enforcement was, for the most part, algorithmic. It set policies and then let algorithms do most of the work . . . along with outsourced, crowded, abusive hives of wretched minimum-wage drones.
But no matter what the company says, it's clear that banning these individuals was an executive editorial decision. No algorithm flagged them and them alone. Somebody high up at Facebook decided that the PR cost of hosting these bigots wasn't worth it, and specifically decided to ban them. I don't think it's a coincidence that one is black (Farrakhan), one is gay (Yiannopoulos), one is a woman (Loomer), and one is based outside the US (Watson). Facebook wants to show there is no bias in its bigot bans.
However, given that this is an editorial and policy decision, Facebook looks less like a platform and more like a media company. There are certainly hundreds of other similar bigots on the platform. Will there be some department at Facebook (not a policy, not an algorithm, not a hive of minimum-wage outsourced workers but an actual department of trained people exercising judgment on behalf of the company) charged with finding the worst bigots and booting them?
I’m sure that Jack Dorsey and the management at Twitter are watching this and saying “This is a big mistake — they should have just used algorithms like we do and let most of the bigots stay.”
At this point, I don't see how Facebook can play it both ways. Despite the shift to "private," it's acting like a publisher. That will pull the company deeper and deeper into a quagmire: exercising judgment about bigotry, and then being judged on how it judges.
That safe harbor isn’t looking so safe any more.