Facebook has joined Apple, YouTube, and Disqus in banning Alex Jones and yanking his InfoWars pages. Jones and his supporters are livid, claiming free speech violations, while liberals are wondering what took so long. The challenge here is that lying on Facebook is perfectly fine; only “hate speech” will get you banned.
Let’s take a close look at Facebook’s statement on why it banned these pages, and how it will apply its policies in the future. In the excerpts that follow, the commentary is mine.
Commentary: Facebook is smart to describe this as enforcement of a clear policy rather than focusing specifically on Alex Jones.
How do you distinguish between fake news and content that breaks your Community Standards?
People can say things on Facebook that are wrong or untrue, but we work to limit the distribution of inaccurate information. We partner with third-party fact checkers to review and rate the accuracy of articles on Facebook. When something is rated as false, those stories are ranked significantly lower in News Feed, cutting future views by more than 80%.
When it comes to our Community Standards, they’re focused on keeping people safe. If you post something that goes against our standards, which cover things like hate speech that attacks or dehumanizes others, we will remove it from Facebook.
Commentary: This is clear, although you may not agree with it. Lying on Facebook is acceptable; the penalty for fake news is simply that it spreads less quickly. Hate speech is tougher to judge. The American Bar Association defines it this way: “Hate speech is speech that offends, threatens, or insults groups, based on race, color, religion, national origin, sexual orientation, disability, or other traits.” The definition turns on denigrating a group. So if I say that Tim Cook is stupid, that’s not hate speech, but if I say that gay people are stupid, it is. Applying this distinction creates a grey area in which hateful speech is fine until it becomes racist or sexist or homophobic, and in which personal attacks are harder to police. It will also encourage haters to use more code words and dog-whistle insinuations to evade enforcement.
Free speech vs. hate speech
Let’s start with the First Amendment and free speech issues here. Because of the safe harbor provisions of Section 230 of the Communications Decency Act, Facebook is not legally responsible for what its users post on its site. (You can’t sue AT&T when someone sends a murderous threat by text message on its network; similarly, you can’t sue Facebook for hosting threats.) But Facebook is free to set whatever policies it pleases regarding posts and pages on its site and app. In the same way that your landlord can evict you for having pets or loud parties, Facebook can evict you for violating its policies.
This is not a First Amendment issue. No one is preventing Alex Jones from speaking or writing or publishing his own newsletter. Facebook is under no obligation to host what he says, any more than the New York Times would be obligated to publish an op-ed that he submitted.
So the real question is, “Are Facebook’s policies sound, and are they enforcing them appropriately?”
I personally find Facebook’s policies odd. For example, you can get banned for showing, horrors, a female nipple. I also think Facebook could enforce its policies against the spread of fake news more effectively. But Facebook has chosen to back off of banning lies, which are hard to adjudicate, and to concentrate instead on banning threats of violence and hate speech. This is at least consistent.
In contrast to Facebook’s policies, which are transparent, its enforcement is opaque. It will not say how many complaints it takes before a page is banned. It’s not clear why InfoWars is suddenly gone now but wasn’t six months or a year ago. (Was it any less hateful back then?) There are plenty of other hateful pages; will they all be banned? Alex Jones asks, for example, why anti-Semitic statements on Louis Farrakhan’s Nation of Islam page haven’t caused Facebook to ban it.
Silicon Valley platforms have become a tool for hate. Algorithms have difficulty recognizing and banning hate; it takes humans to do that, which is inefficient and invites accusations of bias. The benign, algorithmic machine isn’t working so well. It’s going to be a long, hate-filled struggle before the leaders of these platforms can re-establish the safety of their platforms.