Facebook’s advertisers are bolting — check out the activity on the #StopHateForProfit hashtag. The company’s VP of Global Affairs and Communications, Nick Clegg, says Facebook is doing a good job stopping hate and doesn’t profit from it. Is his defense believable?
I wrote about this on Monday and published an op-ed about it in the Boston Globe today. Kara Swisher says she’s dumping Facebook due to its slow response, and Charlie Warzel says the problem is endemic and can’t be fixed.
Given the massive backlash, let’s take a look at what Nick Clegg said and whether it’s effective.
Analyzing Nick Clegg’s defense of Facebook in terms of pollution
Imagine for a moment that Facebook wasn’t a social network, but a factory. The factory made products that people liked. Unfortunately, it also pumped cancer-causing chemicals into the air. Now the head of PR for the factory says that they’re making progress. Should we believe him?
Once you see Facebook in that light, you realize how vacuous Clegg’s statements are.
His statements follow, each with my translation and analysis.
Facebook Does Not Benefit from Hate
July 1, 2020
by Nick Clegg, VP of Global Affairs and Communications, Facebook
When society is divided and tensions run high, those divisions play out on social media. Platforms like Facebook hold up a mirror to society — with more than 3 billion people using Facebook’s apps every month, everything that is good, bad and ugly in our societies will find expression on our platform. That puts a big responsibility on Facebook and other social media companies to decide where to draw the line over what content is acceptable.
Translation: It is a polluted and dirty world, and that’s why our factory is also polluting and dirty. We need to figure out just how toxic to keep it.
Analysis: This is a terrible start. The issue is not whether there are ugly sentiments in billions of humans, and Facebook is just “a mirror” for it. The issue is whether Facebook is inflaming those sentiments.
Facebook has come in for much criticism in recent weeks following its decision to allow controversial posts by President Trump to stay up, and misgivings on the part of many people, including companies that advertise on our platform, about our approach to tackling hate speech. I want to be unambiguous: Facebook does not profit from hate. Billions of people use Facebook and Instagram because they have good experiences — they don’t want to see hateful content, our advertisers don’t want to see it, and we don’t want to see it. There is no incentive for us to do anything but remove it.
Translation: Environmentally concerned people said bad things about us. We don’t profit from polluting — people love our products. We have no incentive to pollute.
Analysis: Starts with a typically passive statement about coming in for criticism. (This passive approach is typical of British political speech; I guess I shouldn’t be surprised to see it from Clegg, the former deputy prime minister of the UK.)
The part about not profiting from hate is bullshit. Facebook profits from traffic. Hate generates traffic. So Facebook profits from hate. Regulating hate speech is expensive and difficult, but as has now become clear, it’s necessary.
More than 100 billion messages are sent on our services every day. That’s all of us, talking to each other, sharing our lives, our opinions, our hopes and our experiences. In all of those billions of interactions a tiny fraction are hateful. When we find hateful posts on Facebook and Instagram, we take a zero tolerance approach and remove them. When content falls short of being classified as hate speech — or of our other policies aimed at preventing harm or voter suppression — we err on the side of free expression because, ultimately, the best way to counter hurtful, divisive, offensive speech, is more speech. Exposing it to sunlight is better than hiding it in the shadows.
Unfortunately, zero tolerance doesn’t mean zero incidences. With so much content posted every day, rooting out the hate is like looking for a needle in a haystack. We invest billions of dollars each year in people and technology to keep our platform safe. We have tripled — to more than 35,000 — the people working on safety and security. We’re a pioneer in artificial intelligence technology to remove hateful content at scale.
Translation: Most of what comes out of our smokestacks isn’t poisoned. When we detect some toxic fumes, we try to stop them. But we err on the side of keeping the factory running. We have lots of safety staff. Some of them observe the smokestack and think about ways to keep it still running, but cleaner.
Analysis: Why is this problem so hard? Because conflict, anger, and outrage generate activity, and the algorithm loves activity. That’s a feature, not a bug. So long as that is true, even thousands of safety and security staff along with AI investments can’t hold back the tide.
And we’re making real progress. A recent European Commission report found that Facebook assessed 95.7% of hate speech reports in less than 24 hours, faster than YouTube and Twitter. Last month, we reported that we find nearly 90% of the hate speech we remove before someone reports it — up from 24% a little over two years ago. We took action against 9.6 million pieces of content in the first quarter of 2020 — up from 5.7 million in the previous quarter. And 99% of the ISIS and Al Qaeda content we remove is taken down before anyone reports it to us.
Translation: We stop 95.7% of the pollution going up our smokestack. You’re only breathing a small proportion of the carcinogens we create.
Analysis: This record is nice but insufficient. It obscures the reason that #StopHateForProfit exists — which is because of the incendiary statements from Donald Trump and other leaders. Until last week, Facebook did nothing about those. Its record on catching and blocking hate speech from Donald Trump was 0%, not 95.7% — it didn’t block or label any of it.
We are getting better — but we’re not complacent. That’s why we recently announced new policies and products to make sure everyone can stay safe, stay informed, and ultimately use their voice where it matters most — voting. We understand that many of our critics are angry about the inflammatory rhetoric President Trump has posted on our platform and others, and want us to be more aggressive in removing his speech. As a former politician myself, I know that the only way to hold the powerful to account is ultimately through the ballot box. That is why we want to use our platform to empower voters to make the ultimate decision themselves, on election day. This Friday every Facebook user of voting age in the US will be given information, prominently displayed on the top of their News Feed, on how to register to vote. This will be one step in the largest voter information campaign in US history, with a goal of registering 4 million voters. We have also been updating our policies to crack down on voter suppression. Many of these changes are a direct result of feedback from the civil rights community — we’ll keep working with them and other experts as we adjust our policies to address new risks as they emerge.
Translation: We recently announced we are sponsoring the local little league team, because we love youth athletics.
Analysis: Facebook’s problem isn’t about voting. Irrelevant.
Of course, focusing on hate speech and other types of harmful content on social media is necessary and understandable, but it is worth remembering that the vast majority of those billions of conversations are positive.
Look at what happened when the coronavirus pandemic took hold. Billions of people used Facebook to stay connected when they were physically apart. Grandparents and grandchildren, brothers and sisters, friends and neighbors. And more than that, people came together to help each other. Thousands and thousands of local groups formed — millions of people came together — in order to organize to help the most vulnerable in their communities. Others, to celebrate and support our healthcare workers. And when businesses had to close their doors to the public, for many Facebook was their lifeline. More than 160 million businesses use Facebook’s free tools to reach customers, and many used these tools to help them keep their businesses afloat when their doors were closed to the public — saving people’s jobs and livelihoods.
Importantly, Facebook helped people to get accurate, authoritative health information. We directed more than 2 billion people on Facebook and Instagram to information from the World Health Organization and other public health authorities, with more than 350 million people clicking through.
And it is worth remembering that when the darkest things are happening in our society, social media gives people a means to shine a light. To show the world what is happening, to organize against hate and come together, and for millions of people around the world to show their solidarity. We’ve seen that all over the world on countless occasions — and we are seeing it right now with the Black Lives Matter movement.
Translation: Our factory makes many useful things. It is good for humanity.
Analysis: We didn’t say Facebook wasn’t doing anything good. We said it wasn’t doing enough to stop the spread of things that were toxic.
We may never be able to prevent hate from appearing on Facebook entirely, but we are getting better at stopping it all the time.
Translation: We reduce pollution every year. It’s getting less toxic all the time.
This statement is pathetic
If you are concerned about hate speech, do you feel better after reading this?
It is a mealy-mouthed mixture of patting one’s own back and misdirection. It completely misses the point. It doesn’t even mention the change of position on posts from Trump.
Forget the dishonesty. It is ineffective. No one who is criticizing Facebook will change their perspective after reading this.
Fix it or admit it’s unfixable. This bloviation isn’t helping.