Facebook, AI, evil, and human creativity

The intersection of human evil and Facebook algorithms has created the evil-amplifying social networks we now share. Facebook says AI can solve the problem. So far, it can’t. If it ever does, heaven help us.

The New York Times published a revealing profile of Mike Schroepfer, Facebook’s CTO. He’s the man whom Mark Zuckerberg has tasked with applying AI to solve Facebook’s content screening problem. So far, his efforts are falling short. Some revealing excerpts:

In March, a gunman had killed 51 people in two mosques there [in Christchurch, New Zealand] and live streamed it on Facebook. It took the company roughly an hour to remove the video from its site. By then, the bloody footage had spread across social media.

Mr. Schroepfer went quiet. His eyes began to glisten.

“We’re working on this right now,” he said after a minute, trying to remain composed. “It won’t be fixed tomorrow. But I do not want to have this conversation again six months from now. We can do a much, much better job of catching this.”

The question is whether that is really true or if Facebook is kidding itself.

For the past three years, the social network has been under scrutiny for the proliferation of false, misleading and inappropriate content that people publish on its site. In response, Mark Zuckerberg, Facebook’s chief executive, has invoked a technology that he says will help eliminate the problematic posts: artificial intelligence. . . . 

But the task is Sisyphean, he acknowledged over the course of three interviews recently.

That’s because every time Mr. Schroepfer and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up — and are thus not caught. The task is made more difficult because “bad activity” is often in the eye of the beholder and humans, let alone machines, cannot agree on what that is.

In one interview, Mr. Schroepfer acknowledged after some prodding that A.I. alone could not cure Facebook’s ills. “I do think there’s an endgame here,” he said. But “I don’t think it’s ‘everything’s solved,’ and we all pack up and go home.”

Facebook’s AI team has chalked up success after success. It identifies nude images, pictures of marijuana, and suspected fake accounts. It has algorithms that identify terrorist propaganda.

According to Schroepfer, Facebook now automatically removes 96% of nudity and 65% of hate speech. It’s getting better. But new problems are always popping up. It didn’t identify the Christchurch shooter’s live stream because it was shown from a first-person viewpoint, a type of offending video the system hadn’t seen before.

Why Facebook won’t win this battle . . . until we all lose the war

Consider the AI problem that Schroepfer and Facebook are tackling.

They are attempting to identify evil with an algorithm. (And nudity, but put that aside for a moment.)

Do you know what evil is? If I showed you a video or a post, could you determine if it was evil?

Like all machine-learning algorithms, the evil-detection algorithm depends on a massive corpus of training data. Researchers feed a large set of posts into it, labeled: “These ones are evil. These ones are not evil.” The evil posts include hate speech, shootings, suicides, incitements to violence, white supremacy, and hateful lies. The algorithm is supposed to learn the hallmarks of such speech.
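To make that concrete, here is a minimal sketch in Python (using scikit-learn) of the general technique being described: supervised text classification. The toy posts, labels, and example score are invented purely for illustration; this is the shape of the approach, not Facebook’s actual system.

```python
# A minimal sketch of training a text classifier on labeled posts.
# The tiny toy "corpus" below stands in for the massive labeled dataset
# described above; nothing here reflects Facebook's real models or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training posts: 1 = flagged, 0 = benign.
posts = [
    "join us and take up arms against them",       # incitement to violence
    "those people are vermin and deserve to die",  # hate speech
    "here are photos from my daughter's wedding",  # benign
    "does anyone have a good lasagna recipe?",     # benign
]
labels = [1, 1, 0, 0]

# Turn each post into word-frequency features, then fit a linear classifier
# that learns which patterns of words correlate with the flagged label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a post the system has never seen before. Anything far enough from
# the training data -- a new tactic, a new framing -- can slip through.
new_post = "they are not human, someone should do something about them"
print(model.predict_proba([new_post])[0][1])  # probability the post gets flagged
```

Real systems use far larger models and vastly more data, but the dependency is the same: the classifier can only recognize patterns that resemble what it was trained on.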

That is really hard. For example, a very liberal friend of mine recently got banned for a sarcastic comment. This was his smartass remark: “I mean . . . Immigrants are terrorists, so yeah. White dudes are patriots. They can’t be terrorists if they aren’t Muslim.” And Facebook suspended his account for a week.

But the system will get smarter and smarter. The amount of computing power available to solve this problem is growing. The researchers are getting smarter. The source material is vast.

If the problem were to stop one particular kind of evil, there’d be a higher chance of success. But consider the global and varied nature of evil. There are millions of psychopaths. They are testing the algorithm. They are highly creative. So each time the system figures out how to block something, some evil person figures out how to evade the pattern, just like the first-person shooting in Christchurch.

The most likely outcome is that the system continues to block nearly all of the evil content, and the evil content creators continue to invent new ways to evade the blocks.

But let’s imagine, for a moment, that the AI wins. That it develops the ability to make, in a split second, decisions that are difficult even for humans. That we develop an AI algorithm for detecting evil, one that is fool- (and psycho-) proof.

As sick as it is, evil is a form of creativity. It is the application of human intellect to move people and create emotion.

When AI can identify it, it will have developed the ability to outsmart human creativity. It will know more than we do about how we create and what we think.

A system that can identify evil could also create it, just as easily as it can now spoof the president’s tweets. That’s bad enough.

But if a system can identify and create evil, can it also identify and then create entertainment?

Can it identify and then create beauty?

Can it identify and then create passion and love?

When that happens, what’s left to being human? There is no point in creating art when machines can create it better.

So don’t root so hard for AI to win the fight against evil. Because once AI wins, creativity itself may be threatened.

6 Comments

  1. “As sick as it is, evil is a form of creativity. It is the application of human intellect to move people and create emotion.” Josh, it sounds like you are describing advertising…

  2. I have made a very conscious and so far successful effort to avoid watching the Christchurch murderer’s videos, so therefore the still image on your post was the first image of this that I have seen. I really wish I hadn’t visited your site to read this article now.

    Please consider removing it – don’t be part of the problem Josh.

  3. It’s like TSA checkpoints and cybersecurity and everything else: the bad guys do something, the good guys react, the bad guys do something new. The best we can hope for is keeping the gap between bad guys and good guys as small as possible.