Why Facebook is filled with hate, anger, and conflict

The Wall Street Journal reports that Facebook’s 2018 algorithm change rewarded outrage. It’s no surprise what happened next: once you think it through, the platform’s evolution towards rage looks completely predictable.

The article, by Keach Hagey and Jeff Horwitz, is titled “Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead.” It reveals information from internal memos about an algorithm change that scored posts based on “meaningful social interactions.” The score included likes, but placed a higher value on shares, reactions other than likes, and comments. Posts with higher scores were more visible and therefore spread more.
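
Here’s a minimal sketch, in Python, of a scoring scheme like the one the memos describe. The specific weights are hypothetical (the article describes the structure, not the exact point values used here), but the shape matches the reporting: likes count least, while non-like reactions, shares, and especially comments count more.

```python
# A hypothetical "meaningful social interactions" score. The weights are
# illustrative, not Facebook's actual values; only the ordering (likes
# least, comments most) reflects the reporting.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    reactions: int  # non-like reactions: wow, angry, haha, etc.
    shares: int
    comments: int

def msi_score(post: Post) -> int:
    return (1 * post.likes         # likes are worth the least
            + 5 * post.reactions   # hypothetical weight
            + 5 * post.shares      # hypothetical weight
            + 30 * post.comments)  # hypothetical weight

def rank_feed(posts: list[Post]) -> list[Post]:
    # Higher-scoring posts are shown first, i.e. become more visible.
    return sorted(posts, key=msi_score, reverse=True)
```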

Any of us who use the platform could have guessed something like this was in place. If you want a lot of people to see your post, it helps if it attracts lots of comments, shares, and “wows” or “angry” reactions.

But media companies like BuzzFeed noticed that their controversial, divisive posts, like “21 Things That Almost All White People are Guilty of Saying,” got more visibility than their other content. It seemed as if the site was designed to spread anger.

Why Facebook makes hate spread

Let’s be analytical here. Assume the following:

  • Facebook posts are generally short.
  • Some feature photos or videos.
  • Posts that get more interaction will get more visibility. (Facebook is generally going to behave this way, although the specific algorithm will define what “more interaction” means.)

Think about the short posts that people react to on Facebook. Such posts have to generate emotion very quickly. Unlike a movie, a TV series, or a novel, which have time to draw you in, a Facebook post has about three seconds to capture your attention and generate a reaction. What generates a quick reaction? Posts that are:

  • Funny (like people doing stupid things on video)
  • Cute (like puppies and babies)
  • Sexy (like a photo of a scantily clad actor or actress)
  • Personal (like a wedding photo or an announcement of taking a new job)
  • Controversial (like a political post)

There may be other categories that I’ve missed. But think about what sort of reaction these posts get.

The funny posts will get a bunch of likes and hahas and some shares. But there’s not that much to say about them.

The cute posts will get likes, loves, and shares.

The sexy posts will get likes, loves, and shares.

The personal posts will get likes, loves, and congratulatory notes.

But the controversial posts will generate not just shares and reactions, but a lot more comments. If I post about why not wearing a mask is stupid (or why wearing a mask is stupid; either way), people will agree and disagree with me. They’ll take the time to tell me why I’m right or wrong. More importantly, they’ll take the time to tell the other commenters why they’re right or wrong. Some of those comments will be nasty. And those will generate nasty replies.

Facebook will promote such interaction-heavy posts above other posts. More people will see them. More people will hate on them. More people will hate on each other. And the hate will spread across the social network.
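
To see how that promotion compounds, here’s a toy simulation with made-up interaction rates: a feel-good post that draws lots of likes but few comments, and a controversial post that draws fewer likes but far more comments. Each round, the audience is reallocated in proportion to each post’s score, and because comments are weighted so heavily, the controversial post ends up with nearly the whole audience.

```python
# A toy feedback loop: visibility drives interactions, interactions drive
# scores, and scores drive the next round's visibility. All rates invented.
posts = {
    "cute puppy":    {"like_rate": 0.20, "comment_rate": 0.01, "views": 100.0},
    "controversial": {"like_rate": 0.05, "comment_rate": 0.10, "views": 100.0},
}

for _ in range(5):
    # Score each post from this round's interactions (comments weighted 30x).
    scores = {name: p["views"] * p["like_rate"] + 30 * p["views"] * p["comment_rate"]
              for name, p in posts.items()}
    total = sum(scores.values())
    # Allocate the next round's 1,000 viewers in proportion to score.
    for name, p in posts.items():
        p["views"] = 1000 * scores[name] / total

for name, p in posts.items():
    print(f"{name}: {p['views']:.0f} views after 5 rounds")
```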

It’s not that positive, fascinating, or thoughtful content can’t get engagement. The point is that if you have to create a short post and you want engagement, that post is likely to be controversial, and likely to become a locus for anger and hate as the algorithm promotes it.

A factory that spreads noxious pollution isn’t designed to be harmful, but nonetheless, it is. Similarly, even if Facebook says that it is designed to prioritize social interaction, its effluent is a constant stream of hate. Hate engages, which is why it spreads.

Facebook cannot be any way other than this. That’s why it makes you feel filthy and guilty to be part of it. It’s why it’s bad for the world. And it’s why public policy tinkering or breaking it apart into smaller social networks is unlikely to change it.


6 Comments

  1. There is a reason why “If it bleeds, it leads.” And Facebook not only has it built in, but, as you say, makes it worse.

  2. “We have met the enemy and he is us.” –Pogo

    Facebook reveals unadulterated, largely unfiltered human nature. I would say 5% or less of the posts in my feed trigger intense negative emotional reactions in me. A very small number of perpetrators are responsible for almost all of these posts. Facebook might be well advised to segregate these agents provocateurs into a Facebook Public Affairs channel, which users could choose to view as part of their feed, as a separate tab, or not at all.

    This would diminish the reach of irritants without resorting to censorship.

  3. “Facebook cannot be any way other than this.”

    People respond to provocative content. Facebook cannot fix the problem because the problem is with people and what they respond to.

    That is very different from the factory analogy. Here people WANT to consume that “pollution” from other FB posters (they give that content high rates of engagement). Nobody actually wants the pollution from a factory.

    Through trial and error, determined posters will find a way to game the system.

    It is not that Facebook (and other social media) are doing nothing about it.

    The trouble is the feasibility of editorially moderating that volume and variety of content, in both the how and the what.

    Even AI capabilities may not be precise enough to actually do the moderation, so I’d prefer they do something that may be easier to accomplish:

    Have a meter that rates how provocative content MAY be, and give the user a “volume” control to dial in how much of it they want in their feed (a rough code sketch appears at the end of this comment).

    This gives users the option to filter some of that content. It is similar to the “SafeSearch” feature in Google and other search engines, but with a finer dial instead of a binary choice.

    The platforms can still work to improve their editorial capabilities. But having users self-identify this way can teach the AI the finer distinctions between the levels.

    That won’t fix the people, but it gives them an off-ramp without having to outright quit the platform.

    The platforms could also slowly lower the threshold for each of those levels as their AI gets better at defining the edge points, which I think is where today’s systems are weakest.
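
    Here’s a minimal sketch of that dial, assuming each post already carries a provocativeness score from some hypothetical classifier (producing that score is, as noted above, the hard part):

    ```python
    # A sketch of the "provocativeness dial" described in the comment above.
    # Everything is hypothetical: real scores would come from a classifier,
    # and max_provocativeness is the user's volume-control setting.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        provocativeness: float  # 0.0 (anodyne) .. 1.0 (maximally provocative)

    def filter_feed(posts: list[Post], max_provocativeness: float) -> list[Post]:
        """Keep only posts at or below the user's chosen provocativeness level."""
        return [p for p in posts if p.provocativeness <= max_provocativeness]

    feed = [
        Post("Look at my new puppy!", 0.05),
        Post("Local election results", 0.4),
        Post("Hot take on masks", 0.9),
    ]

    # A user who dials their feed down to 0.5 sees the puppy and the election
    # post, but not the hot take.
    for post in filter_feed(feed, max_provocativeness=0.5):
        print(post.text)
    ```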

  4. Scanning my Facebook feed, I do not see any angry posts. I have 42 Facebook friends, but only 6 are regularly active accounts. I subscribe to groups talking about programming on MeTV, and to nudist groups. Nudists are generally happy folks.

    I am at a loss here; someone explain this to me, please.