To Mark Zuckerberg: How a Facebook Truth Icon could cure fake news

Photo: Jason McELweenie via Wikimedia Commons

An open letter to Mark Zuckerberg, CEO and Sheryl Sandberg, COO, Facebook:

False news is out of hand. People read lies and obliviously share them on Facebook.

You can solve this problem. Not only that, you can solve it in a scalable way, without censoring anything and without changing your main sharing algorithm. Let’s see how.

First, the scope of the problem. The Web is full of sites that just make things up. I’m talking fake news, not obvious satires like The Onion. Many of my otherwise rational friends share outrageous but believable news from sites like the misleadingly named nbc.com.co, which falsely announced, for example, that a pastor in Vermont was jailed for refusing to perform a gay marriage. There are hundreds of sites now making up fake news.  A few are trying to be funny, some are trying to cause trouble, and all are using Facebook to generate ad revenues.

The flood has overwhelmed the Washington Post’s Caitlin Dewey, who gave up on her column “What was fake on the Internet this week.” As she put it:

If you’re a hoaxer, it’s [now] more profitable. Since early 2014, a series of Internet entrepreneurs have realized that not much drives traffic as effectively as stories that vindicate and/or inflame the biases of their readers. Where many once wrote celebrity death hoaxes or “satires,” they now run entire, successful websites that do nothing but troll convenient minorities or exploit gross stereotypes.

It only takes a minute to check a story on Snopes.com or in a reputable news source. But a minute is too much. Most people would rather share an outrageous story than take a minute to check it.

Facebook optimizes its algorithm to generate as many views as possible. Since members like to share fake news, Facebook’s algorithm elevates its visibility. I know that you have finely tuned the algorithm to work well for your company, your advertisers, and your users. I wouldn’t ask you to change that.

Instead, you can solve the problem by creating a parallel algorithm, an automated “Truth Algorithm,” to help Facebook users evaluate content. The Truth Algorithm includes three parts, all of which need to work together (see the sketch after this list):

  1. Each link post includes a Truth Icon. Any Facebook user can click on the Truth Icon and report the post as real, fake, or satire — just as anyone can “like” a post.
  2. The Truth Algorithm rates each user’s reliability. If a user frequently reports items as false when other reliable users rate them as true, or vice versa, then that user’s reliability is low. If their reports track an identifiable bias (for example, rating all negative items about Obama as false), their reliability is also low. But if their reports are similar to those of other reliable users, then their reliability is high.
  3. The Truth Algorithm rates each post and shows the appropriate Truth Icon based on the reports of reliable users: A smile for true content, a frown for fake content, a laughing face for satire, or a question-mark when there is not enough information.
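
To make the moving parts concrete, here is a minimal sketch in Python of how the three pieces might fit together. Everything in it (the class, the 0.5 starting reliability, the min_weight threshold, the reliability-weighted vote) is an illustrative assumption, not a description of anything Facebook actually runs.

```python
from collections import defaultdict

# Hypothetical labels a user can attach to a post via the Truth Icon.
REAL, FAKE, SATIRE = "real", "fake", "satire"
ICONS = {REAL: "smile", FAKE: "frown", SATIRE: "laughing face"}

class TruthAlgorithm:
    def __init__(self):
        self.reports = defaultdict(list)              # post_id -> [(user_id, label)]
        self.reliability = defaultdict(lambda: 0.5)   # user_id -> score in [0, 1]

    def report(self, user_id, post_id, label):
        """Part 1: any user can report a post as real, fake, or satire."""
        self.reports[post_id].append((user_id, label))

    def update_reliability(self, user_id, agreements, disagreements):
        """Part 2: users whose reports track those of other reliable users
        gain reliability; users who consistently contradict them lose it."""
        total = agreements + disagreements
        if total:
            self.reliability[user_id] = agreements / total

    def rate_post(self, post_id, min_weight=5.0):
        """Part 3: weight each report by the reporter's reliability and
        pick the icon to display; show a question mark until there is
        enough trusted signal."""
        weights = defaultdict(float)
        for user_id, label in self.reports[post_id]:
            weights[label] += self.reliability[user_id]
        if sum(weights.values()) < min_weight:
            return "question mark"
        return ICONS[max(weights, key=weights.get)]

# Example: three reports on one post, with the threshold lowered for the demo.
ta = TruthAlgorithm()
ta.report("alice", "post-1", FAKE)
ta.report("bob", "post-1", FAKE)
ta.report("carol", "post-1", REAL)
print(ta.rate_post("post-1", min_weight=1.0))   # frown
```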

Truth Icon

Some readers of this open letter may believe there is a chicken-and-egg problem here — you can’t rate items until you rate users, and vice versa. But as experts on social algorithms, Mr. Zuckerberg and Ms. Sandberg, you know better. Truth ratings for both news items and users will be an emergent social phenomenon. Each click on the Truth Icon will give Facebook more information about both posts and users. Users’ behavior will rapidly fall into patterns: for example, those who rate only conservative items as false, or those who align with other users who have checked Snopes or the original source. Similarly, the ratings on the news items will fall into patterns. The algorithm will recognize those patterns and classify both the users and the news items. Given the billions of article reads that take place every day, I expect the Truth Algorithm to quickly become accurate.
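
One concrete way that bootstrapping could play out is a simple fixed-point iteration: estimate each post’s truth score from the current user reliabilities, then re-estimate each user’s reliability from how often their reports agree with those scores, and repeat. The sketch below assumes the same kind of hypothetical report data as above and only two labels; a real system would also have to handle satire, cold starts, and adversarial voting.

```python
def iterate_ratings(reports, rounds=10):
    """Alternate between post scores and user reliabilities until they settle.

    reports: dict mapping post_id -> list of (user_id, label) pairs,
             where label is "real" or "fake". Purely illustrative.
    """
    users = {u for pairs in reports.values() for u, _ in pairs}
    reliability = {u: 0.5 for u in users}     # everyone starts neutral
    post_score = {}                           # post_id -> estimated chance it is real

    for _ in range(rounds):
        # Score each post as the reliability-weighted share of "real" reports.
        for post, pairs in reports.items():
            total = sum(reliability[u] for u, _ in pairs) or 1.0
            real = sum(reliability[u] for u, lab in pairs if lab == "real")
            post_score[post] = real / total

        # A user's reliability is how often their reports match the
        # current consensus on the posts they rated.
        for u in users:
            rated = [(p, lab) for p, pairs in reports.items()
                     for uid, lab in pairs if uid == u]
            agree = sum(1 for p, lab in rated
                        if (lab == "real") == (post_score[p] >= 0.5))
            reliability[u] = agree / len(rated)

    return post_score, reliability
```

Users who rate carefully and agree with one another reinforce each other from round to round, which is meant to mirror the emergent pattern described above.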

This algorithm does not require Facebook to make judgments about truth from content, which would subject it to all sorts of criticism regarding bias. It does not censor anything. And since it is an algorithm, it scales. It uses the intelligence of Facebook and its users to provide information to all of us about what we’re reading. Like all efficient social algorithms, it uses the activity of a few socially active news readers (those who carefully check the veracity of news items) to benefit the mass of people who just read and share news items.

Mr. Zuckerberg, you just agreed to give away 99% of your shares to charitable organizations. I know you have a conscience, you understand that this is important, and you know this will work. Facebook is now the primary way that people share news. If you can help us evaluate that news and understand its truth, you are responsible for doing so. So please implement the Truth Algorithm and the Truth Icon.

Thank you.

Josh Bernoff, president of WOBS LLC, concerned American, and loyal Facebook member.


12 Comments

  1. Oh, I already can see how people would start gaming this.

    I love the idea, in theory, but it becomes a cat/mouse/cat game all too quickly. Google does it because that is its business, entirely. For Facebook, there is no real money to be made in your Truth Index – but there is enormous downside for reputation.

    I do share your pain, though. I’ve had people wave the “Snopes hasn’t debunked it!” flag like that was somehow proof — and I even documented an example of a faked Trump tweet that was not yet Snoped, and not even far enough along to index. (http://www.occamsrazr.com/2015/10/16/objective-reality-rip/)

    But, yeah. Such a system will get gamed. Not by the fake news sites, but by partisan organizations out to take each other down. An army of the Left “downvoting” the veracity of a Breitbart link, and a platoon on the Right scrapping Kos. (I hear that Ezra Klein leans too hard to the left, let’s drop his rating while we’re at it.)

    I have a bad feeling…

    1. Thanks for this, it’s what I was hoping for: a thoughtful critique.

      Regarding the gaming question, that’s why I included the ratings of users and not just sites. If you always vote down Breitbart, your vote won’t count for much. If it’s your first time, your vote won’t count for much. But if your history shows that you vote carefully and not in a partisan way, you’ll matter.

      The best parallel I can come up with is the editors on Wikipedia, who are (relatively) unbiased and keep the partisans in check.

      1. Allow me to take this in a slightly different direction…

        I have a few apps on my phone that I dearly love. They are the ones that I absolutely endorse, and are the first I install. I wish there were a way I could give them a sixth star, out of five.

        So why not give me the ability to give six stars? The catch is that I could only do it five times, and to do it again I would have to revoke one of the previous ones.

        So what if each Facebook user got five “downvotes” — not on stories, but on websites? So if you want to tell your friends that nbcnews.com.co is bogus, go right ahead. If your friends are falling for The Daily Currant, then ding that site. If too many of your friends are falling for Clickhole.com (which is owned by The Onion, and is a riot!), you can use it there.

        If you want to spend one of your limited downvotes on Kos or on Heritage or something else partisan, you can do so. But limiting the votes takes away the spam factor. And it would probably zero in on your intended targets of fake sites, and not be as much of an issue for partisan opinion sites.

        1. Sounds great . . . but I’d like to allow the thoughtful people to vote as much as they want. As for the stupid or partisan people, they’ll vote in a way that makes it easy for Facebook to invalidate all their worthless votes.
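
A rough sketch of the capped, revocable site-downvote budget proposed a couple of comments up; the class name and the five-vote limit simply restate that proposal and are not an existing Facebook feature.

```python
class SiteDownvotes:
    """Each user gets a fixed budget of site downvotes; to spend another,
    they must first revoke an earlier one. Illustrative only."""
    BUDGET = 5

    def __init__(self):
        self.votes = {}   # user_id -> set of downvoted sites

    def downvote(self, user_id, site):
        sites = self.votes.setdefault(user_id, set())
        if site in sites:
            return True                 # already spent on this site
        if len(sites) >= self.BUDGET:
            return False                # budget exhausted; revoke one first
        sites.add(site)
        return True

    def revoke(self, user_id, site):
        self.votes.get(user_id, set()).discard(site)
```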

  2. It’s a great idea. (One of those “why hasn’t Facebook done this yet” ideas). However, there’s a caveat: A Harris poll in 2013 found that 42% of Americans believe in ghosts. Many Americans believe global warming is a hoax. Crowdsourcing the truth might not lead to the genial consequences you’ve posited — it might lead to a variety of parallel truth-systems.

      1. Still concerned. Take evolution — fewer than half of Americans believe in it. How will reputation accrue to believers if they are in the minority?

        1. Because the smart ones — the thoughtful ones — will stand out for critiquing questionable posts on all sides of the ideological spectrum.

          If all the evolution deniers rate a post on evolution as false, then their votes won’t count, specifically because ALL the evolution deniers voted together.

  3. And while we’re at it, how about a ‘Click Bait’ button? To flag posts that, although they may contain a grain of truth, are clearly banking on our fear-addicted society to generate revenue through clicks….