A browser extension for truth: Verytas

[Image: Verytas screenshot]

Lies masquerading as truth on the Web are a real problem. Now Sam Mallikarjunan is going to try to solve it.

Mallikarjunan and his partner Andrei Oprisan are creating a browser extension called Verytas that changes the background color of stories you read: green for true, red for false, purple for satire. How does Verytas know? The same way Wikipedia knows — it taps the power of the crowd to rate stories and link them to reliable sources (or reliable debunkers). Here’s my interview with Mallikarjunan about his plan, which he’s funding on Indiegogo. I’ve posted his video at the end of this post.

How does Verytas work?

Verytas is a browser extension that helps you know whether what you see on social media is true.

If a user sees a questionable post on social media, they can research it. After doing so, they use Verytas to link that research to the post. If the Verytas algorithm rates the research as credible, everyone who sees that post in the future will also see the research, and they’ll know immediately whether the article is true.
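
Based on that description, here’s a minimal sketch of the flow in TypeScript (a natural fit for a browser extension). Every name in it (the Citation shape, rateCredibility, CREDIBILITY_THRESHOLD, attachResearch) is a hypothetical stand-in, not the actual Verytas code:

```typescript
// Hypothetical sketch of the flow described above: a user links research
// to a post, an algorithm scores its credibility, and research that
// clears the bar is surfaced to everyone who sees the post later.

interface Citation {
  postUrl: string;       // the social media post being annotated
  sourceUrl: string;     // the research the user found
  contributorId: string; // who linked it
}

// Assumed cutoff; the real scoring algorithm isn't public here.
const CREDIBILITY_THRESHOLD = 0.7;

// Placeholder for the credibility rating Mallikarjunan describes,
// presumably combining contributor history and source reputation.
function rateCredibility(citation: Citation): number {
  return citation.sourceUrl.length > 0 ? 0.8 : 0.0; // stub
}

// Research visible to future viewers, keyed by post URL.
const visibleCitations = new Map<string, Citation[]>();

function attachResearch(citation: Citation): void {
  if (rateCredibility(citation) >= CREDIBILITY_THRESHOLD) {
    const existing = visibleCitations.get(citation.postUrl) ?? [];
    existing.push(citation);
    visibleCitations.set(citation.postUrl, existing);
    // From now on, everyone who sees this post also sees the research.
  }
}
```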

Why do you feel we need a browser extension that verifies the truth of what we read?

Evaluating information is a time-consuming process, even for professionals. Misinformation and satire have fooled politicians, celebrities, and even professional journalists. People have lost their jobs. Lives have been ruined. Even if a story is proven false and later retracted, it can continue to cause damage.

Most online content is so poorly backed up that even a lenient high school teacher would give it an F. We want to help the internet get to at least a C-.

[Photo: Sam Mallikarjunan]

Why do you think false content spreads so rapidly now?

Because it “sounds right”. It jibes with what we currently believe and fits comfortably with our worldview. We don’t feel responsible because we didn’t write it — we “only” shared it.

What is the definition of “truth” according to Verytas?

Verytas rates something as true if credible users can back it up by adding citations from credible sources.

You will be tapping a community of “verifiers” (or “verytasers”) whose collective opinions will determine what you mark as true. But now you have another problem — can we trust the verytasers themselves? A liberal could easily mark a conservative article as false, and vice versa. How will you prevent bias here?

People attempt to vandalize Wikipedia every day. Yet it’s one of the most accurate collections of encyclopedic knowledge in the world. On Wikipedia, the technology and the community work together to ensure accuracy. On Verytas, the algorithm and community prevent misinformation or spin from making it into the feed.

As I’ve reviewed articles and links in my feeds, I’ve seen all sorts of not-true material. These include:

  • Material that is obviously satire (The Onion, Borowitz)
  • Material that is marked as satire, but intended to fool people (nbc.com.co)
  • Material that is based on inaccurate “facts” (stuff that Politifact would mark as false)
  • Hoaxes
  • Fiction
  • Speculation (“This is what really causes cancer”, “This is what really happened to Flight 370”)
  • Poorly written crap
  • Opinion

How will Verytas deal with these shades of non-truth?

If you’re saying something is true and someone can prove it is, you’re green.

If you’re saying something is true and someone can prove it isn’t, you’re red.

If you’re cracking the sharp whip of satire and you’ve openly admitted it, you’re purple.

Otherwise, you’re yellow.
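
Those four rules reduce to a simple mapping. Here’s a minimal TypeScript sketch; the type and function names are mine, not anything from the extension:

```typescript
// Sketch of the four-color rule stated above. Illustrative only.

type Verdict =
  | "proven-true"     // someone can prove it's true
  | "proven-false"    // someone can prove it isn't
  | "declared-satire" // openly admitted satire
  | "unverified";     // everything else

function backgroundColor(verdict: Verdict): string {
  switch (verdict) {
    case "proven-true":     return "green";
    case "proven-false":    return "red";
    case "declared-satire": return "purple";
    default:                return "yellow";
  }
}
```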

In a couple of sentences, what are the biggest things you have learned already?

Solve one problem at a time.

This is a complex problem with no clear solution, but we can solve one problem at a time. We can “just” flag satire or bring citations directly into social media and still make a big impact. We’re okay with not solving every problem on the internet right now. We’re only solving one problem — letting people see what has citations to back it up, and what doesn’t.

Imagine that you succeed. Describe the online world with Verytas on all of our browsers. What will it be like?

We want to see a world where online discourse rises to at least the level of a high school classroom.

You don’t get away with just making stuff up. You don’t get away with claiming something says something it doesn’t. Verytas will help people help themselves be better, not just more, informed.

I support Verytas. This is a very complex issue; solving it won’t be easy, and there will be inevitable challenges. And I don’t buy the argument that “responsible people can look things up themselves” — I’ve seen plenty of responsible people posting stuff they haven’t checked. Why not tap the crowd to help verify content and make things easier for all of us? Verytas, or something like it, is just what we need to keep the Web and Facebook from devolving into a pit of undifferentiated stupidity.

If you’d like to support Verytas, here’s the link to contribute or learn more.


13 Comments

    1. Thanks Chris! You can install the current extension at Verytas.org and it will automatically update as we launch new features.

      Please consider sharing the Indiegogo campaign as well. Some of the backend tech gets a little pricey with the volumes we’re processing 🙂

      We look forward to your feedback!

    2. Agreed. Without knowing the code, I suspect that there are lines like this:

      >If story contains ‘Trump’, then background color = red.
      >Else, probably green.

      1. Haha, not quite. The algorithm evaluates the credibility of the user and the source; it doesn’t have granular topical functionality like that.

        The code will be open source, anyway, so you can peruse it and make sure we’re not up to any funny business due to our own biases.

  1. It sounds interesting, but I’d always be wondering how long it would be before Verytas got gamed or hacked. Can “truth” be abstracted from algorithms? I’m not so sure.

    I worked for Reuters for 17 years (before Thomson screwed it up), where truth and accuracy trumped timeliness. All news stories had to be verified by two independent sources before they were published on Reuters’ news feeds. I assume (but don’t know for sure) that this still happens. Perhaps I’m in a shrinking minority who still prefer truth and accuracy over timeliness – not everything that is instant is good or right!

    1. Your last sentence says it all, but that’s the reality journalists have to deal with these days. In addition to what we’re building here, we have future tools in mind to help them confirm facts faster.

      In reference to your first question, people will inevitably try to spam us. They’ll keep trying new ways and we’ll keep stopping them. It’s a never-ending process.

      For now, our algorithm doesn’t have to decide on a philosophical definition of truth — it just decides whether to show a user-added citation, based on the contributor’s history and the strength of the source they’re using, mixed with some machine learning and stochastic ordering. That’s a manageable technological challenge.
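
      To make that concrete, here’s a rough TypeScript sketch of that kind of decision. The weights and names are illustrative assumptions, and the machine-learning and stochastic-ordering pieces are left out entirely:

      ```typescript
      // Simplified stand-in for "show the citation or not," blending
      // contributor history with source strength. Not the Verytas code.

      interface Contributor {
        accepted: number; // citations previously judged credible
        rejected: number; // citations previously judged not credible
      }

      function contributorScore(c: Contributor): number {
        const total = c.accepted + c.rejected;
        return total === 0 ? 0.5 : c.accepted / total; // neutral prior for new users
      }

      // Stand-in source-reputation lookup; a real system would need a
      // far richer model than a hardcoded table.
      function sourceStrength(domain: string): number {
        const known: Record<string, number> = {
          "snopes.com": 0.9,
          "politifact.com": 0.9,
        };
        return known[domain] ?? 0.3;
      }

      function shouldShowCitation(c: Contributor, domain: string): boolean {
        // Assumed equal weighting and threshold.
        const score = 0.5 * contributorScore(c) + 0.5 * sourceStrength(domain);
        return score >= 0.6;
      }
      ```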

      As we grow and tackle more complex challenges things will get harder, but we have to start somewhere! Even just flagging satire could have saved Anderson Cooper some embarrassment 🙂

  2. Great idea! Now if you can incorporate this into the presidential candidates’ microphones (along with some Vader-like voice-changing technology for when they stray), maybe we can start getting some straight answers from our “representatives”. Good luck, guys.

  3. Sam, how will you fund this on an ongoing basis? I love the concept (I’m constantly referring people to SNOPES for various nonsense scams), but there are certainly times when it can be challenging to know when someone is pulling the wool over our collective eyes. And as for the comment about the devolution of social media: You’re too late, it’s already a morass of falsehoods. So good luck to you, Sam! I’ll check out the app and the indiegogo as well.

    1. Thanks for your support Jennifer!

      We have some pretty interesting ideas for future monetization with this. Others have tried similar technology and failed because they couldn’t run on donations forever. I’m a big believer that the best way to drive change is to find some way to make it self-sustaining.

      If you’d like to read the details on that just check out blog.verytas.org!

  4. And you’re going to tell us what the truth is, eh? Let us know what misinformation and facts are, eh? Sounds like another George Soros funded attempt to hide truths, and this is a test bubble. Thank GOD for free will, and the ability given to most enlightened people to have the FREEDOM to look at both truth and lies, and decide for themselves what they want to believe.

    1. Man, if George Soros is funding this and no one told me I’m gonna be pissed. Where’s my cut?!

      In all seriousness, what we’re doing is connecting content to citations. We’ll flag stuff based on whether it has supporting evidence, refutations, or conflicting evidence, but at the end of the day it’s up to the end user to decide what they actually believe. We can link to evidence that water is potable, but we can’t make people drink.

      There’s a lot of variability in this, obviously. We can prove that Macaulay Culkin isn’t dead (that hoax spreading across the internet originally inspired this project), but if Romney says “our Navy is the smallest since 1917” or something, that could be a debatable statement if he’s referring to the number of ships and not overall budget, personnel, power, or other metrics. In that case we highlight it yellow, link to the evidence for both sides, and the user can evaluate both.