Lies masquerading as truth on the Web are a real problem. Now Sam Mallikarjunan is going to try to solve the problem.
Mallikarjunan and his partner Andrei Oprisan are creating a browser extension called Verytas that changes the background color of stories you read: green for true, red for false, purple for satire. How does Verytas know? The same way Wikipedia knows — it taps the power of the crowd to rate stories and link them to reliable sources (or reliable debunkers). Here’s my interview with Mallikarjunan about his plan, which he’s funding on Indiegogo. I’ve posted his video at the end of this post.
How does Verytas work?
Verytas is a browser extension that helps you know if what you see in social media is true.
If a user sees a questionable post in social media, they can research it. After doing so, they use Verytas to link their research to the post. If the Verytas algorithm rates their research as credible, everyone who sees that post in the future will also see their research. They’ll know immediately whether the article is true.
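To make the flow above concrete, here is a minimal sketch in TypeScript. All names, the scoring formula, and the threshold are my own assumptions for illustration; Verytas has not published how its credibility algorithm works.

```typescript
// Hypothetical sketch of the Verytas submission flow described above.
// A user attaches a research link to a post; if the (assumed) credibility
// score clears a threshold, future viewers of the post see the research.

interface ResearchLink {
  url: string;
  submitterReputation: number; // 0..1, assumed
  sourceCredibility: number;   // 0..1, assumed
}

// Assumed threshold; the real algorithm is not public.
const CREDIBILITY_THRESHOLD = 0.7;

function attachResearch(
  postResearch: Map<string, ResearchLink[]>,
  postId: string,
  link: ResearchLink
): boolean {
  // Placeholder scoring: average of submitter and source credibility.
  const score = (link.submitterReputation + link.sourceCredibility) / 2;
  if (score < CREDIBILITY_THRESHOLD) return false; // not surfaced to viewers
  const links = postResearch.get(postId) ?? [];
  links.push(link);
  postResearch.set(postId, links);
  return true; // future viewers of postId now see this research
}
```

The design choice here is that filtering happens at submission time, so readers only ever see research that has already cleared the credibility bar.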
Why do you feel we need a browser extension that verifies the truth of what we read?
Evaluating information is a time-consuming process, even for professionals. Misinformation and satire have fooled politicians, celebrities, and even professional journalists. People have lost their jobs. Lives have been ruined. Even if a story is proven false and later retracted, it can continue to cause damage.
Most online content is so poorly backed up that even a lenient high school teacher would give it an F. We want to help the internet get to at least a C-.
Why do you think false content spreads so rapidly now?
Because it “sounds right”. It jibes with what we currently believe and fits comfortably into our worldview. We don’t feel responsible because we didn’t write it — we “only” shared it.
What is the definition of “truth” according to Verytas?
Verytas rates something as true if credible users can back it up by adding citations from credible sources.
You will be tapping a community of “verifiers” (or “verytasers”) whose collective opinions will determine what you mark as true. But now you have another problem — can we trust the verytasers themselves? A liberal could easily mark a conservative article as false, and vice versa. How will you prevent bias here?
People attempt to vandalize Wikipedia every day. Yet it’s one of the most accurate collections of encyclopedic knowledge in the world. On Wikipedia, the technology and the community work together to ensure accuracy. On Verytas, the algorithm and community prevent misinformation or spin from making it into the feed.
As I’ve reviewed articles and links in my feeds, I’ve seen all sorts of not-true material. These include:
- Material that is obviously satire (The Onion, Borowitz)
- Material that is marked as satire, but intended to fool people (nbc.com.co)
- Material that is based on inaccurate “facts” (stuff that Politifact would mark as false)
- Speculation (“This is what really causes cancer”, “This is what really happened to Flight 370”)
- Poorly written crap.
How will Verytas deal with these sorts of shades of non-truth?
If you’re saying something is true and someone can prove it is, you’re green.
If you’re saying something is true and someone can prove it isn’t, you’re red.
If you’re cracking the sharp whip of satire and you’ve openly admitted it, you’re purple.
Otherwise, you’re yellow.
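The four rules above amount to a simple decision function. This is only a sketch of the stated logic, not Verytas’s actual code, and the field names are my own invention:

```typescript
// Hypothetical sketch of the green/red/purple/yellow rules described above.
type Verdict = "green" | "red" | "purple" | "yellow";

interface StoryAssessment {
  claimsTruth: boolean;    // the story presents itself as factual
  provenTrue: boolean;     // credible citations back the claim
  provenFalse: boolean;    // credible citations debunk the claim
  declaredSatire: boolean; // openly labeled as satire
}

function rateStory(s: StoryAssessment): Verdict {
  if (s.declaredSatire) return "purple";
  if (s.claimsTruth && s.provenTrue) return "green";
  if (s.claimsTruth && s.provenFalse) return "red";
  return "yellow"; // unverified: no credible citations either way
}
```

Note that yellow is the default: anything not yet proven, disproven, or declared satire stays unverified rather than being guessed at.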
In a couple of sentences, what are the biggest things you have learned already?
Solve one problem at a time.
This is a complex problem with no clear solution, but we can solve one problem at a time. We can “just” flag satire or bring citations directly into social media and still make a big impact. We’re okay with not solving every problem on the internet right now. We’re only solving one problem — letting people see what has citations to back it up, and what doesn’t.
Imagine that you succeed. Describe the online world with verytas on all of our browsers. What will it be like?
We want to see a world where online discourse rises to at least the level of a high school classroom.
You don’t get away with just making stuff up. You don’t get away with claiming something says something it doesn’t. Verytas will help people help themselves be better, not just more, informed.
I support Verytas. This is a very complex issue — it won’t be easy to solve, and there will be inevitable challenges. And I don’t buy the argument that “responsible people can look things up themselves” — I’ve seen plenty of responsible people posting stuff they haven’t checked. Why not tap the crowd to help verify content and make things easier for all of us? Verytas, or something like it, is just what we need to keep the Web and Facebook from devolving into a pit of undifferentiated stupidity.
If you’d like to support Verytas, here’s the link to contribute or learn more.