Google CEO says YouTube is too big to fix. So what’s reasonable to expect?

Google CEO Sundar Pichai said on CNN that YouTube will never be free of offensive content. Fine. But let’s put some appropriate principles for content moderation in place.

Here’s some of what Pichai said:

We’ve gotten much better at using a combination of machines and humans. . . . So it’s one of those things, let’s say we’re getting it right 99% of the time, you’ll still be able to find examples. Our goal is to take that to a very, very small percentage well below 1%. . . .

Any large scale systems, it’s tough . . . Think about credit card systems, there’s some fraud in that. . . . Anything when you run at that scale, you have to think about percentages.

He’s right, of course. No system is 100% foolproof; none could possibly be. But that is no excuse for social media platforms to be as bad at content moderation as they are.

Here are some principles. Every platform should sign on to them. If not, tell me why you’re so special that you can’t do these simple things (simple to describe, if not always to implement).

  1. Each platform must have a content officer in charge of identification, regulation, and blocking of offensive content.
  2. The list of prohibited content should be clear and visible.
  3. The list of rules for punishing posters of prohibited content should also be transparent.
  4. The platform must commit to enforcing these rules without bias.
  5. The automated systems that identify and block content must be available for audit by third parties.
  6. When automated systems are unable to effectively detect a category of prohibited content, the platform must use human intervention.
  7. The online community for a platform may identify a new category of content it considers offensive. Platforms must create a system that allows people to suggest new categories to prohibit. When the number of complaints about a suggested category hits a threshold (say, 1,000 complaints), the platform must study the issue and return a recommendation for regulation within three weeks. (A rough sketch of this bookkeeping, in code, follows the list.)
  8. The platform’s content officer must hold regular public briefings and take questions on content policies.
  9. The platform must publish quarterly reports on its efforts to block offensive content, including current and new categories.
  10. Regulators must impose fines when a platform is unable to block at least 99.5% of prohibited content (by its own rules) for two consecutive quarters.
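
For the quantitatively inclined, here’s a minimal sketch of the bookkeeping that rules 7 and 10 imply. The 1,000-complaint trigger, the three-week deadline, and the 99.5%-over-two-consecutive-quarters test come straight from the list above; the function names, data structures, and example category are hypothetical, not anything a platform actually runs.

```python
from collections import Counter

# Thresholds taken directly from rules 7 and 10 above.
COMPLAINT_THRESHOLD = 1_000      # complaints needed to trigger a category review
REVIEW_DEADLINE_DAYS = 21        # "within three weeks"
MIN_BLOCK_RATE = 0.995           # block at least 99.5% of prohibited content
QUARTERS_BEFORE_FINE = 2         # two consecutive failing quarters

complaints = Counter()           # suggested category -> complaint count

def record_complaint(category: str) -> bool:
    """Log a complaint; return True the moment a category needs a formal review."""
    complaints[category] += 1
    return complaints[category] == COMPLAINT_THRESHOLD

def should_fine(quarterly_block_rates: list[float]) -> bool:
    """True if the last two consecutive quarters both fell below 99.5%."""
    recent = quarterly_block_rates[-QUARTERS_BEFORE_FINE:]
    return (len(recent) == QUARTERS_BEFORE_FINE
            and all(rate < MIN_BLOCK_RATE for rate in recent))

if __name__ == "__main__":
    # Hypothetical category name, purely for illustration.
    for _ in range(COMPLAINT_THRESHOLD):
        triggered = record_complaint("deepfake harassment")
    print(triggered)                           # True: review due in REVIEW_DEADLINE_DAYS
    print(should_fine([0.997, 0.994, 0.992]))  # True: two sub-99.5% quarters in a row
    print(should_fine([0.992, 0.996]))         # False: the most recent quarter passed
```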

If you think this is inappropriate or inadequate, I look forward to hearing your suggestions.


5 Comments

  1. Agree with a bunch of this. One issue that can be foreseen here is the ‘pitchfork’ mentality, which would see bands of similarly motivated people (for good and for bad…) piling on and generating 1,000+ complaints in order to have something they disagree with removed. I think the threshold for removal should absolutely take into account the community’s view, but not as a bright-line test.

    Not sure if you’ve seen the text of the Christchurch Call that emanated from over here in New Zealand – it encapsulates a bunch of principles in a moral rather than legal code: https://www.christchurchcall.com/

  2. Who decides what is offensive?

    In a free speech society, what part of speech should not be free?

    Have you ever heard of Title 18 of the Federal Code? Why are porn sites allowed to exist when it is a felony to distribute pornography? How can ICANN allow rape, hate, and death sites to exist, but YouTube has to decide what is “offensive”?

    Who decides what is offensive?

    Polls? Do we take polls?
    * How many people will find oysters offensive?
    * How many people will find obesity offensive?
    * How many people will find this or that religion offensive?
    * How many people will find red hats offensive?
    * How many people will find …

    you get the picture.

    Abraham Lincoln once said, “Those who deny freedom to others deserve it not for themselves.”

    1. Fred, I hear you. My problem is with the inconsistency. If the platforms wanted to say “We will allow anything to appear on our platform,” that would be fine. If they choose to regulate, though, they really ought to be consistent about the rules they follow.