Google CEO Sundar Pichai said on CNN that YouTube will never be free of offensive content. Fine. But let’s get some appropriate principles for content in place.
Here’s some of what Pichai said:
We’ve gotten much better at using a combination of machines and humans. . . . So it’s one of those things, let’s say we’re getting it right 99% of the time, you’ll still be able to find examples. Our goal is to take that to a very, very small percentage well below 1%. . . .
Any large scale systems, it’s tough . . . Think about credit card systems, there’s some fraud in that. . . . Anything when you run at that scale, you have to think about percentages.
He’s right, of course. No system is 100% foolproof; none could possibly be. But that is no excuse for social media platforms to be so bad at content moderation.
Here are some principles. Every platform should sign onto them. If not, tell me why you’re so special you can’t do these simple things (simple to describe, if not always to implement).
- Each platform must have a content officer in charge of identification, regulation, and blocking of offensive content.
- The list of prohibited content should be clear and visible.
- The list of rules for punishing posters of prohibited content should also be transparent.
- The platform must commit to enforce these rules without bias.
- The automated systems that identify and block content must be open to audit by third parties.
- When automated systems are unable to effectively detect a category of prohibited content, the platform must use human intervention.
- The online community may identify a new category of content it considers offensive. Platforms must create a system that lets people suggest new categories to prohibit. When complaints about a proposed category hit a threshold (say, 1,000), the platform must study the issue and return a recommendation for regulation within three weeks.
- The platform’s content officer must hold regular public briefings and take questions on content policies.
- The platform must publish quarterly reports on its efforts to block offensive content, including current and new categories.
- Regulators must impose fines when a platform fails to block at least 99.5% of prohibited content (as defined by its own rules) for two consecutive quarters.
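To make the last principle concrete, here is a minimal sketch of how the fine trigger could be evaluated. Everything here is hypothetical: the function name, the data, and the representation of quarterly rates are illustrative, not from any real platform or regulator; only the 99.5% threshold and the two-consecutive-quarters rule come from the principle above.

```python
# Hypothetical sketch of the fine-trigger rule: a regulator fines a
# platform when its blocking rate for prohibited content falls below
# 99.5% in two consecutive quarters. All names and data are illustrative.

THRESHOLD = 0.995  # minimum share of prohibited content blocked

def quarters_triggering_fine(quarterly_block_rates):
    """Return the indices of quarters where a fine would be imposed:
    the second of any two consecutive quarters below the threshold."""
    fined = []
    for i in range(1, len(quarterly_block_rates)):
        if (quarterly_block_rates[i - 1] < THRESHOLD
                and quarterly_block_rates[i] < THRESHOLD):
            fined.append(i)
    return fined

# Example: quarters at indices 2 and 3 are both below 99.5%, so the
# fine triggers at index 3.
rates = [0.997, 0.996, 0.993, 0.994, 0.998]
print(quarters_triggering_fine(rates))  # [3]
```

Note the design choice: a single bad quarter is forgiven, which matches Pichai’s point that no large-scale system is perfect, while a sustained failure is not.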
If you think this is inappropriate or inadequate, I look forward to hearing your suggestions.