Twitter announced yesterday that it would allow people to remove unwanted photos or video of themselves. But there are lots of ill-defined exceptions, and the policy as a whole will be unwieldy to enforce.
Analyzing the new policy changes
Twitter already bans posting of people’s private information (“doxing”). Here’s the start of the blog post by “Twitter Safety” about the new policy, with my comments in brackets:
Expanding our private information policy to include media
As part of our ongoing efforts to build tools with privacy and security at the core, we’re updating our existing private information policy and expanding its scope to include “private media.” Under our existing policy, publishing other people’s private information, such as phone numbers, addresses, and IDs, is already not allowed on Twitter. This includes threatening to expose private information or incentivizing others to do so.
There are growing concerns about the misuse of media and information that is not available elsewhere online as a tool to harass, intimidate, and reveal the identities of individuals. Sharing personal media, such as images or videos, can potentially violate a person’s privacy, and may lead to emotional or physical harm. The misuse of private media can affect everyone, but can have a disproportionate effect on women, activists, dissidents, and members of minority communities. When we receive a report that a Tweet contains unauthorized private media, we will now take action in line with our range of enforcement options. [These options range from hiding the tweet to suspending the account of the person who posted it.]
While our existing policies and Twitter Rules cover explicit instances of abusive behavior, this update will allow us to take action on media that is shared without any explicit abusive content, provided it’s posted without the consent of the person depicted. This is a part of our ongoing work to align our safety policies with human rights standards, and it will be enforced globally starting today.
What is in violation of this policy?
Under our private information policy, you can’t share the following types of private information or media, without the permission of the person who it belongs to:
* home address or physical location information, including street addresses, GPS coordinates or other identifying information related to locations that are considered private;
* identity documents, including government-issued IDs and social security or other national identity numbers – note: we may make limited exceptions in regions where this information is not considered to be private;
* contact information, including non-public personal phone numbers or email addresses;
* financial account information, including bank account and credit card details; and
* other private information, including biometric data or medical records.
* NEW: media of private individuals without the permission of the person(s) depicted.
The following behaviors are also not permitted:
* threatening to publicly expose someone’s private information;
* sharing information that would enable individuals to hack or gain access to someone’s private information without their consent, e.g., sharing sign-in credentials for online banking services;
* asking for or offering a bounty or financial reward in exchange for posting someone’s private information;
* asking for a bounty or financial reward in exchange for not posting someone’s private information, sometimes referred to as blackmail.
When private information or media has been shared on Twitter, we need a first-person report or a report from an authorized representative in order to make the determination that the image or video has been shared without their permission. Learn more about reporting on Twitter.
This sounds straightforward. It’s obviously abuse to post someone’s social security number or phone number, and banning blackmail makes obvious sense. Now, under the new rule, if you post a picture or video that includes someone, that person can request that the media be taken down. Twitter will eventually suspend an account that repeatedly violates this policy.
It’s the exceptions that will make this a hopeless tangle of complexity
Take a close look at the next section of policy, and you’ll see where the problem is:
Sharing private media
When we are notified by individuals depicted, or by an authorized representative, that they did not consent to having their private image or video shared, we will remove it. This policy is not applicable to media featuring public figures or individuals when media and accompanying Tweet text are shared in the public interest or add value to public discourse.
However, if the purpose of the dissemination of private images of public figures or individuals who are part of public conversations is to harass, intimidate, or use fear to silence them, we may remove the content in line with our policy against abusive behavior. Similarly, private nude images of public individuals will continue to be actioned under our non-consensual nudity policy.
We recognize that there are instances where account holders may share images or videos of private individuals in an effort to help someone involved in a crisis situation, such as in the aftermath of a violent event, or as part of a newsworthy event due to public interest value, and this might outweigh the safety risks to a person.
We will always try to assess the context in which the content is shared and, in such cases, we may allow the images or videos to remain on the service. For instance, we would take into consideration whether the image is publicly available and/or is being covered by mainstream/traditional media (newspapers, TV channels, online news sites), or if a particular image and the accompanying tweet text adds value to the public discourse, is being shared in public interest, or is relevant to the community.
So here are the exceptions, and the problems with them. How will moderators decide these questions?
- Public figures. So you can post a picture of Kim Kardashian or Joe Biden. Is basketball player Kyrie Irving a public figure? Is author Malcolm Gladwell? What about me? I’ve sold 150,000 books and accumulated 3.8 million blog views. Does that make me a public figure? What about Donald Trump’s youngest son Barron, age 15, or Kyrie Irving’s girlfriend, or my daughter? Where do you draw the line on this public figure thing?
- Media shared in the public interest. How do we define the public interest? If I film someone robbing an ATM or running over a pedestrian, is that in the public interest? If they enter a store without a mask, is a video of that in the public interest? Who decides?
- Adds value to public discourse. I have no idea how you’d actually decide what adds value to public discourse.
- But these exceptions don’t apply to harassment, intimidation, or nude pictures. So there are exceptions to the exceptions.
- Helping someone in a crisis. Who’s going to decide when this exception applies? If you post a picture to “help” someone — say, if they’re a victim of domestic violence — and the person in the picture objects, whose word should we take? Is “I was just trying to help” an actual exception? Because if it is, everyone will claim it.
- In the aftermath of a violent event. So if I’m shot or stabbed or beaten or staggering around after an explosion, you can post my picture even if I object?
- As part of a newsworthy event. Sure, but what’s “newsworthy”? My desire to keep my picture private might seem newsworthy to you — how are we going to decide?
- If the image is publicly available. So if a picture violates my privacy, but it’s on Instagram, you’ll keep it up? Well, suppose it was posted on Twitter, and then picked up and posted on other media (which happens all the time). According to this policy, it would then stay up. That means if you want to violate someone’s privacy, all you have to do is post the picture on Twitter and then share it, or get others to share it, off of Twitter.
- It’s on the news. Makes sense — can’t stuff that toothpaste back in the tube.
- Is relevant to the community. Which is about as broad and vague as you can get.
This policy will fail disastrously
Twitter already fails at policing extremism, hate speech, trolls, vaccine misinformation, and foreign influence. The flood is just too great, and automated tools can’t easily spot all the problems.
But that’s just text. If I tweet that vaccines kill people or Hunter Biden is an alien, that’s pretty easy for an AI to detect.
Photos and video are a much harder problem. To handle a takedown request correctly, you must first verify that the person filing the complaint is actually the person in the photo or video, which is not so easy. (What happens if I look like somebody else and file a complaint to get pictures of the other person taken down — how will Twitter determine whose picture it is?)
So many of the exceptions are judgment calls. What is public discourse? Who is a public figure? What is newsworthy? Did the image originate on Twitter, or elsewhere? Is it relevant to the community? How could any moderator (let alone the offshore minimum-wage people Twitter will doubtless use) apply these squishy and ill-defined categories?
Mike Masnick is on the right track here:
Twitter CEO Jack Dorsey just quit. And it’s no wonder. This new policy is going to swallow up the company’s time and efforts for years to come. The intention makes sense. But the enforcement is going to be impossible to get right.