For a decade, Facebook (now known as Meta) used facial recognition to tag millions of photos with the identities of the people in them. Now the company says it's shutting the system down and deleting more than a billion face prints. Here's why I don't trust them.
This is a fraught moment for Meta/Facebook, given the leaked documents that led to a flood of horrifying revelations about the company's practices. So from a public relations perspective, it's time for the company to take some unforced, good-faith actions. The facial recognition news falls into this category.
Deconstructing the Facebook statement
Facebook described its actions in a blog post. I’ve included the text of that post here, with some commentary about why I’m skeptical.
An Update On Our Use of Face Recognition
November 2, 2021
By Jerome Pesenti, VP of Artificial Intelligence
* We’re shutting down the Face Recognition system on Facebook. People who’ve opted in will no longer be automatically recognized in photos and videos and we will delete more than a billion people’s individual facial recognition templates.
* This change will also impact Automatic Alt Text (AAT), which creates image descriptions for blind and visually-impaired people. After this change, AAT descriptions will no longer include the names of people recognized in photos but will function normally otherwise.
* We need to weigh the positive use cases for facial recognition against growing societal concerns, especially as regulators have yet to provide clear rules.
As I read this summary, my main questions relate to the meaning of the words “shutting down,” “automatically,” “delete,” and “facial recognition templates.” Will they be getting rid of all the code and all the data, or just storing it for later use? Let’s see what the rest of the statement promises.
In the coming weeks, Meta will shut down the Face Recognition system on Facebook as part of a company-wide move to limit the use of facial recognition in our products. As part of this change, people who have opted in to our Face Recognition setting will no longer be automatically recognized in photos and videos, and we will delete the facial recognition template used to identify them.
This change will represent one of the largest shifts in facial recognition usage in the technology’s history. More than a third of Facebook’s daily active users have opted in to our Face Recognition setting and are able to be recognized, and its removal will result in the deletion of more than a billion people’s individual facial recognition templates.
Here’s a little context on facial recognition at Facebook. First of all, the statement about “people who opted in to” Face Recognition is misleading. At its launch in 2010, everyone was included in facial recognition automatically. The option to turn it off was buried deep in privacy settings.
This feature was hugely effective in promoting interaction. You’d post a photo with somebody else in it. The facial recognition would spot the other person and tell them they’d been tagged in a photo. They’d go look at the photo and interact with the post. According to the Washington Post, “Early Facebook employees have said in interviews with The Post and other outlets that photo-tagging was one of the greatest ‘growth hacks’ Facebook engineers had ever developed, because it was hard for users to resist notifications that they were showing up in other people’s pictures.”
Years later, they made opting out easier. And in 2019, nine years after the feature debuted, they finally required people who wanted to continue using the feature to opt in. By then, the algorithm had done its job of training people to accept the idea that Facebook recognized their faces and that when you were recognized, the appropriate thing to do was to go look at the photo you were tagged in.
Making this change required careful consideration, because we have seen a number of places where face recognition can be highly valued by people using platforms. For example, our award-winning automatic alt text system, that uses advanced AI to generate descriptions of images for people who are blind and visually impaired, uses the Face Recognition system to tell them when they or one of their friends is in an image.
For many years, Facebook has also given people the option to be automatically notified when they appear in photos or videos posted by others, and provided recommendations for who to tag in photos. These features are also powered by the Face Recognition system which we are shutting down.
This is the usual Facebook spin. When called out on potentially harmful effects of an algorithm, Facebook always falls back on citing the positives. It's great that Facebook can help blind people know when their friends are in a photo. But the facial recognition system uses the faces of the 97% of people who aren't blind, and did so without permission for nine years. I didn't ask for my face to be tagged for the benefit of blind people. Did you?
Looking ahead, we still see facial recognition technology as a powerful tool, for example, for people needing to verify their identity, or to prevent fraud and impersonation. We believe facial recognition can help for products like these with privacy, transparency and control in place, so you decide if and how your face is used. We will continue working on these technologies and engaging outside experts.
Translation: We're keeping the code. We can turn it back on in any context at any time if we need to.
But the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole. There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.
This includes services that help people gain access to a locked account, verify their identity in financial products or unlock a personal device. These are places where facial recognition is both broadly valuable to people and socially acceptable, when deployed with care. While we will continue working on use cases like these, we will ensure people have transparency and control over whether they are automatically recognized.
In other words, we should trust Facebook to know what’s best for us in how it uses technology. This hasn’t worked out well in the past. The Cambridge Analytica scandal, to cite just one example, was a case where Facebook’s decisions regarding our data turned out to have negative consequences they didn’t anticipate — the harvesting of data on 87 million people by a private company. I don’t trust Facebook to make decisions about our data any longer. Do you?
But like most challenges involving complex social issues, we know the approach we’ve chosen involves some difficult tradeoffs. For example, the ability to tell a blind or visually impaired user that the person in a photo on their News Feed is their high school friend, or former colleague, is a valuable feature that makes our platforms more accessible. But it also depends on an underlying technology that attempts to evaluate the faces in a photo to match them with those kept in a database of people who opted-in. The changes we’re announcing today involve a company-wide move away from this kind of broad identification, and toward narrower forms of personal authentication.
Facial recognition can be particularly valuable when the technology operates privately on a person’s own devices. This method of on-device facial recognition, requiring no communication of face data with an external server, is most commonly deployed today in the systems used to unlock smartphones.
We believe this has the potential to enable positive use cases in the future that maintain privacy, control and transparency, and it’s an approach we’ll continue to explore as we consider how our future computing platforms and devices can best serve people’s needs. For potential future applications of technologies like this, we’ll continue to be public about intended use, how people can have control over these systems and their personal data, and how we’re living up to our responsible innovation framework.
This sounds a lot like a company that still has warm and enthusiastic feelings about face recognition, and is reluctantly backing away from it in specific cases for PR purposes. Facebook is not completely walking away from face recognition. It’s just turning it off in one specific use case that turned out to set off alarm bells for privacy activists, since it affected billions of people.
Ending the use of our existing Face Recognition system means the services it enables will be removed over the coming weeks, as will the setting allowing people to opt into the system.
This will lead to a number of changes:
* Our technology will no longer automatically recognize if people’s faces appear in Memories, photos or videos.
* People will no longer be able to turn on face recognition for suggested tagging or see a suggested tag with their name in photos and videos they may appear in. We’ll still encourage people to tag posts manually, to help you and your friends know who is in a photo or video.
* This change will also impact Automatic Alt Text (AAT), a technology used to create image descriptions for people who are blind or visually impaired. AAT currently identifies people in about 4% of photos. After the change, AAT will still be able to recognize how many people are in a photo, but will no longer attempt to identify who each person is using facial recognition. Otherwise, AAT will continue to function normally, and we’ll work closely with the blind and visually impaired community on technologies to continually improve AAT. You can learn more about what these changes mean for people who use AAT on the Facebook Accessibility page.
* If you have opted into our Face Recognition setting, we will delete the template used to identify you. If you have the face recognition setting turned off, there is no template to delete and there will be no change.
What does “delete the template used to identify you” mean?
Facebook feels about deleting data the way you feel about throwing dollars or euros into a trash can. Data is valuable. And you never know when you might need it.
I believe that Facebook will remove those templates from its active storage. But the idea that they’re gone forever — that makes me skeptical. Do you think they’ll back up this data before deleting it?
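To make the distinction concrete: in most face recognition systems, a "template" is just an embedding vector that a trained model computes from a face image, and matching is a similarity comparison between vectors. The sketch below is illustrative only; the names, values, and threshold are hypothetical, not Facebook's actual format. The point it demonstrates is that deleting a stored template does not delete the model or the source photos, either of which could regenerate an equivalent template later.

```python
import math

# Hypothetical "facial recognition templates": embedding vectors a neural
# network produced from face images. Values here are made up for illustration.
alice_template = [0.12, -0.45, 0.88, 0.31]   # stored template for a user
query_template = [0.10, -0.44, 0.90, 0.29]   # embedding of a newly uploaded photo

def cosine_similarity(a, b):
    """Compare two templates; values near 1.0 mean 'likely the same face'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

MATCH_THRESHOLD = 0.9  # illustrative cutoff, not a real system's setting
is_match = cosine_similarity(alice_template, query_template) > MATCH_THRESHOLD

# "Deleting the template" removes the stored vector (alice_template). But as
# long as the model and the tagged photos still exist, a fresh, functionally
# equivalent template can be computed on demand.
```

This is why the deletion promise is weaker than it sounds: the stored vectors are the cheap, reproducible output of the system, not the system itself.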
Every new technology brings with it potential for both benefit and concern, and we want to find the right balance. In the case of facial recognition, its long-term role in society needs to be debated in the open, and among those who will be most impacted by it. We will continue engaging in that conversation and working with the civil society groups and regulators who are leading this discussion.
This is more mealy-mouthed “let us know what we should do” waffling. Facebook already spent a decade using this technology to generate profits, without considering what the broader implications were. But now, once they’ve wrung most of the value out of it and facial recognition is coming under fire, they’re backing away. It’s a tiny and transparently self-serving step in a positive direction.
Here’s what this statement doesn’t say.
- What happens to the facial recognition code? It’s not going away. They can turn it back on at any moment in any application for any reason, or for no reason at all.
- What happens to the facial recognition templates that were deleted? For example, if law enforcement issued a subpoena for them for use in a criminal case, would Facebook comply? Could they bring back the data from a backup?
- If the algorithm still exists and the base data (photos that were already tagged) still exists, could Facebook re-create this feature in the future? Will they promise not to do that?
Some companies deserve the benefit of the doubt, because they’ve proven themselves to be responsible stewards of our data.
Facebook/Meta is not one of those companies.