Why Meta’s decision to remove ad targeting categories won’t make much difference

Under intense heat due to whistleblower leaks, Meta has agreed to remove ad targeting categories like health, race, and political affiliation from its properties Facebook and Instagram. Here’s why that will do very little about the algorithmic decisions that drive extremism, partisanship, and mental health issues on these social networks — and won’t even slow smart advertisers from finding ways to target the same groups as before.

Why Meta made this change

It’s obvious why Meta made this change now — to attempt to telegraph that it’s listening and moving in the right direction, and because Apple’s app changes will soon be interfering with its ability to track people. But why this particular change?

Here’s part of the Meta announcement, written by Graham Mudd, VP of Product Marketing for ads:

We strongly believe that the best advertising experiences are personalized. They enable people to discover products and services from small businesses that may not have the ability to market them on broadcast television or other forms of media. They also enable non-profits, social causes and organizations to reach the people most likely to support and benefit from them, such as connecting people to fundraisers for charitable causes they care about.

At the same time, we want to better match people’s evolving expectations of how advertisers may reach them on our platform and address feedback from civil rights experts, policymakers and other stakeholders on the importance of preventing advertisers from abusing the targeting options we make available. Today, we’re announcing a difficult decision that balances these important considerations.

Starting January 19, 2022 we will remove Detailed Targeting options that relate to topics people may perceive as sensitive, such as options referencing causes, organizations, or public figures that relate to health, race or ethnicity, political affiliation, religion, or sexual orientation. Examples include:

Health causes (e.g., “Lung cancer awareness”, “World Diabetes Day”, “Chemotherapy”)

Sexual orientation (e.g., “same-sex marriage” and “LGBT culture”)

Religious practices and groups (e.g., “Catholic Church” and “Jewish holidays”)

Political beliefs, social issues, causes, organizations, and figures

It is important to note that the interest targeting options we are removing are not based on people’s physical characteristics or personal attributes, but instead on things like people’s interactions with content on our platform. However, we’ve heard concerns from experts that targeting options like these could be used in ways that lead to negative experiences for people in underrepresented groups. We routinely review, update and remove targeting options to simplify our ads system, provide more value for advertisers and people, and reduce the potential for abuse.

As Mike Isaac and Tiffany Hsu pointed out in their article on the announcement, “In the past, these features have been used to discriminate against people or to spam them with unwanted messaging.” But:

Before the Jan. 6 storming of the U.S. Capitol, for example, advertisers used targeting tools to direct promotions for body armor, gun holsters and rifle enhancements at far-right militia groups on Facebook. In 2020, auditors concluded that Facebook had not done enough to protect people who use its service from discriminatory posts and ads.

In 2019, the Department of Housing and Urban Development sued Facebook for allowing landlords and home sellers to unfairly restrict who could see ads for their properties on the platform based on characteristics like race, religion and national origin. And in 2017, ProPublica found that Facebook’s algorithms had generated ad categories for users interested in topics such as “Jew hater” and “how to burn jews.”

When asking why Meta/Facebook makes a change, always consider it not from the point of view of profit, but from the point of view of the algorithm. Then you can understand moves like this.

This decision has the potential to decrease Meta’s profit, because advertisers will miss out on capabilities that allow them to narrow their targets. For example, at least in theory, Republicans and Democrats will no longer be able to narrowly target only their own supporters.

But from the point of view of the algorithm, this move is a plus. It removes some regulatory heat. And the algorithm doesn’t care about ads — it cares about users and content. The algorithm is free to target unpaid content in exactly the same way as before. So the algorithm is indifferent to this change. It’s a win for Meta: the company looks like it’s doing something, but the move doesn’t touch the core of what makes Facebook and Instagram work.

What this change won’t fix

Do you really think that ads are what is driving extremism on Facebook?

Here are some reasons that they aren’t.

If people are congregating on Facebook to join groups that, say, glorify the Trumpian point of view, the algorithm will continue to recommend other similar groups and news sources to those people. Their narrow worldview, free from facts that would contradict their perspective, will continue just as before.

And the same is true for those congregating around fact-free Marxist groups, who will not see anything contradicting their point of view either.
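The recommendation dynamic described above can be sketched in a few lines. This is a hypothetical illustration, not Meta’s actual recommender: the group names, members, and the overlap-based similarity measure are all invented for the example. The point is that any recommender built on membership overlap will steer a user who joins one partisan group toward more of the same.

```python
# Hypothetical sketch of overlap-based group recommendation (invented
# data; NOT Meta's real algorithm). Groups whose membership overlaps
# the groups you already joined get recommended first.

def jaccard(a, b):
    """Similarity between two membership sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

# Invented groups and member IDs for illustration.
groups = {
    "militia_news":  {"u1", "u2", "u3", "u4"},
    "patriot_memes": {"u2", "u3", "u4", "u5"},
    "gardening":     {"u8", "u9"},
}

def recommend(user_groups, all_groups):
    """Recommend the unjoined group most similar to the ones already joined."""
    joined = set().union(*(all_groups[g] for g in user_groups))
    candidates = {name: jaccard(members, joined)
                  for name, members in all_groups.items()
                  if name not in user_groups}
    return max(candidates, key=candidates.get)

# A member of one partisan group gets pointed at the adjacent one,
# never at the unrelated gardening group.
print(recommend({"militia_news"}, groups))
```

Ad policy never enters this loop: the recommendation runs entirely on unpaid content and membership patterns.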

Anger continues to drive Facebook engagement. And as a result, angry posts — or those ridiculing people that you disagree with — will continue to spread the fastest in the Facebook algorithm. It will continue to be a nasty place.
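To make the anger-drives-engagement point concrete, here is a minimal sketch of an engagement-weighted feed ranker. The weights and posts are invented, and this is not a claim about Meta’s actual scoring formula; it simply shows how any ranker that rewards reactions and comments will surface the outrage-bait post first.

```python
# Illustrative engagement-weighted feed ranking (invented weights and
# data; NOT Meta's actual formula). Posts that provoke strong reactions
# and comments score highest, so angry content rises to the top.

def rank_feed(posts):
    """Order posts by a naive engagement score, highest first."""
    def score(post):
        # Angry reactions and comments are weighted most heavily here,
        # reflecting the idea that provocation drives engagement.
        return (3.0 * post["angry_reactions"]
                + 2.0 * post["comments"]
                + 1.5 * post["shares"]
                + 1.0 * post["likes"])
    return sorted(posts, key=score, reverse=True)

feed = [
    {"id": "calm",    "likes": 120, "shares": 5,  "comments": 10, "angry_reactions": 2},
    {"id": "outrage", "likes": 40,  "shares": 60, "comments": 90, "angry_reactions": 150},
]

# The outrage post wins despite having a third as many likes.
print([p["id"] for p in rank_feed(feed)])
```

Removing sensitive ad categories changes nothing in a loop like this, because ads aren’t part of the score.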

And this change won’t even fix the ad-targeting algorithms. If you want to reach Black people, you still can; you just have to figure out which non-racial categories are most likely to work as a proxy for them. The same applies to political progressives, gun owners, or people with kidney problems or depression.

Do you really think an advertiser, seeking to target people by a specific category, can’t figure out how to use proxies to identify people in that category to advertise to? Advertisers are pretty smart about data, in my experience. If they can hit 80% of their audience with a proxy, they’ll call it a win.
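The proxy math advertisers would run is simple. Below is an illustrative sketch with invented user IDs and segment names: evaluate each remaining non-sensitive interest segment by what share of the intended audience it reaches (coverage) and how much of the segment is actually in that audience (precision), then buy the segments that clear a threshold like the 80% mentioned above.

```python
# Illustrative proxy evaluation (all data invented): score non-sensitive
# interest segments as stand-ins for a removed sensitive category.

target_audience = set(range(1000))  # user IDs the advertiser wants to reach

# Hypothetical non-sensitive interest segments still available to buy.
proxy_segments = {
    "interest_A": set(range(850)) | set(range(1000, 1100)),  # broad proxy
    "interest_B": set(range(300)),                            # narrow proxy
}

def proxy_quality(segment, target):
    """Return (coverage, precision): share of the target the segment
    reaches, and share of the segment that is actually in the target."""
    hits = len(segment & target)
    return hits / len(target), hits / len(segment)

for name, segment in proxy_segments.items():
    coverage, precision = proxy_quality(segment, target_audience)
    print(f"{name}: coverage={coverage:.0%}, precision={precision:.0%}")
```

Here "interest_A" reaches 85% of the target with roughly 89% precision — comfortably past the 80% bar, even though the sensitive category itself is gone.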

So the truth is, Meta’s latest move won’t change how posts get shared and viewed, and it won’t put much of a roadblock in the way of advertisers who want to do targeting.

Its only real purpose is a PR whitewash. And based on how Facebook observers are reacting, it’s not even doing that very well.


3 Comments

  1. Josh, I think there’s another aspect or two to this change: laziness and maybe incompetence. For years, the ads shoved at me on Facebook were so far removed from any of my interests that I wondered what data the robots were using. After a few hours digging in carefully hidden and obfuscated settings, I saw that Facebook had literally hundreds of assumptions about me based on what appeared in my feed, but not necessarily what I responded to.

    In the interest of science, I disallowed all the permissions taken that I didn’t consciously grant to see if the ads would be any further off target. They were—but all that means is instead of being pitched garbage, I was offered sewage. Cambridge Analytica claimed to have between 4,000 and 5,000 data points on every user. If that’s true, then despite its reputation, Facebook has neither the intelligence nor the will to use that information effectively. It suggests the company promises targeting to its advertisers, then delivers their content indiscriminately.

  2. “what appeared in my feed, but not necessarily what I responded to.”

    One set of the data points collected is the amount of time spent with feed content — even just which posts are paused on longer than others.

    If you are using Facebook daily, then it gathers a pretty good idea what in your feed captures your attention.

    Another set of data points is what your friends (whose posts appear in your feed) are also engaging with.

    Another fact is that many advertisers use overly broad targets.

  3. “If you want to reach Black people, you still can, you just have to figure out what non-racial categories are most likely to work as a proxy for them.”

    Getting rid of the “proxies” as that linked article implies – sure. Should we get rid of location targeting too? After all, there are neighborhoods that are predominantly Black.

    The challenge is that such a broad approach then creates a problem for the Black-owned business targeting Black customers. It will cost them significantly more to get in front of their preferred audience, because their targeting becomes very diluted. Then we will hear how FB and other platforms discriminate against minority-owned businesses.

    So, we “fix” one problem but then create another.

    How should we solve for both?

    Few articles or posts on this topic ever offer something realistic.

    Often, like the problem with police abusing their power, we hear “Defund the Police”, which only throws the baby out with the bathwater.

    We all like to think that there are easy answers to problems when there aren’t.