AI algorithms are biased because reality is biased. But the algorithms can be fixed.

The Apple Card, backed by Goldman Sachs and managed by algorithms, is apparently sexist. It’s giving men higher credit limits than women. The problem is in the “training set” of data — and it’s a fixable problem.

In a Twitter thread that has caught fire, Silicon Valley entrepreneur David Heinemeier Hansson explains that even though he and his wife file joint tax returns, share their finances, and have similar credit scores, his Apple Card credit limit is 20 times hers. Customer service reps blamed the problem on "the algorithm" and said they couldn't fix it (until someone actually fixed it by changing a setting manually). Apple cofounder Steve Wozniak has chimed in with a similar story.

Does the Apple Card discriminate against women? Here’s the response from Goldman Sachs.

We wanted to address some recent questions regarding the Apple Card decision process.

With Apple Card, your account is individual to you; your credit line is yours and you establish your own direct credit history. Customers do not share a credit line under the account of a family member or another person by getting a supplemental card.

As with any other individual credit card, your application is evaluated independently. We look at an individual’s income and an individual’s creditworthiness, which includes factors like personal credit scores, how much debt you have, and how that debt has been managed. Based on these factors, it is possible for two family members to receive significantly different credit decisions.

In all cases, we have not and will not make decisions based on factors like gender.

Finally, we hear frequently from our customers that they would like to share their Apple Card with other members of their families. We are looking to enable this in the future.

– Andrew Williams, Goldman Sachs Spokesperson

This is basically “It’s the algorithm’s fault,” with a small helping of “We designed it this way” on the side.

Forrester's privacy expert Fatemeh Khatibloo had what I consider to be the best potential explanation for what's happening here: the algorithm may be relying on variables that aren't gender, but that track gender closely.

In other words, Goldman is technically correct when it says that its algorithm is not discriminating explicitly on gender, but it may still be discriminating on the basis of other factors that are highly correlated to gender. There is not necessarily bad intent here, but the result is the same: women get a worse deal than men do.

Fixing sexist algorithms

This is far from the first example of algorithmic bias. Bias in hiring, for example, is a particular problem. Miranda Bogen of Upturn shared a study showing that Facebook delivered ads for supermarket cashier jobs to an audience that was 85% women, and ads for taxi company jobs to an audience that was 75% black.

Why does this stuff happen? Is tech evil?

No, tech is just not as careful as it ought to be.

Any AI-based algorithm — and these days that includes a vast number of algorithms at work in the real world for targeting and scoring — operates based on a "training set." The training set is a corpus of data, often hundreds of thousands or millions of records, that includes hundreds of variables along with the correct "answer" for each record — whether the candidate got hired, or how creditworthy the consumer actually was. The AI uses plentiful computing power to conduct what is essentially a massive correlation exercise. At the end, what pops out is an algorithm — an efficient way to take any future record and calculate what the answer should be.
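To make that concrete, here's a toy sketch of what "learning from a training set" looks like. Everything in it (the columns, the numbers, the choice of a simple logistic regression) is invented for illustration; nobody outside Goldman knows what its actual model looks like.

```python
# Toy illustration of training a scoring model from historical records.
# All data, column names, and thresholds are made up; real credit models
# are far more complex than a logistic regression.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# The "training set": each record has input variables plus the historical
# "answer" -- here, whether a high credit limit was granted in the past.
training_set = pd.DataFrame({
    "income_k":       rng.normal(70, 20, n),   # income in $ thousands
    "debt_k":         rng.normal(20, 10, n),   # debt in $ thousands
    "credit_score":   rng.normal(700, 50, n),
    "own_phone_plan": rng.integers(0, 2, n),
})
training_set["high_limit"] = (
    training_set["credit_score"] + 40 * training_set["own_phone_plan"] > 720
).astype(int)

# The "massive correlation exercise": fit a model mapping inputs to answers.
features = ["income_k", "debt_k", "credit_score", "own_phone_plan"]
model = LogisticRegression(max_iter=1000)
model.fit(training_set[features], training_set["high_limit"])

# What pops out is an algorithm that can score any future applicant.
applicant = pd.DataFrame(
    [{"income_k": 65, "debt_k": 15, "credit_score": 690, "own_phone_plan": 1}]
)
print("P(high limit):", model.predict_proba(applicant[features])[0, 1])
```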

The problem is bias, all right — bias in the real world. If fewer black people are getting hired, then the algorithm is going to find variables that correlate with race and use them to generate a "don't hire" signal, even if "race" is not a variable in the data set. As Khatibloo has suggested, Goldman's algorithm may be identifying variables like having one's own phone plan or holding the title to one's house and using them to set a credit limit, even though those variables themselves skew heavily by gender.
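Here's the same idea in a synthetic example. Gender is deliberately left out of the model's inputs, but one input is strongly correlated with gender in this fake population, so the model's approvals still split along gender lines. All of the variable names and numbers below are made up.

```python
# Synthetic demonstration: exclude the protected attribute, keep a proxy,
# and the disparity survives. Every number here is invented.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

gender = rng.integers(0, 2, n)            # 0 = woman, 1 = man (synthetic)
# A proxy correlated with gender, e.g. "is the sole name on the phone plan"
proxy = (rng.random(n) < np.where(gender == 1, 0.8, 0.3)).astype(int)
income_k = rng.normal(70, 20, n)          # income in $ thousands

# The historical outcome was driven partly by the proxy (and thus by gender).
approved = (income_k / 100 + 0.5 * proxy + rng.normal(0, 0.2, n) > 0.9).astype(int)

# Train WITHOUT gender as an input -- "we do not use gender."
X = pd.DataFrame({"income_k": income_k, "proxy": proxy})
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

# Yet the approval rate still differs sharply by gender.
print(pd.Series(pred).groupby(pd.Series(gender)).mean())   # 0 = women, 1 = men
```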

The real world is biased: it favors men and works against women and many minorities, and it may be biased in different ways for other groups. There are probably proxy variables that correlate with being Jewish or Asian and that influence credit decisions, college admissions decisions, or hiring decisions.

We can work to address the bias in the real world. But must we replicate it in the algorithms that are increasingly determining our futures?

AI algorithms are just now starting to have this level of vast influence, and denials like Goldman's are valueless. "The algorithm" is not some god to whose caprices we must bow obediently. It's a creation of humans. And humans must fix it.

The responsibility for checking algorithms for sexist, racist, ageist, anti-LGBT, and other biases falls on those who create them. Every algorithm needs an audit. It shouldn't be launched into the world until the biases are identified and the algorithm is tweaked to reduce or eliminate them. And once it is introduced, the work is not over. Algorithms must be documented, not black boxes. They must be subject to review by public watchdogs. And the keepers of the algorithms must respond to public criticism.
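What could one piece of that audit look like? Here is a minimal sketch of a single screening check: compare the rate of favorable decisions across groups and flag large gaps, in the spirit of the "four-fifths rule" from US employment-discrimination guidance. It's one crude metric among many, and passing it doesn't prove an algorithm is fair.

```python
# One simple audit screen: compare favorable-decision rates across groups.
# The 0.8 cutoff echoes the "four-fifths rule" used in US employment
# guidance; it's a rough flag for review, not proof of bias or fairness.
from collections import defaultdict

def disparate_impact(decisions, groups, favorable=1, threshold=0.8):
    """Return per-group favorable rates, the min/max ratio, and a pass flag."""
    counts = defaultdict(lambda: [0, 0])          # group -> [favorable, total]
    for d, g in zip(decisions, groups):
        counts[g][0] += int(d == favorable)
        counts[g][1] += 1
    rates = {g: fav / total for g, (fav, total) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio, ratio >= threshold

# Example with made-up decisions for two groups:
decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups    = ["m", "m", "m", "m", "m", "w", "w", "w", "w", "w"]
rates, ratio, ok = disparate_impact(decisions, groups)
print(rates, round(ratio, 2), "OK" if ok else "FLAG FOR REVIEW")
```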

Algorithms are living, breathing things. They can get less prejudiced, even if reality has biases. And once they do, they may generate actual improvements in the biased world we live in.
