Can AI’s Racial & Gender Bias Problem Be Solved?

October 17, 2019 - 7 minutes read

Artificial intelligence (AI) algorithms are complex programs that learn from the training data they are given. But when that training data is flawed, incomplete, or biased, the algorithm quickly learns to discriminate too.

For women and minorities, these systemic AI issues can quickly become harmful.

Where Does Bias Come From?

Bias in AI algorithms doesn’t only come from problems in the training data. Dig deeper, and it becomes apparent that bias often comes from how an AI developer frames a scenario or problem. That framing, in turn, shapes what data gets collected and how it is used.

Perhaps the AI developer left a subset of the population out of their study. Or maybe the bias crept in while humans were manually labeling and formatting the data for the AI system to consume. Either way, by this point, the bias is likely to compound with repeated training over time and eventually compromise the integrity of the AI system.

Inside an AI system, multiple algorithmic layers are applied in sequence to analyze an input and calculate a “probability vector.” This grouping of numbers gives humans a more understandable way to interpret the AI’s analysis. For example, the probability vector might read: “95% confident the object is a human, 25% confident the object is a fruit.”
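As a rough illustration, here is a minimal Python sketch (with made-up labels and raw scores, not any real system’s pipeline) of how a model’s final-layer scores can be turned into the kind of per-label confidences described above:

```python
import numpy as np

def sigmoid(x):
    """Squash a raw score into a 0-1 confidence value."""
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical raw scores from the final layer of an image model,
# one score per candidate label (labels and values are invented).
class_names = ["human", "fruit"]
raw_scores = np.array([2.9, -1.1])

# The "probability vector": one confidence value per label.
confidences = sigmoid(raw_scores)
for name, confidence in zip(class_names, confidences):
    print(f"{confidence:.0%} confident the object is a {name}")
```

Running this prints confidences close to the example above (about 95% for “human” and 25% for “fruit”), because each label gets its own independent confidence rather than a share of a single 100%.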

Many problems stem from the lack of women and minorities in the room when an algorithm is being whiteboarded and coded. In fact, women and minorities are missing in large numbers from the design, development, deployment, and regulation of AI. The hard numbers on women and minority representation at large tech companies underscore this.

At Facebook and Google, less than 2% of technical roles are filled by black employees. Across eight large tech companies, Bloomberg found that only 20% of technical roles are filled by women. And one government dataset of face images was roughly 75% male and 80% light-skinned, with less than 5% minority women. We know this to be grossly misrepresentative of the actual human population.
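One way development teams can catch this kind of skew before training is to audit a dataset’s demographic composition up front. The sketch below is a hypothetical Python example; the metadata fields and group labels are assumptions, since real datasets annotate these attributes in their own ways:

```python
from collections import Counter

# Hypothetical metadata for a face dataset; a real audit would load
# these attributes from the dataset's annotation files.
records = [
    {"gender": "male", "skin_tone": "lighter"},
    {"gender": "male", "skin_tone": "lighter"},
    {"gender": "male", "skin_tone": "darker"},
    {"gender": "female", "skin_tone": "lighter"},
    # ... thousands more rows in a real dataset
]

def composition(records, field):
    """Report each group's share of the dataset for one metadata field."""
    counts = Counter(record[field] for record in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

print(composition(records, "gender"))     # {'male': 0.75, 'female': 0.25}
print(composition(records, "skin_tone"))  # {'lighter': 0.75, 'darker': 0.25}
```

A lopsided report like the one in the comments is exactly the warning sign that a dataset needs rebalancing before it is used for training.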

Fighting For Justice

The coded gaze, a term coined by Joy Buolamwini, is the bias in AI that can create discrimination or exclusion.

Even when humans double- and triple-check an algorithm’s analysis, this bias can slip through. Buolamwini, a minority woman and then a graduate student at MIT, discovered in 2015 that some facial recognition algorithms couldn’t recognize her face until she put on a white mask, because the software had been trained mostly on light-skinned men. Obviously, this is unacceptable for a technology that could be deployed around the world.

In 2016, Buolamwini founded the Algorithmic Justice League (AJL) to highlight algorithmic bias, provide a safe space to report it, and help developers build practices that avoid it. She had seen firsthand the impact of what she calls “exclusion overhead”: the ultimate cost of systems that don’t learn about the diversity of humanity.

Many large tech companies are still grappling with the same issues Buolamwini uncovered at MIT in 2015. Microsoft, Amazon, and IBM have all sold AI systems riddled with significant gender and racial biases.

For example, their gender classification algorithms consistently performed better on male faces than on female faces: error rates for light-skinned men were under 1%, while error rates for dark-skinned women reached 35%. Even famous dark-skinned women were not spared. Oprah, Serena Williams, and even Michelle Obama were misidentified as male by systems from Amazon, China’s Face++, and Microsoft.
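Audits like Buolamwini’s expose these gaps by disaggregating results: instead of reporting a single overall accuracy, the error rate is computed separately for each demographic subgroup. Here is a minimal Python sketch of that idea, using invented audit records rather than any vendor’s actual results:

```python
from collections import defaultdict

# Hypothetical audit records: each row pairs a subgroup label with
# whether the gender classifier got that face right.
results = [
    {"group": "lighter-skinned male", "correct": True},
    {"group": "lighter-skinned male", "correct": True},
    {"group": "darker-skinned female", "correct": False},
    {"group": "darker-skinned female", "correct": True},
    # ... many more rows in a real audit
]

def error_rates_by_group(results):
    """Fraction of misclassifications within each demographic subgroup."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for row in results:
        totals[row["group"]] += 1
        if not row["correct"]:
            errors[row["group"]] += 1
    return {group: errors[group] / totals[group] for group in totals}

print(error_rates_by_group(results))
# e.g. {'lighter-skinned male': 0.0, 'darker-skinned female': 0.5}
```

If the gap between subgroups is large, as it was in the studies described above, a single headline accuracy number is hiding real harm.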

Making Necessary Changes

With media coverage of AI algorithm failures that spread false information, sexism, racism, and predatory advertising, there is more attention on algorithmic bias than ever before. The ACLU (American Civil Liberties Union), the AJL, and numerous computer vision experts have pressured AI developers to find and fix the racial and gender biases in their facial recognition and analysis technologies.

Until these issues are properly addressed, it’s not absurd to argue that AI use in business and government applications should be halted. Law enforcement, for example, should not use facial recognition until the technology shows failure rates for minority women that are as low as those for light-skinned men.

Developing AI from the get-go with oversight, regulation, and care can greatly reduce bias. This is especially important when lives are on the line, as in AI applications for law enforcement. As Buolamwini points out, AI bias most adversely affects those who cannot be in the room when the AI is being developed. It’s time to include those populations on AI teams everywhere, especially at the large tech companies that so often control the path of AI innovation.

Buolamwini also recently launched the Safe Face Pledge to stop abusive and lethal uses of facial recognition and analysis algorithms. The pledge asks companies who sign it to “Show value for human life, dignity, and rights; Address harmful bias; Facilitate transparency; Embed commitments into business practices.”

It’s Getting Better

The fight has only just started, and it’s already having a real impact on AI developers and the algorithms they create. Of course, developers alone won’t be enough; technology experts, the media, researchers, and legislators will have to work together closely for many years to improve AI outcomes for all of humanity.

While accuracy is a major factor in any AI algorithm, so are ethics, inclusivity, and respect for humanity’s diversity. By addressing these facets while developing algorithms, we will gradually reduce the exclusion overhead, the coded gaze, and the marginalization of minority groups.
