Humans Are the Cause and the Cure for AI Bias
It was 1968 when Stanley Kubrick’s film 2001: A Space Odyssey introduced the world to the concept of artificial intelligence through the movie’s star, and villain, the supercomputer HAL 9000.
In his review, Roger Ebert wrote that those who stayed to the end knew they had seen one of the greatest films ever made. But he also admitted, “Only when the astronauts fear that HAL's programming has failed does a level of suspense emerge; their challenge is somehow to get around HAL, which has been programmed to believe, ‘this mission is too important for me to allow you to jeopardize it.’”
From this earliest portrayal of AI in popular culture, the warning has been about the software code behind it. At the time, the consequences were pure Hollywood imagination. Today, however, AI algorithms affect us in ways large and small. AI powers real-time traffic updates and directions, rideshare services, and the recommendations on our favorite online shopping sites. But the true impact of AI goes far beyond these use cases.
Every day, artificial intelligence is being used to make life-altering decisions, like the early detection and diagnosis of disease. Identifying strokes? There’s an FDA-approved app for that. This is not science fiction, but something you can download right now from the App Store.
The impact of AI extends to our financial lives as well. According to Deloitte, “AI is changing how financial institutions get and keep customers. Even as it commoditizes traditional points of differentiation, AI offers the opportunity for significant market innovation. The one certainty is that firms must adapt their products and services for the day when AI automates customers’ financial lives—or much of it, anyway—and improves their financial outcomes.”
Just as with HAL 9000 in 1968, the software powering the AI makes all the difference in the outcomes. The problem for us in 2020 is that while the use of AI is growing here on Earth, the quality of the data it uses to make life-altering decisions varies greatly depending on the AI and its use case.
Improving AI
It is in everybody’s interest to improve AI outcomes going forward, but how? For many, that would start with a robust regulatory regime. In January of this year – seemingly many lifetimes ago given the COVID-19 pandemic – Alphabet CEO Sundar Pichai wrote an op-ed in the Financial Times on the need for AI regulation. The issue seemed set to become one of the year’s top tech topics until the coronavirus struck.
Despite being sidetracked by a global pandemic and the subsequent economic crisis, AI regulation remains a serious issue that deserves our attention and a multi-dimensional response. Regulation alone, however, is never enough. The reason is human nature, and it is also why we must look for bias in AI early and throughout the development of the underlying software algorithms.
Whether we try to avoid it or not, bias is generally built into every AI engine because bias is generally built into every human, and humans, of course, are responsible for building the AI. So it stands to reason that AI can only ever be as good as the humans who program it. Before we can address AI bias, we must first understand why bias occurs and where it is most commonly injected into algorithms. From there, we can prioritize identifying and removing it.
Shift-Left Mentality
For years, application security professionals have urged developers to “shift left” and incorporate security earlier in the development process, spotting issues quickly and correcting them before vulnerable software is deployed. The same principle applies to AI development. An AI outcome is only as good as its underlying data: if the foundational data is flawed or biased, the outcomes will be flawed or biased as well. It is difficult for algorithms to unlearn patterns, so biases must be kept out of the model from the earliest phases of implementation.
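To make that concrete, here is a minimal sketch in Python of what a shift-left data check might look like before any training begins. The record format, the group field, and the 25 percent tolerance are illustrative assumptions, not a standard; the point is simply that the check fails the pipeline early, long before a biased model could ship.

```python
from collections import Counter

def check_representation(samples, group_key, tolerance=0.25):
    """Flag groups that are badly under-represented in the training data.

    A group is flagged when its share of the data falls more than
    `tolerance` below an even split across all observed groups.
    """
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    floor = (1 / len(counts)) * (1 - tolerance)
    return {g: n / total for g, n in counts.items() if n / total < floor}

# Illustrative records; a real training set would carry far more fields.
training_data = (
    [{"group": "A", "label": 1}] * 70
    + [{"group": "B", "label": 1}] * 20
    + [{"group": "C", "label": 1}] * 10
)

skewed = check_representation(training_data, "group")
if skewed:
    # Fail the pipeline before a model ever learns from skewed data.
    raise SystemExit(f"Under-represented groups: {skewed}")
```

A real project would run a check like this in continuous integration, alongside the security scans that the shift-left movement has already made routine.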
As Kubrick warned in 1968, it is what is under the hood of the AI that matters.
Removing Bias in AI
As in life, the origins of bias are often subtle and difficult to detect. They can come from all directions, from race and gender to education and income. Bias seeps into AI through this mix of conscious and unconscious biases that we as human beings all struggle with. More regulations cannot fix a problem that is so inherently human. Only humans can.
Here’s how: minimize the impact of bias in the AI model through testing and training from early in the development cycle. As with application security, the earlier this testing is done by a large, diverse team of developers and testers, the greater the chance of identifying and eliminating biases in AI. After all, people are the source of bias in AI, but they are also the secret weapon for finding and rooting it out.
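As one hedged illustration of such a test, the sketch below applies a simple demographic-parity check inspired by the “four-fifths rule” from US employment law: the lowest group’s rate of positive outcomes should be at least 80 percent of the highest group’s. The record format and the 0.8 threshold are assumptions chosen for the example; real-world bias testing would pick metrics suited to the specific use case.

```python
def selection_rates(records):
    """Rate of positive model outcomes observed for each group."""
    rates = {}
    for group in sorted({r["group"] for r in records}):
        members = [r for r in records if r["group"] == group]
        rates[group] = sum(r["predicted"] for r in members) / len(members)
    return rates

def passes_four_fifths(records, threshold=0.8):
    """The lowest group's selection rate should be at least 80%
    of the highest group's rate; otherwise, flag for review."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values()) >= threshold, rates

# Hypothetical predictions collected during early-cycle testing.
results = (
    [{"group": "A", "predicted": 1}] * 60 + [{"group": "A", "predicted": 0}] * 40
    + [{"group": "B", "predicted": 1}] * 30 + [{"group": "B", "predicted": 0}] * 70
)

ok, rates = passes_four_fifths(results)
print(rates)  # {'A': 0.6, 'B': 0.3} -> ratio 0.5, so the check fails
if not ok:
    print("Bias check failed: investigate before this model ships.")
```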
The Wisdom of Crowds
Identifying bias cannot be done in a static QA lab environment. It takes a large and diverse set of testers to get appropriate representation within the data sets. By leveraging a vetted, crowdsourced testing model, you gain access to a broader mix of languages, races, genders, locations, cultures, hobbies and everything else that feeds into an AI algorithm.
A diverse mix of crowdsourced testers can provide continuous, real-time feedback that is then incorporated into the AI algorithm, enabling iterative improvements throughout the development process. These updates can then be retested across the crowdtesting community, ensuring that the adjustments are representative of the real world, and not perpetuating inequalities.
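A minimal sketch of how that retest loop might aggregate crowd feedback between iterations follows; the group labels, the report format, and the ten-point accuracy-gap tolerance are all illustrative assumptions for the example.

```python
from collections import defaultdict

def per_group_accuracy(feedback):
    """Model accuracy as reported by testers in each group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for report in feedback:
        totals[report["group"]] += 1
        hits[report["group"]] += report["correct"]
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical crowd reports: each tester flags whether the AI's
# output was correct for them.
crowd_feedback = (
    [{"group": "en-US", "correct": True}] * 90
    + [{"group": "en-US", "correct": False}] * 10
    + [{"group": "hi-IN", "correct": True}] * 60
    + [{"group": "hi-IN", "correct": False}] * 40
)

accuracy = per_group_accuracy(crowd_feedback)
gap = max(accuracy.values()) - min(accuracy.values())
print(accuracy, f"gap={gap:.2f}")
if gap > 0.10:
    # Retest after the next model update: the gap must shrink, not grow.
    print("Accuracy gap across groups exceeds tolerance; rework and retest.")
```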
Humans created AI, and it is humans who can fix AI. It starts by acknowledging the inherent conscious and unconscious biases that are present in all of us as imperfect human beings. Once we admit that these exist, we can set about putting processes in place to identify and remove biases whenever we see them within AI software.
It takes vigilance. It takes the wisdom and diversity of the crowd. While everything about AI has changed since 1968, especially its capabilities and consequences, our struggle to identify and remove biases from software, and society, continues.
About the Author
Kristin Simonini is the vice president of product at Applause, a crowdsourced digital testing platform vendor. She previously held product leadership roles with EdAssist, Brainshark, Deploy Solutions and Webhire. She earned a bachelor’s degree in communication studies from Northeastern University.