One such truth is symbolised by the global #BlackLivesMatter movement, which has once again highlighted the embedded biases in our interconnected social fabric, forcing us all to re-evaluate long-standing notions of morality, fairness and ethics.
It is worth pausing to consider whether exponential technological progress is also amplifying some of the very same challenges we are trying to overcome as a global society.
As we strive to meet the needs of customers, we continuously look towards technology. We see leading companies globally investing heavily in technologies such as cloud computing, the internet of things, advanced analytics, edge computing, virtual and augmented reality, 3D printing and, of course, artificial intelligence. And it is AI that many experts tout as one of the most transformational technologies of our time in terms of its sheer impact on humanity.
Global use of AI has ballooned by 270% over the past five years, with estimated revenues of more than $118 billion by 2025. AI-powered technology solutions have become so pervasive that a recent Gallup poll found nearly nine in ten Americans use AI-based solutions in their everyday lives.
And yet, a darker side of AI is surfacing with alarming frequency as it ingrains itself in our daily lives.
Bias in the machine
There is no shortage of examples of algorithms displaying bias.
In 2018, reports emerged of Gmail’s predictive text tool automatically assuming an “investor” was male. When a research scientist typed “I am meeting an investor next week”, Gmail’s Smart Compose tool suggested the follow-up question: “Do you want to meet him?”
That same year, Amazon had to decommission its AI-powered talent acquisition system after it appeared to favour male candidates. The software seemingly downgraded female candidates if their resumes included phrases with the word “women’s” in them, for example “women’s hockey club captain.”
Many of the large tech firms battle with diversity, with men far better represented than women at most of them. Having gender bias embedded in algorithms designed to support the hiring process presents a significant risk to efforts to achieve greater diversity: Mercer’s Global Talent Trends report for 2019 highlights that 88% of companies globally already use AI-powered solutions in some form for HR.
Persecuted by an algorithm
Errant algorithms can be responsible for greater harm than just a few missed employment opportunities.
In June 2020, the New York Times reported on an African American man wrongfully arrested on the strength of a flawed match from a facial recognition algorithm.
Recent MIT studies found that facial recognition software, used by US police departments for decades, works relatively well on some demographics but is far less effective on others, mainly due to a lack of diversity in the data the developers used to train these algorithms.
Microsoft and Amazon have halted sales of their facial recognition software to police until its impact on vulnerable and minority communities in particular is better understood and mitigated. IBM has gone even further, halting the offering, development and research of facial recognition technology altogether.
How bias enters our algorithms
McKinsey supports the view that the underlying data, more than the algorithm itself, is the main culprit in perpetuating bias. In a 2019 paper, the firm argued that algorithms trained on data containing human decisions have a natural tendency toward bias. News articles, for example, can instil the common gender stereotypes found in society simply through the language they use.
Many of the early algorithms were also trained using web data, which is often rife with our raw, unfiltered thoughts and prejudices. A person commenting anonymously on an online forum arguably has more freedom to display prejudices without much consequence. Any algorithm trained on this data is likely to assimilate the embedded biases.
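To make this concrete, consider a minimal sketch in Python using the open-source gensim library and publicly available GloVe word vectors, which were trained on large web and news corpora. The exact output depends on the model version, but analogy probes of this kind frequently surface the gendered occupation associations the vectors absorbed from the text they were trained on.

```python
import gensim.downloader as api

# Pretrained GloVe word vectors learned from large web/news corpora
# (a sizeable download on first use).
vectors = api.load("glove-wiki-gigaword-100")

# A classic analogy probe: which words sit near "doctor" once we
# shift the vector from "man" towards "woman"? Stereotyped terms
# often rank highly, reflecting associations in the training text
# rather than any deliberate design choice.
for word, score in vectors.most_similar(
        positive=["doctor", "woman"], negative=["man"], topn=5):
    print(f"{word}: {score:.2f}")
```

No developer wrote a rule linking occupations to gender; the association rides in with the data.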
As Princeton researcher Olga Russakovsky observes: “Debiasing humans is a lot harder than debiasing AI systems.”
One example of this is Microsoft’s well-intentioned experiment with its chatbot, Tay. Tay was plugged directly into Twitter, where users across the world could interact with it. Users of the popular social media platform promptly got to work teaching the bot racist and misogynistic phrases. Within a day, the bot was praising Hitler, forcing Microsoft to pull the experiment.
The lesson: algorithms learn precisely what you teach them, consciously or unconsciously. And because algorithms learn from data, data matters.
Web data is also not fairly representative of society at large: limited access to connectivity and the cost of smartphones and data can exclude many – especially minorities – from engaging with online content. Data collected from the web is therefore naturally skewed towards the demographics that use websites and social media the most.
Combating bias in our AI solutions
One of the biggest challenges for the creators of AI algorithms trying to eliminate bias, besides merely identifying it, is knowing what should replace it. If fairness is the opposite of bias, how do you define fairness?
Princeton computer scientist Arvind Narayanan argues there are at least 21 different definitions of fairness. The problem this creates is that one person’s fairness could be another’s discrimination.
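To see how two reasonable definitions can pull in opposite directions, consider the toy Python sketch below (the numbers are hypothetical, purely for illustration). Demographic parity asks whether each group receives positive predictions at the same rate; equal opportunity asks whether truly qualified candidates in each group are recognised at the same rate. The same predictions can satisfy one definition while violating the other.

```python
import numpy as np

# Hypothetical outcomes for two groups: 1 = qualified / shortlisted
y_true_a = np.array([1, 1, 0, 0]); y_pred_a = np.array([1, 1, 0, 0])
y_true_b = np.array([1, 1, 1, 0]); y_pred_b = np.array([1, 0, 0, 1])

# Demographic parity: equal rate of positive predictions per group
print(y_pred_a.mean(), y_pred_b.mean())   # 0.5 vs 0.5 -> looks "fair"

# Equal opportunity: equal true-positive rate per group
tpr_a = y_pred_a[y_true_a == 1].mean()    # 1.0
tpr_b = y_pred_b[y_true_b == 1].mean()    # ~0.33: qualified group B
print(tpr_a, tpr_b)                       # candidates are missed far
                                          # more often
```

By the first definition the model treats both groups identically; by the second, it discriminates against group B.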
There is arguably a need for greater diversity in the development rooms where AI algorithms are created. A cursory glance at the demographics of the big tech firms shows disproportionate gender and racial imbalances. More must be done to bring diverse and inclusive perspectives into the AI creation process, so that the algorithms and the data they are trained on reflect a broad range of society, allowing them to drive better outcomes for everyone represented in it.
What can we do to mitigate bias in the AI solutions we increasingly use to make potentially life-changing decisions, such as whether to arrest or hire someone? Greater awareness of bias can help developers see the context in which AI could amplify embedded bias and guide them to put corrective measures in place. Testing should also be designed with bias in mind: AI creators should deliberately build processes and practices that test for and correct bias.
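As a sketch of what such a practice could look like, the hypothetical check below treats a bias metric like any other automated test: if the gap in positive-prediction rates between groups exceeds a threshold the team has agreed on, the test fails and the model is flagged for review before release. The threshold, data and names here are illustrative assumptions, not an established standard.

```python
import numpy as np

THRESHOLD = 0.10  # hypothetical tolerance agreed on by the team

def positive_rate_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def test_screening_model_group_gap():
    # Stand-in for the model's predictions on a held-out audit set
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
    group = np.array(["A"] * 5 + ["B"] * 5)
    gap = positive_rate_gap(y_pred, group)
    assert gap < THRESHOLD, f"group gap {gap:.2f} exceeds {THRESHOLD}"

try:
    test_screening_model_group_gap()
except AssertionError as failure:
    print(f"Bias check failed: {failure}")  # gap 0.40 exceeds 0.1
```

A check like this will not catch every form of bias, but it makes one definition of fairness explicit, measurable and hard to ship past unnoticed.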
Finally, AI firms need to invest in bias research, partner with disciplines far beyond technology, such as psychology and philosophy, and share their findings broadly to ensure the algorithms we use can operate alongside humans in a responsible and helpful manner.
Fixing bias is not something we can do overnight. It’s a process, just like addressing discrimination in any other part of society. However, with greater awareness and a purposeful approach to combating bias, the creators of AI algorithms have a hugely influential role to play in helping establish a fairer and more just society for everyone.
This could be one silver lining in the ominous cloud that is 2020.