Tackling Human Bias in AI: Are You Building Lazy AI Applications?

Arvind Mehrotra
5 min read · Sep 5, 2022

As artificial intelligence (AI) becomes increasingly common in both consumer and B2B applications, the question of bias is a very real one. A 2022 IBM survey found that businesses are significantly concerned about AI bias and ethics, with most citing these concerns as a factor driving their purchase decisions. In addition, 75% of survey respondents believe ethical AI is a source of competitive differentiation.

79% of CEOs today are preparing to embed ethics and bias prevention into their AI practices, up from only 20% in 2018.

Yet, the conversation around AI bias continues to be limited to academia and the boardroom. Even as real-world instances of discrimination (such as Twitter’s racist cropping algorithm) emerge, the focus is more on combating bias after the fact than on figuring out how to implement bias-free AI from the outset.

What is AI Bias: Understanding the Systemic, Human, and Data Dimensions

The first step toward building ethical, bias-free AI is understanding what AI bias is. Artificial intelligence mimics human cognition, behaviour, and action, which means that, technically, it will recreate human foibles. Unfortunately, one of the biggest hindrances to our decision-making process is the presence of bias, and the complete list of cognitive biases is quite extensive.

For example, confirmation bias is our tendency to make decisions based on data that matches our preconceived notions. Recency bias is our general inclination to over-rely on the most recently received information.

These cognitive biases take on a different dimension when combined with social prejudices. As a result, human beings, and by association AI, begin to judge entire demographics by a sweeping set of rules. The need of the hour is instead to neutralise such bias, which is often reinforced by an isolated set of conditions.

Systemic AI bias refers to inequitable, unfair, and often incorrect decision-making by machine tools that arises from the context of their deployment. Data bias is the prejudice that creeps in when the datasets used to train the AI model are themselves skewed. Finally, human bias occurs when an AI user, trainer, or technical stakeholder brings their own presumptions and prejudices to the table.

The Risks Arising from Human Bias in AI

At the outset, one should recognise that bias will always be implicit in any decision-making system that relies on human cognition to learn. The human brain is powerful but subject to limitations. For example, humans develop bias as they classify and remember objects in the world around them in the easiest possible way; this is a cognitive bias.

A cognitive bias is a systematic error in thinking that occurs when people process and interpret information about the world around them, and it affects the decisions and judgments they make. Cognitive biases are often a result of the brain’s attempt to simplify information processing: it develops an anticipatory categorisation mechanism that helps distinguish good from bad, harmful from harmless, tasty from bland, and desirable from undesirable.

When this basic tendency meets a complex socio-political environment that does not suit rapid, anticipatory decision-making, bias hardens into dangerous stereotypes.

Similarly, AI bias leads to decisions and actions coloured by prejudice, which can cause irreparable damage to internal ethics and market reputation, on top of the commercial consequences of poor decision-making. For example, in 2020, Twitter publicly apologised for its racist photo-cropping algorithm, which overwhelmingly favoured Caucasian faces over the faces of darker-skinned people. The bias even extended to cartoon characters, pets, and stock models.

In 2021, Facebook apologised after a bias in its recommendation engine led the platform to ask users who had watched a video featuring Black men whether they wanted to keep seeing videos about “primates”.

Such examples of AI bias in the real world are unfortunately all too common, caused by systemic, human, and data challenges. Use cases involving facial recognition, for example, suffer from all three. Still, bias can impact any system processing large volumes of unique and personalised data, such as Amazon’s experimental AI recruiting tool that proved sexist.

Opportunities to Eliminate Bias for Fairer AI and Better Decision-Making

Fortunately, there are plenty of ways to tackle AI bias and improve decision-making, benefitting the company, AI users, and the global population in general. Twitter, for instance, has launched a bug bounty program to find algorithmic errors that would cause bias, mitigating some of its reputational damage.

Next is the idea of learning neural networks that can rewire the foundations upon which AI is built, i.e., the prior. Theoretically, one should be able to create a neural network system that proactively modifies the prior so that the AI bases its learning activity on fair assumptions. Another approach is bias regularisation, where a penalty term is applied to a layer’s bias parameters so that the model cannot lean on a fixed offset and, as a result, generalises better.
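As a minimal sketch of the bias-regularisation idea, assuming a TensorFlow/Keras setup (the layer sizes and the 0.01 penalty weight are illustrative, not from this article), a regularisation penalty can be attached to a layer’s bias vector. Note that this targets the statistical bias term of the model, which is distinct from, though often conflated with, social bias:

```python
# A minimal sketch of bias regularisation in Keras (TensorFlow).
# The architecture and the 0.01 penalty weight are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        64,
        activation="relu",
        # Penalise large bias values so the layer cannot lean on a fixed
        # offset in place of the input signal.
        bias_regularizer=tf.keras.regularizers.L2(0.01),
    ),
    tf.keras.layers.Dense(
        1,
        activation="sigmoid",
        bias_regularizer=tf.keras.regularizers.L2(0.01),
    ),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```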

Finally, companies must look at the human and data components of AI development. In terms of the former, appropriate training can help root out conscious and unconscious prejudices among those who build and handle AI regularly. By preventing bias at the early stages of conceptualisation, you can stop AI from scaling its effects, as happened when six healthcare AI algorithms affecting a population of 60–100 million prioritised the care of white patients over Black patients.
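As a minimal illustration of auditing the data component, here is a hypothetical disparity check that compares the rate of favourable model decisions across demographic groups. The column names and the four-fifths threshold are illustrative assumptions, not the method used in the healthcare case cited above:

```python
# A hypothetical disparity audit: compare the rate of favourable model
# decisions across demographic groups. Column names and the 0.8
# ("four-fifths") threshold are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted": [1,   1,   1,   0,   1,   0,   0,   0],  # 1 = favourable
})

# Favourable-decision rate per demographic group.
rates = df.groupby("group")["predicted"].mean()
print(rates)  # group A: 0.75, group B: 0.25

# Flag potential disparate impact if any group's rate falls below
# 80% of the highest group's rate.
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Potential disparate impact: ratio {ratio:.2f} < 0.80")
```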

Finally, let us examine some real-life problems:

· At an abstract level, there is no set definition of what constitutes fairness in many domains and industries. This absence can encourage either overly cautious or reckless behaviour.

· At the practice level, even established practices like segmenting training data don’t ensure that the chosen segment is free of bias. In this way, training data can infuse production systems with bias, as the sketch below shows.
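To illustrate the second point, here is a minimal sketch, assuming scikit-learn and a synthetic dataset, of how a naive train/test split can misrepresent a small demographic group, and why even stratified segmentation cannot repair a dataset that is biased at the source:

```python
# A minimal sketch of why segmentation alone does not remove bias:
# a naive split can misrepresent a small group, and stratifying fixes
# the proportions but not any bias already baked into the data.
# The group labels and sizes are synthetic, illustrative values.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 5))
group = rng.choice(["majority", "minority"], size=1000, p=[0.95, 0.05])

# Naive random split: the minority share in the test set can drift.
_, _, _, g_test = train_test_split(X, group, test_size=0.2, random_state=0)
print("naive minority share:", np.mean(g_test == "minority"))

# Stratified split preserves the 5% minority share exactly, yet it
# cannot correct labels or features that are biased at the source.
_, _, _, g_test = train_test_split(
    X, group, test_size=0.2, stratify=group, random_state=0
)
print("stratified minority share:", np.mean(g_test == "minority"))
```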

Ultimately, for bias mitigation to be effective, we have to get out of the mindset of “lazy AI”, that is, artificial intelligence that analyses a minimum viable (MVP) dataset to arrive at the fastest and most profitable decision. For instance, using thousands of parameters to assess each healthcare recipient can offset the impact of bias.

The more we train our neural networks on various datasets, the more accurately they can recognise patterns in an informed but unbiased manner.

Did you find this article interesting? Let me know in the comments below. You can also join the conversation by emailing me at Arvind@AM-PMAssociates.com.


Arvind Mehrotra

Board Advisor, Strategy, Culture Alignment and Technology Advisor