Date Published: 23 April 2022

AI Bias Can Put Women at Risk!

Artificial Intelligence (AI) strives to mimic human activities, including human intelligence. Self-driving cars, virtual doctors, facial recognition, and Alexa are a few examples of how AI is transforming the way we live. As AI tries to mimic humans, it also has the potential to carry over human biases and discrimination. In some cases, AI amplifies these biases and can harm human lives. Such bias and discrimination are termed AI bias. We can define AI bias as a phenomenon where an algorithm produces systematically prejudiced results due to erroneous assumptions in the machine learning process or pre-existing bias in the data sets on which the machine is trained.

As AI becomes part of our daily lives, if we as a society leave AI bias unchecked, it can endanger human lives: people may be denied medical treatment or face discrimination based on colour, race, or gender. In recent years, several cases have shown that AI systems are often biased, especially against women. According to Dr Munro, Google and Amazon Web Services failed to recognise the word "hers" but quickly identified "his". In 2020, research by the US Government Accountability Office found that facial recognition algorithms can identify white men accurately but perform poorly when applied to women, children, the elderly, and people of colour.

So how does AI bias occur in these so-called intelligent machines? Bias can appear at different stages of the AI lifecycle: data acquisition, training data selection, algorithm design, and interpretation of the outputs and results. In simple terms, AI is a reflection of the human behaviour, experience and decisions encoded in the data organisations have collected over the years. When organisations use data that includes prejudiced human decisions or reflects historical social inequalities, including social, racial or gender bias, an AI algorithm trained on this data will show the same traits. Caroline Criado Perez, the author of Invisible Women, highlighted the issue in the design of seatbelts, headrests and airbags in cars, which are designed according to men's physique. Crash tests for such cars are carried out using male dummies, ignoring women's body structures, including breasts and pregnant bodies. As a result, women are 47% more likely than men to be seriously injured. If an AI algorithm evaluates a car design, it can be biased towards men's physique, and its decisions, based on inadequate data representation, could put many women's lives at risk. Flawed data is not the only source of AI bias, however: similar issues also creep in during the design of the algorithm itself.
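To make the mechanism concrete, here is a minimal, self-contained sketch (in Python, using entirely synthetic data, not any real system) of how under-representation of one group in the training data translates into unequal error rates, even though the model never sees a gender attribute:

```python
# Illustrative sketch: a model trained on data where one group is
# under-represented performs worse for that group. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group follows a slightly different pattern, so the model
    # must see enough examples of each group to serve both well.
    X = rng.normal(shift, 1.0, size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 1.5 * shift).astype(int)
    return X, y

# A "data gap": 900 samples from group A, only 100 from group B.
X_a, y_a = make_group(900, shift=0.0)
X_b, y_b = make_group(100, shift=1.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluated on fresh, balanced data, the under-represented group
# gets visibly lower accuracy -- the bias is inherited from the data.
for name, shift in [("well-represented group", 0.0),
                    ("under-represented group", 1.0)]:
    X_t, y_t = make_group(2000, shift)
    print(f"{name}: accuracy = {model.score(X_t, y_t):.2f}")
```

The point of the sketch is that nothing in the algorithm is explicitly discriminatory; the skew in the training sample alone is enough to produce unequal outcomes.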

Here are a few examples of existing risks for women due to biased AI:

  • Data Gap: The University of Melbourne found that recruiting algorithms display unintentional bias, preferring male candidates over women even when résumés are anonymised, due to a gap in the data. Insufficient data on women also has a significant health impact: a lack of data on pregnancy, menopause, and birth control methods can prevent women from getting the right advice. According to the Irish Heart Foundation, heart attacks and strokes are among the biggest killers of women in Ireland. Due to a lack of female data on cardiovascular disease, online symptom-checker apps often attribute a woman's pain in the left arm and back to depression, advising her to see a doctor within a couple of days. In contrast, a male user reporting the same symptoms is more likely to be told to contact his doctor immediately based on a diagnosis of a possible heart attack. This risks a health condition being missed or misinterpreted and poses a significant threat to women's lives.
  • One-Size-Fits-Men: Most technology is designed to fit men's needs. For example, voice recognition in smartphones does not recognise female voices as well as male voices. A 2016 study by Rachael Tatman found that Google's speech recognition software was 70% more likely to recognise male speech accurately than female speech. Apple launched its health-monitoring app with every kind of health tracker except a period tracker, and Siri could respond when you said you were having a heart attack but could not understand when you said you had been raped. These are a few examples of intelligent devices designed primarily with men in mind.
  • Job Risks: The International Monetary Fund projected that 11% more jobs held by women than by men are at risk of being eliminated by AI. Women are more often employed in process-based clerical and administrative positions, which makes them more vulnerable to job loss through automation than men.
  • Low Representation: A study by the Alan Turing Institute on women in data science and AI states that only 22% of data and AI professionals in the UK are women, and the figure drops to a mere 8% among researchers who contribute to the pre-eminent machine learning conferences. This low representation has social and economic consequences and could create a dangerous feedback loop that reinforces discriminatory outcomes.

So, who is accountable for bridging the gap? All stakeholders working in the AI field play a vital role in mitigating existing discrimination, in both approach and perspective. To address gender bias, the public and private sectors and individuals can contribute significantly and help ensure a fairer future for all segments of society. The process requires close partnership between all stakeholders, firm commitments, and legal protections. Some of the potential steps the three major stakeholders (governments, organisations, and individuals) can take are outlined below:

Government

  • Governments should develop AI regulations by bringing policymakers and technical experts together to create a code of conduct that mitigates AI bias and discrimination. The EU Artificial Intelligence Act (AIA) is a promising step in this direction.
  • Governments can set standards, provide guidance, and highlight practices to reduce algorithmic bias.

Organisation

The focus must be on two aspects: data and policy.

In Data

  • Ensure diversity in the training samples (e.g., use roughly as many female audio samples as male in your training data).
  • Ensure correct labelling of the training data to avoid misclassification and unfairness arising from gender bias. A minimal sketch of such a pre-training check appears after this list.
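As an illustration, the hypothetical check below scans a training set for gender imbalance in both representation and label rates. The column names ("gender", "label") and the tolerance are assumptions for the sketch, not part of any particular pipeline:

```python
# Hypothetical pre-training check: flag gender imbalance in a training
# set. Column names and the tolerance are illustrative assumptions.
import pandas as pd

def check_balance(df: pd.DataFrame, group_col: str = "gender",
                  label_col: str = "label", tolerance: float = 0.10) -> None:
    # Representation: each group's share of the training samples.
    shares = df[group_col].value_counts(normalize=True)
    print("Group shares:", shares.round(2).to_dict())

    # Label balance per group: a skewed positive-label rate for one
    # group can signal labelling bias worth a manual review.
    pos_rates = df.groupby(group_col)[label_col].mean()
    print("Positive-label rate per group:", pos_rates.round(2).to_dict())

    if shares.max() - shares.min() > tolerance:
        print(f"WARNING: group shares differ by more than {tolerance:.0%}; "
              "consider re-sampling or collecting more data.")

# Toy example: 20 female vs 80 male samples triggers the warning.
df = pd.DataFrame({
    "gender": ["F"] * 20 + ["M"] * 80,
    "label":  [1] * 5 + [0] * 15 + [1] * 40 + [0] * 40,
})
check_balance(df)
```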

In Policy

  • Companies in the tech industry must embed intersectional gender mainstreaming in human resource policy by setting quotas for women and by upskilling, training, and promoting women at work.
  • Ensure diversity and inclusion in product design; applying a gender lens can help minimise the gaps.
  • Institute reviews and validations for all AI applications to verify that they do not exhibit bias against protected groups, including women (a minimal sketch of such an audit follows this list).
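One way to make such a review concrete is to compare simple group-wise metrics on a held-out set. The sketch below reports per-group selection rates and true-positive rates and flags large gaps; the metrics, threshold, and toy data are illustrative assumptions, not a complete fairness methodology:

```python
# Hypothetical post-training audit: compare selection rate and
# true-positive rate (TPR) across groups; flag gaps above a threshold.
import numpy as np

def audit_fairness(y_true, y_pred, group, max_gap=0.05):
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    sel, tpr = {}, {}
    for g in np.unique(group):
        mask = group == g
        sel[g] = y_pred[mask].mean()          # selection rate per group
        pos = mask & (y_true == 1)
        tpr[g] = y_pred[pos].mean()           # TPR: recall per group
        print(f"group={g}: selection rate={sel[g]:.2f}, TPR={tpr[g]:.2f}")

    # Demographic-parity gap and equal-opportunity gap: two common,
    # deliberately simple group-fairness measures.
    dp_gap = max(sel.values()) - min(sel.values())
    eo_gap = max(tpr.values()) - min(tpr.values())
    print(f"demographic-parity gap={dp_gap:.2f}, "
          f"equal-opportunity gap={eo_gap:.2f}")
    if dp_gap > max_gap or eo_gap > max_gap:
        print("WARNING: gap exceeds threshold; review before release.")

# Toy example: the model under-selects the "F" group.
audit_fairness(y_true=[1, 0, 1, 1, 0, 1, 0, 0],
               y_pred=[1, 0, 1, 0, 0, 0, 0, 0],
               group=["M", "M", "M", "M", "F", "F", "F", "F"])
```

Which metric matters depends on the application; a hiring tool and a medical triage tool call for different notions of fairness, so the threshold above is a placeholder rather than a recommendation.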

Individual

  • Women as individuals need to participate in the development of AI and take advantage of opportunities to build the skills required for the jobs and career paths of the future. Join the AI revolution and shape its development to ensure AI is representative of all of us.

Organisations must create effective and fair AI technology for everyone in order to build their credibility and earn stakeholders' trust. The benefits of AI outweigh the risks if we address its biases, and doing so will help us create a just society for everyone. Now is the time to act and ensure that AI is fair, unbiased and non-discriminatory towards every segment of society.

What next?

Bias in AI solutions is a significant issue and a risk for your organisation. All AI regulations and frameworks require fairness and non-discrimination. So, to ensure your AI project delivers the desired benefits to your organisation, make sure your AI is fair.

Seclea provides tools for your AI stakeholders (Data Scientists, Ethics Leads, Risk & Compliance Managers) to ensure fair and transparent AI solutions that meet relevant AI regulations. We are here to help; email us at hello@seclea.com or fill out the short form.