Is AI Sexist? The Crisis Ahead of International Women’s Day 2024

Artificial intelligence is changing the world. From streamlining workloads to reshaping the way we communicate, AI and generative tools have fundamentally changed the way we access information. But whether that’s a positive change – or a negative one – could depend on your gender.  

Experts have warned that the AI revolution could disproportionately harm women, due to factors ranging from underrepresentation in training datasets to the small share of women in AI leadership roles.  

In this International Women’s Day 2024 blog, we’ll be diving into the biases of AI tools and how they may reflect those present in today’s society. 

International Women’s Day 2024: Google turns off Gemini’s “sexist” image generation

Google’s AI tool, Gemini, found itself in hot water last week following several viral social media posts highlighting the sensitive issues that surround generated content.

One of the most viral posts, which has received well over 70 million views on X since it was posted, showed Gemini creating an image of the US Founding Fathers featuring a black man. Other posts showed that Gemini often defaulted to depicting women and people of colour even when generating images of historical groups that were almost entirely white men, such as Vikings, popes, and Nazi soldiers.  

Elon Musk reposted several of these screenshots on X, calling Gemini ‘super racist and sexist’, prompting several other prominent conservative voices to highlight the issue. Google has since paused its AI tool and is “working around the clock” to solve the “completely unacceptable” issue, according to CEO Sundar Pichai. 


This is the direct result of Gemini overcompensating for biases that commonly occur in image generators – which have a history of underrepresenting women and people of colour.  

Many of the images on the internet, which Google’s tool is trained on, show a white, Western bias. There is a strong likelihood that image generators will depict judges, lawyers and doctors as white men, while teachers, nurses and stay-at-home parents will be depicted as women, due to the data they’re trained on. 

“It appears that in trying to solve one problem – bias – the tech giant has created another: output which tries so hard to be politically correct that it ends up being absurd,” reported the BBC. But how did we get here? 

Biased datasets

The explanation for these biases lies in the enormous datasets that AI tools are trained on. Much of this data is taken directly from the internet, which contains all sorts of biases. 

One example is the fact that LLMs use conversations on social media platforms like Reddit to learn how humans communicate. But because over two-thirds of Reddit users are men, the models are more often than not simulating the speech patterns of a man.  

Furthermore, it’s no secret that most of our history was written by men, and mostly features men. So when tools like ChatGPT pull from historical accounts and generate historical images, the results are more likely to be male-oriented.  

It sounds innocuous, but this can pose serious risks: AI tools are more likely to correctly diagnose health issues and provide medical advice for men, because that’s who the majority of the training data represents.  
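
To make the mechanism concrete, here is a minimal, entirely synthetic sketch in Python (using numpy and scikit-learn; the data, feature names and effect sizes are all invented, not real medical data). When a condition presents differently in women than in men, a model fitted on a male-dominated dataset ends up tuned to the majority group and misses far more cases in women: 

```python
# Synthetic illustration only: a classifier trained mostly on men
# performs worse for women whose symptoms present differently.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, female_frac):
    """Generate invented patients; the symptom marker is strong in men
    but weak in women, mimicking conditions that present differently."""
    female = rng.random(n) < female_frac
    disease = rng.random(n) < 0.5
    signal = np.where(female, 0.5, 2.0)          # invented effect sizes
    symptom = disease * signal + rng.normal(0, 1, n)
    X = np.column_stack([symptom, female.astype(float)])
    return X, disease.astype(int), female

# Training set skewed 90/10 towards men; test set is balanced.
X_tr, y_tr, _ = make_patients(20_000, female_frac=0.1)
X_te, y_te, fem_te = make_patients(5_000, female_frac=0.5)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Recall: the share of genuinely ill patients the model catches.
for group, mask in [("men", ~fem_te), ("women", fem_te)]:
    sick = mask & (y_te == 1)
    print(f"recall for {group}: {(pred[sick] == 1).mean():.2f}")
```

The model’s single symptom weight is fitted to the male-dominated training data, so ill women – whose symptom signal is weaker in this toy setup – are flagged far less often than ill men. 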

Amazon even had to scrap its AI-powered recruitment tool back in 2018 for this very reason: because the tool had been trained on CVs submitted mostly by men, it began to penalise CVs containing language associated with women – such as the word “women’s”.  
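
As a rough illustration of how that happens, the invented sketch below (not Amazon’s actual system, whose details were never published) trains a simple bag-of-words classifier on synthetic hiring decisions in which equally skilled women were hired less often. The model duly learns a negative weight for the gendered token – encoding the historical bias as if it were a real signal: 

```python
# Invented reconstruction of the failure mode, not Amazon's system:
# biased historical labels teach a model to penalise gendered words.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

cvs, hired = [], []
for _ in range(2000):
    is_woman = rng.random() < 0.3
    skilled = rng.random() < 0.5          # skill is independent of gender
    text = "python sql leadership" if skilled else "teamwork communication"
    if is_woman:
        text += " womens chess club captain"
    # Historical bias baked into the labels: at the same skill level,
    # women in this synthetic record were hired half as often.
    p_hire = 0.7 * skilled + 0.1
    if is_woman:
        p_hire *= 0.5
    cvs.append(text)
    hired.append(int(rng.random() < p_hire))

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The learned weight on "womens" comes out negative.
idx = vec.vocabulary_["womens"]
print("weight for 'womens':", model.coef_[0][idx])
```

Nothing in the synthetic data makes women less capable; the model simply learns to reproduce the biased decisions it was shown. 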

Google has clearly tried to offset all this messy human bias by skewing its outputs to feature women more – resulting in generated images of female popes and female US presidents. In short, it has backfired precisely because human history and human culture contain nuanced biases which humans understand and machines do not.  
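
Google has not published the mechanism behind Gemini’s behaviour, but the toy sketch below shows how a naive, context-blind intervention of this kind can backfire. The rewrite_prompt() function and its attribute list are entirely hypothetical: 

```python
# Hypothetical sketch of an overcorrection failure mode -- not
# Gemini's actual logic, which Google has not made public.
import random

ATTRIBUTES = ["a woman", "a Black person", "a South Asian person"]

def rewrite_prompt(prompt: str) -> str:
    """Blindly inject a diversity attribute into any prompt about people,
    with no awareness of historical or factual context."""
    if any(word in prompt for word in ("person", "people", "portrait")):
        return f"{prompt}, depicted as {random.choice(ATTRIBUTES)}"
    return prompt

# Reasonable for a generic prompt...
print(rewrite_prompt("portrait of a software engineer"))
# ...absurd for a historically specific one, because the rule cannot
# tell a generic request from a request about a real, documented past.
print(rewrite_prompt("portrait of a 1943 German soldier"))
```

The fix isn’t to remove the intervention but to make it context-aware – exactly the kind of nuance that is easy for a human reviewer and hard for a blanket rule. 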

International Women’s Day 2024: Lack of women in AI

Another factor to consider here is the gender imbalance in tech. Women make up just 26% of the American data and AI workforce, and 12.5% of AI researchers worldwide. 

“There are very few women working in the field, very few women having a seat at the table,” says Kristanna Chung, Head of AI & ML Recruitment at Harnham. “And so obviously, this trickles down into the values reflected in the way these AI tools are operated and trained. 

“It’s an ongoing issue in the tech space as a whole, but we need to continue as an industry to champion those female voices and champion those minority voices so that the ethos and the messaging of these AI tools better reflect society. 

“Amazon’s shelved AI recruitment software is a great example of the fact that AI problems are almost always data problems. We need to make sure that training samples are as diverse as possible – that includes gender, but also things like ethnicity and age.” 


At Harnham, driving change in the industry is a core priority.

But we know this takes action rather than words.

Read more about how we champion diversity and inclusion – both inside and outside of Harnham: https://www.harnham.com/diversity-and-inclusions/