Guide to AI: AI and gender

AI & gender
Exploring AI’s impact
In this section, you’ll find deep dives into how AI shapes and sometimes harms our lives: from the way we see ourselves, to the ways it’s used to violate consent, to how it reinforces wider inequalities.
We’ve explored:
- AI and body image
- AI and sexual digital forgeries (deepfakes)
- AI and intersectionality
- AI, gender and the cycle of harm
These topics show the bigger picture and why we need change.
TL;DR
AI isn’t neutral. It reflects the views, choices, and blind spots of the people who build it. With women underrepresented in AI development, these systems often overlook women’s needs, reinforce stereotypes, and cause real harm.
From gendered voice assistants to biased image recognition, AI can repeat and amplify inequality unless diversity and accountability are built in from the start.
Gender bias in AI
It is often assumed that computers and the software that drives them are neutral and objective. As Wellner (2020) points out, this belief is dangerously misleading. While AI systems are designed to analyse data and make decisions, they are ultimately human creations, shaped by the choices, assumptions, and limitations of their designers. As a result, AI systems can reflect and reinforce existing gender biases found in society, encoding heteronormative and patriarchal structures into systems that are increasingly shaping modern life.
These biases cause real-world harm. AI models, especially those used in facial recognition, language processing, and automated decision-making, are often trained on large datasets that reflect dominant social structures. These datasets often reproduce existing inequalities and stereotypes, which the AI then learns, reinforces, and scales. This means discriminatory decisions are not only possible, but are routinely produced, often without transparency or accountability.
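To make that mechanism concrete, here is a minimal toy sketch in Python (using scikit-learn; the sentences and labels are invented purely for illustration and do not come from any real system or dataset mentioned in this guide). A classifier trained on deliberately skewed examples, where every caregiving sentence mentions a woman and every professional sentence mentions a man, ends up labelling brand-new sentences by the pronoun alone:

```python
# Toy illustration only: a tiny classifier trained on deliberately
# skewed, invented sentences. It shows how a model learns a stereotype
# from its training data and then applies it to new inputs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training data in which gender and task are perfectly
# entangled: "she" always appears with domestic tasks, "he" with
# professional ones.
sentences = [
    "she cooked dinner for the family",
    "she cared for the children",
    "she cleaned the kitchen",
    "he presented the quarterly report",
    "he led the engineering team",
    "he negotiated the contract",
]
labels = ["domestic", "domestic", "domestic",
          "professional", "professional", "professional"]

vectoriser = CountVectorizer()           # simple bag-of-words features
X = vectoriser.fit_transform(sentences)
model = LogisticRegression().fit(X, labels)

# Two identical, previously unseen sentences that differ only in the
# pronoun. The pronoun is the only training-set word that separates
# the two classes, so it alone decides the label.
tests = ["she organised the event", "he organised the event"]
print(model.predict(vectoriser.transform(tests)))
# Expected output: ['domestic' 'professional']
```

The point is not this particular library or model: any system trained on data where gender and role are entangled will pick up the same pattern, and at real-world scale the effect is far harder to see, let alone audit.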
A major contributor to this problem is the stark underrepresentation of women in AI development. According to the World Economic Forum, women make up only 22% of the global AI workforce and just 14% of leadership positions. As Shah (2025) outlines, this imbalance has serious consequences: women’s perspectives, needs, and lived experiences are systematically overlooked in technological design. As a result, AI systems are often built without considering how they may disproportionately disadvantage women, leading to tools that fail to serve them equally or, worse, actively harm them.

One example is image recognition software. Research shows that some AI systems tend to associate images of women with domestic tasks or caregiving, while linking men to professional roles or outdoor settings. These outcomes are not accidental; they reflect the gendered assumptions in the data used to train the AI. The result is a system that subtly but powerfully reinforces outdated gender roles, shaping how people are perceived and treated by machines.
This issue is also visible in the design of virtual assistants. As Manasi et al. (2022) note, most popular voice assistants, such as Siri or Alexa, have female names and voices by default. This reinforces the stereotype of women as always available, polite, and subordinate. Google Assistant stands out as an exception for not assigning a gendered name and offering multiple voice options, but it remains one of the few examples.
Sylvie Borau (2024) further explains that AI agents are intentionally feminised to make them seem more human. Because society often associates femininity with warmth and service, these agents are designed to appear emotionally responsive and subservient. However, this design choice risks normalising the idea that women, like AI, exist to serve, please, and obey. In doing so, it reinforces harmful gender hierarchies and reduces women to digital caricatures of compliance. This is not mere representation; it is a form of digital dehumanisation.
Even in the physical appearance of humanoid robots, bias is evident. Manasi et al. (2022) report a dominance of Asian or Caucasian features and the frequent hyper-sexualisation of female robots. These choices raise ethical questions about how gender and race are constructed in emerging technology. Such design practices not only objectify women but also reinforce narrow, racialised beauty standards.
In response to growing concern, some companies have made surface-level changes. Microsoft adjusted Cortana’s responses to reduce tolerance for harassment, and Apple updated Siri’s replies to sexually explicit remarks with a refusal to engage. While these are steps in the right direction, they do not address the deeper structural issues that allow gender bias to persist across AI systems.
As Schiller and McMahon (2019) observe, AI tools now play a role not only in managing tasks but also in shaping emotional interactions within the home. By managing both “data and emotions”, they reshape how people relate to technology and to each other. This influence extends beyond technology itself: it shapes social dynamics and can reinforce traditional gender roles.
“Knowledge is power. AI is a tool shaped by creators and users, and when developed ethically it can benefit society enormously. The more women, girls and marginalised groups are engaged in learning about AI, the fairer and stronger it will become. With ethics at its core, AI is an incredible force for problem-solving, social change and inspiration. But to be truly empowering it must be shaped by those most often excluded from conversations. As someone who studied physics at university, I know the power of finding community with others who share your experience as a woman in STEM.”
Patsy Stevenson, Gender Equality and Ethical Technology Campaigner
To address these issues, the AI field must embrace structural change. Increasing the presence of women and underrepresented groups in AI research and development is not optional; it is essential. Without this, AI will continue to mirror and reinforce the inequalities it should help eliminate, perpetuating systems that harm rather than help. The cost of inaction is not just technological failure; it is the deepening of social injustice.
References
Wellner, G.P. (2020) ‘When AI is Gender-biased: The Effects of Biased AI on Everyday Experiences of Women’, Humana.Mente Journal of Philosophical Studies, 13(37), pp. 127-150.
Ramos, G. (2022) ‘Why we must act now to close the digital gender gap in AI’, World Economic Forum. Available at: https://www.weforum.org/stories/2022/08/why-we-must-act-now-to-close-the-gender-gap-in-ai
Shah, S.S. (2025) ‘Gender Bias in Artificial Intelligence: Empowering Women Through Digital Literacy’, Premier Journal of Artificial Intelligence, 1, Article 1000088. doi: 10.70389/PJAI.1000088
Manasi, A., Panchanadeswaran, S., Sours, E. and Lee, S.J. (2022) ‘Mirroring the bias: gender and artificial intelligence’, Gender, Technology and Development, 26(3), pp. 1-11. doi: 10.1080/09718524.2022.2128254
Borau, S. (2024) ‘Deception, discrimination, and objectification: Ethical issues of female AI agents’, Journal of Business Ethics, 198(1). doi: 10.1007/s10551-024-05754-4
Schiller, A. and McMahon, J. (2019) ‘Alexa, Alert Me When the Revolution Comes: Gender, Affect, and Labor in the Age of Home-Based Artificial Intelligence’, New Political Science, 41(2), pp. 173-191. Available at: https://read.dukeupress.edu/nps/article-abstract/41/2/173/397731/Alexa-Alert-Me-When-the-Revolution-Comes-Gender