Guide to AI: Myth busting

What is AI?
Myth busting
Checking the facts…
Artificial intelligence is often talked about as if it were neutral or objective, but in reality it reflects the social norms, biases, and power structures of the world in which it is built. When it comes to gender, this can lead to harmful misconceptions, such as the idea that AI treats everyone equally, or that its mistakes are just “technical glitches.” On this page, we unpack some of the most common misunderstandings about AI and gender, and explain why this matters, especially for young women navigating an increasingly digital world.

Sexual digital forgeries can be regulated with the right policies and by ensuring accountability from policymakers and tech corporations alike.

AI girlfriends are not a form of therapy. Treating them as such has a societal impact on how women are treated, and contributes to their continued dehumanisation.
Addressing some common misunderstandings:
- “AI can’t be biased”: AI outputs can be biased, depending on the data, the algorithm, and the circumstances surrounding its development and training. Simply put, if an AI model is fed biased data, uses an algorithm that already favours certain inputs, or is built by a group of developers whose bias seeps into the development process, then that model’s outputs will be biased too.
- “AI can’t be regulated”: AI can be regulated! It will take a lot of work, but jurisdictions such as the US, the EU, the UK, and China are already making strides in this area of policy (Ponomarov, 2025). AI regulation should always be framed from a human rights perspective, and informed by a sensitive, intersectional, victim-first approach.
- “This AI-generated image isn’t depicting anyone specific, so it’s not harmful to anyone”: AI can’t generate images from nothing, so even if the generated person does not exist, their image has been built from data taken non-consensually from other, real people. Generated images can also cause harm by recreating and reaffirming pre-existing biases and stereotypes, and by consuming significant natural resources to perform ‘leisure’ or relatively unimportant tasks.
- “It’s just ChatGPT”: Nowadays, AI is present in everyday tech like Siri or Alexa, as well as in algorithmic recommendations on streaming platforms, facial recognition software on phones and apps, social media algorithms, online shopping, and much, much more. In other words, everyday AI has evolved far beyond ChatGPT. ChatGPT alone has contributed to the endangerment of Black communities in the southern United States, due to the energy and water needed to generate its outputs (Mahoney, 2025). So, you can imagine how much more damage can be caused as AI technology seeps into everything else.
- “AI is just a new fad”: The concept of AI can be traced back to the 1950s and the early days of modern computing, when advances in technology created the building blocks of today’s AI (Scottish AI Alliance, 2025).
- “We just need to monitor AI!”: Yes, but certain companies use AI-washing to avoid regulation while appearing ‘ethical’ to the wider public. Setting standards and guidelines is not enough: we need to ensure AI corporations and models are routinely tested and held to ethical, intersectional standards (TEI) for regulation to be meaningful.
- “AI is going to take over the world”: While AI will certainly become very present in our world, this pessimistic view does little to help its regulation. Instead, try focusing your energy on learning how to use AI ethically, combating its misuses, and protecting yourself from perpetrators of harm!
- “AI can solve anything”: AI doesn’t solve problems; it is a tool that uses training data to present responses learned through machine learning.
- “AI doesn’t have that big of an impact”: A single 100-word response from ChatGPT is estimated to use about one bottle of water (Verma and Tan, 2024). Now, imagine how much water is used every second by its millions of daily users.
- “They’re just memes”: AI-generated images and videos can seriously disrupt global conversations, spread misinformation, and even sway political elections, which is why we need to be careful with what we generate.
References
Adams, L. (2018). Sex attack victims usually know attacker, says new study. BBC News. Available at: https://www.bbc.co.uk/news/uk-scotland-43128350
Ponomarov, K. (2025). Global AI Regulations Tracker: Europe, Americas & Asia-Pacific Overview. Legal Nodes. Available at: https://legalnodes.com/article/global-ai-regulations-tracker#:~:text=The%20regulatory%20landscape%20in%20this,significant%20steps%20in%20AI%20governance.
Mahoney, A. (2025). America’s Digital Demand Threatens Black Communities with More Pollution. Capital B. Available at: https://capitalbnews.org/ai-data-centers-south-carolina-black-communities/
Scottish AI Alliance (2025). The History of AI. Available at: https://www.scottishai.com/news/the-history-of-ai
Verma, P. and Tan, S. (2024). A bottle of water per email: the hidden environmental costs of using AI chatbots. Washington Post. Available at: https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/