Guide to AI: ‘Deepfakes’ and Intimate Image Abuse

AI AND GENDER
‘Deepfakes’ and intimate image abuse
Content Warning: This section discusses sensitive topics, such as sexual harassment of women and minors, pornographic content, cybercrime, racism, sexism and misogyny.
TL;DR
Sexual digital forgeries are artificially manipulated forms of content which exploit a person’s likeness to create pornographic content without their informed consent. These forgeries are a serious crime, and the rapid development of AI technology is putting their creation at the perpetrator’s fingertips. This is a heavily gendered issue: 96% of deepfakes are non-consensual pornography, of which 99% features women or girls.
What is a Sexual Digital Forgery?
Sexual Digital Forgeries (SDFs), commonly known as deepfakes, involve the artificial manipulation of a person’s likeness into images or videos of a sexual nature, making it appear they said or did things they never did. Advances in generative AI have now enabled creators to generate entirely new content using a real person’s appearance, tailored specifically to fulfil the creator’s desires (McGlynn and Toparlak, 2025). Alarmingly, this technology has advanced to the point where these fake images are nearly indistinguishable from the real thing.
Most media coverage focuses on deepfakes’ potential to spread misinformation and inflame social and political tensions. The threat of sexual digital forgeries, by contrast, was largely overlooked until 2018, when high-profile celebrity cases brought mainstream attention to the issue.
The term ‘deepfake’ has become ingrained in everyday language but do you know its misogynistic origins?
‘Deepfakes’ was the username for an account on Reddit which created a thread in 2017, under the same name, for users to post and exchange sexually explicit synthetic media of female celebrities – described as deepfake porn in reports at the time. Much like the term ‘revenge porn’, we prefer not to use deepfake to describe this type of content because of its problematic connotations and the fact it implies a level of consent which was never given.
The term is now used in mainstream media and policy to describe deep-learning AI in general, its origins forgotten and the account and community behind the thread quietly immortalised.
While this type of technology has its benefits and the term ‘deepfake’ is widely used, it is a form of violence against women and girls and an injustice which has real life consequences. We want to raise awareness of its origins and offer alternative terms, giving you the power to choose what you feel most comfortable with.
- Sexual digital forgeries (SDF) (McGlynn and Toparlak, 2025)
- Synthetic NCII (non-consensual intimate imagery) (as per this report from the House of Commons)
A gendered issue?
Research shows 96% of sexual digital forgeries (SDFs) are non-consensual pornography, of which 99% features women or girls (Dunn, 2021), while men overwhelmingly dominate as both creators and consumers of sexual digital forgeries (Hussain, 2024). Originating in misogynistic and incel forums, SDF abuse has now expanded beyond its original niche and is being used at a large scale to harass women and girls (McGlynn and Toparlak, 2025). Laura Bates (2025) highlights that a simple Google search can yield dozens of sites offering ‘deepfake’ creation as a ‘service’, alongside hundreds of porn sites dedicated to such content. One of the most popular websites receives approximately 17 million unique monthly visitors (NBC News, 2023). However, an even larger number of SDFs evade scrutiny altogether, circulated by perpetrators on encrypted platforms like WhatsApp.
Social media companies continue to advertise these apps, which explicitly target women and often fail to work on male images (Bates, 2025). Unlike traditional image-based abuse, which requires close access to victims, an SDF can be generated from a single photo of the victim (Gosse and Burkell, 2020). In a recent survey, 63% of men said they most desired to undress “familiar girls” they know in real life – a ‘fantasy’ now available to them because this technology exists (Bates, 2025). Professor Clare McGlynn (2024) accordingly describes this as an “invisible threat” stalking women’s lives.
Some regular users argue SDFs are not as harmful since they are “not real”. One website owner claimed, “I don’t really feel that consent is required, it’s not real” (BBC, 2022). We want to stress that SDF is a non-consensual sexual violation that infringes on an individual’s right to privacy, dignity and sexual autonomy (McGlynn and Toparlak, 2025). Survivors report profound harm – ranging from shame and humiliation to lasting trauma – especially for those who have experienced prior abuse. Adult performers also suffer exploitation as their bodies are erased, edited and recirculated without their consent (Maddocks, 2020).
The UN notes that multiply marginalised women – such as LGBTQI+ individuals and Women of Colour – are disproportionately affected by SDF and AI technology, which exacerbates pre-existing inequalities (UN Women, 2024). Moreover, Bates (2025) points out that women of colour have been some of the first to call for action on this issue, despite frequently having less access to justice and solutions that protect them. For example, Wei et al. (2025) highlight the additional threats SDFs and AI technology pose to hijabi women and girls due to the specific consequences and kind of ostracism they can face after having their privacy and intimacy violated in such a way.
Young girls are heavily impacted, with over half a million teenagers in the UK having experienced SDF (McGlynn and Toparlak, 2025). Schools often lack the resources or knowledge to respond effectively, allowing perpetrators to go unpunished. Researchers suggest that many young boys creating this content may lack full awareness of the harm they cause, as the widespread access to and normalisation of SDF serve to downplay its seriousness. In short, the rise of AI is very much a gendered issue because it endangers thousands of women and girls every day. This is why we are encouraging young women to improve their AI literacy while calling for ethical, intersectional AI regulation and policy that is informed by lived experience and victim-oriented.
*In 2021, Jodie received an anonymous email linking to a porn website which contained hundreds of photos taken from her private Instagram. The photos had been used to create SDFs of Jodie. Only later did Jodie find out these forgeries had been created by her best friend. She now campaigns to Stop Image-Based Sexual Abuse.*
“I’m still full of rage. I try to channel it into raising awareness for other women. There’s a misconception that because it’s online, it’s not real. But for victims, knowing that people can’t tell these images are fake feels just as violating and humiliating as if they were genuine”
Jodie, Image-Based Sexual Abuse Survivor-Campaigner
It’s about power and control
This technology is not inherently harmful, but its misuse is a reflection of the misogynistic culture in which it is being deployed (Gosse and Burkell, 2020). SDFs are all about power and control over women, as synthetic women are depicted as hypersubmissive and inserted into violent situations without their informed consent. For instance, one social media trend involved the stolen faces of women being artificially ejaculated on (Bates, 2025). SDFs are therefore being used to silence, shame and blackmail women, like Northern Irish politician Cara Hunter, whose digital images were artificially altered in the final weeks of an election in an attempt to undermine her politically and tarnish her reputation. If this abuse continues, we also risk completely losing women’s voices from public spaces.
Moreover, the trading, commenting and sharing of SDF content creates a peer environment that normalises and reinforces abusive behaviours and rape culture, predominantly among men. This normalisation, coupled with online community validation, risks desensitising individuals and encouraging them to engage in other forms of harm. Serious concern has already been raised regarding those accessing SDFs of minors, as such material may act as a gateway, increasing the likelihood of escalation to real-life abuse (ICMEC, 2023). Furthermore, some of the source videos originate from actual instances of abuse and rape, thereby prolonging the trauma and exploitation of the victims.
Policy and legal response
The policy and legal responses have lagged significantly behind. Much of the focus has been disproportionately placed on how women can avoid becoming victims or on removing content rather than on proactive strategies that prevent the creation and distribution from the outset. Even when unshared, SDFs pose a serious threat due to ease of dissemination – whether maliciously, accidentally, or through hacking (McGlynn, 2024). We urgently need innovative and adaptable responses that can keep pace with the rapidly evolving technology while ensuring victims receive timely, holistic care and financial reparations. You can check on the status of AI regulation in the UK here.
“Women are being systematically failed by the legal system. The criminal law is full of holes, and women’s experiences are not taken seriously by the police. It is also extremely difficult to get material deleted or taken down from the internet, even after a criminal conviction. For too long, survivors of image-based abuse have been ignored, their experiences trivialised and dismissed. Women’s rights to privacy and free speech are being systematically breached, with society as a whole suffering. Women deserve a holistic, comprehensive response to these devastating and life-shattering harms.”
Professor Clare McGlynn, world-leading expert on image-based abuse
The next steps
Despite these challenges, there has been a distinct shift in the media towards focusing on the survivors (Glamour, 2024). Many dedicated campaigns and activists are working tirelessly to stop image-based abuse and spread awareness of its harms, much like this Guide hopes to do. If you’ve been targeted by SDFs or NCII, know you are not alone. Support is available – use our conversation starters to guide you through talking about this with a trusted person and visit our resource page, including Report Remove and The Revenge Porn Helpline. You can also use our letter template to urge your MSP to join the fight for stronger protections against SDF.
“Online harm is not the cost of being a woman; it is the cost of a government that has failed to act. Every threat, every deepfake, every slur chips away at women’s safety and silences their voices — especially those brave enough to step into politics. We cannot build a democracy where women are harassed out of leadership, or a society where abuse is normalised as ‘the price of participation.’ The time for excuses has passed. We need urgent action, stronger laws, and real accountability — because women belong online, in politics, and everywhere decisions are made.”
Cara Hunter, Northern Irish Politician
Examples of AI legislation around the world
AI has made its way into the political landscape, with nations fighting it out to be the next AI superpower. In 2025, governments including those of the UK and USA launched new AI action plans in a bid to become a leading nation – but while AI has become another tool for political gain, what legislation is in place to mitigate the risks?
The USA
In January 2025, US President Donald Trump threw his hat into the AI race, signing an executive order to remove the barriers and red tape holding up the development and progression of AI programmes in America. This was done to fast-track the process of asserting America’s “global dominance” in the AI industry.
While much of America’s focus falls to investment and infrastructure, in April 2025, Congress passed the ‘TAKE IT DOWN ACT’, aiming to stop the non-consensual publication and sharing of intimate images, including those created by AI.
As the bill made its way through the legislative process, the First Lady attributed the need for such a law to the growing use of AI on social media and the way the technology could be “weaponised” at the expense of others. The President also recognised the “countless women” who have been “harassed” by sexual digital forgeries. With the bill now passed, it is a federal crime to knowingly publish, or threaten to publish, intimate images without consent, including sexual digital forgeries. The onus is on social media platforms to remove the images, including any duplicates of the original, within 48 hours. While the law is now in effect, platforms have a year from when it was signed into law to implement the removal process. Nonetheless, this legislation is a step in the right direction in making sexual digital forgeries created by AI illegal.
The UK
UK Prime Minister Keir Starmer also kicked off 2025 by announcing the AI Opportunities Action Plan, calling Britain one of the next “AI Superpowers”. Like the USA, the plan focuses heavily on increasing AI infrastructure and integrating the use of it into the public sector in order to improve efficiency.
The UK does not currently have a dedicated AI law, and it’s important to note legislation can differ across nations. We will explore the AI legislation that is in place across the UK Government and Scottish Government.
UK Government
The UK Government currently has no AI-specific legislation in place, relying instead on existing laws and regulators, such as Ofcom, to implement regulations within their sectors. This means image-based abuse is currently best covered by the Online Safety Act (2023), which makes no specific mention of AI. It is important to note that this Act applies to Wales, Scotland and, for the most part, Northern Ireland, with a few sections removed due to devolution.
The Online Safety Act makes it illegal to share or threaten to share intimate images or videos without consent, including the non-consensual creation of these images when controlled or coerced. It is important to note that it does not cover sexual digital forgeries created by AI. While the onus is on existing platforms and regulators, nothing in the law explicitly states that the creation or sharing of such images is illegal, meaning action is often not taken.
Scottish Government
The Scottish Government launched its AI Strategy in 2021, a five-year plan again aiming for the country to become a “global leader” in both the development and deployment of AI technologies. The plan promotes the use of “trustworthy, ethical and inclusive AI”, which is important to bear in mind, especially when discussing its use in the public sector.
In addition to the Online Safety Act, the Abusive Behaviour and Sexual Harm (Scotland) Act 2016 makes it an offence to share or threaten to share an intimate image. However, this too does not cover sexual digital forgeries created by AI, leaving a huge gap in the legislation.
The European Union
The European Union passed the first comprehensive AI regulation, the EU AI Act, which the European Commission first proposed in April 2021. The aim of the Act is to ensure that AI systems are safe, transparent, traceable, non-discriminatory and environmentally friendly, with a heavy focus on programmes being overseen by real people. It seeks to balance the growth and innovation of AI with protecting the fundamental rights of the citizens who use it.
While there is nothing in it explicitly linked to the use of AI in the creation of sexual digital forgeries, this regulation began the conversation on the need to regulate its use and ensure that people are safe during its growth.
Denmark
In June 2025, the Danish government announced a novel plan to tackle the creation of sexual digital forgeries: giving its citizens copyright over their body and voice. The minister explained that everyone has a right to their own body, facial features and voice, and that, if the law passes, victims would be able to flag AI-created images as copyright infringements and have them removed, just as big corporations do when their logos are used in situations they have not agreed to.
If passed, this law would be a pioneer, the first of its kind in Europe, giving people legal rights over the use of their body even in digital images. By summer 2025 it had already achieved cross-party support in the Danish Parliament, with plans in place to submit the amendment during the autumn session. The law would be a huge step in the right direction for ensuring people have rights over the use of their body, and protection, as AI continues to progress.
Overall, it is clear that legislation covering the use of AI to make sexual digital forgeries has a long way to go to catch up with the technology. Governments across the world must take action rather than relying solely on the platforms themselves, shifting their focus away from winning the AI race and towards understanding survivors’ lived experiences.
References
- The White House. (2025) Removing Barriers to American Leadership in Artificial Intelligence. The White House. Available at: https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/
- Associated Press. (2025) Trump signs Take It Down Act to combat fake images and online exploitation. The Guardian. Available at: https://www.theguardian.com/us-news/2025/may/19/trump-take-it-down-act-bill
- Killion, V. L. (2025) The TAKE IT DOWN Act: A Federal Law Prohibiting the Nonconsensual Publication of Intimate Images. Congressional Research Service. Available at: https://www.congress.gov/crs-product/LSB11314
- Department for Science, Innovation and Technology. (2025) AI Opportunities Action Plan. Gov.uk. Available at: https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan
- European Parliament. (2023) EU AI Act: first regulation on artificial intelligence. Available at: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
- Digital Directorate. (2021) Artificial intelligence strategy: trustworthy, ethical and inclusive. Scottish Government. Available at: https://www.gov.scot/publications/scotlands-ai-strategy-trustworthy-ethical-inclusive/
- Bryant, M. (2025) Denmark to tackle deepfakes by giving people copyright to their own features. The Guardian. Available at: https://www.theguardian.com/technology/2025/jun/27/deepfakes-denmark-copyright-law-artificial-intelligence
- Department for Science, Innovation and Technology. (2025) Online Safety Act: explainer. Gov.uk. Available at: https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer
- The Scottish Parliament. Abusive Behaviour and Sexual Harm (Scotland) Bill. Available at: https://www.parliament.scot/bills-and-laws/bills/s4/abusive-behaviour-and-sexual-harm-scotland-bill