Guide to AI: Glossary

Why language matters

The Young Women Leaders have put together this Glossary to empower you in your AI learning journey! Here you will find terms that are commonly used in discussions on AI and your digital rights, as well as examples of AI technology you might use daily and some complex AI jargon.

We hope you find this helpful when navigating this Guide and the Scottish AI Alliance’s ‘Living with AI’ course. If this glossary leaves you wanting more, we also recommend that you check out SAIA’s Playbook, the ‘AI Jargon Buster’.

There are some terms present in this Glossary that the YWL are contesting, due to the implications they carry. The terms are only included in this Glossary because the purpose of this Guide is to combat AI illiteracy and advocate for those it affects.

Their phrasing can sometimes contribute to victim-shaming and dismiss perpetrators’ roles in the crimes they commit. Unfortunately, these terms are still the standard. Still, we ask you to consider why it’s essential to assess how we frame specific issues, especially when the language being used has the potential to reinforce the very idea you’re disputing.


AI 101

  • AI design: The AI Design Process is a structured approach to developing intelligent systems that solve real-world problems using data and algorithms. It guides teams from problem identification to model deployment and ongoing monitoring, generally through three key stages (Raul Tomar):
    • Design phase: Developers identify a problem to be solved by creating an AI system and consider the risks involved and resources needed in advance.
    • Development phase: Developers pre-process their raw data, choose a model for their AI system to follow, and begin training and testing it.
    • Deployment phase: Developers monitor their AI system’s performance and behaviour so they can update, maintain and improve it. Alternatively, if the system does not work or is no longer needed, the developers may decommission it.
  • AI model: The trained system that uses data and algorithms to perform tasks, transforming inputs (data) into outputs (predictions or decisions).
  • AI system: An AI system is a broader and more complex application that integrates one or more AI models to accomplish a specific task. It encompasses not only AI models but also the necessary components to collect, process, and analyze data, as well as to interact with users. In other words, an AI system is a complete solution that implements AI models within an operational framework. (dastra)
  • Algorithm: Within the context of AI, an algorithm is a set of rules that tells a computer or AI system how to behave, process and interpret the data. Outside of the AI context, an algorithm is a defined set of mathematical formulas or instructions designed to complete a task or solve a problem. Algorithms are crucial in computers and programming, instructing the computer step by step. Their main purpose is to make accurate predictions or reach desired results, making algorithms key to efficient software and system creation.

Algorithms vs. models (IBM)
Though the two terms are often used interchangeably in this context, they do not mean quite the same thing.
Algorithms are procedures, often described in mathematical language or pseudocode, to be applied to a dataset to achieve a certain function or purpose.
Models are the output of an algorithm that has been applied to a dataset.
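
To make the distinction concrete, here is a minimal Python sketch. It assumes the scikit-learn library, and the study-hours numbers are invented purely for illustration: the algorithm is the general procedure, and the model is what you get after applying it to a dataset.

```python
# A minimal sketch of the algorithm-vs-model distinction,
# assuming the scikit-learn library. Numbers are invented.
from sklearn.linear_model import LinearRegression

# The *algorithm* is the general procedure: linear regression.
algorithm = LinearRegression()

# A tiny labelled dataset: hours of study (input) and exam score (output).
hours_studied = [[1], [2], [3], [4]]
exam_scores = [52, 61, 70, 79]

# Applying the algorithm to the dataset produces the *model* ...
model = algorithm.fit(hours_studied, exam_scores)

# ... which can now turn new inputs into predictions.
print(model.predict([[5]]))  # about 88
```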

  • Artificial Intelligence (AI): There is no universally agreed-upon definition of AI, partly because it is hard to pin down something that is constantly evolving. The term is used to describe a group of technologies that allow computers to perform tasks previously associated with human intelligence.
  • Artificially generated content: AI-generated content is any type of content, such as text, image, video or audio, which is created by AI models. These models are the result of algorithms trained on large datasets that enable them to produce new content that mimics the characteristics of the training data. Popular generative AI models apply deep learning techniques to generate text, images, audio and video that simulate human creativity. (IBM)
  • Data: The information AI learns from.
  • Datasets: A dataset in machine learning and AI refers to a collection of data that is used to train and test algorithms and models. These datasets are crucial to the development and success of machine learning and AI systems, as they provide the necessary input and output data for the algorithms to learn from. (ENCORD)
  • Generative AI: Generative AI is an AI system that generates text, images, audio, video, or other media in response to user prompts. Generative AI uses machine learning techniques to create new data that shares characteristics with its training data, often producing outputs that are nearly indistinguishable from human-created content.
  • Prompt: An AI prompt is the input submitted to a large language model (LLM) via a generative artificial intelligence (GenAI) platform, like OpenAI’s ChatGPT or Microsoft Copilot. The prompt can take the form of a question, command, statement, code sample or other text. Some LLMs also support non-text prompts, including image and audio files. After the input is submitted, the AI platform applies it to the LLM, which uses the input as a foundation for generating an appropriate response. (TechTarget) See the sketch after this list for what submitting a prompt can look like in code.
  • Scottish AI Alliance (SAIA): The Scottish AI Alliance is a partnership between The Data Lab and the Scottish Government and is led by a Minister-appointed Chair and overseen by Senior Responsible Officers from The Data Lab (CEO) and the Scottish Government (CDO). (SAIA)
  • Training data: The information used to teach AI systems how to perform tasks or make predictions. Training data includes numerous examples that show the learning system what inputs look like and what the correct outputs should be. By analysing these examples, the system learns to recognise patterns and relationships, which it then uses to build a model, or set of rules. This model enables the system to make informed decisions or predictions when encountering new data. The quality and quantity of training data play a crucial role in determining how well the AI system performs its tasks.
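
As promised in the Prompt entry, here is a minimal sketch of submitting a prompt programmatically. It assumes OpenAI’s official openai Python package and an API key stored in the environment; the model name is just an example, not a recommendation.

```python
# A minimal sketch of sending a prompt to an LLM, assuming the official
# `openai` Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name only
    messages=[
        {"role": "user", "content": "Explain what a prompt is in one sentence."}
    ],
)

# The model's reply is the generated output for our prompt.
print(response.choices[0].message.content)
```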

Everyday AI

AI vs Computer Vision (InfosenseAI)

Artificial Intelligence (AI) refers to the broader concept of creating machines or systems that can perform tasks requiring human-like intelligence. This can include tasks such as problem-solving, decision-making, language understanding, and pattern recognition. AI encompasses a wide range of techniques, including machine learning, natural language processing, robotics, and more. AI systems can be designed to handle various types of data and tasks, not limited to visual data.

Computer Vision, on the other hand, is a specific subfield of AI that focuses on enabling machines to interpret and understand visual information from the world. It involves developing algorithms and models that allow computers to “see” and make sense of images and videos. Computer vision aims to replicate human visual perception by enabling machines to detect objects, recognize patterns, understand scenes, and extract meaningful information from visual data.

  • AI agent: A system designed to act independently to complete specific tasks or respond to its environment. It can take in information, make decisions based on that input, and carry out actions, often without needing constant human guidance. Many AI agents are narrow in scope, focused on clearly defined goals like sorting emails or navigating a delivery robot. AI agents require human oversight to ensure they work safely and fairly. For example, Apple’s Siri or Amazon’s Alexa. ✧ (P)
  • AI boyfriend/girlfriend: An AI girlfriend or AI boyfriend is a virtual avatar programmed with artificial intelligence to simulate a romantic relationship with a human user. This can range from basic conversational interactions to more advanced features (that usually cost money) such as personalized messages and virtual dates. For example, Sophie Thatcher’s character in Companion (2025) or Alicia Vikander’s character in Ex Machina (2014). (gabbNow)
  • AI companion: An AI tool designed to provide emotional support, conversation, or companionship, often through chat or voice. AI companions can support wellbeing and reduce loneliness, but they also blur lines between machine and human interaction. They should never replace human care, and there are risks if users form attachments without understanding the system’s limitations or privacy implications. ✧ (P)
  • Augmented Reality (AR): Augmented reality (AR) refers to the real-time integration of digital information into a user’s environment. AR technology overlays content onto the real world, enriching a user’s perception of reality rather than replacing it. AR devices are equipped with cameras, sensors and displays. This can include smartphones and tablets creating mobile AR experiences or ‘wearables’ like smart glasses and headsets. These devices capture the physical world and then integrate digital content (for example, 3D models, images or videos) into the scene, blending digital and virtual worlds. (IBM)
  • Chain of thought prompting: A technique used with some AI systems to help them reason better by incorporating logical steps – or a “chain of thought” – within the prompt. Instead of asking the AI to jump straight to an answer, the prompt encourages it to explain its thinking step by step, which can improve its performance in tasks that require logic, common sense or symbolic reasoning. While it can improve transparency, the reasoning is not always accurate or reliable, and the steps may still reflect the system’s limitations or biases (see the sketch after this list). ✧ (P)
  • Chatbot: A software application that has been designed to mimic human conversation, allowing it to talk to users via text or speech. Previously used mostly as virtual assistants in customer service, chatbots are becoming increasingly powerful and can now answer users’ questions across a variety of topics, as well as generate stories, articles, poems and more. While the best-known chatbot is currently ChatGPT, chatbots have been used in the health sector for some time. For example, from 2019 to 2022, NHS Western Isles trialled a Gaelic chatbot to support mental well-being for people living in rural areas. ✧ (P)
  • Metaverse: The metaverse is a virtual world where humans, as avatars, interact with each other in a three-dimensional space that mimics reality. For example, VRChat or Mark Zuckerberg’s Metaverse (Cambridge).
  • ‘Not Safe For Work’ (NSFW): Internet slang to describe content, websites or topics that are sexual or mature in nature and therefore should only be looked at in private. (Cambridge)
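
Here is the minimal sketch of chain of thought prompting promised above. The wording of the prompts is invented for illustration, and no particular AI platform is assumed; either string would be sent to an LLM in the way shown in the Prompt entry’s sketch.

```python
# A minimal sketch of chain-of-thought prompting: the same question
# phrased two ways. The exact wording is illustrative, not a fixed recipe.
question = "A cinema ticket costs £7. How much do 3 tickets and a £2 snack cost?"

# Direct prompt: asks the model to jump straight to an answer.
direct_prompt = question

# Chain-of-thought prompt: asks the model to reason step by step first.
cot_prompt = question + " Let's think step by step before giving the final answer."

# The second prompt tends to produce intermediate reasoning like:
#   "3 tickets cost 3 x £7 = £21; adding the £2 snack gives £23."
print(direct_prompt)
print(cot_prompt)
```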

AI Jargon

Statistical AI vs Symbolic AI (SmythOS)

Statistical AI employs sophisticated pattern recognition techniques to uncover meaningful insights from vast datasets. These systems learn by analysing thousands or millions of examples to detect underlying patterns and relationships. For instance, machine learning algorithms excel at discovering intricate patterns within data across diverse applications, from stock market prediction to medical imaging.

Symbolic AI operates on the principle that human knowledge can be explicitly represented using symbols and rules. Early AI systems used formal logic to represent facts and relationships. For instance, a symbolic AI system might represent the concept ‘bird’ along with rules like ‘birds have wings’ and ‘birds can fly’ to reason about the natural world.

In short, Symbolic AI excels at processing and manipulating concepts through logical rules, much like human reasoning, while Statistical AI shines at finding patterns in vast amounts of numerical data.
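
The bird example above can be made concrete with a minimal Python sketch. The rules here are hand-written purely for illustration; a real symbolic system would use a proper logic engine, while a statistical system would learn from examples instead (see the machine learning sketch later in this section).

```python
# A minimal sketch of Symbolic AI: knowledge written down as explicit rules.
rules = {
    "bird": ["has wings", "can fly"],
    "penguin": ["is a bird", "cannot fly"],  # exceptions need more rules
}

def facts_about(thing):
    """Look up what the hand-written rules say about a concept."""
    return rules.get(thing, ["no rules known"])

print(facts_about("bird"))  # ['has wings', 'can fly']

# Statistical AI, by contrast, would not be given these rules: it would be
# shown many labelled examples of birds and learn the patterns itself.
```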

  • AI Hallucinations: A phenomenon where an AI system, often a large language model (LLM) such as a generative AI chatbot, or a computer vision tool, perceives patterns or objects that are non-existent or imperceptible to human observers, creating outputs that are nonsensical, fabricated or altogether inaccurate. (IBM)
  • Artificial General Intelligence (AGI): AGI, also known as ‘strong AI’, represents a theoretical AI that can solve problems across all domains, matching or surpassing human intelligence. It’s a popular topic in tech circles and often sensationalised in the media. However, despite the hype, AGI remains purely speculative—there’s no concrete research showing how it might be developed, or even a clear understanding of what “general intelligence” entails.
  • Computer vision: A form of AI used to understand and recognise images and videos and to analyse the elements of the content within them. For example, Google Photos uses computer vision to categorise photo files by their subject matter, grouping pictures of pets, people, landscapes or food together. Facebook also uses a form of computer vision to recognise faces in photographs and prompt you to tag someone. Computer vision can also be used for more complex analysis of images, such as using satellite imagery to map biodiversity by recognising characteristics of the landscape. Space Intelligence and Scottish Wildlife Trust are using computer vision to interpret large volumes of satellite data and map wildlife habitats to help restore, connect and protect Scotland’s natural environment. ✧ (P)
  • Data Scraping: AI systems, particularly large language models (LLMs), rely on vast amounts of data to learn and improve their capabilities. Data scraping is a key method for collecting this information from websites and other online sources, by extracting structured or unstructured data from the internet, either manually or with data scraping tools. This scraped data enables AI models to better understand language, context and real-world knowledge. At the same time, AI techniques are being used to make data scraping more efficient and adaptable. However, this practice raises important privacy and ethical concerns, and the large scale collection of personal data for AI training has led to legal challenges and regulatory scrutiny. ✧ (P)
  • Deep learning (DL): Deep learning is a subset of machine learning that uses multilayered neural networks, called deep neural networks, to simulate the complex decision-making power of the human brain in AI systems. Some form of deep learning powers most of the AI systems and applications in our lives today. (IBM)
  • Large Language Model (LLM): Large Language Models (LLMs) are AI models trained on extensive text data to predict likely next words, allowing them to generate fluent, human-like responses. However, their outputs are based purely on probability, not comprehension.
  • Machine Learning (ML): Machine Learning uses algorithms and statistical models to identify patterns in data and make predictions. There are three main approaches to training an AI via ML: supervised learning (an AI system is trained using labelled data, meaning humans provide examples with known answers), unsupervised learning (the AI system analyses raw data and identifies patterns on its own), and reinforcement learning (the AI system learns through trial and error, receiving positive or negative feedback based on its performance). A short sketch of supervised learning appears after this list.
  • Narrow Artificial Intelligence (NAI): Narrow AI, also known as ‘weak AI’, is designed to handle specific tasks. All AI systems today fall under this category, using techniques like ML and DL. These systems excel within their specific domains but cannot perform tasks outside their programmed scope. Some examples of NAI include Internet search engines, recommendation systems or facial recognition software.
  • Reasoning models: AI systems designed to follow logical or structured steps to arrive at a conclusion. Reasoning models excel in complex problem solving, coding, scientific reasoning, and multi-step planning for agentic workflows. While promising, many such models still struggle with consistency and transparency. Overreliance on these models without oversight can create false confidence in their abilities. ✧ (P)
  • Recommendation system: Also known as recommender systems, recommendation systems are used by computer programs to suggest content by predicting what someone will like based on their previous preferences or ratings. Examples include content suggestions on platforms like YouTube, Spotify and Netflix and shopping suggestions on Amazon and similar online marketplaces. Recommendation systems make suggestions by drawing on the characteristics of the content (such as music style or film genre) and by analysing what other people with similar tastes and online behaviours have liked or bought. They aim to improve their recommendations over time, building a more complete picture of a person’s preferences the more they use a platform and learning from which recommendations have been successful or not. A simple sketch of the ‘similar tastes’ idea appears after this list. ✧ (P)
  • Visual perception: The ability to recognise and identify objects in an image, similar to how humans perceive and understand what we see. This type of AI is used in medical image processing, facial recognition software (for policing or crowd counting, for example) and self-driving vehicles, among others. ✧ (P)
  • Voice cloning: Voice cloning uses AI to create a digital version of someone’s unique voice. It captures speech patterns, accent, voice inflections and even breathing. From a short audio clip, sometimes as brief as 3 seconds, an algorithm can learn and accurately replicate a person’s voice. Voice cloning can be used in a variety of applications, including immersive learning experiences, helping people with alternative speech patterns and preserving endangered languages. It can also be used to create audio deepfakes for malicious purposes. ✧ (P)
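
As promised in the Machine Learning entry, here is a minimal sketch of supervised learning. It assumes the scikit-learn library, and the fruit measurements and labels are invented purely for illustration.

```python
# A minimal sketch of supervised learning, assuming scikit-learn.
from sklearn.tree import DecisionTreeClassifier

# Labelled training data: humans provide examples with known answers.
# Each example is [weight in grams, skin smoothness from 0 to 1].
features = [[150, 0.9], [170, 0.85], [120, 0.2], [130, 0.25]]
labels = ["apple", "apple", "orange", "orange"]

# The system analyses the examples and learns patterns linking
# inputs to the correct outputs.
classifier = DecisionTreeClassifier().fit(features, labels)

# It can then make predictions about new, unseen data.
print(classifier.predict([[160, 0.88]]))  # likely ['apple']
```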
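
And here is the simple sketch of the ‘similar tastes’ idea behind recommendation systems mentioned above. The users, genres and ratings are all invented for illustration; real systems draw on far richer data and models.

```python
# A minimal sketch of finding "people with similar tastes".
ratings = {
    "Aisha":  {"drama": 5, "comedy": 2, "horror": 4},
    "Morven": {"drama": 5, "comedy": 1, "horror": 5},  # tastes like Aisha's
    "Eilidh": {"drama": 1, "comedy": 5, "horror": 1},
}

def similarity(a, b):
    """Higher (closer to zero) when two users rate the same genres alike."""
    shared = set(ratings[a]) & set(ratings[b])
    return -sum(abs(ratings[a][g] - ratings[b][g]) for g in shared)

def most_similar_to(user):
    others = [u for u in ratings if u != user]
    return max(others, key=lambda u: similarity(user, u))

# Aisha's recommendations would lean on what Morven liked, not Eilidh.
print(most_similar_to("Aisha"))  # Morven
```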

Digital rights and ethics

NCII vs TFGBV (MediaDefence)

Non-Consensual Intimate Image abuse takes place when someone’s sexual images are shared with a wider than intended audience without their consent. It is irrelevant whether the person gave initial consent for the creation of the images or consent for them to be shared with other individuals; any dissemination beyond the initially intended audience can be said to constitute NCII. While NCII can and does affect people of all genders, research indicates that 90% of those victimised are women, although LGBTQ persons and those with disabilities have also fallen victim.

Technology-Facilitated Gender-Based Violence (TFGBV) refers to more than just NCII, although NCII is one of the most widely used tactics. TFGBV is a growing phenomenon worldwide, with far-reaching individual, societal, and economic consequences. Governments and civil society organisations around the world need to take urgent action to ensure survivors are protected and perpetrators punished. This will require a massive, multipronged approach – spanning law, policy, human rights, technology, education, and many other spheres of life. The parallels between TFGBV and offline GBV are profound.

Underscoring both are common motives like sexual harassment and revenge, rooted in long histories of patriarchy and oppression of women and vulnerable groups. The research found that these motives are deeply rooted in socio-cultural norms reflective of patriarchal practices and beliefs, and are exacerbated by perceived threats to these systems.

For this reason, TFGBV disproportionately affects prominent and high-profile women such as politicians and journalists, feminists, gender equity activists, and LGBTQI+ individuals – as well as children and young people. 

  • AI Ethics: Ethics is a set of moral principles which help us discern between right and wrong. AI Ethics is a multidisciplinary field that studies how to optimize the beneficial impact of artificial intelligence (AI) while reducing risks and adverse outcomes. Examples of AI Ethics issues include data responsibility and privacy, fairness, explainability, robustness, transparency, environmental sustainability, inclusion, moral agency, value alignment, accountability, trust, and technology misuse. (IBM)
  • AI Regulation: The development of public sector policies and laws for promoting and regulating AI. It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD. Since 2016, numerous AI ethics guidelines have been published in order to maintain social control over the technology. Regulation is deemed necessary to both foster AI innovation and manage associated risks. Furthermore, organizations deploying AI have a central role to play in creating and implementing trustworthy AI, adhering to established principles, and taking accountability for mitigating risks. (Wikipedia)
  • AI washing: A marketing tactic where companies exaggerate or misrepresent the role of AI in their products or services, often to appear more innovative or advanced. This can mislead customers, undermine trust and distract from more meaningful discussions about how AI is actually used and governed. Transparency about what AI does (and doesn’t do) is essential to prevent AI washing. ✧ (P)
  • Alignment: The challenge of ensuring that an AI system’s behaviour matches human intentions, goals and values. Misalignment can lead to outcomes that are technically correct but socially harmful. Alignment becomes especially important in more autonomous systems, where unclear goals or poor training data can result in biased or unsafe outcomes. Aligning AI systems requires not just technical tools, but inclusive and value-led design. ✧ (P)
  • Bias: Artificial intelligence bias, or AI bias, refers to systematic discrimination embedded within AI systems that can reinforce existing biases, and amplify discrimination, prejudice, and stereotyping. (SAP) AI bias can present itself in several ways:
    1. Algorithmic bias: Unfairness that occurs when an algorithm’s process or implementation is flawed, causing it to favour or harm one group of users over another. This often happens because the data used to train the algorithm is biased, reinforcing existing prejudices related to race, gender, sexuality, disability or ethnicity. ✧ (P)
    2. Data bias: Biases present in the data used to train AI models can lead to biased outcomes. If the training data predominantly represents certain demographics or contains historical biases, the AI will reflect these imbalances in its predictions and decisions. (SAP)
    3. Human decision bias: Human bias, also known as cognitive bias, can seep into AI systems through subjective decisions in data labeling, model development, and other stages of the AI lifecycle. These biases reflect the prejudices and cognitive biases of the individuals and teams involved in developing the AI technologies. (SAP)
    4. Generative AI bias: Generative AI models, like those used for creating text, images, or videos, can produce biased or inappropriate content based on the biases present in their training data. These models may reinforce stereotypes or generate outputs that marginalize certain groups or viewpoints. (SAP)
  • Body Dysmorphia: A disorder involving the belief that an aspect of one’s appearance is defective and worthy of being hidden or fixed. This belief manifests in thoughts that are often pervasive and intrusive. Simply put, those who suffer from body dysmorphia experience a disconnect between their perception of reality and actual reality. They look in an ordinary mirror, but for them, the result is something like what we might experience when looking in a funhouse mirror. There is an inability to recognize the body for what it is. Features seem distorted, and flaws (real or imagined) are perceived as much worse than they actually are. (EndocrineKids)
  • Consent: Consent occurs when one person voluntarily agrees to the proposal or desires of another. It is a term of common speech, with specific definitions used in such fields as the law, medicine, research, and sexual consent. Consent as understood in specific contexts may differ from its everyday meaning. Consent is situational, can always be retracted and depends on both parties being fully aware of the choice to be made and its consequences. Both parties must also be in the right mental and physical conditions to make said choice and be of an adequate age. (Wikipedia)
    • Digital consent: Giving permission to something is sometimes called ‘consent’ – digital consent is what you do and don’t agree to sharing online. (BBC Teach)
    • Sexual consent: Consent is actively saying yes – with both your body and your language. It is enthusiastic, and both partners should understand what they are consenting to. Consent can be withdrawn at any time, and you should not feel pressured into consenting to anything that makes you feel uncomfortable. (Bold Girls Ken)
  • Deepfakes: A digitally altered video, sound recording or synthetic image that replaces someone’s face or voice with that of someone else, in a way that appears real. (MET police) This can often be explicit, with the intent to cause distress, harm or spread false information. Created by machine learning algorithms, deepfakes have raised concerns over their use in fake celebrity pornography, financial fraud and the spreading of false political information. The term can also refer to realistic but completely synthetic media of people and objects that have never physically existed, or to sophisticated text generated by algorithms. The YWM, along with many leading scholars, suggests replacing ‘deepfakes’ with ‘sexual digital forgeries’, since this term better captures the nature and harms of the non-consensual creation, solicitation and distribution of altered and artificially generated sexual and otherwise intimate images. ✧ (P)
  • ‘Deepfake pornography’: Sexual images or videos fabricated by imposing someone’s likeness onto sexually explicit images with AI technology. As mentioned, the YWM challenges the widespread use of this term based on its implications, and suggests ‘AI-enhanced intimate image abuse’ or ‘Synthetic NCII’ as a more suitable replacement. (Durham Press)
  • Echo chamber: In this context, an online or offline space in which people only encounter beliefs and opinions that align with and reinforce their own. Often an environment in which alternative ideas are not considered or respected. (YWL 2024)
  • Ethical AI: Defined by the Scottish AI Alliance as AI that respects and supports the values of a progressive, fair and equal Scotland. Ethical AI aligns with our ambitions for a fairer, greener Scotland and helps us become a more outward looking and prosperous country. For the Scottish AI Alliance, ethical AI is in line with the United Nations Sustainable Development Goals and Scotland’s National Performance Framework, respecting our human rights, environment and communities. Ethical AI is accountable to those affected by it, adheres to national and international laws and upholds the rights of people in Scotland and elsewhere. Thus, SAIA calls for AI systems that are TEI (Trustworthy, Ethical and Inclusive). ✧ (P)
    1. Trustworthy: Trustworthy AI must be transparent so that we can observe how these systems help organisations make decisions, and its use must be disclosed so that people are aware of the role of AI in their lives. Decisions made by trustworthy AI must be explainable, and we must have confidence in the reasoning behind them. When AI is trustworthy, we can be assured that it will function in a secure and safe way, that its risks are continually monitored and managed, and that it is protected from attacks or misuse. When AI is trustworthy, it provides a robust environment from which to deliver the benefits of AI to the people of Scotland.
    2. Ethical: AI that respects and supports the values of a progressive, fair and equal Scotland, and that supports our ambitions for a fairer, greener Scotland while enabling us to become a more outward looking and prosperous country. To SAIA, Ethical AI is aligned to the United Nations Sustainable Development Goals and Scotland’s National Performance Framework, respecting and supporting our human rights, our environment and our communities.
    3. Inclusive: When we say Inclusive AI what we mean is that we want AI that includes everyone from across the diverse cultures and communities that make up the population of Scotland. Inclusive AI must be mindful to not exclude any group, particularly our children and youth, and under-represented or marginalised people. Inclusive AI must be shaped by a diverse range of voices in all areas from strategy development, data gathering, algorithmic design, implementation and user impact. Inclusive AI respects our human right to live free from discrimination.
  • Gender Dysphoria: Gender dysphoria is a term that describes a sense of unease that a person may have because of a mismatch between their biological sex and their gender identity. This sense of unease or dissatisfaction may be so intense it can lead to depression and anxiety and have a harmful impact on daily life. (NHS)
  • Gender Sensitive: Policies and procedures that consider their varying impacts on people of different genders. For example, ensuring a policy surrounding housing considers the different, specific impacts it may have on women and girls, as well as men and boys. (YWL 2024)
  • Incel culture: ‘Incel’ is short for ‘involuntarily celibate’. This term usually refers to an online subculture of people, largely men, who struggle to connect with a romantic partner. It often promotes extreme, negative, misogynistic rhetoric where women and girls are blamed, objectified, and vilified for not abiding by patriarchal rules and expectations that centre male sexual entitlement. (YWL 2024)
  • Informed consent: The agreement or permission to do something from someone who has been given full information about the possible effects or results. A cornerstone of ethics and discussions involving sexual and digital rights. (Cambridge)
  • Intersectionality: A social justice-based theoretical framework born from black feminist scholars, coined by Kimberlé Crenshaw, which considers the ways multiple social and political identities intersect together to form a person’s experiences of power, privilege, and oppression in any given situation. (Crenshaw, 1991)
  • Intimate Images: An image that is either sexual, nude, partially nude, or of toileting. These types of images show something that is inherently personal, private and intimate. Some images will fall into these categories but are less inherently private or intimate, such as images of people kissing. Consequently, we recommend that the definition of “intimate” should exclude images where they only depict something that is ordinarily seen on a public street, with the exception of intimate images of breastfeeding which would not be excluded. The definition of “intimate” focuses on what is shown in the image, not what the person was wearing or doing when it was taken. (Lawcom)
  • Image-based Abuse: A better term for describing when a person takes, shares, or threatens to share sexually explicit images or videos of a person online without their knowledge or consent, and with the aim of causing them distress or harm (McGlynn, 2024)
  • Mansplaining: Explaining something to a woman in a condescending/patronising way, thereby assuming she has no knowledge of the subject matter based on her gender identity as a woman. (Cambridge)
  • Misogyny: Hatred of, aversion to, contempt for, or extreme prejudice against women. (Cambridge, YWL 2024)
  • Non-Consensual sharing of Intimate Images (NCII): NCII is the act of sharing intimate images or videos of someone, either on or offline, without their consent. This can include artificially manipulated images and sexual digital forgeries. (StopNCII)
  • Rape culture: Behaviour, values and beliefs that normalise, trivialise, or make light of sexual violence and undermine consent. It is rooted in patriarchal power structures that fuel gender inequity. (Survivor’s Network) Some examples of rape culture include:
    1. Slut-shaming
    2. Believing or contributing to rape myths
    3. Victim blaming
    4. Cyber flashing
    5. Image-based abuse, Non-Consensual Intimate Image abuse, etc.
    6. Misogynistic or homophobic jokes
  • ‘Revenge Porn’: This term is not one we would choose to use, as it perpetuates the toxic normalisation of victim blaming that exists within our culture when it comes to sexual abuse and a range of other crimes committed in the digital sphere. Using the word ‘revenge’ contributes to victim-blaming by implying the victims acted in a way which warranted retribution against them through sextortion, and calling it ‘porn’ frames these images as having been created for consumption – instead of denouncing how these images are stolen and distributed without consent for private gain. Instead, the YWM favours phrasings like ‘Non-Consensual sharing of Intimate Images’ or ‘Technology-Facilitated Gender-Based Violence’ to describe when a person takes, shares, or threatens to share sexually explicit images or videos of a person online without their knowledge or consent, and with the aim of causing them distress or harm. It is against the law. (GenderIT)
  • Sextortion: A type of online blackmail. It’s when criminals threaten to share sexual pictures, videos, or information about you unless you pay money or do something else you don’t want to. Anyone can be a victim of sextortion. However, young people aged between 15 to 17, and adults aged under 30, are often most at risk. (MET Police)
  • Sexual and/or gender-based violence (GBV): Gender-based violence is violence directed against a person because of that person’s gender, or violence that affects persons of a particular gender disproportionately. Violence against women is understood as a violation of human rights and a form of discrimination against women and shall mean all acts of gender-based violence that result in, or are likely to result in physical harm, sexual harm, psychological harm, economic harm or suffering to women. It can include violence against women, domestic violence against women, men or children living in the same domestic unit. Although women and girls are the main victims of GBV, it also causes severe harm to families and communities. (European Commission)
  • Sexual digital forgeries (SDFs): Non-consensually generated or fabricated sexual imagery, also known as a ‘deepfake’. In view of the problems with this term, alternative ones are being used, such as non-consensual synthetic imagery or synthetic pornography. Other proposed terms incorporate ‘deepfake’ in light of it being an internationally recognised term, but adapt it to better reflect the nature of this conduct (deepfake image-based sexual abuse, sexually explicit deepfakes, deepfake sexual abuse…). However, Professor Clare McGlynn suggests deploying the term ‘sexual digital forgeries’, which better captures the nature and harms of the non-consensual creation, solicitation and distribution of altered and AI-generated sexual and intimate images. This term emphasises that generating this imagery involves stealing someone’s likeness and identity, creating a false representation. The word forgery is very clearly associated with wrongful acts and a breach of a person’s legal rights. (McGlynn, 2024)
  • Synthetic non-consensual explicit imagery (SNCEI): Another term used to describe Sexual Digital Forgeries. (Wei et al, 2025)
  • Synthetic Pornography (SP): Synthetic pornography is different from deepfakes. Deepfakes include actual people’s identifying characteristics on bodies other than their own, while synthetic pornography involves AI-generated, non-existing bodies engaging in sexual activities. Despite SP not showing actual children’s faces, it is important to know that this form of media can still have children’s bodies in it and can potentially be harmful. (AAP)
  • Technology-facilitated gender-based violence (TFGBV): Technology-facilitated gender-based violence, or TFGBV, is an act of violence perpetrated by one or more individuals that is committed, assisted, aggravated and amplified in part or fully by the use of information and communication technologies or digital media against a person on the basis of gender. (UNFPA)
  • Trusted Person/Adult: Someone chosen by a young person as a safe figure who listens without judgment, agenda or expectation, but with the sole purpose of supporting and encouraging positivity within a young person’s life. (YoungMinds)
  • Turing test: A method to determine whether a machine can demonstrate human intelligence. If a machine can engage in a conversation with a human without being recognised as a machine, it passes the test. The Turing test was proposed in a 1950 paper by mathematician and computing pioneer Alan Turing. It has become a key concept in AI theory and development. ✧ (P)
  • Young women and people of marginalised genders: When the YWM use this term, we mean self-identifying young women and trans and nonbinary people who feel comfortable in spaces that centre the experiences of young women and girls. (YWL 2024)