Guide to AI: Conversation starters

Help & resources
Conversation starters
Talking about AI is a great way to share knowledge and build confidence in your own understanding, but we realise it can be a hard subject to raise – especially when it comes to the harms of AI for women and girls, or knowing where to find support if you’ve seen or experienced intimate image abuse. To help you approach these topics, we’ve developed guidance for a range of scenarios to help you get started.

“I’ve seen an intimate image of someone else. How do I tell them?”
If you’ve seen an intimate image of someone you know, it’s important to tell them, even if it feels uncomfortable.
Whether or not the image has been produced or edited with AI, every young woman deserves control over intimate images of herself.
Being the person to tell them might feel awkward, but in the long term, it’s better that they know.
Here are a few ways to approach the person:
- Talking to them in person
Invite the person to talk with you in a private space where you both feel safe. You might invite them for a walk, or find a private space at school or at work.
Let them know what you’ve seen. Be as direct as you can be – you don’t want to make them nervous guessing what you mean.
You might want to say things like, “I’m not sure if you’re aware of this already, but I wanted to let you know that I saw this online,” or “I’m sorry to be the one to share this, but if it were me, I’d really want to know.”
You might want to suggest they look at this resource hub, and reassure them that there’s support available for them if they want it.
- Letting them know by phone or text
You might find it easier to tell someone over the phone or via text. If you’re doing that, make sure that they are in a private and safe space. You could do that by saying,
“I’ve got something private to share with you. Are you somewhere safe where we won’t be overheard?”
You might want to send them a link to the image if you’ve seen it online. This can help them to report it and take it down.
Warn them before you send it, and ask them if they’re okay with it first.
“How can I speak to a friend or family member who is involved in AI deepfakes or fake news?”
Speaking to people you’re close with about issues and behaviour you don’t agree with can be tough and it is completely understandable to feel anxious and uncomfortable about doing so.
Firstly, approach them calmly, avoiding criticism too early, as this is likely to make them defensive.
You could say that you’ve seen some of their recent posts – maybe a repost on TikTok or a post on Facebook – and ask them about their perspective and where this idea or behaviour stemmed from. This gives them the opportunity to explain their beliefs and avoids diving into a full-on debate.
Next, it is best to ask questions and not lecture them. Ask them where they heard these things, how they know this information or belief is true or what the source was. Offer them an alternative and reliable source that shows them a different perspective.
If they are sharing sexual digital forgeries, ask them where they found them, what they think the person in the image would feel about it, and why they feel the need to share it. Explain that these images are extremely damaging to the people in them, and that sharing them can have lasting consequences for the sharer too – whether they are caught by the police, family, friends or colleagues. If you feel comfortable, you could tell them it’s not something you agree with and ask them to remove the posts or stop their involvement going forward.
It is important to set boundaries. If people you know are behaving in an unethical way that goes completely against your morals, it is okay to draw a line. Speak to someone you trust if you are feeling upset or concerned about your relationships with those close to you.


“How can I speak to a young man or boy if I’m concerned his social media algorithm is encouraging misogyny?”
Before starting the conversation, it’s helpful to brush up on your own knowledge. This will give you the confidence to guide the discussion and answer questions. You’ll find useful links at the end of this section, and our Further Reading and Signposting page has more in-depth resources.
When approaching this topic, keep in mind that your son may become defensive or embarrassed – especially if he feels targeted. Creating a safe, non-judgemental space is essential so he can talk openly without fear of being blamed or shamed.
These should not be one-off, reactive conversations triggered by seeing something concerning on your child’s phone. Instead, aim for ongoing discussions about misogyny, consent, healthy relationships, and respect for others, woven naturally into everyday life.
Things to consider before you start:
- Age – tailor your language and examples to what’s appropriate for his age.
- Experience – consider what he might already be encountering online.
- Understanding – think about how much context or explanation he will need.
A good opening is to ask about his experiences and what he already knows. Show that you respect his opinions, even if you disagree – this builds trust and makes it more likely he will listen to your perspective.
Encourage him to think critically about the media and online content he consumes. You might discuss:
- How there are often multiple sides to any story.
- That most news organisations have some form of bias.
- How social media can spread misinformation or exaggeration.
- Why it’s important to check if claims are backed by evidence.
You can also give him practical ways to protect himself from harmful content, such as using “not interested” buttons, talking to friends about what he’s seen, or seeking external support if something is troubling him.
Conversation starters:
- “I read that social media often pushes more extreme videos just to keep people watching. Have you noticed your feed changing like that?”
- “Do you have a favourite influencer? What kind of content do they make? Do you think they’re a good role model?”
- “Have you ever been sent something you didn’t agree with? What did you do about it? Did you report it?” (This can open up a discussion about peer pressure to stay silent or go along with certain online cultures.)
Other Useful Links:
https://saferinternet.org.uk/professionals-online-safety-helpline
https://www.theguardian.com/news/2023/feb/20/heres-how-to-talk-to-teenage-boys-about-andrew-tate
“I feel as though I am being fed harmful content to do with my body image. Who can I speak to?”
First, we encourage you to explore our section on Body Image and AI to better understand how AI might be influencing the content you see on social media.
Second, reach out to your peers. As discussed, young women are more likely to be exposed to negative content about body image, so you are definitely not alone. It can be reassuring to remember that you are not being targeted personally – unfortunately, these patterns are part of how recommendation algorithms work, and until social media companies are held more accountable for their actions, they are hard to avoid.
You can also take small, practical steps to protect yourself:
- Use the “Not Interested” or similar options to hide harmful diet or body-related content.
- Scroll past triggering posts without engaging with them, and avoid liking or commenting on similar content.
- If you still find the content overwhelming, consider taking a break from social media altogether.
If you’re worried that your eating habits are becoming unhealthy, visit Beat Eating Disorders – Do I Have an Eating Disorder? for information and support.
If you feel you might be struggling with an eating disorder or disordered eating, please don’t hesitate to reach out to your loved ones and your GP. You can also follow this helpful guide on how to talk to your GP and those close to you: Beat Eating Disorders – I’m Here for the First Time.
Remember, you’re not alone, and support is available.


“How can I explain to someone that even though a ‘deepfake’ is not really me, it still feels just as harmful?”
It’s completely okay to feel hurt or upset by a deepfake – even if you know it’s not real. Your feelings are valid, and this kind of digital manipulation can have a real emotional impact.
First, reassure the person you’re talking to (and yourself) that it’s normal to feel upset or anxious about deepfakes, even when you know they’re not real.
Explain:
- A deepfake is a digitally altered image or video made by AI to look like me, but it isn’t actually me.
- Even though it’s fake, it causes real emotional pain because it feels like someone is using my image without permission and misrepresenting who I am.
- The harm isn’t just about the false content but about the anxiety, fear, and loss of control it creates.
- There’s worry about how others might perceive me and potential damage to my reputation.
Suggest they (or you together) read our writing on DeepFakes & Intimate Image Abuse to better understand what deepfakes are and why they’re so harmful.
Emphasise:
- The deepfake itself isn’t real, but the emotional distress and anxiety it causes are very real.
It’s okay – and normal – to feel this way.
“How can I admit or explain that I don’t understand AI?”
Firstly, it’s completely normal to not understand AI or feel wary about it, and know that you’re in the right place to learn more!
The use of AI has increased dramatically in the last few years, along with talk about it in the media, where the term is often sensationalised without much factual explanation of what it really is. To begin with, it might be helpful to check out our myth-busting section. You can also learn more about the Scottish AI Alliance’s AI literacy course in our help and resources section.
It is best to be honest and admit the gaps in your knowledge, using it as a learning experience.
If you want to bring it up in conversation, you could start by saying you’ve seen a lot about AI in the media recently, even reference an article or interview you’ve heard. Say you are intrigued to know more about what it is and where it is used in our daily lives.
Or, if you’re speaking to someone more knowledgeable than you, highlight that it’s something you’re really interested in and ask if they can tell you more. When people have spent a lot of time researching something, it’s likely they will be keen to share what they’ve learnt with others.
Use conversations about AI to learn more and form your own opinions. Admitting you don’t know about AI is not a weakness. AI is evolving every day, so no one knows everything about it, and no one ever will. If you are keen to know more before diving into your first conversation, take a look through the different sections of this hub to develop your knowledge further!


“I’m a young person who’s experiencing intimate image abuse. How can I talk to a trusted adult?”
Firstly, you’re in the right place. We’re sorry that this has happened, but it’s important to remember that you’ve done nothing wrong, and that there’s support available to you.
Who should I talk to?
Asking for help is a brilliant first step, no matter what age you are.
Here are a few people in your life you might want to consider talking to:
- A parent, carer or someone else in your family
- A teacher, sports coach, or a member of staff at school
- A religious leader, like a priest, rabbi or imam.
If you’re over 18, a friend can be a good person to approach to ensure you’re not dealing with this alone.
What should I say?
Ask your trusted person if you can speak to them privately about something important. You might want to say:
- “I need some advice on something I’m stressed about.”
- “I want to tell you something, but I’m really nervous.”
- “Can I talk to you in private please? It’s important.”
You might find it easier to write everything down in a letter or a text.
If you think that they won’t understand how an image of you has been created using AI, you can share our resource on Deepfakes and Intimate Image Abuse with them.
What should I do next?
It’s really brave and important to talk to someone you trust. There’s lots of support you can access to help process what’s happened, remove the online content, and report the person who created it.
You can find a few places to start on our signposting page.
Quick links to support:
Support services you can access online today.
For under 18s: Report Remove
- Who is it for? Anyone under 18.
- What do they offer? Easy reporting of images, access to a Childline counsellor, and updates on the progress of your report.
- Find out more and get started on the Childline website.
For over 18s: The Revenge Porn Helpline
- Who is it for? Anyone over 18 living in the UK.
- What do they offer? Non-judgemental advice, reporting and removal of content, social media advice, reporting to the police, and finding free legal advice.
- When are they open? Their helpline is available Monday–Friday, 10am–4pm. Get in touch via Reiya, phone or email.
“How do I start a conversation about misogyny and AI in the workplace?”
AI is already being used in hiring, management, and productivity tools – and too often these systems amplify existing inequalities. If you’re noticing these issues or feeling uneasy about how AI is being used in your workplace, you’re not alone – and your concerns are valid.
Simply starting the conversation can be daunting, but you’re not being ‘difficult’ for raising the issue – you’re being responsible.
Bring in real-world examples
“Did you know Amazon had to scrap an AI hiring tool because it downgraded CVs with the word ‘women’s’ in them?”
Reuters: Amazon scrapped secret AI recruiting tool that showed bias
Start with a simple question
Raising your concern doesn’t have to mean making a big statement – it can start with a question in a meeting, Slack thread, or 1:1 conversation:
- “How are we checking for bias in the AI tools we use?”
- “Have we reviewed our use of AI tools from an equity perspective?”
- “Do we have clear AI policies – especially when it comes to ethics, transparency, and bias mitigation?”
Make a suggestion
If it feels safe to do so, you can suggest a small next step:
- “Could we run a session on AI and bias during lunch one day?”
- “Would it help to do a quick review of how we’re using AI in our workflows?”
Some helpful resources
You’re not expected to be the expert – you’re just starting the conversation. Here are some resources you can share or refer to:
Equal AI: inclusive practices for ethical AI – practical guidance on challenging bias in AI development and deployment.
I’d blush if I could: closing gender divides in digital skills through education
Mitigating AI gender bias risks in the workplace
If you’re worried about not being taken seriously
That fear is valid. Women are often talked over, dismissed, or sidelined – especially when speaking up about tech, ethics, or discrimination.
That’s why building small conversations over time can be powerful. Bring in facts, invite meaningful discussion, and lean on trusted colleagues when you can.
Young Women Work – A campaign to tackle workplace gender inequality and improve workplace progression for young women in Scotland.
