Guide to AI: Body image

AI AND GENDER
Body image
TL;DR
AI’s influence on body image is a complex, multi-layered problem, shaped by its role in society. It acts as a tool that can amplify harmful ideas, particularly for young women.
AI-driven social media algorithms can create a “filter bubble” of content that reinforces negative body standards. What starts as casual interest can quickly become an overwhelming stream of intense diet and fitness videos.
In addition, AI editing tools and generative AI have removed the barriers to creating digitally altered, “perfect” images. Because these tools amplify societal biases baked into their training data, they promote a narrow and often unattainable beauty ideal.
Algorithms
There’s no denying that AI has significantly shaped social media algorithms, which in turn have deeply influenced the way users engage with these platforms. But how exactly do social media companies use AI?
Jung et al. (2022) explore how AI-driven algorithms operate in the background, continuously learning about users’ preferences and interests by analysing their behaviour and interactions on the platform. Based on these insights, the algorithms curate personalised content tailored to each individual user. AI also enhances content creation through features such as facial recognition filters and intelligent audio editing tools. Additionally, AI-powered recommendations assist users in building and expanding their social networks. In this way, human agency and machine agency work together to co-create the user experience on AI-integrated social media platforms. Algorithms are continuously developed to meet the evolving demands of social media, ensuring that content delivery and ad targeting are more relevant and effective. However, the use of AI in this context raises ethical concerns, such as the potential for bias in algorithmic decisions and the manipulation of user behaviour.
With this in mind, one could argue that AI-powered social media offers many benefits. A curated experience can feel like a luxury – content tailored just for you. If you like dogs, you’re shown more dogs. Spend a little longer watching a cooking video, and you’ll see more recipes. Linger on a video about healthy eating, and suddenly your feed fills with similar content. Stay a second too long on a clip about weight loss, and soon you’re seeing intense diet routines, before-and-after body transformations, and extreme fitness challenges. The algorithm doesn’t know when to stop – it simply feeds what holds your attention. What starts as harmless curiosity can quickly spiral into a stream of content that subtly reinforces unrealistic body standards and fuels insecurities – insecurities that need no further fuelling.
The dramatic rise of TikTok in 2020 encouraged other social media platforms to adopt similar AI-fuelled strategies. TikTok is known for being different from other platforms because users put their viewing experience entirely in the hands of the algorithm – which is notorious for “being inside a user’s head” – by letting the app surface content for them, as opposed to platforms where a user must follow other users to see content (Gabor, 2023). This allows the algorithm to analyse users’ most vulnerable thoughts and serve them content in that area, with the aim of finding what is addictive enough to keep them on the app and increase their watch time.
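To make the mechanism concrete, here is a deliberately toy sketch of an engagement-driven feed. It is not TikTok’s actual system – the topics, scores, and update rule are all invented for illustration – but it shows the feedback loop described above: the only signal is watch time, and nothing in the loop asks whether the content is good for you.

```python
import random

# A minimal, invented sketch of an engagement-driven feed.
# Real recommender systems are vastly more complex; this only
# illustrates the loop: watch time in, more of the same out.

TOPICS = ["dogs", "cooking", "healthy eating", "weight loss"]

def recommend(scores, explore_rate=0.1):
    """Mostly serve the topic with the highest engagement score;
    occasionally explore something new."""
    if random.random() < explore_rate:
        return random.choice(TOPICS)
    return max(scores, key=scores.get)

def update(scores, topic, watch_seconds):
    """The only signal is attention: the longer you linger,
    the more that topic is boosted."""
    scores[topic] += watch_seconds

scores = {t: 1.0 for t in TOPICS}
update(scores, "healthy eating", 30)   # one long watch...
for _ in range(10):
    print(recommend(scores))           # ...and the feed tilts that way
```

One long watch of a single “healthy eating” clip is enough to tilt almost every subsequent recommendation in that direction.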

This isn’t a business model designed for the individual – it’s built for the masses. At scale, a slight trend towards extreme content may seem negligible, almost invisible. But for an individual, the consequences can be significant.
If you feel like social media is affecting how you view your body, you’re not alone. And hopefully, this gives some insight into why your algorithm might be recommending certain types of content.
In the least extreme cases, if you’re a young woman, you’ll likely be placed into a specific algorithmic category. This categorisation draws from the content you watch, the accounts you follow, and even the information you provide when you sign up – like your age. From there, the platform will begin to recommend content that’s popular among other users in your demographic.
This is where it becomes important to understand the concept of normative discontent – the idea that dissatisfaction with one’s body has become a standard, especially among women. Even if you personally feel neutral or positive about your body, the average person in your “box” may not. And so, if diet culture content is frequently engaged with by other young women – which it often is – the algorithm will start feeding it to you too.
Some women may actively seek out this content. Others may not. But the system doesn’t distinguish – it only responds to patterns. And what starts as subtle can quickly shift into a flood of content that reinforces unhealthy body ideals.
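The “box” logic can be sketched the same way. The toy example below – with invented cohorts and viewing histories – shows how a brand-new user with no history of her own can still be served diet content, simply because it is popular with others in her demographic bucket.

```python
from collections import Counter

# Illustrative only: cohorts, histories, and topics are invented.
# What other users in the same demographic bucket engaged with:
cohort_history = {
    ("female", "17-19"): ["fashion", "diet culture", "fitness", "diet culture"],
    ("male", "17-19"): ["gaming", "football", "fitness"],
}

def cold_start_feed(gender, age_band, n=2):
    """A brand-new user has no history, so the system falls back on
    what is popular in their demographic 'box' - which is how diet
    content can reach someone who never asked for it."""
    history = cohort_history.get((gender, age_band), [])
    return [topic for topic, _ in Counter(history).most_common(n)]

print(cold_start_feed("female", "17-19"))  # ['diet culture', 'fashion']
```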
Societal expectations
While AI might seem like the furthest thing from human autonomy, it’s crucial to remember that it is built entirely on human data. In fact, AI may be one of the systems most deeply intertwined with human experience. What humans say, think, or do – AI reflects back to us.
Generative AI and AI editing
Take, for example, the rise of AI-powered image editing. Whether it’s an in-app filter on your favourite social media platform or something more deliberate like Photoshop or FaceTune, the technology behind these tools is driven by AI models trained on millions of images. These images are typically paired with descriptive labels and fed into classification algorithms to teach the system what to “recognise” and “enhance.”
The problem is: much of this training data comes from what is publicly available online. And the internet, as we know it, has largely been shaped by Western, mainstream culture. This creates a biased dataset that fails to represent the full diversity of global identities – across race, culture, body type, and more.
For example, if you search “beautiful woman” online, the results tend to converge around a narrow set of features: lighter skin tones, slim body types, Eurocentric facial structures. So when you use AI beauty filters, what you see is not an objective ideal – it’s a reflection of what Western digital spaces have reinforced as “beautiful.”
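A toy example makes the point about skewed data concrete. Suppose the images labelled “beautiful” in a training set over-represent one set of features (the counts below are invented); any system fitted to that data inherits the skew, because it has no other notion of beauty to draw on.

```python
from collections import Counter

# Invented counts, purely for illustration of dataset bias.
training_data = (
    [{"skin": "light", "body": "slim"}] * 90       # over-represented
    + [{"skin": "dark", "body": "plus-size"}] * 10  # under-represented
)

def learned_ideal(data):
    """Stand-in for training: report the most common value of each
    feature. A real model is far more sophisticated, but the principle
    holds - the majority pattern becomes the 'default'."""
    return {
        key: Counter(d[key] for d in data).most_common(1)[0][0]
        for key in data[0]
    }

print(learned_ideal(training_data))  # {'skin': 'light', 'body': 'slim'}
```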
These categories weren’t handpicked image by image by tech companies. They are an emergent property of the biased data AI has been trained on. In this way, AI doesn’t invent beauty standards – it amplifies the ones society already projects, often without scrutiny.
Some tech companies – such as Adobe, or OpenAI with ChatGPT – have begun implementing positive reinforcement strategies in an effort to reduce bias in their AI systems. Positive reinforcement, in this context, refers to the practice of rewarding or prioritising certain types of data or outputs that promote fairness, inclusivity, or diversity, while downplaying those that reinforce harmful stereotypes. While well-intentioned, these interventions come with their own set of risks and controversies. For one, defining what is “positive” or “fair” is inherently subjective and often shaped by the cultural or political lens of those designing the systems.
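One common mitigation can be sketched as re-weighting candidate outputs so under-represented attributes are boosted before sampling. The sketch below uses invented attributes and weights and is not how Adobe or OpenAI actually implement their systems – but note that someone has to choose those weights, which is exactly the subjectivity problem just described.

```python
import random

# A generic, invented sketch of output re-weighting for fairness.
candidates = [
    {"image": "A", "skin": "light", "score": 0.9},
    {"image": "B", "skin": "dark",  "score": 0.9},
]

# Boost attributes the raw model under-produces. Who sets these
# numbers, and on what grounds? That choice is the controversy.
fairness_boost = {"light": 1.0, "dark": 1.5}

def sample(cands):
    """Sample one candidate, weighted by model score times the
    hand-chosen fairness boost."""
    weights = [c["score"] * fairness_boost[c["skin"]] for c in cands]
    return random.choices(cands, weights=weights, k=1)[0]

print(sample(candidates)["image"])
```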
AI editing models often use generative AI to create images that reflect a kind of social imaginary. As Sobey (2025) discusses, these systems are trained on the existing visual content of the internet, absorbing and reproducing dominant social forces in their outputs. As a result, the default image of “normate” personhood is thin – often singularly and unattainably so. This doesn’t just pose harm to fat people, but to everyone, especially in light of studies on how idealised social media imagery contributes to perfectionism, eating disorders, and distorted body image.
In writing this, we conducted research using generative AI, asking it to produce the so-called “perfect” body type. The model refused to do so directly (a form of the positive reinforcement described above), but when asked indirect or adjacent questions, it readily generated images of a thin, blonde, Eurocentric woman or a chiselled, brown-haired man.
We know that many women edit their photos – in fact, a study by City, University of London (Gill, 2021) found that 90% of women surveyed said they would edit images to reshape their nose or jawline, brighten their teeth, or alter their waist before posting on social media. The key word here is “would”, which points to a hypothetical intention that, until recently, was limited by access and ability. Photoshop, for instance, requires both a paid licence and technical skill. But with the rise of generative AI editing tools, that “would” rapidly becomes a “can”. The barriers are gone – now it’s easy, fast, and often free, making digital self-alteration more accessible than ever.
The strong relationship between negative body image and photo editing is not new. Dr Jasmine Fardouly, a body image expert from the University of New South Wales, explains that a study she conducted suggests that the more unattainable the beauty standards young people encounter online, the greater the harm they cause (Boseley, 2022).
“It’s promoting a beauty ideal that’s not attainable for you,” she says. “It’s not attainable for anyone, really, because nobody looks like that. Everybody’s faces are being made to look the exact same way.”
Replacing existing beauty standards
Not only do women find themselves constantly comparing their bodies to those of friends, family, celebrities, and social media influencers – struggling to distinguish what is real and what is not – but now there’s a new, somewhat daunting addition to the mix: the AI “model.” And not the algorithmic kind.
One recent example is the debut of the first AI model featured in Vogue magazine, as part of a Guess campaign. While this wasn’t an editorial choice made by Vogue itself, it nonetheless marks a shift – one that signals the potential decline of real people in the fashion space (BBC News, 2025). AI models are cheaper, more reliable, and most importantly, lack autonomy. Unlike human models, they don’t require rest, payment, or consent.
The AI model in the Guess ad isn’t particularly subtle – but that’s only for now. Generative AI is rapidly advancing, and it’s only a matter of time before it becomes nearly impossible to tell at first glance whether a model is real or not.
This adds yet another layer to the body image issues the fashion industry has perpetuated for decades. These AI bodies have no flesh, no bones, no organs – and no physical limitations. There’s only so much you can do to alter a real human body. But an AI-generated figure? It has no boundaries. And that’s a dangerous precedent.
The real-world knock-on effect
When we talk about the “normative discontent” many women feel about their bodies, it’s important to distinguish between disordered eating and an eating disorder. Both are serious issues that shouldn’t be dismissed. While they’re related – since both involve problematic eating patterns – there are important differences between them.
The definitions below are from Dennis (2021):
Disordered eating covers a wide range of unhealthy eating behaviours and distorted attitudes toward food, weight, body shape, and appearance. These can include dieting, skipping meals, fasting, restricting food intake, cutting out entire food groups, binge eating, overusing diuretics, laxatives, or weight-loss medications, and engaging in compensatory behaviours like purging or excessive exercise. The severity can vary, but disordered eating doesn’t meet the full frequency, duration, or psychological criteria for a diagnosed eating disorder.
Eating disorders, on the other hand, are complex mental illnesses involving persistent disturbances in eating behaviours along with significant psychological distress. The DSM-5-TR outlines specific diagnostic criteria for the major eating disorders, including anorexia nervosa (AN), bulimia nervosa (BN), binge eating disorder (BED), avoidant/restrictive food intake disorder (ARFID), and other specified feeding and eating disorders (OSFED).
This distinction matters because disordered eating is strikingly common among young women. If dissatisfaction with your body has begun to influence your eating habits in unhealthy ways, you are far from alone. In 2023, official UK statistics (Newlove-Delgado et al., 2023) showed that among 17–19-year-olds, 77.5% of young women and 42.3% of young men reported eating problems. For those aged 20–25, the rates were 72.3% for young women and 43.0% for young men.
While “eating disorder” might sound clinical or extreme, it’s important to know that harmful eating patterns exist on a spectrum – and even if you don’t meet the criteria for a diagnosis, your experience is valid. Most young women share in this struggle. The good news is that with the right support and treatment, it’s possible to change your eating habits – even if you’ve been stuck in disordered eating or dieting cycles for years.
It’s no secret that videos glamorising disordered eating and body image concerns circulate freely on TikTok and other social media platforms. While this might be casually acknowledged among peers, it’s only recently that, on an academic level, the rise of highly personalised, AI-driven social media has been directly linked to increased disordered eating patterns.
For example, a study by Blackburn and Hogg (2024) found that participants categorised as high and extreme daily TikTok users reported significantly higher levels of disordered eating behaviours compared to those with low or moderate usage. Another study by Griffiths et al. (2024) found that algorithms for users with eating disorders delivered more appearance-oriented content (+146%), dieting content (+335%), exercise content (+142%), and toxic eating disorder content (+4,343%). Notably, stronger algorithmic biases toward these videos were associated with more severe eating disorder symptoms. While users with eating disorders were only slightly more likely to “like” these types of videos, the algorithms themselves were far more likely to push this content to them in the first place. This suggests that TikTok’s content personalisation processes – which operate beyond conscious user actions like ‘liking’ – may actually worsen symptoms for vulnerable users.
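Percentages that large can be hard to picture, so here is the arithmetic spelled out. The baseline counts below are invented purely for illustration – the study reports relative increases, not these raw numbers.

```python
# Worked arithmetic for a percentage increase, with invented baselines.

def increased(baseline, pct_increase):
    """Return the new amount after a given percentage increase."""
    return baseline * (1 + pct_increase / 100)

# If a control user saw 1 toxic eating-disorder video per 1,000 videos,
# a +4,343% bias would mean a vulnerable user sees roughly 44:
print(increased(1, 4343))   # 44.43
# Appearance-oriented content at +146%: 100 videos becomes 246.
print(increased(100, 146))  # 246.0
```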
Although correlation doesn’t automatically mean causation, the statistics are alarming. In the UK, the prevalence of eating disorders among 17–19-year-olds jumped from 0.8% in 2017 to 12.5% in 2023 (Newlove-Delgado et al., 2023). For young women, the rate rose from 1.6% to 20.8%, and for young men, from 0.0% to 5.1%. While part of this increase may be due to better mental health awareness and diagnosis within the NHS, it’s reasonable to suspect that the introduction of highly personalised #ForYou pages has played some role in this striking rise.
While eating disorders affect all genders, the data is clear: they are more prevalent in younger people, and young women are far more likely to be affected than young men. This makes it not just a mental health issue, but a gendered one – another way in which existing systems can perpetuate harm, with serious consequences, particularly for young women.
When we talk about the “cycle of harm” in AI, we can see how even systems intended to protect people can become part of the problem. A striking example is the National Eating Disorders Association (NEDA) removing its AI chatbot, Tessa, from its help hotline after concerns arose that it was giving harmful advice – encouraging weight loss, calorie counting, and body fat measurement – all of which can exacerbate eating disorder symptoms (Psychiatrist.com, 2023).
Remaining positive
Amidst the negativity, there are glimmers of positivity. AI is not the root of all evil – it’s a tool that reflects and shapes the human experience, for better or worse. And humans are capable of good. AI holds real promise in preventing and treating eating disorders and body image issues. In fact, there’s a whole field of computer science – Human-Computer Interaction (HCI) – dedicated to studying how people interact with technology and how it affects them. Clinical research into chatbots is growing quickly because of their potential. They can offer accessible treatment support at little or no cost, ensure that more people have access to evidence-based help, and act as a first-line intervention to connect individuals with the right services.
One example of this potential is KIT, a conversational agent designed to support people struggling with body image and eating concerns, as well as their loved ones (such as parents, partners, and friends). KIT’s initial conversational scripts and decision tree were developed in collaboration with the Butterfly Foundation – an Australian national charity supporting those affected by eating disorders (Graham, Kosmas, & Massion, 2023).
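To give a sense of how such an agent works, here is a minimal sketch of a scripted, decision-tree chatbot. KIT’s actual scripts are not reproduced here – the structure, wording, and node names below are invented for illustration.

```python
# A minimal, invented sketch of a decision-tree conversational agent.
# Every prompt and branch is written in advance; the bot never improvises.

decision_tree = {
    "start": {
        "prompt": "Hi, I'm here to help. Are you asking for yourself, "
                  "or for someone you care about?",
        "options": {"myself": "self_support", "someone else": "carer_support"},
    },
    "self_support": {
        "prompt": "Thank you for reaching out. Would you like coping "
                  "strategies, or help finding professional support?",
        "options": {},
    },
    "carer_support": {
        "prompt": "Supporting someone can be hard. Here are some resources "
                  "for parents, partners and friends.",
        "options": {},
    },
}

def show(node="start"):
    """Print the current prompt and the branches a user can take."""
    step = decision_tree[node]
    print(step["prompt"])
    for answer, next_node in step["options"].items():
        print(f"  ({answer!r} -> {next_node})")

show()
```

A design advantage of a fully scripted tree is that it can only ever say what its authors wrote into it – it cannot improvise advice the way a generative chatbot can.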
If anything we’ve discussed here feels familiar, know this: you are not stuck in that place. Start with small steps – click “not interested” on harmful diet content, scroll past it without engaging, and avoid liking similar posts. If that’s not enough, take a break. Social media will still be there when you return, and while avoidance isn’t a long-term solution, until platforms are held accountable for harmful content, stepping back entirely can be the healthiest choice.
Most importantly, remember you are not alone. Talk to friends, family, or peers – you may find the isolation eases once you share your experience. And if you need more help, speak to your GP or explore resources from the eating disorder charity Beat.
References
Grogan, S. (2006). Body image and health: Contemporary perspectives. Journal of Health Psychology, 11(4), 523–530. https://journals.sagepub.com/doi/10.1177/1359105306065013
Jung, H., Lou, C., & Kang, H. (2022). AI agency vs. human agency: Understanding human-AI interactions on TikTok and their implications for user engagement. Journal of Computer-Mediated Communication, 27(5), zmac014. https://academic.oup.com/jcmc/article/27/5/zmac014/6670985
Rodin, J., Silberstein, L., & Striegel-Moore, R. (1985). Women and weight: A normative discontent. In T. B. Sonderegger (Ed.), Psychology and gender: Nebraska Symposium on Motivation 1984 (Vol. 32, pp. 267–307). University of Nebraska Press.
Spitzer, B. L., Henderson, K. A., & Zivian, M. T. (1999). Gender differences in population versus media body sizes: A comparison over four decades. Sex Roles, 40(7–8), 545–565.
Gabor, J. (2023). The TikTok algorithm is good, but is it too good? Exploring the responsibility of artificial intelligence systems reinforcing harmful ideas on users. Catholic University Journal of Law and Technology, 32(1), Article 6. https://scholarship.law.edu/cgi/viewcontent.cgi?article=1149&context=jlt
Sobey, A. (2025). The thinness of GenAI: Body size in relation to the construction of the normate through GenAI image models. AI and Ethics, 5(4), 4181–4196. https://doi.org/10.1007/s43681-025-00684-x
Gill, R. (2021). Changing the Perfect Picture: Smartphones, Social Media and Appearance Pressures [Report]. Gender and Sexualities Research Centre, City, University of London.
Boseley, M. (2022, January 1). Is that really me? The ugly truth about beauty filters. The Guardian. Retrieved from https://www.theguardian.com/lifeandstyle/2022/jan/02/is-that-really-me-the-ugly-truth-about-beauty-filters
BBC News. (2025, July 26). Does this look like a real woman? AI Vogue model raises concerns about beauty standards. BBC News. Retrieved from https://www.bbc.com/news/articles/cgeqe084nn4o
Dennis, A. B. (2021). Disordered eating vs. eating disorders. National Eating Disorders Association. Retrieved from https://www.nationaleatingdisorders.org/what-is-the-difference-between-disordered-eating-and-eating-disorders/
Newlove-Delgado, T., Marcheselli, F., Williams, T., Mandalia, D., Dennes, M., McManus, S., Savic, M., Treloar, W., Croft, K., & Ford, T. (2023). Mental health of children and young people in England, 2023: Wave 4 follow-up to the 2017 survey, part 5: Eating problems and disorders [Statistical report]. NHS England. Retrieved from https://digital.nhs.uk/data-and-information/publications/statistical/mental-health-of-children-and-young-people-in-england/2023-wave-4-follow-up/part-5-eating-problems-and-disorders
Griffiths, S., Harris, E. A., Whitehead, G., Angelopoulos, F., Stone, B., Grey, W., & Dennis, S. (2024). Does TikTok contribute to eating disorders? A comparison of the TikTok algorithms belonging to individuals with eating disorders versus healthy controls. Body Image, 51, 101807. https://doi.org/10.1016/j.bodyim.2024.101807
Blackburn, M. R., & Hogg, R. C. (2024). #ForYou? The impact of pro-ana TikTok content on body image dissatisfaction and internalisation of societal beauty standards. PLOS ONE, 19(8), e0307597. https://doi.org/10.1371/journal.pone.0307597
Psychiatrist.com. (2023, June 5). NEDA suspends AI chatbot for giving harmful eating disorder advice. Retrieved from https://www.psychiatrist.com/news/neda-suspends-ai-chatbot-for-giving-harmful-eating-disorder-advice
Graham, A. K., Kosmas, J. A., & Massion, T. A. (2023). Designing digital interventions for eating disorders. Current Psychiatry Reports, 25(1), 125–138.