Deepfakes
Deepfakes are an increasingly concerning aspect of technology-enabled violence against women and girls. Produced using artificial intelligence, these manipulated images and videos can replace a person’s face or voice with another, often without their consent, creating the impression that they said or did something they did not. Such misuse disproportionately affects women, with deepfakes frequently deployed in exploitative, abusive, and explicit ways. For instance, a 2025 Oxford Internet Institute report documented nearly 35,000 publicly downloadable deepfake models, which had been downloaded almost 15 million times in total since late 2022.
The authors found that 96% of these models targeted identifiable women to produce non-consensual intimate imagery, and that many carried search tags such as ‘porn,’ ‘sexy,’ or ‘nude,’ often in breach of platform rules as well as UK legal standards.
Youth exposure and the normalisation of misogyny
According to the Girls’ Attitudes Survey, deepfakes have become a troubling presence in teenagers’ social environment. Over half of 13-year-olds (58%) reported awareness of deepfakes, rising to 62% across the broader 13-18 age group. Alarmingly, 26% of respondents said they had encountered sexualised deepfakes featuring celebrities, peers, teachers, or even themselves. These encounters contribute to the normalisation of misogynistic content and reinforce harmful gender stereotypes that may translate into real-world violence.
AI-powered harassment
Artificial intelligence technologies are increasingly implicated in the amplification of online abuse, particularly against women, girls, and racialised individuals, posing serious risks to mental and emotional wellbeing. Recent research shows that individuals from certain racial and gender groups are targeted more frequently, and that depression, disciplinary problems, and low self-esteem are associated with greater cyberbullying severity (Prama et al., 2025). The study’s explainability analysis ranks race and gender among the most influential predictors of severity, and its labelling framework explicitly treats bullying, depression, disciplinary history, and heavy internet use as vulnerability factors (Prama et al., 2025).
Companion chatbots such as Replika have also demonstrated troubling behaviours, for instance persistently crossing user boundaries with unsolicited sexual advances, leaving users who sought non-romantic interaction feeling violated and distrustful.
Economic insecurity
Women face distinct economic risks in the age of AI. A 2024 report by the UNC Kenan-Flagler Business School highlights a pronounced gender disparity in exposure to generative AI automation, that is, in occupations whose tasks could be performed partially or fully by AI tools. Specifically, it finds that 8 in 10 working women in the U.S., approximately 58.9 million people, are in occupations expected to be highly exposed to AI automation, concentrated primarily in knowledge-based roles. This level of exposure poses significant risks of job displacement.
More recently, the International Labour Organization reports that in high-income countries, 9.6% of female employment falls into the highest risk category for automation, compared to just 3.5% of male employment. Economic security is not just a financial issue but has broader social consequences. For instance, women with reliable incomes are less likely to remain financially dependent on abusive partners, and more likely to exit dangerous relationships. Automation-induced disruption may therefore deepen systemic vulnerability unless it is tackled through intersectional policy interventions.
Conclusion
These AI-driven harms reflect existing social inequalities, reinforcing patriarchal and discriminatory logics through automated systems. For victims, and especially those subject to multiple forms of discrimination, the consequences can be deeply traumatic, amplifying systemic exclusion across both digital and physical spaces. Gendered economic insecurity compounds this vulnerability: as automation disrupts female-dominated sectors and drives a rise in precarious employment, many women face reduced financial autonomy and limited access to recourse.
To build an inclusive and trustworthy AI ecosystem, consideration of gender must be treated as a foundational principle, not an afterthought. Without this, AI risks deepening existing inequalities and falling short of its potential to support a more equitable and democratic society.
Dr Anne Peterscheck is a researcher at the University of St Andrews.
Read more
Albahar, M., & Almalki, J. (2019). Deepfakes: Threats and countermeasures systematic review. Journal of Theoretical and Applied Information Technology, 97(22), 3242–3250.
Berg, J., Kamiński, K., Konopczyński, F., Ładna, A., Rosłaniec, K., & Troszyński, M. (2025). Generative AI and jobs: A refined global index of occupational exposure (Working Paper No. 140). International Labour Organization. https://www.ilo.org/publications/generative-ai-and-jobs-refined-global-index-occupational-exposure
Cadore, C. (2020). Why women’s economic rights are the key to reducing violence against women (and how experiences of informal workers tell us all we need to know). Womankind Worldwide. https://www.womankind.org.uk/why-womens-economic-rights-are-the-key-to-reducing-violence-against-women/
Girlguiding. (2025). One in four teens have seen pornographic deepfakes online. https://www.girlguiding.org.uk/about-us/press-releases/one-in-four-teens-have-seen-pornographic-deepfakes-online
Hawkins, W., Russell, C., & Mittelstadt, B. (2025, April 17). Dramatic rise in publicly downloadable deepfake image generators. Oxford Internet Institute. https://doi.org/10.48550/arXiv.2505.03859
Laffier, J., & Rehman, A. (2023). Deepfakes and harm to women. Journal of Digital Life and Learning, 3(1), 1–21. https://doi.org/10.51357/jdll.v3i1.218
McNeilly, M., & Smith, P. (2023). Will generative AI disproportionately affect the jobs of women? Kenan Insights. UNC Kenan-Flagler Business School. https://kenaninstitute.unc.edu/kenan-insight/will-generative-ai-disproportionately-affect-the-jobs-of-women
Namvarpour, M., Pauwels, H., & Razi, A. (2025). AI-induced sexual harassment: Investigating contextual characteristics and user reactions of sexual harassment by a companion chatbot. arXiv. https://doi.org/10.48550/arXiv.2504.04299
Prama, T. T., Amrin, J. F., Anwar, M. M., & Sarker, I. H. (2025). AI enabled user-specific cyberbullying severity detection with explainability. arXiv. https://doi.org/10.48550/arXiv.2503.10650
Copyright Information
As part of CREST’s commitment to open access research, this text is available under a Creative Commons BY-NC-SA 4.0 licence. Please refer to our Copyright page for full details.
IMAGE CREDITS: Copyright ©2025 A.Armistead / CREST (CC BY-SA 4.0), and Adobe Stock





