The rise of Generative AI (GenAI) has brought transformative possibilities but also significant risks, particularly for women, girls, and marginalized communities across cultural contexts. This workshop, co-led by members of the Inclusive AI Lab UU team, focuses on tackling a rising harmful manifestation of these risks: ‘deepfake pornography’ – AI-generated non-consensual intimate content. Such content not only infringes on privacy but also perpetuates gender-based violence, victim-blaming, and online harassment, exacerbated by socio-cultural stigmatization and weak regulatory frameworks. This workshop will be of interest to UX professionals, policymakers, tech experts, designers, activists, academics, and students engaged in globally-oriented, cross-cultural, and gender-centric approaches to creating more inclusive digital ecosystems that bridge research, policy, and design. The workshop aims to engage participants in mapping the challenges, stakeholders, and processes at play in tackling this problem. We aim to jointly contribute to the design of a Gender AI Safety framework that enables the safety, security, and freedom of those most impacted. Through creative simulations, we will organize our breakout exercises along the lines of:

Data to Design: Based on emerging data, participants will co-create redressal systems and design interfaces prioritizing user safety, equity, and dignity through storyboarding and journey mapping.

Data to Policy: Building on case studies, participants will formulate organizational policies for GenAI product development, emphasizing ethical and equitable practices.

Data to Literacy: The session will ideate campaigns to raise awareness about deepfake pornography, equipping communities with the digital literacy needed to mitigate harm and empower gender minorities to reclaim their participation online.

The goal is to create actionable insights to address GenAI-enabled harms and ensure women’s safety and well-being online across contexts.

About the organisers:
Payal Arora is Professor of Inclusive AI Cultures at Utrecht University and co-founder of FemLab and the Inclusive AI Lab. She is the author of award-winning books including ‘The Next Billion Users’ with Harvard University Press and ‘From Pessimism to Promise: Lessons from the Global South on Designing Inclusive Tech’ with MIT Press. Forbes called her the ‘next billion champion’ and the ‘right kind of person to reform tech.’

Kiran Vinod Bhatia is a digital anthropologist and UX professional working at the intersection of marginalization and digital media. Her new book ‘Children’s Digital Experiences in Indian Slums’ is out with Amsterdam University Press. She leads the Localizing Responsible AI cluster at Inclusive AI Lab, Utrecht University.

Marta Zarzycka is a senior User Experience researcher at Google, working across complex domains such as trust & safety, counter-abuse, risk and harm taxonomies, and egregious harms protections (focusing on non-consensual intimate imagery).