Deepfakes, AI Abuse and the New Frontier of Sexual Violence Against Women

By Edwin Wanjawa and Dommie Yambo-Odotte

Artificial intelligence (AI) is reshaping modern life in ways once thought impossible. But alongside its promise is an unsettling and rapidly escalating threat: the use of AI to create sexually explicit deepfakes to shame, silence, and extort women and girls. What was once the stuff of science fiction has become an everyday weapon of digital gender-based violence—evolving faster than laws, institutions, and public awareness can respond.

Across Kenya, women and girls are increasingly reporting cases where their social media photographs—sometimes as innocent as a graduation picture, a classroom selfie, or a professional headshot—are scraped, manipulated, and transformed into hyper-realistic nude images or pornographic videos. These deepfakes are then deployed to extract money, coerce sexual favours, intimidate victims, or destroy reputations. For many survivors, the outcome is the same: fear, humiliation, and a profound loss of control over their own identity.

Teenage girls in high schools are among the most vulnerable. A single manipulated image shared in a school WhatsApp group can trigger social exclusion, bullying, and emotional distress. Girls report skipping school, deleting social media accounts, or withdrawing from activities altogether. In some cases, the perpetrators are peers experimenting with freely available AI apps; in others, the harm emerges from anonymous online predators who bank on the silence of minors. The shame factor remains a powerful silencing tool.

At the university level, the pattern grows more complex. Deepfakes are increasingly weaponised by intimate partners, classmates, and online strangers. Some students are blackmailed into relationships or coerced into sharing real intimate images under the threat of releasing fabricated ones. Others find their names trending on campus platforms, with fake videos circulated alongside misogynistic commentary. The trauma is compounded by the fact that many victims are forced to prove that the images are fake—a very challenging task once the content has gone viral.

Women in public life face an even more calculated version of this violence. For female politicians, journalists, and activists, deepfakes have become part of the political battlefield. They are used to delegitimise public voices, distort public perception, and stoke moral panic. A woman running for office may find a fabricated sexual video released a week before party primaries. A journalist exposing corruption may be hit with manipulated nudes to undermine credibility. Activists engaging in governance or feminist advocacy are frequently threatened with deepfakes to deter them from speaking.

These attacks do not merely harm individuals—they corrode democracy. They deter women from participating in leadership. They distort the information environment. They reinforce existing gender inequalities by ensuring that public spaces remain hostile to women’s voices.

Yet Kenya’s legal and policy framework has not kept pace with this new frontier of harm. The Computer Misuse and Cybercrimes Act criminalises the publication of intimate images without consent, but it does not explicitly address AI-generated sexual fakes. The absence of a clear legal definition creates a loophole where perpetrators can claim that the images are not “real” and therefore fall outside existing provisions. For investigators, prosecutors, and judges, the novelty of these technologies presents a steep learning curve. Victims often encounter dismissive responses: “It’s just a fake—ignore it.” But the social, psychological, and political consequences are far from imaginary.

Globally, legislatures are moving to close these gaps. The European Union, several US states, South Korea, and Australia are developing explicit statutes targeting the creation and distribution of AI-generated sexual content. Kenya, by contrast, is at the beginning of the conversation. Without urgent reforms, the country risks becoming fertile ground for a new wave of digital exploitation.

This calls for a multi-pronged response.

First, Kenya needs a clear legal definition of deepfakes and non-consensual AI-generated sexual imagery, treated as a standalone offence, with penalties attaching to creation, possession, distribution, or the threat of dissemination.

Second, law enforcement agencies need specialised capacity to investigate and respond to AI-enabled abuse. Cybercrime units, prosecutors, and court officers must be equipped to understand the nature of deepfakes and the profound harm they cause.

Third, educational institutions must develop protocols for responding to tech-facilitated sexual violence among minors and young adults. Schools and universities cannot rely on disciplinary committees alone; this is a psychological, mental-health, reputational, and child-protection issue.

Finally, as a society, we must interrogate our own role. The casual forwarding of pornographic content—whether real or fabricated—fuels a market of humiliation. It turns victims into spectacles and reinforces the very misogyny that allows deepfakes to thrive. The digital world is not a separate space; it is an extension of our social and moral fabric.

For DTM, the message is clear: AI should not become an engine of sexual exploitation. Advocacy is urgently needed to shape new laws, educate the public, and support survivors. As technology advances, Kenya must ensure that dignity, rights, and justice advance alongside it.

A society that cannot protect its women from digital harm cannot claim to be prepared for the future.


Edwin Wanjawa is the Programmes Associate, DTM, and Dommie Yambo-Odotte is the Executive Director and Producer, DTM.