As the world marks the 16 Days of Activism against Gender-Based Violence from November 25 to December 10, conversations this year have shifted sharply toward the growing threat of digital violence—particularly the rise of Technology-Facilitated Gender-Based Violence (TFGBV). Women and girls now face new forms of abuse ranging from cyberstalking to non-consensual sharing of private images, with artificial intelligence creating an even more dangerous frontier.
One of the most troubling trends is the weaponization of AI to create sexually explicit deepfakes. According to UN Women, 90–95% of all deepfake images circulating online are sexual images of women, a statistic that underscores how technology continues to amplify gendered abuse.
This global problem became highly visible in early 2024 when sexually explicit AI-generated images of American singer Taylor Swift spread rapidly across social platforms, viewed more than 47 million times on X before mass reporting finally forced their removal.
But behind the statistics and viral incidents are real women—those whose reputations, mental health, and livelihoods are shattered by misuse of AI. In Kenya, 22-year-old Stella Sanaipei from Kajiado County is one of the many victims fighting for justice.
A Life Interrupted by Deepfakes
Before the abuse began, Sanaipei enjoyed posting dance videos on TikTok and building her modeling portfolio. But everything changed when manipulated nude images and videos—using her real photographs—were circulated on Telegram, TikTok, and X in December 2023.
“The pictures and videos I had posted were taken and manipulated to appear as nude pictures,” she says. “One was a passport-size photo I took for my modelling portfolio.”
The attacks escalated when a TikTok account impersonating her was created after she deactivated her own. A now-deleted X account called “City-digest” also aggressively promoted Telegram channels trafficking her deepfakes.
Citizen Digital Investigations Reveal AI Bot Networks
A Citizen Digital review identified several Telegram channels distributing Sanaipei’s deepfakes. One of the channels openly promoted an advanced AI bot named Swapvideo AI, described as offering “automatic clothing removal and face changing” with “one-click operation.”
Some channels even monetized the exploitation, instructing users to pay for access to manipulated videos and images.
Sanaipei’s own investigations traced some of the activity to a Telegram administrator based in Kisii and to a person she believes is involved, whom she knows only as “Tony.”
Seeking Justice in a System Playing Catch-Up
In March 2024, she reported the case to the Directorate of Criminal Investigations (DCI). But the first officer she met dismissed her hopes, telling her, “Most of these cases go unresolved, you just move on.”
Undeterred, she reported the matter again at DCI Ongata Rongai and was referred to the DCI headquarters in Kiambu. It was there she discovered that a friend was facing the same ordeal—and that she too had reported her case at DCI Ngong Road.
During the height of the harassment, a TikTok video implying her nude photos were circulating gained over 1 million views. At the time, she was an intern and had to explain everything to her supervisors, who fortunately supported her. Her friend going through similar abuse was not as lucky—she lost her job.
“How malicious can you be? What have I done to you for you to do this to me?” she asks.
“Every time it happens, I remind myself that it is not me—but you cannot explain to everyone that it is not me.”
Her emotional and mental wellbeing has suffered deeply over the past two years. The deepfakes resurfaced in March and September 2024, prolonging her trauma. Through it all, she credits her parents for holding her up.
“I am lucky to have parents ready to support me,” she says.
A Call for Accountability and Stronger Protections
While global celebrities rely on mass reporting to force tech companies to take down deepfakes, Sanaipei hopes that Kenya’s legal system can deliver justice for victims who do not have millions of followers to defend them online.
As the 16 Days of Activism spotlight TFGBV, experts and advocates are calling for:
stronger government regulations to prevent AI-enabled abuse
tech companies to build safer platforms
effective mechanisms for identifying and removing deepfake content
public awareness, as TFGBV often leads to real-world harassment, sexual violence, and devastating socio-economic consequences.
UN Women notes that digital spaces enable perpetrators to act anonymously, quickly, and widely—while victims struggle to find support in legal systems that are still adapting to rapidly evolving technology.