A ‘Whack-a-Mole’ Battle: US Academic Struggles to Erase AI Deepfakes from YouTube

When deepfake videos impersonating prominent American academic John Mearsheimer began spreading across YouTube, the University of Chicago professor launched a determined effort to have them removed. What followed was a months-long struggle that exposed the growing difficulty of combating AI-driven impersonation in the digital age.

As the fabricated videos multiplied, Mearsheimer and his office pressed the Google-owned platform to take them down, confronting a slow and cumbersome process that required filing individual takedown requests for each video. The experience has become a cautionary tale for academics, professionals, and public figures increasingly vulnerable to identity theft and disinformation enabled by artificial intelligence.

In recent months, Mearsheimer’s office identified 43 YouTube channels publishing AI-generated videos using his likeness. Some falsely depicted him making provocative statements on sensitive geopolitical rivalries, giving viewers the impression that the content was authentic.

One fabricated clip, which also circulated on TikTok, appeared to show Mearsheimer commenting on Japan’s strained relations with China after Prime Minister Sanae Takaichi expressed support for Taiwan in November. Another highly realistic video, complete with a Mandarin voiceover aimed at Chinese audiences, falsely portrayed him claiming that American influence in Asia was waning as China surged ahead.

“This is a terribly disturbing situation,” Mearsheimer told AFP. “These videos are fake, and they are designed to give viewers the sense that they are real.”

He warned that such content undermines open and honest discourse—particularly on platforms meant to facilitate informed debate.

Central to the problem, according to his office, was YouTube’s reporting system, which makes it difficult to flag infringing channels unless the impersonated individual’s name or image appears explicitly in a channel’s title, description, or avatar. As a result, Mearsheimer’s team was forced to submit takedown requests video by video, a process that required a dedicated staff member.

Even then, the removals failed to stop the spread. New channels continued to emerge, some evading detection by slightly altering their names—such as using spellings like “Jhon Mearsheimer.”

“The biggest problem is that they are not preventing new channels dedicated to posting AI-generated videos of me from emerging,” Mearsheimer said.

After months of what he described as a “herculean” effort, YouTube eventually shut down 41 of the 43 channels identified by his office. However, many of the videos had already attracted significant attention, and the risk of their reappearance remains.

“AI scales fabrication itself,” said Vered Horesh, an executive at AI startup Bria. “When anyone can generate a convincing image of you in seconds, the harm isn’t just the image—it’s the collapse of deniability. The burden of proof shifts to the victim.”

She added that platform safety measures should be built into products from the outset, rather than relying on takedown processes after damage has already been done.

In response, a YouTube spokesperson said the company was committed to developing AI technology responsibly and enforcing its policies consistently for all creators. In his annual letter outlining YouTube’s priorities for 2026, CEO Neal Mohan said the platform is working to reduce the spread of low-quality “AI slop” while simultaneously expanding AI tools for creators.

Mearsheimer’s ordeal highlights a rapidly evolving digital landscape, where advances in generative AI are distorting shared realities and empowering anonymous actors to impersonate trusted voices.

In recent months, AI-generated hoaxes have impersonated doctors to promote fake medical products, CEOs to push fraudulent investment schemes, and academics to manufacture opinions for geopolitical agendas.

To counter the problem, Mearsheimer said he plans to launch his own YouTube channel to help audiences distinguish genuine content from deepfakes.

A similar step was recently taken by Jeffrey Sachs, a US economist and Columbia University professor, who announced his own channel following what he described as an “extraordinary proliferation” of AI-generated videos impersonating him.

“The YouTube process is difficult to navigate and is essentially a whack-a-mole game,” Sachs told AFP. “There remains a proliferation of fakes, and it’s not simple for my office to track them down. This is a major, continuing headache.”
