Keanu Reeves spends several thousand dollars each month on a specialist takedown service that has already erased roughly 400,000 fake profiles across TikTok, Instagram and Facebook, reflecting a broader surge in deepfake celebrity scams that cost U.S. victims an estimated $637 million last year. “Unreal Keanu” and similar AI-generated accounts still attract millions of followers, and while platform rules and looming U.S. legislation aim to curb the threat, enforcement often depends on stars’ ability to finance constant monitoring.
Keanu Reeves pays a private monitoring firm “a few thousand dollars a month” to hunt down imitation profiles that use his likeness across TikTok, Instagram and Facebook and to file rapid takedown requests. The company says it has removed about 400,000 fraudulent accounts in the past year, including pages impersonating the actor’s representatives. IGN first disclosed the arrangement, and a subsequent industry report described it as part of a subscription model increasingly favored by high-profile clients.
The need is clear: TikTok’s best-known spoof feed, “Unreal Keanu,” has amassed more than eight million followers despite disclaimers that it is fictional. Media-forensics analysts note that comment threads show many viewers still believe the silent skits feature the real actor, underscoring how persuasive synthetic video has become. Romance-fraud investigations highlight the real-world fallout: in a recent Florida case, a victim sent tens of thousands of dollars after video-chatting with an AI clone that mimicked Reeves’s face and voice.
The Hollywood Reporter places the scams in a wider financial context, citing FBI data that puts U.S. losses from celebrity impersonations at $637 million in 2024, a figure likely understated because many incidents go unreported. Reeves, who once called deepfake technology “scary” and bans digital alterations in his film contracts, is among more than 400 artists backing the proposed No Fakes Act, which would outlaw unauthorized AI use of a person’s likeness.
Platforms stress that tools already exist: TikTok’s guidelines let users flag “deepfakes, synthetic media and manipulated media,” while Meta’s policy removes doctored videos that could mislead viewers. An external oversight board has urged Meta to go further, warning that the current rules leave room for harmful fabrications during high-stakes events. Researchers argue that without stronger automated screening, only celebrities who can afford constant takedown services will stay ahead of imitators, leaving most users vulnerable to the next wave of synthetic fraud.