YouTube expands AI deepfake detection tool to celebrities and public figures
The platform's likeness detection feature, which flags AI-generated content of enrolled public figures, is expanding from content creators and politicians to Hollywood celebrities regardless of account status.
- YouTube is extending its likeness detection tool—originally tested with creators in fall 2025 and expanded to politicians and journalists in March—to celebrities who can now flag and request removal of AI deepfakes.
- The system requires participants to submit government ID and a selfie video to enroll; it scans for AI-manipulated face content and evaluates takedown requests against YouTube's privacy policy.
- Removal is not guaranteed; parody and satire are protected categories. YouTube has compared the mechanism to Content ID, its copyright detection system, and industry observers anticipate monetization options may follow.
- Entertainment industry representatives, including talent agency CAA, view the tool as part of a broader shift toward commercial control of likeness rights, with some celebrities potentially choosing to monetize rather than restrict AI versions of themselves.
YouTube is rolling out its likeness detection tool to celebrities as part of a broader effort to give public figures control over AI-generated impersonations on the platform. The tool, which scans uploaded content for AI-manipulated facial likenesses, was first tested with creators last fall and expanded to politicians and journalists in March. The expansion to celebrities removes a previous barrier: enrolled figures no longer need a YouTube account to participate.
To use the system, participants must provide government-issued identification and a selfie video. The tool then monitors the platform for AI-manipulated content bearing their likeness and surfaces matches to the enrolled figure, who can request removal. However, YouTube evaluates each takedown request against its privacy policy, and content deemed parody, satire, or otherwise protected may not be deleted.
YouTube's approach mirrors its existing Content ID system for copyright enforcement, with one key difference: Content ID allows rights holders to monetize infringing videos and claim their revenue, while the likeness detection tool does not yet offer this option. Industry signals suggest it may be coming, however. YouTube recently announced a separate feature enabling creators to generate AI clones of their own likenesses, and talent agency CAA maintains biometric databases that entertainers can use to authorize commercial use of their likenesses.
The entertainment industry's framing of deepfakes has begun to shift. Rather than viewing synthetic media purely as a threat, some talent managers and celebrities are exploring how AI impersonations could deepen fan engagement or generate licensing revenue. High-profile cases like TikTok creator Khaby Lame's attempt to commercialize his likeness rights illustrate this trend, though such deals have faced execution hurdles.
- Apr 23, 2026 · The Verge