YouTube expands AI deepfake detection tool to all creators over 18
YouTube is expanding its AI deepfake detection tool to all creators aged 18 and over as part of a wider safety rollout.
YouTube is preparing to expand access to its AI-powered likeness-detection tool, allowing all creators aged 18 and above to determine whether their image has been used in AI-generated videos uploaded to the platform. The company said the feature would begin rolling out over the coming weeks as part of broader efforts to address growing concerns over deepfakes and unauthorised digital impersonation.
The announcement was made through Team YouTube’s community page, where the platform explained that the system is intended to offer users “more peace of mind by giving them easy access to request the removal of unauthorised content”. Although the feature is officially aimed at creators, YouTube confirmed that it can also be used by individuals who are not established content producers.
The expansion reflects increasing pressure on technology companies to respond to the rapid spread of AI-generated media. Advances in generative AI have made it significantly more difficult for viewers to distinguish between genuine footage and fabricated videos. Platforms such as YouTube are facing mounting scrutiny over how manipulated content could be used to mislead audiences, damage reputations or spread false information.
Wider access aims to address rising deepfake concerns
YouTube spokesperson Jack Malon said the latest expansion would ensure all eligible creators receive the same safeguards regardless of the size or age of their channel.
“With this expansion, we’re making clear that whether creators have been uploading to YouTube for a decade or are just starting, they’ll have access to the same level of protection,” Malon said in a statement.
The company believes the broader rollout could help not only creators but also ordinary users whose likenesses may appear in deceptive or malicious AI-generated clips. Deepfake technology has increasingly been used online to create fabricated celebrity endorsements, fake interviews and manipulated political content, often without the consent of the people depicted.
For online creators, the tool may also help identify brands or organisations that use their image in promotional material without permission. Concerns over the unauthorised commercial use of AI-generated likenesses have grown sharply in recent years as AI video tools have become more widely accessible and easier to use.
YouTube first introduced the technology in a 2024 preview before officially launching it in late 2025. At the time, access was restricted to members of the YouTube Partner Programme, which includes creators who have monetised their channels after meeting requirements for subscribers, watch hours and Shorts views.
The company later expanded availability to journalists and politicians, groups considered particularly vulnerable to impersonation and misinformation campaigns. The latest rollout marks the broadest release of the technology so far and signals YouTube’s intention to make AI safety tools more widely available across its platform.
Verification process required before access is granted
Users who want access to the likeness-detection system must first complete an enrolment process through YouTube Studio on a desktop computer. Enrolment begins by selecting the “Likeness” option under the “Content detection” section.
After starting the application, users are asked to scan a QR code with their smartphone. They must then provide a government-issued identification document and complete a selfie video verification process to confirm their identity.
Once verification is complete, YouTube’s systems will begin scanning uploaded videos for potential matches involving the user’s face. Any videos that appear to contain their likeness will be listed within the same section of YouTube Studio for review.
Users can then assess whether the content represents unauthorised use and, if necessary, submit a removal request. During the complaint process, individuals can provide additional details explaining how their likeness has allegedly been used without permission.
YouTube also asks users whether their voice was used in the video. However, the company noted that the detection system cannot yet identify voice cloning on its own and focuses mainly on facial likeness recognition.
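YouTube has not published how its matching works, but facial likeness-detection systems of this kind are commonly described as comparing face embeddings: a reference embedding captured during enrolment is checked against embeddings extracted from uploaded frames, and anything above a similarity threshold is flagged for review. The sketch below is a purely hypothetical illustration of that general idea; the random stand-in vectors, threshold and function names are assumptions, not YouTube’s implementation.

```python
import numpy as np

# Hypothetical illustration of embedding-based likeness matching.
# Real systems derive these vectors from a trained face-recognition
# model; random vectors stand in for enrolment and frame embeddings.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_matches(reference: np.ndarray, frame_embeddings, threshold: float = 0.8):
    """Return indices of frames whose face embedding resembles the reference."""
    return [i for i, emb in enumerate(frame_embeddings)
            if cosine_similarity(reference, emb) >= threshold]

rng = np.random.default_rng(0)
reference = rng.normal(size=128)                              # enrolment embedding (stand-in)
frames = [rng.normal(size=128) for _ in range(4)]             # unrelated faces
frames.append(reference + rng.normal(scale=0.05, size=128))   # near-duplicate of the user

print(flag_matches(reference, frames))  # likely flags only the final, near-duplicate frame
```

In a production system, the threshold would be tuned to balance missed matches against false alarms, which is one reason flagged videos are surfaced for human review rather than removed automatically.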
The verification requirements appear intended to reduce misuse of the reporting system and ensure that only genuine individuals can file impersonation claims. Identity verification measures have become increasingly common across major online platforms as AI-generated scams and fake identities continue to rise.
Platforms face growing pressure to strengthen AI protections
The wider rollout of YouTube’s deepfake detection tool comes as governments and regulators around the world examine how AI-generated content should be regulated. Concerns have intensified over the use of synthetic media during elections, in financial fraud and in online harassment campaigns.
Technology firms have been introducing a range of tools to limit harmful AI content while balancing freedom of expression and creative experimentation. Several companies are developing watermarking systems, authenticity labels and detection technologies to help identify manipulated media.
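One common approach behind authenticity labels is to bind a cryptographic signature to the media bytes at capture or publish time, so that any later edit invalidates the label. The sketch below is a deliberately simplified, hypothetical stand-in for real provenance standards such as C2PA, which use public-key signatures and richer metadata; the key handling and function names here are assumptions for illustration only.

```python
import hashlib
import hmac

# Simplified stand-in for an authenticity label: sign media bytes with a
# secret key, then verify later that the bytes are unchanged. Real provenance
# schemes use public-key signatures; a shared demo key keeps this runnable.

SECRET_KEY = b"demo-signing-key"  # hypothetical; real keys live in secure hardware

def label(media: bytes) -> str:
    """Produce an authenticity tag for the given media bytes."""
    return hmac.new(SECRET_KEY, media, hashlib.sha256).hexdigest()

def verify(media: bytes, tag: str) -> bool:
    """Check whether the media still matches its authenticity tag."""
    return hmac.compare_digest(label(media), tag)

original = b"\x89PNG...frame data..."
tag = label(original)
print(verify(original, tag))                # True: untouched media
print(verify(original + b"edited", tag))    # False: any edit breaks the label
```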
YouTube’s latest move suggests the company expects concerns over digital impersonation to become a long-term challenge rather than a temporary issue linked to early AI experimentation. By making detection tools available to a broader group of users, the platform appears to be positioning itself as more proactive in addressing emerging risks associated with generative AI.
Despite the new protections, experts have warned that detection systems may struggle to keep pace with rapidly improving AI video technology. Deepfake tools are becoming increasingly sophisticated, with some now capable of creating highly convincing videos using only limited source material.
Even so, wider access to detection and reporting systems could give users more ways to respond when manipulated content appears online. As AI-generated media becomes more common across social platforms, pressure is likely to continue to grow on companies to improve transparency and strengthen safeguards for users.