
YouTube introduces new policies for AI-generated content

YouTube introduces new policies for AI-generated content, focusing on mandatory disclosures, content removal options, improved moderation, and responsible AI tool development.

YouTube is set to implement new policies for content created or altered using artificial intelligence (AI). This move aims to balance the innovative potential of AI with the safety and transparency needs of its users.

Mandatory labels and disclosures

Under these new guidelines, YouTube creators must inform their audience when a video contains realistic content that has been created or altered with AI. This includes deepfakes: convincing synthetic media showing events or speeches that never actually happened. Creators must add a label to the video description detailing any artificial or altered content, and YouTube has provided examples to guide creators on how this should look.

For sensitive topics like elections, natural disasters, public figures, and conflicts, a more noticeable label might be needed directly on the video player. Non-compliance with these requirements could lead to serious consequences, including video removal, account suspensions, or even expulsion from the YouTube Partner Program. However, YouTube plans to collaborate with creators to ensure they fully understand these new rules.

New removal request options

YouTube will also introduce a process that allows individuals to request the removal of AI-generated content that uses their likeness or voice without permission. This includes AI-created deepfakes that mimic a person’s unique voice or appearance. Music artists can likewise request takedowns of AI-generated music that imitates their singing or rapping style. When reviewing these requests, YouTube will consider factors such as parody, public interest, and newsworthiness.

Improved content moderation with AI

YouTube already employs AI to help human moderators identify and act on abusive content at scale. The platform uses generative AI to expand the data its detection systems are trained on, helping it spot new types of threats more quickly and reducing the amount of harmful content that human reviewers are exposed to.

Responsible development of new AI tools

In developing new AI tools for creators, YouTube is prioritising responsible innovation. The company is building safeguards to prevent its AI systems from generating content that violates its policies, and it plans to keep improving them through user feedback and adversarial testing designed to uncover potential abuse.

New policy enforcement strategies

While specific enforcement details have yet to be revealed, YouTube may use a combination of human and automated methods. The platform could train its content moderation systems to detect AI-created media that lacks the required disclosures, conduct random audits of accounts that upload AI content, or let users report undisclosed AI material. Whatever the approach, consistent enforcement will be crucial for establishing clear expectations and norms around these disclosures.

Looking ahead

YouTube is excited about the creative possibilities of AI but remains cautious about its risks. The platform is committed to working with creators to foster a safe, transparent, and beneficial AI-driven future, and creators are encouraged to stay informed about these evolving policies to keep their accounts in good standing.

