Saturday, 29 November 2025

YouTube introduces new policies for AI-generated content

YouTube introduces new policies for AI-generated content, focusing on mandatory disclosures, content removal options, improved moderation, and responsible AI tool development.

YouTube is set to implement new policies for content created or altered using artificial intelligence (AI). This move aims to balance the innovative potential of AI with the safety and transparency needs of its users.

Mandatory labels and disclosures

Under these new guidelines, YouTube creators must inform their audience when a video contains AI-generated alterations. This includes deepfakes, which are realistic synthetic media showing events or speeches that didn’t actually happen. Creators must add labels detailing any artificial or altered content in the video description. YouTube has provided examples to guide creators on how this should look.

For sensitive topics like elections, natural disasters, public figures, and conflicts, a more noticeable label might be needed directly on the video player. Non-compliance with these requirements could lead to serious consequences, including video removal, account suspensions, or even expulsion from the YouTube Partner Program. However, YouTube plans to collaborate with creators to ensure they fully understand these new rules.

New removal request options

YouTube will also introduce a process allowing individuals to request the removal of AI-generated content featuring their likeness or voice without their permission. This includes AI-created deepfakes mimicking a person’s unique voice or appearance. Additionally, music artists can request takedowns of AI-generated music that imitates their singing or rapping style. When reviewing these requests, YouTube will consider factors like parody, public interest, and newsworthiness.

Improved content moderation with AI

YouTube already employs AI to assist human moderators in identifying and addressing abusive content at scale. The platform uses generative AI to broaden its training data, helping it quickly identify new types of threats and reduce exposure to harmful content.

Responsible development of new AI tools

In developing new AI tools for creators, YouTube is prioritising responsible innovation. The company is working on safeguards to prevent its AI systems from generating content that violates policies. This involves continuous learning and improvement through user feedback and testing against potential abuse.

New policy enforcement strategies

While specific enforcement details are yet to be revealed, YouTube may use a combination of human and automated methods. The platform could train its content moderation systems to detect AI-created media lacking proper disclosures. It might also conduct random audits of accounts uploading AI content, and allow users to report undisclosed AI material. Regardless of the approach, consistent enforcement will be crucial for establishing clear expectations and norms around these disclosures.

Looking ahead

YouTube is excited about the creative possibilities of AI but remains cautious about its risks. The platform is committed to working with creators to foster a safe, transparent, and beneficial AI-driven future. Creators are encouraged to stay informed about these evolving policies to maintain their accounts in good standing.
