Sunday, 30 November 2025

The music industry steps up efforts to track AI-generated songs

Music platforms are developing tools to trace AI-generated songs, aiming for control, licensing, and transparency across the industry.

In 2023, the music industry had its wake-up call — and it sounded a lot like a song by Drake.

A viral track titled Heart on My Sleeve imitated Drake and The Weeknd so convincingly that it racked up millions of streams before anyone could confirm who made it. It wasn’t just the sound that alarmed people — it was the realisation that no one was truly in control.

In response, the music world is quietly building new technology not to stop AI-generated songs altogether but to make them traceable. Developers are embedding detection systems into every stage of the music process — from model training and uploading platforms to licensing databases and recommendation algorithms.

The aim isn’t to ban synthetic music but to label and track it. “If you don’t build this stuff into the infrastructure, you’re just going to be chasing your tail,” explains Matt Adell, cofounder of Musical AI. “You can’t keep reacting to every new track or model — that doesn’t scale. You need infrastructure that works from training through distribution.”

Detection is becoming part of music’s core systems

Startups and major platforms are racing to include AI-detection tools within licensing and publishing workflows. Companies like YouTube and Deezer have begun flagging AI-generated content as soon as it’s uploaded, influencing how these songs appear in search results and recommendation feeds. Others like SoundCloud, Audible Magic, Pex, and Rightsify are expanding moderation and identification tools throughout their music ecosystems.

This rapid development is creating a patchwork of detection systems, each aiming to make AI music traceable from the moment it is created. Companies like Vermillio and Musical AI have developed software that automatically tags songs as synthetic, embedding this data into the track’s metadata.

Vermillio’s TraceID, for example, breaks down songs into individual parts — such as vocal tone or lyrics — and identifies the segments generated by AI. It can spot mimicry even when only certain features of an original song are used. This is especially useful for rights holders who want to be notified and offered licensing options before a track is released.

Unlike YouTube's Content ID, which often misses subtle imitations, TraceID is designed for proactive, verified licensing. Vermillio predicts this authenticated licensing market could grow from US$75 million in 2023 to US$10 billion by 2025. The goal is not to catch copies but to measure creative influence and strike fair deals from the start.
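TraceID's internals are not public, but the per-component idea described above can be sketched roughly. The component names, scores, threshold, and metadata schema in this example are all illustrative assumptions, not Vermillio's actual format:

```python
"""Hypothetical sketch of per-component provenance tagging.

Everything here (component names, scores, threshold, schema name)
is an illustrative assumption, not TraceID's real design.
"""
import json

# Assumed analysis output: per-component likelihood that the audio
# was synthetically generated (0.0 = clearly human, 1.0 = clearly AI).
component_scores = {
    "vocal_tone": 0.92,    # strong match to a cloned voice model
    "lyrics": 0.15,        # likely human-written
    "instrumental": 0.40,  # mixed or uncertain
}

AI_THRESHOLD = 0.8  # assumed cutoff for labelling a component as AI-made

def build_provenance_tag(scores, threshold=AI_THRESHOLD):
    """Return a metadata payload flagging which components look AI-made."""
    return {
        "ai_generated_components": [
            name for name, score in scores.items() if score >= threshold
        ],
        "component_scores": scores,
        "schema": "example-provenance/v0",  # made-up schema identifier
    }

tag = build_provenance_tag(component_scores)
print(json.dumps(tag, indent=2))
```

In practice a payload like this would be embedded in the track's metadata at upload time, so downstream systems (search, recommendations, licensing databases) can read the same provenance record.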

AI training data is under the spotlight

Some companies are further examining the data used to train music AI models. By analysing training inputs, they aim to determine how much a generated song borrows from specific artists or styles. This approach could allow licensing based on influence — before a song is released.

Sean Power, cofounder of Musical AI, describes their system as a full-cycle tool. “Attribution shouldn’t start when the song is done — it should start when the model starts learning,” he says. “We’re trying to quantify creative influence, not just catch copies.”
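As a toy illustration of "quantifying creative influence", here is a minimal sketch of splitting a licensing pool in proportion to per-artist influence scores. The scores and the proportional rule are assumptions for illustration, not Musical AI's actual method:

```python
"""Toy influence-based royalty split.

The influence scores and proportional split rule are illustrative
assumptions, not any company's real attribution model.
"""

# Assumed per-artist influence scores for one generated track.
influence = {"Artist A": 0.6, "Artist B": 0.3, "Artist C": 0.1}

def royalty_split(scores, pool_cents=10_000):
    """Divide a licensing pool proportionally to influence scores."""
    total = sum(scores.values())
    return {
        artist: round(pool_cents * score / total)
        for artist, score in scores.items()
    }

print(royalty_split(influence))
# {'Artist A': 6000, 'Artist B': 3000, 'Artist C': 1000}
```

The point of measuring influence at training time, rather than after release, is exactly that a split like this can be agreed before the song goes out.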

Meanwhile, Deezer already uses in-house technology to identify and limit the reach of fully AI-generated songs. As of April, around 20% of daily uploads were flagged as AI-made, twice the share in January. These tracks stay on the platform but are excluded from algorithmic and editorial recommendations. Deezer also plans to add clear labels soon so users can identify AI-generated content.
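Deezer's pipeline is proprietary, but the keep-but-demote policy described above can be sketched in a few lines. The track fields and the filter below are illustrative assumptions:

```python
"""Sketch of keep-but-demote moderation.

The Track fields and filtering logic are illustrative assumptions;
Deezer's detector and recommendation pipeline are proprietary.
"""
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    ai_generated: bool  # set by an upstream detector at upload time

def recommendation_pool(catalog):
    """AI-flagged tracks stay in the catalog but are excluded
    from algorithmic and editorial recommendations."""
    return [t for t in catalog if not t.ai_generated]

catalog = [
    Track("Human Song", ai_generated=False),
    Track("Synth Clone", ai_generated=True),
]

# Nothing is taken down; the flagged track is simply not recommended.
print([t.title for t in recommendation_pool(catalog)])
# ['Human Song']
```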

“We’re not against AI at all,” says Aurélien Hérault, Deezer’s Chief Innovation Officer. “But a lot of this content is being used in bad faith — not for creation, but to exploit the platform. That’s why we’re paying so much attention.”

Spawning AI is pushing detection further upstream with its Do Not Train Protocol (DNTP), which allows musicians to opt out of having their work used in AI training. While visual artists already have similar tools, the audio industry has been slower to catch up. There’s still no standardised approach to consent and transparency at scale.

Some experts argue that DNTP must be run independently and supported by various stakeholders to gain trust. “Nobody should trust the future of consent to an opaque central company,” says technologist Mat Dryhurst. “It needs to be nonprofit and collaborative to truly protect creators.”
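A consent registry like DNTP could, in the simplest case, be modelled as a lookup that training pipelines consult before ingesting a work. The identifiers and the local set below are stand-ins for what would in reality be a shared, independently run service:

```python
"""Sketch of an opt-out check before AI training.

The work identifiers and in-memory set stand in for a shared
consent registry; the real DNTP protocol is not modelled here.
"""

# Assumed registry: identifiers of works whose creators opted out.
opted_out = {"artist:alice/track:0001", "artist:bob/track:0042"}

def filter_training_set(candidate_ids, registry):
    """Keep only works whose creators have not opted out of training."""
    return [work_id for work_id in candidate_ids if work_id not in registry]

candidates = [
    "artist:alice/track:0001",  # opted out -> excluded
    "artist:carol/track:0007",  # no opt-out -> kept
]
print(filter_training_set(candidates, opted_out))
# ['artist:carol/track:0007']
```

The governance question the experts raise is about who operates the registry and how its contents are audited, not about the lookup itself, which is the easy part.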

Music’s future is being shaped by this behind-the-scenes race to build detection into the foundation of how music is made, shared, and discovered. And it’s only just beginning.
