Spotify Announces New AI Safeguards, Says It’s Removed 75 Million ‘Spammy’ Tracks


Problems surrounding AI have been one of the music industry’s most pressing issues in recent months, and Spotify has announced a strengthening of its platform’s protections against “AI slop,” spamming and other fraudulent activity. The company notes that in the past 12 months, it has removed over 75 million “spammy” tracks from its platform.

Its new protections include a policy to police unauthorized vocal impersonation (“deepfakes”) and fraudulent music uploaded to artists’ official profiles, as well as an enhanced spam filter to prevent mass uploads, duplicates, SEO hacks and artificially short tracks designed to fraudulently boost streaming numbers and payments. The company also says it’s collaborating with industry partners to devise an industry standard in a song’s credits to “clearly indicate where and how AI played a role in the creation of a track.”

“The pace of recent advances in generative AI technology has felt quick and at times unsettling, especially for creatives,” the company writes in a just-published post on its official blog. “At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push ‘slop’ into the ecosystem, and interfere with authentic artists working to build their careers.

“The future of the music industry is being written, and we believe that aggressively protecting against the worst parts of Gen AI is essential to enabling its potential for artists and producers.”

In a press briefing on Wednesday, Spotify VP and Global Head of Music Product Charlie Hellman said, “I want to be clear about one thing: We’re not here to punish artists for using AI authentically and responsibly. We hope that it will enable them to be more creative than ever.

“But we are here to stop the bad actors who are gaming the system. And we can only benefit from all that good side if we aggressively protect against the bad side.”

According to the announcement, those protections include:


Stronger impersonation rules

The issue: We’ve always had a policy against deceptive content. But AI tools have made generating vocal deepfakes of your favorite artists easier than ever before.

What we’re announcing: We’ve introduced a new impersonation policy that clarifies how we handle claims about AI voice clones (and other forms of unauthorized vocal impersonation), giving artists stronger protections and clearer recourse. Vocal impersonation is only allowed in music on Spotify when the impersonated artist has authorized the usage.

We’re also ramping up our investments to protect against another impersonation tactic — where uploaders fraudulently deliver music (AI-generated or otherwise) to another artist’s profile across streaming services. We’re testing new prevention tactics with leading artist distributors to equip them to better stop these attacks at the source. On our end, we’ll also be investing more resources into our content mismatch process, reducing the wait time for review, and enabling artists to report “mismatch” even in the pre-release state.

Why it matters: Unauthorized use of AI to clone an artist’s voice exploits their identity, undermines their artistry, and threatens the fundamental integrity of their work. Some artists may choose to license their voice to AI projects — and that’s their choice to make. Our job is to do what we can to ensure that the choice stays in their hands.

Music spam filter

The issue: Total music payouts on Spotify have grown from $1B in 2014 to $10B in 2024. But big payouts entice bad actors. Spam tactics, such as mass uploads, duplicates, SEO hacks, artificially short track abuse, and other forms of slop, have become easier to exploit as AI tools make it easier for anyone to generate large volumes of music.

What we’re announcing: This fall, we’ll roll out a new music spam filter — a system that will identify uploaders and tracks engaging in these tactics, tag them, and stop recommending them. We want to be careful to ensure we’re not penalizing the wrong uploaders, so we’ll be rolling the system out conservatively over the coming months and continue to add new signals to the system as new schemes emerge.

Why it matters: Left unchecked, these behaviors can dilute the royalty pool and impact attention for artists playing by the rules. Our new music spam filter will protect against this behavior and help prevent spammers from generating royalties that could be otherwise distributed to professional artists and songwriters.

AI disclosures for music with industry-standard credits

The issue: Many listeners want more information about what they’re listening to and the role of AI technology in the music they stream. And, for artists who are responsibly using AI tools in their creation process, there’s no way on streaming services for them to share if and how they’re using AI. We know the use of AI tools is increasingly a spectrum, not a binary, where artists and producers may choose to use AI to help with some parts of their productions and not others. The industry needs a nuanced approach to AI transparency, not one that forces every song to be classified as either “is AI” or “not AI.”

What we’re announcing: We’re helping develop and will support the new industry standard for AI disclosures in music credits, developed through DDEX. As this information is submitted through labels, distributors, and music partners, we’ll begin displaying it across the app. This standard gives artists and rightsholders a way to clearly indicate where and how AI played a role in the creation of a track — whether that’s AI-generated vocals, instrumentation, or post-production. This change is about strengthening trust across the platform. It’s not about punishing artists who use AI responsibly or down-ranking tracks for disclosing information about how they were made.

This is an effort that will require broad industry alignment, and we’re proud to be working on this standard alongside a wide range of industry partners, including: Amuse, AudioSalad, Believe, CD Baby, DistroKid, Downtown Artist & Label Services, EMPIRE, Encoding Management Service – EMS GmbH, FUGA, IDOL, Kontor New Media, Labelcamp, NueMeta, Revelator, SonoSuite, Soundrop and Supply Chain.

Why it matters: By supporting an industry standard and helping to drive its wide adoption, we can ensure listeners see the same information, no matter which service they’re listening on. And ultimately, that preserves trust across the entire music ecosystem, as listeners can understand what’s behind the music they stream. We see this as an important first step, which will undoubtedly continue to evolve.

The post concludes, “We support artists’ freedom to use AI creatively, while actively combating its misuse by content farms and bad actors. Spotify does not create or own music; this is a platform for licensed music where royalties are paid based on listener engagement, and all music is treated equally, regardless of the tools used to make it.

“These updates are the latest in a series of changes we’re making to support a more trustworthy music ecosystem for artists, for rightsholders, and for listeners. We’ll keep them coming as the tech evolves, so stay tuned.”

Variety will have more on the situation as it continues to develop.

From Variety US