
Spotify has confirmed that it removed more than 75 million tracks flagged as “spammy” over the past year as part of a wider crackdown on AI misuse and fraudulent uploads.
The streaming platform outlined new measures aimed at protecting artists from impersonation and ensuring greater transparency for listeners.
The company announced a policy specifically targeting AI voice clones and vocal deepfakes. It stated that the use of an artist’s voice will not be permitted unless the artist has officially licensed it.
The rules also expand safeguards against fraudulent uploads that appear under another artist’s profile.
The update follows reports earlier this year that AI-generated tracks had been uploaded to Spotify featuring musicians who had passed away, sparking concern across the industry.
To combat manipulation, Spotify is introducing a new spam detection system later this year. The filter will block uploads that use tactics such as duplicates, artificially short tracks, or other methods designed to exploit the platform’s recommendation system.
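Spotify has not said how the filter works internally, but the broad idea of screening uploads for duplicates and artificially short tracks can be illustrated with a simple sketch. The 31-second threshold and the hash-based duplicate check below are assumptions chosen for illustration only, not Spotify’s actual rules.

```python
# Illustrative sketch only: Spotify has not published how its spam filter
# works. This toy example flags uploads that are exact duplicates of earlier
# submissions (by content hash) or artificially short (below a threshold).

from dataclasses import dataclass

MIN_DURATION_SECONDS = 31  # hypothetical cutoff for "artificially short"


@dataclass
class Upload:
    track_id: str
    duration_seconds: float
    content_hash: str  # e.g. a hash of the audio fingerprint


def flag_spam(uploads: list[Upload]) -> dict[str, list[str]]:
    """Return a mapping of track_id -> reasons the upload was flagged."""
    seen_hashes: set[str] = set()
    flags: dict[str, list[str]] = {}
    for upload in uploads:
        reasons = []
        if upload.content_hash in seen_hashes:
            reasons.append("duplicate of an earlier upload")
        if upload.duration_seconds < MIN_DURATION_SECONDS:
            reasons.append("artificially short track")
        seen_hashes.add(upload.content_hash)
        if reasons:
            flags[upload.track_id] = reasons
    return flags


if __name__ == "__main__":
    batch = [
        Upload("t1", 215.0, "abc123"),
        Upload("t2", 215.0, "abc123"),   # duplicate of t1
        Upload("t3", 28.5, "def456"),    # suspiciously short
    ]
    for track_id, reasons in flag_spam(batch).items():
        print(track_id, "->", ", ".join(reasons))
```

In practice a system of this kind would rely on audio fingerprinting and behavioural signals rather than exact hashes, but the same principle applies: suspicious uploads are caught before they can feed the recommendation system.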
In addition, Spotify is supporting an industry standard for AI disclosures in music credits. This will allow artists and labels to specify how AI contributed to a track, such as whether it was used for vocals or instrumentation.
Distributors and partners will provide this information, which will then appear in track credits across the app.
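The article does not describe the disclosure format itself, so the structure below is only a rough sketch of how such credit metadata might be represented; every field name here is an assumption rather than part of the real standard.

```python
# Illustrative sketch only: the field names below are assumptions, not the
# actual industry schema for AI disclosures in music credits.

from dataclasses import dataclass, field


@dataclass
class AIDisclosure:
    vocals_ai_generated: bool = False
    instrumentation_ai_generated: bool = False
    notes: str = ""


@dataclass
class TrackCredits:
    title: str
    primary_artist: str
    ai_disclosure: AIDisclosure = field(default_factory=AIDisclosure)


# Example: a distributor marks a track as using AI-generated instrumentation
# with human vocals; a streaming app could surface this in its credits view.
credit = TrackCredits(
    title="Example Track",
    primary_artist="Example Artist",
    ai_disclosure=AIDisclosure(
        instrumentation_ai_generated=True,
        notes="Synth parts generated with an AI tool; vocals performed live.",
    ),
)
print(credit.ai_disclosure)
```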
Spotify stated that these measures are designed to keep artists in control of how AI is used in their work while building trust among listeners as generative technology becomes increasingly widespread.
The update also follows recent debate around The Velvet Sundown, a band initially presented as human-made but later confirmed to be AI-generated, illustrating the growing need for transparency in digital music.