As AI-generated music floods streaming platforms and social media, a pressing question emerges: Who owns a song created by algorithms? From viral AI Drake tracks to AI-composed jingles, the legal system struggles to answer this—leaving artists, tech companies, and lawmakers in a high-stakes standoff. Let’s dissect the battle over ownership, profits, and creativity.
In 2023, a TikTok user known as Ghostwriter released “Heart on My Sleeve,” a song built on AI-cloned vocals of Drake and The Weeknd. It garnered roughly 15 million streams before Universal Music Group (UMG) issued takedown requests, citing copyright infringement. The track exists in a legal gray area: Who owns the vocals—the AI user, the platform, or the original artists?
Musician Grimes launched Elf.Tech, a platform that lets fans make tracks with an AI clone of her voice in exchange for a 50% royalty split. While innovative, it raises questions: If a fan’s AI-generated song goes viral, who controls licensing deals—Grimes, the fan, or the AI developer?
Though focused on images, Getty’s lawsuit against Stability AI (for training models on copyrighted photos) could impact music. If AI tools like Google’s MusicLM are trained on copyrighted songs, are outputs derivative works—or entirely new?
Current laws were designed for human creators, leaving AI music in limbo:
The U.S. Copyright Office states that works lacking “human authorship” (like purely AI-generated art) aren’t protected. However, a human who significantly modifies or arranges AI output may claim copyright in those human-authored contributions.
The EU’s proposed AI Act demands transparency about training-data sources but avoids the ownership question. The UK, by contrast, treats AI music as a “computer-generated work,” granting rights to the person who arranges its creation.
Japan permits AI-assisted works to be copyrighted when humans direct the creative process, and its 2018 copyright amendment lets developers train models on copyrighted works without permission—a stance favoring tech companies.
AI models like OpenAI’s Jukebox are trained on millions of copyrighted songs. Artists argue this violates their rights; tech firms claim it’s “fair use” in the name of innovation.
Who, then, is the owner:
The programmer who built the AI?
The user who wrote the prompts?
The original artists whose work trained the AI?
Courts have yet to decide.
Platforms like Boomy pay streaming royalties to the users who generate tracks with their tools, while labels like UMG demand compensation when AI mimics their artists’ styles.
Treat AI like Photoshop—users own the outputs, provided the underlying training data is licensed. The UK’s Copyright, Designs and Patents Act 1988 already assigns copyright in computer-generated works to the person who makes the arrangements for their creation.
Force AI companies to disclose training data sources and share royalties with original artists. Adobe’s AI music tool, trained only on licensed tracks, sets a precedent.
Create labels like “AI-Assisted” or “AI-Generated” with tiered ownership rules.
Q1: Can I copyright a song made with AI?
In the U.S., only if you can point to substantial human authorship (e.g., editing melodies or writing lyrics), and protection covers only those human-made elements. Purely AI-generated output isn’t protected.
Q2: Is using AI to mimic an artist’s voice legal?
Generally not without permission. Drake’s label, UMG, has pursued takedowns against AI voice-clone tracks, arguing they violate publicity rights and copyright.
Q3: Do artists get paid if an AI copies their style?
Currently, no—unless laws change. Tennessee’s ELVIS Act (2024) protects an artist’s voice from unauthorized AI imitation, but musical style itself remains unprotected.
The AI music ownership dilemma exposes a fractured legal system racing to redefine art, labor, and profit in the algorithmic age. While courts debate, artists and tech pioneers are forging their own rules—from Grimes’ royalty splits to Adobe’s ethical AI models. One thing is clear: Until laws modernize, the question “Who owns AI music?” will only grow louder.