Monetizing AI-generated music
Why the music industry should embrace music made with AI—and how they can profit from it

[Image: The AI playlist, similar to Spotify's "This Is" playlists]
Disclosures
A lot of the ideas I'm proposing below run up against valid arguments about protecting IP. But while we need policies to keep AI ethical, those take time to design and implement. In the meantime, the technology is here, widely adopted already, and evolving very f****** fast. People are using AI to create and release music, and YouTube is already full of "AI covers", i.e. covers of songs by artists who never actually sang them.
So what I’m trying to do is propose potential ways the music industry can embrace AI and capitalize on the technology rather than attempt to suppress it completely. Like any negotiation, the best outcome is when you increase the size of the pie rather than make your piece bigger than everyone else’s.
I love pie, so let’s get into it.
Patterns in the music industry
Every industry is impacted by AI, so why would I focus on music, you ask? Firstly, it's an industry I know well. I worked in it for 10 years in various capacities, including as a royalty accountant and entrepreneur. I even tried to build a new revenue stream for artists by founding a startup called closiit (read my post-mortem here). And while I stepped away from the music business in 2023, I miss the drama :)
More importantly, it's also an industry notoriously slow to accept change and innovation, which has set it back time and time again. This, I believe, is why it remains such a small market (for context, the global sports market is valued at close to $500bn, while the global music industry is less than $80bn).
I see the industry responding to AI the same way it responded to the emergence of Napster and Spotify, two technologies that revolutionized how we distribute and consume music. It’s trying to fight the inevitable instead of adapting to it. Streaming platforms are just going to wait this out until the music industry realizes how much revenue it’s missing out on and tries to catch up with a not-so-perfect-after-all solution to monetize AI. So it’s still very early in the game, and an exciting time (at least for me) to imagine how we could figure this out sooner.
How AI is used to make music
Like in most creative fields, there are 3 ways AI can be used to make music:
1. 100% AI: creating a track from scratch using just prompts (like Boomy)
2. AI as a creative tool: using an artist's voice in a track, mimicking the style of a musician for a specific part, etc.
3. AI as a technical tool: for mixing, mastering, etc.
I’m most interested in 1 and 2 for this article, as I don’t believe using AI as a technical tool creates any monetization complexities.
Why the music industry needs to embrace AI
AI won’t replace musicians and artists, because I don’t believe consumers will connect with AI music in the same way. As AI tools become more accessible, human output will grow more scarce, and therefore become more valuable in the eyes of consumers. We crave human error because we relate to it.
Additionally, using a famous artist's voice in your song doesn't guarantee success. In fact, as more people adopt the technology, the quality bar keeps rising while the volume of new music keeps growing, making it increasingly difficult to stand out.
We’ve had the ability to create highly detailed images using computers for a long time now, yet we still pay money to go to museums, and even more money to acquire a piece of art with a level of detail far inferior to a computer’s pixel prowess.
Similarly, everybody freaked out and sued one another over the use of samples. The industry said sampling would kill the real musician, that artists who used samples weren't really artists, blah blah blah. But not only are there more artists than ever hiring musicians, people are also paying more money than ever to go to concerts. The live music industry has seen its revenues grow from $1.7B in 2000 to over $30B in 2024.
AI is just a new tool with copyright implications that we need to solve for. So given the inevitable fact that we will see more and more AI content using training data from artists and songwriters, we have a choice: suppress it (and obviously fail), or try to monetize it. I vote for the latter.
An overview of streaming royalties
Before I start blasting AI buzzwords (you know, CHIPS AND NEURAL NETWORKS AND TRAINING DATA AND FOUNDATION MODELS AND FEEDBACK LOOPS), I think it would be helpful to explain the basics of royalties, specifically when it comes to streaming, since that's what we're focusing on here.
I’m going to keep it very simple and high-level. There are a lot more intricacies to this, but they aren’t necessary to explain this point.
You have 2 players that are relevant to this article:
- Digital Service Providers (also known as DSPs): streaming services like Spotify, which earn revenue from subscriptions and advertising on their platforms
- Rights holders: the people who write a song (songwriters), the people who record the song (artists, producers), and their representatives, Music Publishers and Record Labels respectively.
How do rights holders get paid by DSPs? Spoiler: nobody gets paid per stream. Most DSPs function the same way, but I'll focus on Spotify here for simplicity.
Spotify collects all subscription revenue and advertising revenue, throws it into one big pool, keeps 30% of it, and pays the remaining 70% to rights holders based on “market share”, i.e. the share of streams a track got over the total streams on the platform that month.
Example:
- Total revenue on Spotify in February ’25: $10,000
- Spotify share (30%): $3,000
- Rights holders share (70%): $7,000
- Total streams on Spotify that month: 1,000,000
- Total streams for “DtMF” by Bad Bunny that month: 2,000
- Amount due to rights holders of “DtMF”: 2,000 / 1,000,000 x $7,000 = $14
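If you prefer code to bullet points, here's that same calculation as a tiny Python sketch. The function and its names are mine, purely for illustration; this isn't anything Spotify actually exposes:

```python
def prorata_payout(total_revenue: float, track_streams: int,
                   total_streams: int, platform_share: float = 0.30) -> float:
    """Monthly pro-rata royalty owed to a track's rights holders."""
    rights_holders_pool = total_revenue * (1 - platform_share)  # the 70%
    market_share = track_streams / total_streams
    return rights_holders_pool * market_share

# The "DtMF" example from above:
print(prorata_payout(10_000, 2_000, 1_000_000))  # 14.0
```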
Thanks for the crash course, now what?
Now that you’re a royalty expert, you’ll hopefully be able to follow my stream of consciousness below. In short, I believe streaming services can start incorporating AI music both into their licensing process and the catalog they offer to consumers. I see 3 phases to make this happen.
Phase 1: build AI licensing guidelines
If I were an artist, I wouldn’t want somebody using my voice to say whatever they want on a track. We need guidelines to respect the work of rights holders, and those guidelines need to be set by the rights holders themselves. That’s phase 1.
Currently, rights holders grant Spotify a license to make their music available on the streaming platform. As part of this licensing agreement, I propose adding a clause that allows rights holders to opt in to having their catalog used in AI-generated content, subject to the guidelines they establish, such as:
- Limit it to specific songs (say, pre-2000) to avoid conflicting with their current releases in style, voice, etc.
- Limit it to a specific asset type, for example "you can use my lyrical style but not my voice"
- Prevent use on songs with topics like politics or religion
- Etc.
The same way there is "Spotify for Artists", there should be a "SpotifAI Hub" where rights holders manage their guidelines.
Spotify then makes this catalog "available" to AI creators as a set of guidelines covering what they can use and how they can use it. Basically, it tells AI creators how to build their prompts.
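To make this concrete, here's one shape such a guidelines record could take, as a Python sketch. Every field name here is hypothetical, just one way to encode the bullet points above, not an actual Spotify schema:

```python
# Hypothetical guidelines record for one rights holder (illustrative only).
guidelines = {
    "rights_holder_id": "rh_12345",
    "opted_in": True,
    "allowed_songs": {"released_before": 2000},  # pre-2000 catalog only
    "allowed_assets": ["lyrical_style"],         # voice explicitly excluded
    "blocked_topics": ["politics", "religion"],
}

def is_use_allowed(guidelines: dict, asset: str, topics: list[str]) -> bool:
    """Check a proposed AI use against a rights holder's guidelines."""
    if not guidelines["opted_in"]:
        return False
    if asset not in guidelines["allowed_assets"]:
        return False
    return not any(topic in guidelines["blocked_topics"] for topic in topics)

print(is_use_allowed(guidelines, "voice", ["love"]))  # False: voice not allowed
```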
I know what you’re thinking:
- Why would "AI creators" respect these guidelines? Well, because like everyone else, they want people to listen to their music. Right now, AI creators see their music get taken down from streaming services, as we witnessed with Ghostwriter's AI-powered semi-hit "featuring" AI Drake and The Weeknd. It's the exact same thing with samples: you can release a track on Spotify without clearing the rights to the sample, but you risk having it taken down or, if the track has already generated decent revenue, entering the lovely world of music industry lawsuits, with its villains and preachers.
- Why would rights holders want their work used to generate AI content? It unfortunately doesn't matter whether they want it or not; it's already being used. So they might as well set some boundaries and make money off it.
- Why would Spotify care? Because they don't want to alienate what could become a major share of the music created in the near future. Plus, there's an opportunity to generate more money, as I'll discuss further below.
Phase 2: build AI catalog
Once there’s a set of guidelines, we can start ingesting music by AI creators onto Spotify, assuming they respect said guidelines for the specific rights holders whose content they are using as training data.
That is, when ingesting music, Spotify will require distributors to:
- Disclose when a track was made with AI. If so, the track will be annotated for the consumer on Spotify’s platform, just like the “Explicit” label
- Disclose which rights holders' work was used to generate the song, e.g. whose voice is featured on the track, whose song was used as a reference in the prompt, etc. It might even require some guessing from the AI creator, but to make it simpler, Spotify can provide a dropdown search bar to select from the songs, artists, and other rights holders that make up its catalog. These rights holders will also be listed in the "Credits" modal under a new section called "AI training data"
- Agree to terms & conditions. Once the user selects a rights holder from the dropdown search, the list of guidelines for that specific rights holder will appear as a reminder, along with a consent section and checkbox acknowledging the repercussions of not following said guidelines
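Put together, a distributor's ingestion payload might look something like the sketch below. Again, the field names and values are made up for illustration, not an actual Spotify ingestion spec:

```python
# Hypothetical metadata a distributor submits alongside an AI-made track.
submission = {
    "track_title": "Example AI Track",     # placeholder title
    "made_with_ai": True,                  # drives the "AI" label in the UI
    "ai_training_data": [                  # surfaced in the Credits modal
        {"rights_holder_id": "rh_12345", "asset_used": "lyrical_style"},
    ],
    "terms_accepted": True,                # the consent checkbox
}
```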
Artist pages will also have separate playlists. Just like “This is Drake”, we’ll see “AI made with Drake”.

[Image: The "AI" label, similar to Spotify's "Explicit" label]
Phase 3: create a new royalty pool
In an ideal world, and perhaps a not so distant future, AI creators may strike deals directly with the rights holders whose work is being used as training data, offering them a royalty on the new track. But that’s gonna take some time to flesh out.
So we need another way to monetize this new AI catalog, and that starts with charging users to access it. After all, if consumers don’t want it, why even bother?
Spotify can charge its users an extra $1/month to listen to all the AI music in the world. That additional subscription revenue will be thrown into a new royalty pool which we’ll call the “AI Pool.” Instead of a standard 70/30 split like the one described above, a portion of the AI pool will be redirected back to rights holders like this:
- Spotify share: 30%
- AI creator share: 55%
- Rights holders share (i.e. training data): 15%
Whether 15% is adequate is up for debate. But 15 points out of the 70 that go to creators works out to roughly 21%, which is on par with a producer royalty, typically anywhere between 15% and 25% of the artist's share of profits.
Next, how do you distribute the rights holders' share across all rights holders? The easy way would be to apply the same market share calculation described earlier in the article. This isn't reflective of which songs AI creators actually use the most, but it rests on the assumption that the dataset used to train generative music models is roughly as large as Spotify's catalog, which isn't far from the truth.
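Here's the whole Phase 3 flow as one more Python sketch, reusing the market-share logic from earlier. The 30/55/15 numbers come from the proposal above; the subscriber and stream counts in the example are invented for illustration:

```python
def ai_pool_split(ai_revenue: float) -> dict:
    """Split the AI Pool per the proposed 30/55/15 shares."""
    return {
        "spotify": ai_revenue * 0.30,
        "ai_creators": ai_revenue * 0.55,
        "rights_holders": ai_revenue * 0.15,
    }

def training_data_payout(ai_revenue: float, holder_streams: int,
                         total_streams: int) -> float:
    """Distribute the rights holders' 15% by overall market share."""
    pool = ai_pool_split(ai_revenue)["rights_holders"]
    return pool * (holder_streams / total_streams)

# Say 100,000 subscribers pay the extra $1/month -> a $100,000 AI Pool.
# A rights holder with 2% of total streams would receive $300 that month.
print(training_data_payout(100_000, 20_000, 1_000_000))  # 300.0
```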
Conclusion
And just like that, the pie got bigger for everyone:
- Consumers have a new catalog to explore amidst a technological revolution
- Spotify increases their revenue by introducing a new subscription tier
- AI creators get to share their music on the world’s biggest music platform, without having to figure out how to report royalties
- Rights holders have a new income stream which, let’s face it, is going to keep growing
Now, I realize my solution is far from perfect (HA! I beat you to it). You’re thinking:
- Will this model capture 100% of AI generated music?
- Does it prevent AI creators from uploading music without disclosing the training examples they used?
- Are rights holders paid exactly what they are owed based on how much their work was used in training data?
Nope, nope, and nope again. But it’s too early for the perfect solution anyways. I mean, we’re still figuring out how this new AI s*** is supposed to work. Currently, generative AI models don’t track what training examples were used to generate a specific output. Instead, they rely on patterns from large datasets, which creates a blended output. This means that there is no way to say “this output used One Dance by Drake as a training example”. So under the model I’m proposing, tracks created using AI without explicit annotation could go unnoticed when distributed to Spotify, until they’re somehow identified as infringing on somebody’s IP and removed from the platform.
Researchers are looking for ways to measure the influence of specific training examples on a given output, but the results look more like approximations at the moment. Still, that's a start, and hopefully we get to a point where fingerprinting technology is developed and embedded into training data. That way, generated output can carry a watermark that tells us more precisely which rights holders should be credited.
So no, my model is not perfect. But at least it sets a framework to build off of. And instead of waiting until it’s too late, like the music industry has done again and again, maybe it’s time to get scrappy and be ahead of the curve.