Copyright and AI Consultation: the UK music industry's focus on fairness

The UK Government Consults on Copyright and AI

Unless you’ve been living under a rock since the start of 2025, you’ve probably heard the growing debate that’s shaking the creative industries to their core: artificial intelligence. More specifically, the UK government’s apparent alignment with US tech giants—at the expense of the UK’s rich creative sector.

If you are reading this and you are not a creator, you might be wondering what all the fuss is about. Maybe you’ve dabbled with AI music or art tools, found them entertaining, and thought, what’s the harm? On the surface, these tools seem like harmless fun – exciting, even. But beneath the novelty lies a darker reality for those who rely on their craft to make a living.

Think about your relationship with music. Maybe you’re a passionate fan, but never had the chance to learn an instrument. Maybe school steered you toward maths and sciences, dismissing music as a ‘Mickey Mouse’ subject. Maybe you dreamed of being a rock star but never had the time or skill to make it happen. Now, AI offers a shortcut. With just a few words and the tap of a button, you can describe the song you always wished you could write, and in seconds, an AI platform will compose it, perform it, generate lyrics, create album artwork and even distribute it to a multitude of streaming services. Just like that, you are a rock star – no effort required!

Sounds incredible, right? But have you ever stopped to ask, how does that AI tool actually work? Where did it learn to write music? Why does an app that entertained you for half an hour also encourage you to upload your song to Spotify? And most importantly – who is paying the price for your AI-generated hit?

By January 2025, 25 million people had used Suno to create a song, with 50% of those users generating at least 10 songs in a single session. While exact figures on daily song creation, subscription numbers, or AI-generated tracks uploaded to streaming platforms remain undisclosed, it’s clear that Suno alone is likely producing millions of songs every day. Even if a small percentage of those AI-generated songs make it onto streaming services, the impact on human artists could be devastating. Musicians who spend years studying at top conservatoires and music schools, honing their craft, writing original music, and building their careers rely on visibility and fair compensation. The flood of AI-generated tracks dilutes this ecosystem, making it even more difficult for genuine creators to gain traction and earn a living.

The numbers are already staggering. Between January and March 2023, an average of 120,000 songs were uploaded to streaming services every single day, a significant increase from 93,400 daily uploads in 2022. AI is a major driver of this surge, and as AI music platforms continue to expand, that number will only rise.

So, explain the problem!

Let’s start by answering some of those questions about how platforms like Suno and Udio are able to write your latest rock song.

How do AI tools actually work?

Generative AI music platforms, like Suno and Udio, rely on vast amounts of data extracted from human-created music to generate new compositions. These tools use machine learning and deep learning models trained on massive datasets, much of which is protected by copyright. This includes any music or sound recordings available online, such as film scores, advertisement jingles, library music, classical music catalogues, commercial tracks, user-generated content, streaming service libraries, and other publicly accessible recordings.

The fundamental process behind these AI tools can be broken down into several key steps:

  1. Data Collection & Training: AI developers collect vast datasets of music, lyrics, and metadata. While some of this data is sourced from open-source or public domain materials, an increasing portion includes copyrighted content. Ideally, this content should be legally purchased or licensed, but in many cases, it is scraped from the internet without explicit permission from the creators who own those works.
  2. Model Training: Training occurs in multiple stages. The collected data (human-created music) is preprocessed, which means it is cleaned, labelled and formatted to ensure consistency and quality. The model then extracts stylistic and structural elements from the data to identify and learn patterns in key elements, such as melody, harmony, rhythm, lyrics and production techniques. Using machine learning and deep learning techniques, the AI model is trained to generate music that mimics the stylistic and structural elements of its training data. Developers then refine the model by testing its outputs against the original inputs, adjusting parameters and improving its ability to produce human-like songs and compositions.
  3. Pattern Recognition & Composition: Once trained, the AI uses complex algorithms to analyse and replicate musical structures. When a user inputs a prompt (e.g., “Create a rock song with a bluesy feel”), the model generates a new song or composition by combining and modifying elements it has learned from its training data (the human-created, likely copyright protected, music it was trained on).
  4. Lyric & Vocal Synthesis: AI models can also generate lyrics based on themes, structures, and rhyming patterns found in existing songs. Some platforms even synthesise vocals, mimicking human singers or blending vocal styles to create realistic performances.
  5. Automated Production & Mixing: Many AI tools go beyond composition, automatically producing and mixing the track to achieve a polished, studio-quality sound. This process includes arranging instruments, adjusting levels, and applying effects to enhance the final output. These AI models learn from human-created music and sound recordings, analysing how professional producers achieve balance, clarity and depth in their mixes. The AI then applies these learned techniques to generate new compositions that mimic music industry standards.
  6. Distribution & Monetisation: Some AI music platforms encourage users to distribute their AI-generated songs to streaming services like Spotify. While this might seem harmless, it raises critical legal and ethical concerns, especially when AI-generated content competes directly with human-made music for visibility and royalties.
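To make the learning-and-recombination idea in steps 1–3 concrete, here is a deliberately tiny sketch: a first-order Markov chain over note names. This is an illustrative assumption, not how Suno or Udio actually work (they use large deep neural networks trained on audio), but it shows the core mechanism at the heart of the copyright debate: every note the model can output is drawn directly from transitions observed in its training melodies.

```python
import random

# Step 1 (data collection): a hypothetical toy corpus of note sequences
# standing in for human-created melodies.
training_melodies = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "A", "G"],
    ["G", "E", "C", "E", "G"],
]

def train(melodies):
    """Step 2 (model training): learn which note tends to follow which."""
    transitions = {}
    for melody in melodies:
        for current, nxt in zip(melody, melody[1:]):
            transitions.setdefault(current, []).append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Step 3 (composition): recombine learned patterns into a 'new' melody."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:
            break  # no learned continuation for this note
        melody.append(rng.choice(options))
    return melody

model = train(training_melodies)
print(generate(model, "C", 8))
```

Note that the generated melody contains no note transition that did not appear somewhere in the training corpus. Scaled up by many orders of magnitude, that is precisely why the provenance and licensing of training data matters.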

What does this have to do with copyright?

This process might seem like an exciting innovation, but the reality is that AI-generated music is often built on unlicensed, human-created works. Without proper licensing frameworks in place, musicians, songwriters, and producers risk having their work used without consent, credit, or compensation.

Copyright is a legal framework that grants creators exclusive rights over their original works, including musical works, literary works, and visual art. It ensures that musicians, songwriters, and producers can control how their work is used, distributed, and monetised. Copyright is fundamental to the music industry because it provides a means for artists to earn a living from their creative efforts. Without strong copyright protections, creators risk having their work used without consent or compensation, undermining their ability to sustain their careers.

For the music industry, copyright underpins the licensing structures that enable artists and rightsholders to receive fair remuneration for the use of their music. It governs everything from streaming royalties to synchronisation fees in film, television, and advertising. Without copyright, there would be no legal mechanism to ensure that artists are paid when their music is played, sold, or reproduced. In an era where AI can generate music at scale, strong copyright protections are more important than ever to safeguard the livelihoods of human creators.

AI companies are legally obligated to obtain permission before using copyrighted works because copyright law grants creators exclusive control over how their content is used. Without obtaining a licence or explicit consent, AI companies infringe upon the rights of music creators by utilising their works without compensation or credit. This legal requirement exists to ensure that creators maintain agency over their intellectual property and are fairly compensated when their work is used commercially.

Seeking permission is crucial for several reasons:

  • Fair Compensation: Artists and rightsholders rely on licensing fees, royalties, and other revenue streams tied to the use of their music. Unauthorised AI training deprives them of this income.
  • Maintaining Creative Integrity: AI-generated music that mimics human-created works can dilute the originality of the music market, making it harder for artists to distinguish their work.
  • Preventing Exploitation: AI companies that scrape copyrighted content without consent engage in a form of exploitation that disregards the time, effort, and resources invested by musicians and producers.
  • Market Competition: Allowing AI-generated content to flood the music market and streaming platforms without proper oversight creates an unfair competitive landscape, where human musicians must compete with algorithmically produced tracks that often originate from unlicensed data sources.
  • Legal Compliance: Many jurisdictions have strict copyright laws that require AI companies to respect licensing agreements. Failing to obtain permission could lead to legal disputes and penalties.

By enforcing licensing requirements and upholding copyright protections, the music industry can ensure that artists retain control over their creative output and continue to thrive in an era of AI-driven content generation.

What was the consultation all about?

In December 2024, the UK government launched a consultation on Copyright and Artificial Intelligence, a crucial policy moment that could define the future of AI’s relationship with creative industries. The consultation was prompted by ongoing concerns about how AI companies use copyrighted material – particularly music, literature, and visual art – as training data for generative AI models. At the heart of the debate is the balance between fostering AI innovation and protecting the economic rights of creators whose work fuels these technologies.

The UK’s creative industries are a powerhouse, contributing £124.6 billion to the UK economy in 2023 and employing 2.4 million people. The sustainability of this sector relies on strong copyright protections, ensuring that creators can earn a living from their work. However, the proposals outlined in this consultation risk destabilising the industry by allowing AI developers to exploit copyrighted content without proper licensing.

The consultation presented several policy options for regulating AI’s use of copyrighted works. The most controversial of these is commonly referred to as Option 3, which proposes a text and data mining (TDM) copyright exception with an opt-out mechanism. This would allow AI companies to scrape and use copyrighted material by default, forcing rightsholders to actively opt out if they do not wish their works to be used.

The key questions raised in the consultation included:

  • Should AI developers be allowed to train models on copyrighted works by default, provided rights holders have the ability to opt out?
  • How can transparency be improved in AI training datasets?
  • Should new licensing frameworks be introduced to ensure fair remuneration for creators?
  • How can the government ensure that UK law supports AI innovation while protecting the rights of creatives?
  • How should AI-generated digital replicas that mimic existing artists, styles, or voices without authorisation be protected and regulated?

My response to the consultation

As a researcher specialising in AI, copyright, and music industry studies, I submitted a response to this consultation, arguing against the proposed changes that threaten to undermine the rights of music creators. In my response to the consultation, I highlighted several critical issues and made the following key arguments:

  1. Option 1 Over Option 3 – The Unworkability of an Opt-Out Mechanism: I strongly opposed the proposal allowing AI developers to use copyrighted material by default, arguing that an opt-out mechanism is unworkable for musicians and has already proven ineffective in the EU. I supported Option 1, which aims to strengthen copyright protections by including legally binding transparency obligations for AI developers and penalties for non-compliance.
  2. Transparency Should Be a Legally Binding Obligation: For licensing frameworks to be effective, AI developers must be transparent about the works that have been used to train their AI models, and the date on which each work was scraped. The government only included transparency obligations under Option 3. I argued that transparency should be a requirement no matter which option the government ultimately chose to adopt.
  3. Copyright is Fit for Purpose and Licensing Frameworks Already Exist: I argued that the existing copyright system is well-equipped to manage AI-generated content, and that licensing frameworks should be enforced rather than replaced with broad exceptions.
  4. Ensuring Fair Compensation for Creators: I advocated for remuneration models that fairly compensate creators whose works are used in AI training, including licensing fees and royalty payments.
  5. Ensuring Robust Protection for Artists’ Voices and Musicians’ Performances: I raised concerns about AI-generated digital replicas mimicking artists’ voices and styles without authorisation, calling for stronger legal protections against such exploitation, such as personality rights that protect an artist’s name, image, likeness and voice. Similar protections have been introduced in the US and it is vital that UK law adopt similar protections.
  6. Market Competition and Ethical Considerations: I warned against the oversaturation of AI-generated music on streaming platforms, which could distort the market and reduce opportunities for human musicians. AI-generated works are already being used as a cheap alternative to human-created works in the music market (for advertising, gaming and synchronisation). We must not enable a volatile market where human creators are forced to compete with AI-generated works that have been trained on their own music.
  7. The Protection of Computer-Generated Works (CGWs) as a Related Right: I proposed recognising Section 9(3) of the Copyright, Designs and Patents Act as a related right. Section 9(3) offers protection to computer-generated works (CGWs). The government intends to remove this protection. I am in favour of retaining Section 9(3) as a related right, which would protect AI-generated works in the same way sound recordings are protected. Such protections could be utilised as a mechanism for generating royalties for human composers alongside licensing frameworks for AI training data.

For a full, detailed breakdown of my responses, you can access the PDF of my submission below:

Looking forward…

The consultation closed at 23:59 on Tuesday 25 February 2025. It will be several months before the consultation’s outcome is known; however, we do already know that the government received over 11,500 responses. The consultation’s end date brought the creative industries together in ways that have never been seen before in the UK. UK newspapers aligned to present a front page advertising the #MakeItFair campaign, and over 1,000 musicians were named on a silent album released under Virgin Records in protest of the government’s plans to amend copyright in favour of big tech and AI development. I am proud to say that my name is on that record!

Looking forward, the UK government has a critical decision to make: will it protect the rights of music creators and other artists, or will it prioritise the interests of AI developers at the expense of the creative economy? This is a pivotal moment for the music industry and for the UK’s centuries-old and pioneering copyright regime. Is the UK government blinded by the dollar signs promised by Silicon Valley, or will it choose to protect one of the greatest and most valuable creative industries in the world?
