I am really pleased to be writing this post because the publication of Brave New World? marks an important milestone in my work with the Independent Society of Musicians (ISM), and in the wider conversation about generative AI and creators’ rights. I began working with the ISM’s External Affairs team and CEO Deborah Annetts about a year and a half ago as Senior AI Researcher and Policy Advisor, just as the government launched its consultation on AI and copyright. It was a fantastic opportunity to put four years of research into practice and to see that work begin to have real-world impact, while also contributing to an organisation committed to representing music creators, the very reason I wanted to do a PhD on the impact of AI in the first place. It also gave me the opportunity to develop new skills in political engagement and to expand my research interests into adjacent areas, including human rights and personality rights, both of which I hope to write more about in future posts. As an academic researcher, it has been an absolute privilege to work on the front line of policy campaigning with an organisation like the ISM, which puts musicians at the heart of its work.

For those unfamiliar with the organisation, the ISM does vital work in supporting, representing, and advocating for musicians across the UK. It is not a trade union in the same way as the Musicians’ Union, but that position gives it greater freedom in developing policy, leading campaigns, and engaging directly with parliamentarians and policymakers. At a time when the pace of technological change is accelerating, and the legal and policy implications of generative AI are becoming harder to ignore, this work — and the ISM’s commitment to protecting creators’ rights — has never felt more important.
The most significant piece of work I have been involved in through the ISM is Brave New World? Justice for Creators in the Age of GenAI. Conceived by the brilliant Deborah Annetts, the project brought together five key creative sector organisations: the ISM, the Society of Authors (SoA), the Association of Photographers (AOP), the Association of Illustrators (AOI), and Equity. I was able to draw on extensive datasets of primary research shared by those organisations, amounting to responses from more than 10,000 creators across these sectors, one of the largest cross-sector evidence bases assembled to document the impact of generative AI on creative work. Researching and writing this report was both a huge professional privilege and, at times, a sobering experience. The more evidence we gathered, the clearer it became that the economic and legal disruption caused by generative AI was not some future problem waiting on the horizon. It was already here, and creators were already living with the consequences. As I put it in recent speeches at the Houses of Parliament and the Royal Over-Seas League (ROSL) on behalf of the ISM, this report was about doing something very simple: putting creators back in the room.
How the report came about
Brave New World? emerged at a moment when AI and copyright were rapidly moving to the centre of UK policy debate. The government launched its Copyright and Artificial Intelligence consultation in December 2024, asking how the UK’s copyright framework should respond to AI training and to the competing pressures coming from the creative industries and the AI sector. The consultation ran from 17 December 2024 to 25 February 2025 and raised, among other things, the possibility of introducing a new text and data mining (TDM) exception for AI training.
Before the consultation had even closed, the government published Matt Clifford’s AI Opportunities Action Plan, which many in the creative industries read as signalling a clear preference for an opt-out model: one that would allow AI developers to train on copyrighted works unless rights holders actively reserved their rights. That provoked widespread concern. For many creators and rights holders, it suggested not only a weakening of copyright protection, but also a model that would shift the burden onto creators to protect their own work in a technical environment where effective opt-out tools do not yet properly exist. It also created the impression that the government’s preferred direction of travel had already been set before the consultation process had concluded.
Concern was not simply ideological. Copyright exists to protect original human-created works and to provide a route to remuneration for the skill, labour, and judgement involved in creating them. While copyright law has long included exceptions designed to balance creator protection with public interest goals such as research and access, those exceptions were never intended to facilitate the large-scale use of creative works in the development of commercial systems that may then compete with those same works in the market. For many in the creative industries, the prospect of a broad TDM exception with an opt-out mechanism therefore felt not like a minor technical adjustment, but like a fundamental shift in whose interests copyright law was being asked to serve.
At the same time, Parliament was locked in repeated debates over the Data (Use and Access) Bill, including Baroness Kidron’s amendments, which sought to strengthen transparency and make clear that AI developers could not simply treat copyright-protected works as free training material. After months of parliamentary ping-pong, and a remarkable effort by Baroness Kidron, members of the House of Lords, and many organisations involved in the Creative Rights in AI Coalition (CRAIC), the alliance behind the Make it Fair Campaign, those amendments were not accepted. In the end, however, the Act did require the government to publish both a report on the use of copyright works in AI development and an economic impact assessment within nine months of Royal Assent, with a progress statement within six months.
Those debates mattered enormously. They were not just technical arguments about policy design. They were arguments about what kind of market the UK wants to build, whose interests count when rules are made, and whether creators will retain any meaningful control over the works, voices, images, and identities now being absorbed into generative AI systems at scale. As those debates intensified, it became increasingly obvious to us that one thing was missing from much of the public discourse: the lived experience of creators themselves. It was at that point, in June 2025, that the idea behind Brave New World? took shape as a political intervention designed to impress upon policymakers just how urgent the issue of industrial-scale extraction from copyrighted works had become.
The creator voice was being lost
Too much of the debate around generative AI and copyright has been dominated by industry lobbying, litigation (largely in the United States), and political rhetoric about innovation, growth, and the need for the UK not to be “left behind” in a global AI race. But underneath all of that noise is something much more immediate: a labour market changing at speed, and a crisis of copyright, consent, and identity moving even faster. That is exactly the point I tried to make in recent speeches, where I argued that creators were too often being spoken about but not listened to.

Not long before the report was published, deals were being made behind closed doors between music companies and AI firms. In one sense, this was promising: it demonstrated that a licensing environment for generative AI training is possible. But it also raised a familiar concern. Once again, creators risk being left out of the decision-making process. For musicians in particular, this feels uncomfortably similar to the kinds of deals struck in the early years of the streaming era, when streaming companies partnered with record labels in response to digital piracy, but creators often saw little meaningful financial return. As I argued in an earlier post on the lessons of the streaming era, there is a very real danger that generative AI will follow a similar path, with creators bearing the cost while others capture most of the value.
That was one of the main reasons I felt so strongly about Brave New World?. I believed, as did the ISM, that the conversation needed to be reset. We needed evidence that centred creators rather than treating them as an afterthought to innovation policy. We needed to document what was already happening across the creative industries, and to show that these were not isolated anxieties or speculative fears, but real and widespread experiences of loss, insecurity, extraction, and displacement.
What the report shows
What Brave New World? demonstrates is that generative AI is already affecting creative work in material ways. The five partner organisations involved with the report represent more than 80,000 creators, and the report draws on survey responses from more than 10,000 professional creators gathered between 2022 and 2025. It is one of the largest UK cross-sector evidence bases of its kind. The evidence reveals a consistent pattern across sectors: job and income losses, large-scale scraping without consent, digital replicas and cloned identities, and a transparency gap that leaves creators without meaningful control or remedy.

What struck me most in working through the evidence was how consistent the pattern was across different creative fields. In music, 73% of musicians said that unregulated generative AI now threatens their ability to earn a living, while 53% said they had already lost work or income to AI-generated music, or could not be sure that they had not. In writing, 72% of authors reported that job opportunities had already been cut because clients were turning to AI, while 86% said their earnings had already fallen. For illustrators and photographers, too, the picture was stark: commissions disappearing, bread-and-butter work drying up, and careers being hollowed out not by some distant technological future, but by market substitution happening now.
Just as alarming was the extent to which creators felt they had lost control over their own work. Across all sectors, 99% of creators believed their work had been scraped without consent, and between 95% and 99% said that consent and payment should be required for AI training. The report also revealed the growing scale of identity-based harms: cloned voices, digital replicas, style mimicry, and counterfeit creative outputs that blur the line between inspiration, exploitation, and impersonation. Taken together, the findings tell a sobering story. This is not innovation happening alongside creators; it is a system in which value is too often being extracted from them without permission, transparency, or fair remuneration.
That matters because these harms are not confined to one part of the creative economy. They affect musicians, writers, illustrators, photographers, performers, and many others. They affect not just copyright but consent, attribution, remuneration, bargaining power, dignity, and trust. They affect not just individual careers, but the ecosystems that sit around them: studios, engineers, venues, publishers, production teams, commissioners, and the wider networks that creative work sustains. When creative work disappears, it does not disappear in isolation; the losses ripple out through entire skills and jobs ecosystems, including sectors beyond the creative industries such as entertainment, the night-time economy, tourism, hospitality, and the wider local economies that depend on cultural activity.
The harms do not end there
One of the most important things about Brave New World? is that it does not treat this as a narrow copyright dispute alone. The report also considers the broader societal consequences of unregulated generative AI, including its implications for privacy, identity, data protection, consumer protection, access to justice, and human rights. For example, when someone’s voice, face, name, or likeness can be cloned and redeployed without their knowledge, we move beyond copyright into questions of personal autonomy, dignity, and identity as well as the authenticity of creative works. As the report argues, these harms engage not only intellectual property law, but wider protections associated with privacy, identity, misrepresentation, deception, and the availability of an effective remedy when rights are infringed.

The report also covers some of the hidden costs of generative AI that are too often missing from the “innovation versus regulation” story. Generative AI carries significant environmental costs, from the energy demands of data centres to the water consumption required to cool them, and these are not peripheral concerns. They are part of the real material footprint of this technology. Brave New World? also highlights the wider human and economic costs of displacement: the insecurity that spreads through local economies, the pressure placed on creative ecosystems and associated supply chains, and the risk that the benefits of AI will be concentrated in a small number of firms while the social and economic costs are borne elsewhere. In that sense, the report is not only about creators’ rights, but about the kind of society, economy, and cultural future we are prepared to accept.
This is not an anti-AI argument
I have said this consistently in my research and policy work, and it was central to the report as well: this is not about being anti-technology. It is not about rejecting AI outright. It is about insisting on lawful, ethical, and accountable AI. Copyright is not some old-fashioned obstacle standing in the way of progress. It is one of the UK’s foundational creative and economic frameworks. It exists so that creativity can flourish alongside technological change, not be quietly stripped of value in the process. The real question is not whether generative AI will develop. It is whether it will develop on fair terms. That is also the core argument running through the CLEAR framework set out in the report: consent first, licensing not scraping, ethical use, accountability and transparency, and remuneration and rights.
A CLEAR Framework for AI

The report does not just set out a list of concerns; it proposes a policy direction. In Brave New World?, we argue that any future legislation, regulatory response, or industry solution must be framed by the CLEAR Framework. In other words, CLEAR is not an optional add-on or a slogan. It is the framework we are asking policymakers to stand behind so that the UK’s response to generative AI is shaped by lawful markets, meaningful creator protection, and fair value exchange, rather than by speed, opacity, and extraction. As I put it in my ROSL speech, the aim is to build a functioning market in which innovation can happen with creators, not at their expense.
CLEAR stands for Consent, Licensing, Ethical use, Accountability and transparency, and Remuneration and rights. Each part responds directly to what creators are asking for and to what a workable system actually requires. Consent means no training on creators’ works by default and no burying permission in terms and conditions. Licensing means lawful access, not scraping first and asking questions later. Ethical use means drawing real lines around identity theft, deception, harmful impersonation, and exploitative uses of AI. Accountability and transparency mean that AI companies must disclose what creative works have been used, how they have been used, and what environmental and social costs are involved. And Remuneration and rights means that if value is extracted from creative labour, value must flow back through fair payment, attribution, and effective remedies, including stronger protections where harms extend beyond copyright into identity and personality.
What matters most to me about CLEAR is that it gives us a way out of the false choice between innovation and protection. It is a framework for ethical AI development, deployment, and adoption, but it is also a framework for good policymaking. If the UK is serious about leading in this space, then leadership cannot mean moving fastest and asking questions later. It has to mean putting in place rules that protect people, culture, labour, and the conditions for genuine innovation. That is why CLEAR sits at the heart of Brave New World? and why I believe it should shape any future legal or policy response to generative AI.
Why this report matters
For me, Brave New World? matters because it refuses to abstract these issues. It brings the discussion back to people: to the musician who loses a commission, the performer whose likeness is scanned without informed consent, the writer displaced by AI-generated output, the illustrator watching their style flattened into a monetisable imitation, the photographer discovering that their work has become training material for systems they never agreed to support. The report insists that these people are not collateral damage in someone else’s innovation story. They are the foundation of the UK’s cultural and economic life.
As the report highlights, the UK’s creative industries contribute £124.6 billion a year to the economy and support nearly 2.5 million jobs, accounting for more than 7% of the national workforce. By comparison, the UK’s AI sector contributes £11.8 billion and supports around 86,000 jobs. That contrast matters. The UK has built much of its global reputation and soft power on the strength of its creative workforce: our musicians, writers, performers, artists, designers, photographers, filmmakers, and the many others whose labour sustains our cultural life. These are not marginal sectors. They are central to the UK’s economic, social, and cultural identity.

So the question for policymakers is a simple but urgent one: why would we weaken the foundations of one of the UK’s most successful and internationally respected sectors in order to privilege an industry that has not yet proven that it can deliver broad-based productivity gains without profound social and economic disruption? Why risk undermining our country’s cultural heritage, creative ecosystems, and global standing for a model of innovation that too often depends on extraction, opacity, and the displacement of human labour? And why accept the erosion of creative careers in exchange for homogenised cultural outputs that may be profitable for a small number of firms, but are ultimately of little value to society?
By all means, let us use AI where it can deliver genuine public benefit. Let us use it to support medical research, to improve healthcare, to help address environmental challenges, to strengthen accessibility, and to solve complex problems that genuinely require large-scale computational power. But that is very different from allowing commercial AI systems to build value by scraping human creativity without consent, without transparency, and without fair remuneration. There is nothing socially progressive about sacrificing the livelihoods of creators in order to subsidise a business model that treats culture as raw material and human expression as a free input.
What is at stake here is not simply a sectoral dispute between the creative industries and the AI industry. It is a question of what kind of economy, what kind of culture, and what kind of society we want to build. If we hollow out the conditions that allow people to create, perform, write, compose, photograph, and imagine, then the long-term damage will reach far beyond individual professions. It will affect education, communities, local economies, cultural diversity, and the future pipeline of talent on which the UK’s creative success depends. Once that infrastructure is weakened, it will not be easily rebuilt.
If the UK wants to lead in AI, it should lead in a way that is ethical, lawful, and socially responsible. It should not do so by undermining one of its greatest national assets. Innovation should not come at the cost of creativity, and economic growth should not be pursued by eroding the very workforce that has helped give the UK its cultural voice and global influence.
And that is why I wanted to write this post. After so much debate framed around what government wants, what tech companies need, or what markets might deliver, I wanted to say clearly that creators, and society, have to be at the centre of this conversation. Not as a token presence, and not as an afterthought, but as workers, rights holders, and cultural contributors whose lives and livelihoods are directly at stake.
That, to me, is what Brave New World? is really about.
