The trouble with writing about AI is twofold. First, the situation changes so fast that the printed word risks being out of date by the time it appears. And second, it’s hard to be positive about a technology that is displacing human creativity whether creative humans like it or not. But there was at least some satisfaction to be gained recently from hearing the tech bros of OpenAI complaining that Chinese chatbot DeepSeek may have infringed copyrights surrounding ChatGPT. Yes, copyright. That’s the same legal concept for which the same tech bros showed the same regard when training their models. There’s a Jeremy Clarkson meme for that.
Less satisfying is that, in its wisdom, the UK government is considering creating an exemption that would allow AI companies to use copyright material as training data without explicitly seeking permission first. The idea seems to be that if we want to remain competitive in the AI sector, we need to give it the keys to the archive. Because if we don’t, the AI sector will simply migrate to another territory that cares even less about its creative sector.
The government’s consultation document, which you can find at bit.ly/4hEmi5P, makes this argument pretty strongly, and in fairness, it’s hard to dispute that the only real protections that can be afforded to the creative sector are those that are also widely implemented elsewhere. So much for Brexit allowing us to take back control.
If this consultation is to be believed, the best musicians (and magazines) can hope for is an opt‑out clause: an exemption from the exemption, if you will, that would allow us to prevent our work being used for model training unless we grant an explicit licence. This would be accompanied by new rules about transparency, forcing AI companies to disclose what material they’ve used for training, and ensuring that AI‑generated content is clearly marked as such.
If this can be made to work, it may even be a reasonable solution. But therein lies the rub. Does anyone seriously believe that it can be made to work? How, exactly, are UK authorities planning to audit transparency disclosures made by Google or DeepSeek? How will an exemption that can’t be applied retrospectively help those whose catalogues have already been plundered? How will we block AI tools that don’t comply with the new legislation? How can technical and legal mechanisms be established that are both effective and accessible to individual artists? And is there any chance that this hugely complex endeavour can be completed before irreversible damage is done to the creative industries?
Like I say, it’s hard to be positive sometimes.
Sam Inglis, Editor In Chief