The London Book Fair wouldn’t be complete without the annual Charles Clark lecture, perhaps the highlight of the year for those interested in copyright. This year’s lecture was especially ‘buzzy’ because it addressed the copyright issues arising from the use of Artificial Intelligence [AI]. Entitled Do Androids Dream of Electric Copyright?, it was delivered by Dr Andres Guadamuz, Reader in Intellectual Property Law at the University of Sussex, and chaired by Dan Conway, the current CEO of the Publishers Association.

Dr Guadamuz was an engaging and unpretentious speaker who drew in the audience with a witty and attractively eccentric presentation. He said that he had spent ten years reading about AI, not a fashionable subject until recently, so he felt like a groupie supporting an obscure pop group that had suddenly shot to fame. Most of his research has been connected not to language models but to images. He has also explored the ethical issues attendant upon ChatGPT.
With regard to the latter, his conclusion is that if someone ‘practises transparency’ by admitting they used ChatGPT – for example, to prepare a presentation – that would be ethical. However, its use implies the need for massive changes in the way that many activities are conducted. For example, if a student uses AI a significant number of times in an essay, academics can detect it and perhaps disqualify the student; but the student may not have been told that this is against the rules.
More broadly, everyone must come to grips with the idea that the world is changing. Every day a new large language model is deployed. Generative Pre-trained Transformer [GPT] technology is now at GPT-4, while the free version of ChatGPT runs on GPT-3.5. Other, similar language models include LLaMA, Alpaca, Dolly 2.0 and Ernie. AI is everywhere! And open-source technology means that advances are accelerating exponentially. AI will continue to develop and be deployed regardless of litigation or regulation. “The genie is out of the bottle.”
So how is it most useful to think about AI and copyright? Dr Guadamuz suggests there are three separate debates:
- Are AI-generated works protected by copyright?
- Does training an AI application on existing creative works infringe copyright?
- Do AI-generated works themselves infringe copyright?
This raises several interesting authorship scenarios, with the following premises:
- Only humans can create works protected by copyright.
- Machines can generate work that could under some circumstances be protected by copyright.
- Sui generis rights – such as database rights, which reward investment – may exist, but be of shorter duration.
Dr Guadamuz went on to describe the legal approach to these scenarios throughout the world. Different countries have very different laws, from the USA, which says such content can’t be copyrighted, to the UK (and others), which stipulates that copyright exists in favour of the person who made the arrangements necessary for the work to be created and lasts for 50 years.
What is copyright for? Is it to protect investment? Is it to protect authors? Is it to protect human authors from free or cheap competition? Do we want ‘copyright police’ to conduct human points tests?
The key technical issue is that to train the various AI models, data is needed to start with. The early phases of developing a model therefore involve copying; the later phases, when the models have been built, don’t need copies – i.e., there is nothing original left in the application.
Copyright law already allows temporary copies of a work to be made for lawful use. Then there is Text and Data Mining [TDM]. TDM was created with different objectives in mind – to aid scientific research, not to create new works. In the UK, bona fide TDM is already regarded as an ‘exception’ to copyright law; and in 2021 a UKIPO consultation proposed that a new exception should make TDM lawful for any purpose, not just scientific research. The House of Lords threw this out, calling it ‘misguided’, because self-regulation creates loopholes which allow, for example, academic institutions to engage in data ‘washing’ or laundering.
The outputs of AI models include text, images, videos and music; but these outputs do not take the form of a collage. They are derivative of the inputs, but not fragments of them. (Parody, to which the law is sometimes, but very rarely, applied, is a ‘poor relation’ of these outputs.)
In conclusion, Dr Guadamuz said that he tells his students they “should be scared, they should be terrified”, because they are about to enter a jobs market that no one understands. Echoing his words, Dan Conway said that publishers “shouldn’t be scared, they should be terrified”, and artists even more so. Alternatively, AI could be great for publishing – it could potentially lead to more licensing and the capacity to create more work – but there are obviously pitfalls. How would Dr Guadamuz tackle infringements in an AI world? He said that the great challenge was that any case of infringement must focus on the inputs, not the outputs, and therefore relate to the training phase of the application (i.e., before the outputs have been created). The UK government has yet to get its act together over this. There is EU legislation – but of course the UK no longer belongs to the EU. Lessons can be learnt from the music industry, which “won the battle, but lost the war”.
There followed a lively debate. The question that made the most impression on me came from Oliver Gadsby, who was present in his capacity as a member of the PLS board. He said, “The human mind can surprise and delight [perhaps echoing Jane Austen]. My experience of AI is rather bland text. Can AI surprise and delight?” Dr Guadamuz’s alarming answer was yes – because the human mind responds with its soul. “Sometimes there’s an image that’s really, really good; I know it’s me bringing my own values and emotions to the image – but that’s art.” Chilling! Or exhilarating?
[written by Linda Bennett]