
Insights: AI and the creative industries

Published on Monday 6 November 2023

Last week the UK government held a global AI Safety Summit at Bletchley Park in Buckinghamshire, while American President Joe Biden issued an executive order which he described as “the most significant action any government anywhere in the world has ever taken on AI safety, security and trust”. Both events confirmed that regulating artificial intelligence is now a top priority for governments and lawmakers across the world.

However, for the creative industries, a more interesting development this last week was the publication of thousands of submissions that have been made to an ongoing review in the US considering how AI interacts with copyright. Because the rise of so-called ‘generative AI’ presents both opportunities and challenges for creators and creative businesses, and some of those challenges relate to copyright matters.

AI AND THE CREATIVE INDUSTRIES
Artificial intelligence isn’t new, of course. However, you can’t fail to have noticed that AI technologies have become a much bigger talking point in the last year. Partly because of technological developments. And partly because the need to regulate AI has risen up the political agenda.

For the creative industries, of particular interest is ‘generative AI’, a specific kind of artificial intelligence. This term refers to AI models that are able to generate original content, including text, image, audio, video and music.

This technology provides powerful new tools for creators and creative businesses, which can be used as part of the creative process, making it quicker, easier and cheaper to create great content.

But, of course, it also creates some challenges and concerns. Generative AI platforms will compete with human creators. And they may do so – at least in part – by exploiting the creativity of those human creators.

COPYRIGHT AND GENERATIVE AI
Which brings us to copyright matters. Generative AI models are commonly trained by being exposed to existing content: text, images, audio or video. And that usually involves copying the existing content onto a server in order to analyse and learn from it.

This poses an important question: does a tech company need permission from the creator and/or owner of the text, images, audio or video before it copies that content onto a server and uses it to train a generative AI model?

This is first and foremost a copyright question. Because copyright gives creative people control over the output of their creativity.

If you write a script, or make a film, or record a track, you have control over what happens to that work. If other people want to legally copy, distribute, rent, adapt, perform, communicate or make available the script, film or track, they need to get permission from the copyright owner.

The copyright owner can then decide whether or not to grant that permission and – if so – at what price and on what terms.

So, does that control extend to the training of a generative AI model? Most creators and copyright owners would say “yes, it does”.

COPYRIGHT EXCEPTIONS AND ‘FAIR USE’
But not all AI companies agree. Copyright law usually provides exceptions: scenarios where people can make use of copyright protected works without getting permission from the copyright owner.

Common exceptions include critical analysis and parody. Many tech companies argue that – in some countries at least – training an AI model is covered by an exception, usually one relating to text and data mining.

Under American copyright law, rather than a list of specific exceptions, there is the more ambiguous concept of fair use. Third parties can make use of copyright protected works without getting permission if that use constitutes fair use.

Scenarios that are deemed fair use under US law tend to be similar to the scenarios covered by specific copyright exceptions in other countries, though the principle of fair use is more open to interpretation.

THE US COPYRIGHT OFFICE REVIEW
Which brings us to this review led by the US Copyright Office on how AI interacts with copyright. An important part of the consultation is the debate over whether or not the training of an AI model with existing content constitutes fair use.

Perhaps unsurprisingly, submissions made to the US Copyright Office by organisations representing creators and copyright owners are generally pretty clear that it does not. Or, at least, that while it may in certain very specific circumstances, it does not as a general rule.

Meanwhile, submissions made by AI companies generally take the opposite position – ie the training of an AI model with existing content is fair use and, therefore, tech companies do not need to get permission from copyright owners to utilise their works.

QUOTES FROM THE SUBMISSIONS TO THE US COPYRIGHT OFFICE

Stability AI: “We believe that training AI models is an acceptable, transformative and socially beneficial use of existing content that is protected by the fair use doctrine and furthers the objectives of copyright law, including to ‘promote the progress of science and useful arts’”.

ChatGPT owner OpenAI: “[We believe] that the training of AI models qualifies as a fair use, falling squarely in line with established precedents recognising that the use of copyrighted materials by technology innovators in transformative ways is entirely consistent with copyright law”.

Google: “The doctrine of fair use provides that copying for a new and different purpose is permitted without authorisation where – as with training AI systems – the secondary use is transformative and does not substitute for the copyrighted work”.

Recording Industry Association Of America and American Association Of Independent Music: “Although we recognise that fair use involves a fact-specific, case-by-case analysis, the unauthorised reproduction of copyrighted works by AI developers to develop models that produce AI-generated works that actually or potentially compete with the inputted works comes as close as a use can come to being presumptively not fair use”.

Motion Picture Association: “The fair use defence requires that courts take a ‘subtle, sophisticated approach’ to each case, rather than establishing broad, categorical rules. Given the intensely fact-intensive nature of fair use, it is neither feasible nor appropriate to define ex ante the circumstances in which the defence would apply to uses of copyrighted works to train AI models”.

New York Times: “Generative AI tools may be new, but the fair use arguments that proponents advance have been routinely rejected in a variety of parallel contexts where creators of a new digital product seek to use others’ content and then compete against them”.

THE ONGOING DEBATE
For creators and creative businesses, it will be very interesting to see how the debate over the copyright obligations of AI companies develops. Because the conclusion of this debate will decide whether creators and creative businesses have control over the use of their content to train AI models.

In the US, alongside the Copyright Office review, there are already a number of lawsuits working their way through the courts in which copyright owners are suing AI companies that have trained models with their content without permission.

In the UK, the government’s Intellectual Property Office is currently working on a code of practice around copyright and AI, having convened a working group earlier this year that includes both AI companies and organisations representing creators and creative businesses.

Alongside that, there is a related but distinct debate within the creative industries. Because individual creators and performers do not necessarily own the copyright in the content they have created.

Which poses another key question: even if the AI companies have to get permission from the copyright owner – maybe a publisher, studio or label – does that copyright owner also need to seek permission from the individual creator or performer?

When it comes to the obligations of AI companies, creators and creative businesses are generally aligned. But when it comes to the obligations of publishers, studios and labels in the AI domain, there is often a difference of opinion, as we have seen with the recent strikes in Hollywood.

We’ll consider more about this element of the conversation around AI and the creative industries another day.

FURTHER READING

Human Artistry Campaign

Creators’ Rights Alliance on AI

Equity on AI

Society Of Authors on AI

UK Music on AI

Council Of Music Makers on AI


