The grey area of artificial imitation
Artificial intelligence has evolved rapidly in recent years, and with this evolution have come new and complex challenges to intellectual property (IP) rights, particularly from deepfake technology and celebrity impersonation.
Numerous tools now allow users to generate audio and video clips mimicking the voices and faces of celebrities.
This technology occupies a grey area between innovation and legality, where the lines between creativity, ethical considerations, and legal boundaries are often blurred.
Celebrity voice and image generators
Various AI platforms enable users to create audio and video content that closely mimics the voices and appearances of well-known figures, from politicians to actors and musicians. Users can input text or other data, select a celebrity, and the AI generates a clip that sounds and looks remarkably like the chosen individual. These platforms often market themselves as entertainment tools, intended for pranks, personalised messages, or humorous content.
However, despite their playful positioning, the capabilities of these tools raise significant concerns about the potential for misuse, particularly in the context of deepfakes – a technology already under scrutiny for its ability to fabricate realistic but false content.
A legal benchmark
Scarlett Johansson's legal dispute with OpenAI over a voice in its ChatGPT product highlights the broader legal and ethical concerns that arise when technology uses voices or images closely resembling those of celebrities.
Johansson was troubled by how closely the voice resembled hers, leading to public outcry and the eventual removal of the voice from the product. This case raises important questions about the boundaries of creative use and the potential for unauthorised exploitation of a celebrity's likeness, particularly in the realm of AI-generated content.
This brings into focus the "right of publicity", which allows individuals to control how their identity, including their voice, is used commercially. The principle is becoming increasingly relevant as technologies such as deepfakes and AI voice generators push the boundaries of what is possible. While these tools are often marketed as entertainment, they operate in a legal grey area where the potential for intellectual property infringement and ethical harm is significant.
The Johansson case is a key example of the challenge of defining the limits of creative freedom in technology, especially when the likenesses of real people are used without their explicit consent. What makes the case more striking is that Johansson had refused permission for the use of her voice more than once before the company released a voice that closely resembled it. The dispute underscores the need for clear legal guidelines to navigate these emerging issues as AI continues to evolve.
The introduction of the Artificial Intelligence Act (AI Act) by the European Union is a significant step towards regulating AI technologies, particularly those that pose risks to fundamental rights. The AI Act categorises AI systems based on their level of risk and imposes strict requirements on high-risk AI applications, which could include deepfake and celebrity impersonation technologies.
The legal and ethical grey area
Companies that produce AI tools for celebrity impersonation operate in a legal grey area, often leveraging parody as a defence against potential IP infringement claims. These companies typically assert that their AI-generated voices and images are parodies and not affiliated with the actual celebrities. This stance aims to protect them under laws that allow for parody and satire, which are often considered forms of free speech in many jurisdictions.
However, the issue is not clear-cut. It could be argued that the technology these companies deploy infringes the right of publicity, especially if the generated content causes confusion or implies endorsement by the celebrity. Moreover, even if AI-generated content qualifies as parody, the realism of the voices and images still raises ethical concerns, particularly when the content is used maliciously or to deceive.
Brand protection and the future of IP law
The Scarlett Johansson case and the advent of AI tools that mimic celebrity voices and images illustrate the ongoing tension between technological innovation and IP law. As AI technology continues to advance, it is likely that new legal frameworks will be needed to address the unique challenges it presents. For celebrities, protecting their brand will increasingly involve monitoring and potentially litigating against AI-generated content that misuses their identity.
While some companies currently benefit from the lack of explicit legal precedents regarding AI-generated voice and image content, this situation may not last. As deepfake technology becomes more widespread, there is a growing likelihood that courts will be asked to define the boundaries of parody, satire, and infringement more clearly, potentially leading to new regulations that could impact these tools.
AI-based voice and image generators bring to the surface the complex legal and ethical issues surrounding deepfake technology. While these entertainment tools currently operate in a grey area, the potential for IP infringement and the ethical implications of such realistic impersonations cannot be ignored. Scarlett Johansson's dispute over the use of her identity highlights the importance of clear legal guidelines to protect individuals' rights in the age of AI. As the technology continues to evolve, so too must the laws that govern it, ensuring that innovation does not come at the cost of individual rights and brand integrity.