The possibility that Scarlett Johansson could sue OpenAI over a voice assistant that sounds like her performance in the film “Her” has raised fresh questions about AI’s legal complications. Legal experts suggest Johansson may have a credible claim based on past cases involving copyright and right-of-publicity law. While OpenAI denies directly sampling Johansson’s voice, the company could still face liability for unauthorized use of her likeness, particularly in California, where publicity-rights protections are strict. Defenses OpenAI might raise, such as arguing that the voice assistant was not an advertisement, may not hold up in court.

Past lawsuits have shown that imitating a person’s voice can lead to significant damages, most notably Bette Midler’s case against Ford Motor Company and Tom Waits’s case against Frito-Lay. Johansson’s situation with OpenAI mirrors those disputes, and the company’s best defense may rest on showing that it did not intend to imitate her voice. OpenAI’s own conduct, however, including the association it drew between Sky and Johansson in marketing materials, could undermine that defense, and the company may face liability regardless of its intentions.

The parallels between Johansson and Sky highlighted in OpenAI’s promotional materials suggest the company wanted users to associate the voice assistant with Johansson, which could itself violate California’s publicity-rights law. The episode illustrates the legal challenges technology companies face in the age of deepfakes and generative AI, and the absence of clear federal legislation to address them. Some technology companies, including Adobe, have proposed a federal right against AI impersonation, while lawmakers are working on bills such as the NO FAKES Act and the No AI Fraud Act to protect creators.

The ongoing congressional debate over publicity-rights legislation reflects how difficult these questions are to resolve. At stake are concerns about protecting free expression, governing the use of individuals’ likenesses after death, and preventing record labels from exploiting AI-generated performances. Comprehensive legislation covering deepfakes, voice assistants, and other AI technologies remains a priority for lawmakers, digital rights groups, and academics, and the outcome of these debates will shape legal protections for individuals in the age of advanced AI.
