In a significant legal battle with the potential to shape the future of AI and celebrity rights, actress Scarlett Johansson filed a lawsuit against OpenAI, claiming the company used her voice without permission in GPT-4o, the new model powering ChatGPT.
The lawsuit, filed in federal court, marked a notable escalation in the ongoing debate over the ethical boundaries of AI and the protection of individual privacy. Johansson, known for roles in films such as “Black Widow,” argued that OpenAI’s systems were using her voice and likeness without consent, which could enable the creation of misleading or damaging deepfake content.
Johansson disclosed that she turned down an offer from OpenAI to voice their AI system the previous year due to personal reasons. She was “shocked” and “angry” upon hearing the voice, which she claimed was so similar to hers that even her closest friends and news outlets couldn’t distinguish between them.
Her legal team contended that this violated her rights to privacy, publicity, and intellectual property.
In a blog post responding to the allegations, OpenAI denied any imitation. “Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her natural speaking voice,” the post read. “To protect their privacy, we cannot share the names of our voice talents.” CEO Sam Altman echoed the company’s denial.
The case sparked intense scrutiny of the responsibilities of AI developers like OpenAI in protecting the rights of individuals, particularly public figures whose voices and images are widely recognizable.
OpenAI maintained that it had followed its own guidelines for responsible AI use, focusing on building AI for beneficial purposes and working to prevent misuse or harm.
The legal clash between Scarlett Johansson and OpenAI highlighted the complex intersection of technology, privacy, and intellectual property in the digital era. As the case unfolded, it sparked broader discussions on AI governance and consumer protection.
On Monday, OpenAI paused the Sky voice option in ChatGPT following extensive criticism and widespread comparisons to Johansson’s voice.