OpenAI Says It's Working With Actors to Crack Down on Celebrity Deepfakes in Sora

OpenAI has announced new measures to prevent users of its AI video generation app, Sora, from creating videos that use the likenesses of actors and celebrities without their consent. The move comes after actor Bryan Cranston, along with the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and several talent agencies, raised concerns about the unauthorized use of performers' images in AI-generated deepfake videos. The resulting agreement highlights the growing tension between AI companies and rights holders, including celebrities, studios, and talent agencies, over the ethical and legal implications of generative AI.

Sora, which was launched just three weeks ago as a sister app to OpenAI’s ChatGPT, allows users to create and share AI-generated videos with unprecedented ease. What distinguishes Sora from other AI video generators and social media platforms is its ability to incorporate the recorded likenesses of real people into these videos, effectively placing them into new, AI-created scenarios. While some users have found the resulting videos amusing or entertaining, others have expressed discomfort and concern over the realistic deepfakes that blur the line between fiction and reality.

Bryan Cranston, best known for his role as Walter White in “Breaking Bad,” was among the first to notice that his likeness was being used without permission on Sora shortly after its launch. Concerned about the implications, Cranston reached out to his union, SAG-AFTRA, to address the issue. The union, together with several talent agencies, negotiated an agreement with OpenAI that emphasizes the need for celebrities to opt in before their likenesses can be used in AI-generated videos on Sora. In a joint statement, OpenAI confirmed that it has “strengthened the guardrails around replication of voice and likeness” to prevent unauthorized use and expressed regret for any “unintentional generations” that may have occurred.

OpenAI already had some safeguards in place to block users from generating videos featuring well-known individuals. For example, the AI refused to create a video of singer Taylor Swift performing on stage when prompted. However, these restrictions have proven imperfect. Recently, there was a surge in videos depicting the late civil rights leader Martin Luther King Jr. in inappropriate or offensive contexts, ranging from deepfakes of him rapping or wrestling in WWE-style matches to overtly racist content. Such “disrespectful depictions,” as OpenAI described them, led the company to temporarily suspend the ability to generate videos featuring King’s likeness.

The Estate of Martin Luther King Jr., Inc. (King, Inc.) has been collaborating with OpenAI to determine how Dr. King’s image should be handled in Sora-generated content. Bernice A. King, Dr. King’s daughter, publicly appealed for an end to the circulation of AI-generated videos of her father, expressing distress over their disrespectful nature. Similarly, Zelda Williams, daughter of the late comedian Robin Williams, condemned such AI videos as “gross,” emphasizing the emotional impact these unauthorized uses can have on families.

OpenAI has stated its belief that public figures and their families should ultimately control how their likenesses are used. The company allows authorized representatives of public figures and their estates to request that their likeness not be included in Sora; in Dr. King's case, his estate holds the authority to decide how his image is represented. This approach reflects OpenAI's acknowledgment of the sensitivities involved and of the rights and wishes of those whose likenesses are at stake.

Despite these efforts, the relationship between AI companies and content creators remains fraught with challenges. Prior to Sora’s launch, OpenAI had informed several Hollywood-related talent agencies that they would need to opt out if they did not want their intellectual property to be included in the app. This opt-out approach diverged from traditional copyright law, which generally requires explicit licensing of protected material before use. Following criticism, OpenAI reversed its position, illustrating the complexities of adapting legal and ethical frameworks to emerging AI technologies.

The clash between AI developers and creators over copyright and likeness rights is not unique to OpenAI. Across the industry, high-profile lawsuits have emerged as artists, writers, actors, and studios seek to protect their intellectual property against unauthorized use in AI training datasets and generated content. These legal battles underscore an ongoing struggle to define how copyright and personal rights apply in the context of rapidly advancing AI tools.

In a related development, CNET's parent company, Ziff Davis, filed a lawsuit against OpenAI, alleging that the company infringed Ziff Davis copyrights in training and operating its AI systems.
