OpenAI has strengthened the safeguards in its Sora AI video generation platform to prevent the creation of videos featuring celebrities and public figures who have not given explicit consent. The update was announced in a joint statement from the San Francisco-based AI company, the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA), actor Bryan Cranston, and several other prominent entities. The statement follows Cranston's concerns about the potential misuse of his likeness and voice in Sora-generated content without his permission.
Sora’s Enhanced Protection Against Celebrity Deepfakes
Since the introduction of the Sora app, users have actively generated a wide array of videos depicting celebrities and public figures. From imaginative scenes like Stephen Hawking effortlessly diving into a swimming pool to humorous portrayals of Einstein as a wrestler, the internet’s creative community has embraced the tool. However, this surge in AI-generated content also led to significant apprehension and backlash from various celebrities concerned about their digital identities.
Just last week, OpenAI, in collaboration with the Estate of Martin Luther King, Jr., released a statement detailing their efforts to manage the representation of Dr. King’s image and voice in Sora creations. OpenAI acknowledged that some users had produced “disrespectful depictions of Dr. King’s image,” prompting the company to reinforce its protective measures for historical figures.
The joint statement on Monday, involving SAG-AFTRA, OpenAI, Bryan Cranston, United Talent Agency, Creative Artists Agency, and the Association of Talent Agents, specifically addressed the protocol for generating celebrity likenesses. This collective action was spurred after Cranston personally brought his concerns to SAG-AFTRA.
Cranston shared his perspective, stating, “I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way. I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work respect our personal and professional right to manage replication of our voice and likeness.”
These fortified safeguards are being implemented amidst the ongoing legislative process for the No Fakes Act in the United States. This proposed federal bill, whose full title is the Nurture Originals, Foster Art, and Keep Entertainment Safe Act, aims to provide legal protection to artists, actors, and musicians against the unauthorized use of their digital likeness, voice, or performances through AI technology.
Sam Altman, CEO of OpenAI, reiterated the company’s commitment, saying, “OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness. We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers.”