This week, the two of us, authors of this article, spent several hours immersed in a unique digital experience: scrolling through a personalized feed of short-form videos starring ourselves in various fantastical scenarios.
One incredibly realistic nine-second clip showed us grinning widely as we skydived, our parachutes comically replaced by pizzas. In another, Eli, with remarkable precision, hit a game-winning home run in a baseball stadium packed with cheering robots. And then there was Mike, trapped in a “Matrix”-style duel against Ronald McDonald, wielding cheeseburgers as his unconventional weapons.
Eli was genuinely astonished by the cheeseburger video, messaging Mike to express his amazement before hitting "like." Mike, not one to miss an opportunity, continued to bombard colleagues with more clips, including himself ballroom dancing with his dog and dramatically perched on a throne made entirely of rats. While Mike found these creations amusing, his New York Times colleagues found them, to put it mildly, slightly disturbing.
The platform behind these extraordinary videos wasn’t TikTok, Instagram Reels, or YouTube Shorts, the current leaders of short-form video. It was Sora, a new smartphone application from OpenAI, designed to generate videos entirely through artificial intelligence. Sora’s underlying technology first appeared last year, but this week saw the release of its latest, invitation-only iteration. This updated version is faster, more powerful, and can even incorporate your own likeness by analyzing uploaded images of your face.
Our brief time with the app made one thing abundantly clear: Sora has evolved beyond a simple AI video generator. It functions, in essence, as a social network in disguise, mirroring TikTok’s user interface, algorithmic video suggestions, and the ability to connect and interact with friends. The advanced AI model empowering Sora makes video production remarkably easy, granting users a seemingly endless capacity to create AI-generated content.
However, this innovation also brought a sense of unease.
Early users quickly began producing videos using copyrighted material from popular culture. The feed was, regrettably, saturated with "Rick and Morty" and Pikachu clips. When Mike shared a Sora video of himself on his personal Instagram, several friends genuinely questioned whether it was really him, a glimpse of a future in which the line between reality and fabrication becomes dangerously blurred.
Even more concerning is how easily Sora can generate realistic video likenesses, a capability that could fuel misinformation by fabricating convincing footage of events that never happened, footage plausible enough to provoke real-world reactions. Other AI video generators offer similar functions, but Sora's accessibility and power could significantly escalate these risks.
It’s still early, and Sora’s long-term impact remains to be seen. Yet, OpenAI seems to have achieved what tech giants like Meta and X have been striving for: a product that seamlessly integrates AI into the everyday lives of the masses, encouraging user-generated content and fostering consistent engagement.
The competition in this space is rapidly intensifying. Just last week, Meta launched Vibes, a social media feed within its dedicated AI app, leveraging an AI video generator from the startup Midjourney. Google is also competing with a similar product, Veo.
As the social internet continues its evolution from text-based communication to photo sharing, and now to billions of hours of video consumption, tech leaders anticipate that AI video tools will be crucial in shaping the next generation of social media platforms.
Rohan Sahai, OpenAI’s product lead for Sora, explained in an interview, “We believe the most effective way to introduce this technology to a broad audience is through a social framework. When faced with such profoundly transformative technology, including a new form factor, our company’s core mission is to make it widely available.”
(It is worth noting that The New York Times has previously sued OpenAI and Microsoft, alleging copyright infringement related to their AI systems. Both companies have denied these claims.)
Visually, the new Sora closely resembles TikTok, even adopting the familiar “For You” designation for its social feed. Users can create avatars of themselves from scanned face images or use images of others, including prominent figures like OpenAI’s chief executive, Sam Altman. OpenAI has named this feature “Cameos,” a nod to the popular app where fans purchase personalized videos from celebrities.
Safety experts, however, are wary, suggesting that Sora, particularly its Cameos feature, could pave the way for new forms of misinformation and online scams. One Sora video that quickly gained traction depicted an artificial Mr. Altman appearing to steal a graphics processing unit from a department store, rendered as if captured by a security camera.
Rachel Tobac, the chief executive of SocialProof Security, a cybersecurity startup, remarked, “It significantly simplifies the creation of believable deepfakes in a way we haven’t quite experienced before.”
Sora does implement restrictions on certain types of content, including sexual and copyrighted material. For example, while you can generate videos featuring characters from “South Park,” popular figures like Batman or Superman are off-limits. Rights holders can request their work be excluded from Sora’s content generation on a case-by-case basis via a copyright disputes form. Public figures also have the option to grant explicit permission for their likenesses to be used by Sora.
Varun Shetty, OpenAI’s head of media partnerships, stated, “We are committed to collaborating with rights holders to block characters from Sora upon their request and to address all takedown notices promptly.”
As Sora clips began to spread across platforms like X and TikTok this week, reactions ranged from surprise and delight to outright disgust. A primary concern is that Sora will add to the growing volume of “slop,” a derogatory term for the nonsensical, AI-generated videos flooding social networks.
In July, an AI-generated clip of a baby seemingly piloting a 747 became one of YouTube’s most-watched videos globally. More recently, an AI video featuring an elderly woman crashing through a glass bridge with a boulder went viral on Facebook and X, leading to numerous similar imitations.
Mr. Sahai emphasized that just as traditional social networks empower creators with tools, Sora will likely see a wide spectrum of content quality. He believes that the highest quality work will naturally rise to prominence. He also pointed out that what might seem like “nonsense” to an outsider could be a humorous and relevant inside joke within a smaller group of friends.
“One man’s slop is another man’s gold,” Mr. Sahai concluded.
Hollywood has been particularly apprehensive in the 36 hours since the app’s release, fearing that Sora could enable users to easily replicate celebrity likenesses without compensation. A memo from executives at the talent agency WME, seen by The Times, told agents that the agency was committed to defending clients’ work and that all of its clients were opting out of having their likenesses or intellectual property used in Sora’s videos.
The memo highlighted, “There is a significant need for robust protections for artists and creatives as they face AI models utilizing their intellectual property, names, images, and likenesses.”
Despite these concerns, Sora’s immense appeal was undeniable. Neither of us had any prior experience in video creation, yet a simple idea, a few minutes of processing, and substantial computing power were all it took to generate a video of Mike and Eli arm wrestling for the title of “best tech reporter.” (Eli, naturally, emerged victorious.)
Not everyone found the experience charming. After Mike showed his partner an unsettlingly realistic Sora video of himself portraying Anton Chigurh, the psychopathic character from the 2007 film adaptation of “No Country for Old Men,” her reaction was clear: “Please never, ever show me this kind of video again.”
Nicole Sperling contributed reporting from Los Angeles.