This month, OpenAI, the creator of the popular ChatGPT chatbot, unveiled a technology that many of us were simply not prepared for. The company launched an app called Sora, which lets users instantly produce strikingly realistic videos with artificial intelligence just by typing a short description. Imagine generating “police bodycam footage of a dog being arrested for stealing rib-eye at Costco” in seconds.
Sora, available as a free app on iPhones, has proven as entertaining as it is unsettling. Since its release, many early users have shared videos for fun: think fabricated cellphone footage of a raccoon on an airplane, or staged fights between Hollywood celebrities in the style of Japanese anime. I found myself captivated by creating videos of a cat floating gracefully to heaven and a dog expertly climbing rocks at a bouldering gym.
Others, however, have used the tool for more sinister purposes, such as spreading misinformation with entirely fake security footage of crimes that never occurred.
The arrival of Sora, alongside similar AI-powered video generators introduced by Meta and Google this year, has immense implications. The technology could spell the end of visual evidence as an objective record of reality, forcing society to treat videos with the same skepticism people already apply to the written word.
Consumers once put more faith in the authenticity of images (“Pics or it didn’t happen!”), and when photos became easy to manipulate, video, which required far more technical skill to alter, became the trusted standard for proving legitimacy. That era is over.
“Our brains are powerfully wired to believe what we see, but we can and must learn to pause and think now about whether a video, and really any media, is something that happened in the real world,” explained Ren Ng, a computer science professor at the University of California, Berkeley, who specializes in computational photography.
Sora, which rapidly became the most downloaded free app in Apple’s App Store this week, has already caused significant disruption in Hollywood. Film studios are voicing concerns that videos produced with Sora may have already infringed upon the copyrights of various films, shows, and characters. Sam Altman, OpenAI’s chief executive, stated that the company is actively gathering feedback and will soon grant copyright holders greater control over the generation of characters, along with a pathway to monetize the service.
(It is worth noting that The New York Times has previously sued OpenAI and its partner, Microsoft, alleging copyright infringement of news content related to AI systems. Both companies have denied these claims.)
So how exactly does Sora work, and what does its rise mean for you, the everyday consumer? Here’s what you need to know.
How is Sora being used?
While the Sora app is free for anyone to download, access to the video generation service itself is currently by invitation only. This means users can only create videos after receiving an invite code from an existing Sora user. Many users have been freely sharing these codes across platforms like Reddit and Discord.
Once registered, the app’s interface is quite similar to popular short-form video platforms such as TikTok and Instagram’s Reels. Users can create a video by simply typing a descriptive prompt, for example, “a fight between Biggie and Tupac in the style of the anime ‘Demon Slayer.’” (Before Mr. Altman’s announcement about giving copyright holders more control over their intellectual property, OpenAI initially required them to opt out of having their likeness and brands used on the service, making deceased public figures easy targets for experimental generations.)
Users also have the option to upload an existing photo and then generate a video from it. After the video is produced in approximately one minute, it can be posted to the app’s internal feed, or downloaded and shared directly with friends or on other social media platforms like TikTok and Instagram.
When Sora launched this month, it immediately stood out for the markedly higher realism of its videos compared with those from similar services, including Google’s Veo 3, a tool built into the Gemini chatbot, and Meta’s Vibes, which is part of the Meta AI app.
What does this mean for me?
The upshot is that any short video you encounter while scrolling through apps like TikTok, Instagram Reels, YouTube Shorts and Snapchat could well be entirely fabricated.
Sora marks a pivotal moment in the era of AI-driven fakery. Consumers should anticipate a flood of copycat services in the coming months, some of which may be offered by malicious actors with no restrictions on content generation.
“Nobody will be willing to accept videos as proof of anything anymore,” stated Lucas Hansen, a founder of CivAI, a nonprofit dedicated to educating the public about the capabilities and risks of artificial intelligence.
What problems should I be aware of?
OpenAI has implemented various restrictions to prevent the misuse of Sora, prohibiting the generation of videos containing sexual imagery, harmful health advice, or terrorist propaganda.
However, in my own hour of testing the service, I managed to generate several videos with troubling implications:
- Fake dashcam footage that could be used for insurance fraud: I prompted Sora to create a dashcam video of a Toyota Prius being struck by a large truck. After generation, I was even able to alter the license plate number.

- Videos making questionable health claims: Sora produced a video of a woman citing nonexistent studies suggesting that deep-fried chicken is beneficial for health. While not inherently malicious, the claim was undeniably bogus.

- Videos defaming others: Sora generated a fake broadcast news story that contained disparaging comments about a person I know.
Since Sora’s release, I have also observed numerous problematic AI-generated videos populating my TikTok feed. These included phony dashcam footage of a Tesla falling off a car carrier onto a freeway, a fake broadcast news story about a fictional serial killer, and a fabricated cellphone video of a man being escorted out of a buffet for eating excessively.
An OpenAI spokesperson clarified that the company launched Sora as a standalone app to provide users with a dedicated space to enjoy AI-generated videos while also making it clear that these clips are AI-produced. The company has also integrated technology to facilitate tracing videos back to Sora, including watermarks and embedded data within video files that serve as unique signatures, the spokesperson explained.
“Our usage policies explicitly prohibit misleading others through impersonation, scams, or fraud, and we take swift action when we detect any misuse,” the company affirmed.
How can I tell what’s fake?
Although videos generated with Sora typically include a watermark of the app’s branding, some users have already discovered methods to crop out this distinguishing mark. Additionally, clips created with Sora tend to be brief, usually no more than 10 seconds long.
Any video that appears to be of Hollywood-level production quality could potentially be fake, as AI models are largely trained using footage from TV shows and movies available on the internet, according to Mr. Hansen.
In my personal tests, videos generated by Sora occasionally displayed obvious flaws, such as misspelled restaurant names and speech that was noticeably out of sync with the speaker’s mouth movements.
However, any advice on how to definitively identify an AI-generated video is likely to be quickly outdated due to the rapid advancements in the technology, cautioned Hany Farid, a computer science professor at the University of California, Berkeley, and co-founder of GetReal Security, a company specializing in verifying digital content authenticity.
“Social media is a complete dumpster,” Dr. Farid stated bluntly, adding that one of the most reliable ways to avoid fake videos altogether is to simply stop using apps like TikTok, Instagram, and Snapchat.