This month, OpenAI, the maker of the ChatGPT chatbot, released Sora, an app that generates strikingly realistic videos from nothing more than a text description. Type ‘police bodycam footage of a dog being arrested for stealing rib-eye at Costco,’ and in moments you have exactly that clip.
Available as a free app on iPhones, Sora has proven both highly entertaining and deeply unsettling. Early users quickly embraced its comedic and creative potential, sharing everything from fabricated cellphone footage of a raccoon on an airplane to anime-style celebrity brawls. I amused myself by generating videos of a cat gracefully ascending to heaven and a dog expertly navigating a bouldering gym.
However, the tool’s power extends beyond innocent fun. Some individuals have already exploited Sora to spread misinformation, crafting deceptive security footage of entirely fictitious crimes.
Sora’s debut, alongside similar AI-driven video generators from tech giants like Meta and Google, carries profound implications. This technology threatens to dismantle the very notion of ‘visual fact’ – the long-held belief that video provides an objective record of reality. Moving forward, society must approach video content with the same critical skepticism it applies to written information.
Historically, photographs carried a strong presumption of authenticity, giving rise to phrases like ‘Pics or it didn’t happen!’ When photo manipulation became commonplace, video stepped in as the go-to medium for verifiable evidence, requiring specialized skills to alter convincingly. With Sora, that era is definitively over.
‘Our brains are powerfully wired to believe what we see, but we can and must learn to pause and think now about whether a video, and really any media, is something that happened in the real world,’ said Ren Ng, a computer science professor at the University of California, Berkeley, and an expert in computational photography.
Sora, which rapidly became the most downloaded free app in Apple’s App Store, has sent ripples through Hollywood. Studios are concerned that AI-generated videos might infringe upon existing film, show, and character copyrights. Sam Altman, OpenAI’s CEO, has acknowledged these concerns, stating that the company is gathering feedback and plans to offer copyright holders more control over character generation and avenues for monetization.
(It’s worth noting that a lawsuit has been filed by The New York Times against OpenAI and Microsoft, alleging copyright infringement of news content for AI training, claims which both companies deny.)
So how does Sora work, and what does its arrival mean for you, the everyday consumer? Here’s what to know.
How People Are Using Sora
While the Sora app is free to download, its video generation capabilities are currently by invitation only. This means users need an invite code from an existing Sora member to access the service, and these codes are frequently exchanged on platforms like Reddit and Discord.
Once registered, users see an interface that mirrors short-form video platforms such as TikTok and Instagram Reels. They create videos by typing a text prompt, for example, ‘a fight between Biggie and Tupac in the style of the anime “Demon Slayer.”’ Initially, OpenAI allowed likenesses and brands to be used without explicit consent, which made deceased figures easy subjects for experimentation, until the company announced broader copyright controls.
Beyond text prompts, users can also upload a personal photo and have Sora generate a video based on it. After a video is created, usually within about a minute, it can be shared directly within the app’s feed or downloaded for distribution on other social media platforms.
Upon its release, Sora quickly distinguished itself with exceptionally realistic output, surpassing similar services such as Google’s Veo 3 (integrated into the Gemini chatbot) and Meta AI’s Vibes.
What This Means for You
The core implication is stark: any short video you encounter on TikTok, Instagram Reels, YouTube Shorts, or Snapchat now stands a real chance of being AI-generated, and therefore fake.
Sora marks a pivotal moment in the age of AI-driven deception. Consumers should brace themselves for a proliferation of similar tools in the coming months, including those offered by malicious actors without ethical safeguards or restrictions.
Lucas Hansen, founder of CivAI, a nonprofit dedicated to AI education, cautions, ‘Nobody will be willing to accept videos as proof of anything anymore.’
Identifying Potential Issues
OpenAI has implemented various restrictions to prevent the misuse of Sora for generating inappropriate content, such as sexual imagery, harmful health advice, or terrorist propaganda.
However, my own brief testing revealed some concerning possibilities:
- Fabricated dashcam footage for insurance fraud: I successfully prompted Sora to create a dashcam video of a Toyota Prius colliding with a large truck. Disturbingly, I was then able to alter the license plate number within the generated clip.
- Videos promoting questionable health claims: Sora produced a video of a woman confidently citing non-existent studies to claim that deep-fried chicken is beneficial for health. While not inherently malicious, it was entirely false.
- Defamatory content: I was able to generate a fake broadcast news segment that made disparaging remarks about an acquaintance.
Since Sora’s launch, I’ve also observed numerous problematic AI-generated videos circulating on platforms like TikTok. These include fake dashcam footage of a Tesla falling from a car carrier onto a freeway, a fabricated news report about a fictional serial killer, and a bogus cellphone video of a man being removed from a buffet for excessive eating.
An OpenAI spokesperson explained that Sora was launched as a standalone app to provide a dedicated space for users to enjoy AI-generated videos while clearly indicating their AI origin. The company has also incorporated features to trace videos back to Sora, such as watermarks and embedded data signatures within the files.
The company reiterated, ‘Our usage policies prohibit misleading others through impersonation, scams or fraud, and we take action when we detect misuse.’
How Can You Distinguish Reality from Fabrication?
While Sora-generated videos include a brand watermark, users have already found ways to crop it out. These AI clips are also typically short, usually under 10 seconds.
According to Mr. Hansen, any video that closely mimics Hollywood production quality might be suspect, as AI models are often trained on vast libraries of TV shows and movies available online.
During my experiments, Sora occasionally made discernible errors, such as misspelled words in fictional restaurant signs or speech that didn’t perfectly synchronize with mouth movements.
However, Hany Farid, a computer science professor at the University of California, Berkeley, and co-founder of GetReal Security (a digital content authenticity verification company), warns that any advice on identifying AI-generated video will quickly become obsolete due to the rapid advancements in the technology.
Dr. Farid bluntly stated, ‘Social media is a complete dumpster,’ and suggested that one of the most reliable ways to avoid encountering fake videos is to simply refrain from using platforms like TikTok, Instagram, and Snapchat.