OpenAI, the company behind ChatGPT, recently released a technology that caught many of us off guard: Sora, an app that lets anyone generate remarkably lifelike videos with AI simply by typing a description – like “police bodycam footage of a dog being arrested for stealing rib-eye at Costco.”
This free iPhone app has proven to be as entertaining as it is unsettling. Early users have flooded social media with amusing creations, from a fake phone video of a raccoon on an airplane to anime-style celebrity brawls. Personally, I had a blast conjuring clips of a cat ascending to heaven and a dog bouldering indoors.
However, the tool’s darker side quickly emerged, with some exploiting it to spread misinformation, such as fabricated security footage of non-existent crimes.
Sora’s debut, alongside similar AI video generators from Meta and Google, signals a profound shift. This technology threatens to dismantle “visual fact” – the notion that video offers an objective record of reality. We, as a society, must now approach video content with the same critical eye we’ve learned to apply to written words.
There was a time when photos were generally trusted (“Pics or it didn’t happen!”). As photo manipulation became easier, video rose as the go-to for proof, requiring considerable skill to alter. But now, even that safeguard is gone.
Ren Ng, a computer science professor at UC Berkeley specializing in computational photography, warns, “Our brains are powerfully wired to believe what we see, but we can and must learn to pause and think now about whether a video, and really any media, is something that happened in the real world.”
This week, Sora quickly became the top free app on Apple’s App Store, stirring significant concern in Hollywood. Studios are worried that AI-generated videos are already infringing on copyrights for films, shows, and characters. OpenAI CEO Sam Altman responded, stating that the company is actively gathering feedback and will soon offer copyright holders mechanisms to control character generation and monetize their work through the service.
(It’s worth noting that this publication has filed a lawsuit against OpenAI and its partner, Microsoft, alleging copyright infringement of news content used to train AI systems. Both companies deny these allegations.)
So, how exactly does Sora function, and what does its rise mean for you, the everyday consumer? Let’s dive into the details.
Unveiling Sora: How Does This AI Video Generator Work?
While Sora is freely available to download, access to its video generation features is currently by invitation only. Users receive a code from an existing Sora member, and these codes are frequently shared across platforms like Reddit and Discord.
Upon registration, the app’s interface will feel familiar to anyone who uses short-form video platforms like TikTok or Instagram Reels. You create a video by simply typing a prompt, such as “a fight between Biggie and Tupac in the style of the anime ‘Demon Slayer.’” (Initially, before OpenAI committed to giving copyright holders more control, rights owners had to actively opt out of having their intellectual property used – which helps explain why deceased public figures, who cannot opt out, became frequent subjects for AI experimentation.)
You can even upload an existing photo and transform it into a video. After about a minute of processing, your AI-generated clip is ready to be shared within the app’s feed, downloaded, or posted to other social platforms like TikTok and Instagram.
Sora’s impact this month was immediate, as its generated videos appeared significantly more realistic than those from rival services like Google’s Veo 3 (integrated into the Gemini chatbot) or Meta’s Vibes (part of the Meta AI app).
Personal Impact: How Will Sora Affect Your Digital Experience?
The bottom line is simple: any short video you encounter on platforms like TikTok, Instagram Reels, YouTube Shorts, or Snapchat now carries a significant chance of being completely fabricated.
Sora represents a critical turning point in the age of AI-driven deception. In the coming months, we can expect a surge of similar tools, some of them no doubt built by bad actors who will impose no ethical boundaries at all.
Lucas Hansen, co-founder of CivAI, a non-profit dedicated to AI literacy, states it plainly: “Nobody will be willing to accept videos as proof of anything anymore.”
Navigating the Risks: What Potential Problems Should You Be Aware Of?
OpenAI claims to have implemented safeguards to prevent Sora’s misuse, specifically blocking the generation of videos containing sexual imagery, harmful health misinformation, or terrorist propaganda.
Despite these claims, during just one hour of testing, I managed to create several potentially problematic videos:
- Deceptive dashcam footage for insurance fraud: I successfully prompted Sora to create a dashcam video of a Toyota Prius colliding with a large truck – and I could even alter the license plate number after generation.
- Videos promoting dubious health claims: Sora produced a video of a woman citing entirely fabricated studies on the health benefits of deep-fried chicken. Not explicitly malicious, but demonstrably false.
- Defamatory content: I generated a bogus broadcast news story containing disparaging remarks about a person I know.
Beyond my own experiments, I’ve observed a disturbing number of problematic AI-generated videos on TikTok since Sora’s launch. These include fake dashcam footage of a Tesla tumbling from a car carrier onto a highway, a fictional news report about a serial killer, and a manufactured cellphone video of a man being removed from a buffet for excessive eating.
An OpenAI spokesperson said Sora was launched as a standalone app to give users a distinct space to enjoy AI-generated videos, one where the AI origin of every clip is understood from context. The company also said it embeds tracing technology in Sora’s output, including visible watermarks and data signatures inside the video files, to identify Sora-created content.
“Our usage policies strictly forbid deceiving others through impersonation, scams, or fraud, and we actively intervene when misuse is detected,” the company asserted.
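For readers who want to check a suspicious clip themselves: OpenAI has said the embedded signatures it describes follow the C2PA standard, the industry’s “content credentials” metadata for labeling AI-generated media. Assuming that, the minimal Python sketch below shells out to c2patool, the open-source command-line reader maintained by the C2PA project, and reports whether a downloaded video carries any credentials. The function name and flow here are illustrative, not an official OpenAI or C2PA tool.

```python
import json
import shutil
import subprocess
import sys

def read_content_credentials(video_path: str):
    """Return the C2PA manifest embedded in a media file, or None.

    Assumes the open-source `c2patool` CLI is installed and on PATH;
    running `c2patool <file>` prints the file's manifest as JSON.
    """
    if shutil.which("c2patool") is None:
        sys.exit("c2patool not found; install it from the C2PA project first")

    result = subprocess.run(
        ["c2patool", video_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # A nonzero exit usually means no manifest was found. That does
        # NOT prove the video is real: credentials are easily stripped
        # by re-encoding, cropping, or screen-recording the clip.
        return None
    return json.loads(result.stdout)

if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    if manifest:
        print(json.dumps(manifest, indent=2))
    else:
        print("No content credentials found.")
```

The caveat matters more than the code: as with the visible watermark discussed below, a missing signature proves nothing, because metadata rarely survives a trip through another app’s re-encoder.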
The Challenge Ahead: How Can You Distinguish Reality from AI Fabrication?
While Sora-generated videos come with an embedded watermark, some savvy users have already found ways to crop it out. Additionally, these clips are typically quite brief, often no longer than 10 seconds.
According to Mr. Hansen, any video that boasts Hollywood-level production quality should be viewed with suspicion, as AI models are predominantly trained on vast amounts of footage from TV shows and movies available online.
My own tests with Sora occasionally revealed clear flaws, such as misspelled restaurant names and audio that was noticeably out of sync with the characters’ lip movements.
However, Hany Farid, a UC Berkeley computer science professor and co-founder of GetReal Security (a digital content verification firm), cautions that any tips for identifying AI-generated video will quickly become obsolete as the technology continues its rapid advancement.
“Social media is a complete dumpster,” Dr. Farid bluntly stated, suggesting that one of the most reliable methods to avoid encountering fake videos is to simply refrain from using platforms like TikTok, Instagram, and Snapchat.