OpenAI’s recent launch of the Sora app has sparked both intrigue and debate, highlighting the rapid merging of imagination with reality. What began as a creative experiment in AI video generation has evolved into a worldwide discussion on copyright, deepfakes, and digital ethics. Released just last week, the app allows users to create 10-second AI videos with sound based on their descriptions. Its ‘cameo’ feature lets users star in their videos as AI-generated versions of themselves, or of individuals who have consented to be included. However, the internet quickly filled with odd creations, leaving OpenAI to manage the ensuing controversy.
CEO Sam Altman reflected on the unexpected public response, noting, ‘I think the theory of what it was going to feel like to people, and then actually seeing the thing, people had different responses. It felt more different to images than people expected.’ Initially, OpenAI had an opt-out policy for copyright holders, requiring them to explicitly deny permission to prevent their characters from appearing in Sora videos. But due to a surge of user-generated content and growing concerns from studios, the company changed its approach. Altman said that rightsholders would now have more authority over how their content is used in Sora. ‘This came from talking to stakeholders,’ he stated.
‘Many rightsholders are excited, but they want a lot more controls.’ He acknowledged that the app’s viral success exceeded expectations: ‘We thought we could slow down the ramp; that didn’t happen.’ The cameo feature has proven particularly complicated, as many users are willing to have their AI likenesses included, provided the clones refrain from offensive behavior. Bill Peebles, OpenAI’s head of Sora, indicated that users can now impose text-based restrictions, such as ‘Don’t put me in political videos’ or ‘Don’t let me say this word,’ allowing them to maintain control while engaging in the creative process. However, deeper concerns remain.
Despite watermarking its videos, OpenAI acknowledges that users are finding ways to remove these marks, raising alarms about misinformation and harmful deepfakes. Tutorials on erasing Sora’s watermark have already surfaced on social media, undercutting the company’s protective measures. Nevertheless, OpenAI is moving ahead. At DevDay, Altman announced Sora 2, which will be accessible via its API and notably lacks built-in watermarks. Critics have described this as a reckless decision, but Altman defended it as essential for societal adaptation. ‘There’s going to be a ton of videos with none of our safeguards,’ he remarked.
‘The only way to prepare is to experience it.’ OpenAI President Greg Brockman humorously summarized the company’s key lesson: ‘We’re going to need more compute.’ This aligns with OpenAI’s extensive Stargate project, a multi-billion-dollar data center initiative supported by SoftBank and Oracle, designed to enhance upcoming AI capabilities. For Altman, the challenges reflect progress. ‘We’ve got to have this sort of technological and societal co-evolution,’ he concluded. ‘There are clearly going to be challenges for society contending with this quality, but the only way to prepare is to experience it.’