OpenAI Unveils AI Video Generator Sora That Can Render Minute-Long Clips
Source: Gadgets 360
OpenAI, the company behind ChatGPT, introduced Sora, its first artificial intelligence (AI)-powered text-to-video generation model, on Thursday. The company claims it can generate videos up to 60 seconds long, a longer duration than any of its competitors in the segment offer, including Google’s Lumiere, which was unveiled last month. Sora is currently available to red teamers (security experts who rigorously test software to help companies find and fix flaws) and to some content creators. The AI firm also plans to include Coalition for Content Provenance and Authenticity (C2PA) metadata once the model is deployed in an OpenAI product.
Announcing the AI video generator in a post on X (formerly known as Twitter), the company said, “Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.” Notably, the claimed video length is more than ten times what its rivals offer: Google’s Lumiere can generate 5-second-long videos, whereas Runway AI and Pika 1.0 can generate 4-second and 3-second-long videos, respectively.
Prompt: “A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.” pic.twitter.com/0JzpwPUGPB
— OpenAI (@OpenAI) February 15, 2024
OpenAI’s X account and CEO Sam Altman also shared multiple videos generated by Sora, along with the prompts used to create them. The resulting videos appear highly detailed, with seamless motion that other video generators in the market have struggled to match. According to the company, Sora can generate complex scenes with multiple characters, multiple camera angles, specific types of motion, and accurate details of the subject and background. This is possible because the text-to-video model understands not only the prompt but also “how those things exist in the physical world.”
Sora is essentially a diffusion model that uses a transformer architecture similar to GPT models. The data it consumes and generates is represented as units called patches, which are akin to tokens in text-generating models. According to the company, patches are small bundles of visual data drawn from videos and images. Representing visual data this way enabled OpenAI to train the video generation model on footage of different durations, resolutions, and aspect ratios. In addition to text-to-video generation, Sora can also take a still image and generate a video from it.
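OpenAI has not released Sora’s code, but the idea of carving a video into patch “tokens” can be illustrated with a short sketch. The tensor layout, patch sizes, and the patchify function below are illustrative assumptions of our own, not details from the announcement.

```python
import torch

def patchify(video: torch.Tensor, pt: int = 2, ph: int = 16, pw: int = 16) -> torch.Tensor:
    """Split a clip into spacetime patches (an illustrative sketch, not Sora's code).

    video: tensor of shape (T, C, H, W) -- T frames, C colour channels.
    Returns (num_patches, pt * ph * pw * C): one flattened row per patch,
    analogous to the token sequence fed to a text transformer.
    """
    T, C, H, W = video.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0, "dimensions must divide evenly"
    # Carve the clip into non-overlapping pt x ph x pw blocks across time and space.
    blocks = video.reshape(T // pt, pt, C, H // ph, ph, W // pw, pw)
    blocks = blocks.permute(0, 3, 5, 1, 4, 6, 2)  # gather each block's axes together
    return blocks.reshape(-1, pt * ph * pw * C)   # flatten each block into a "token"

# A 16-frame 64x64 RGB clip becomes 128 patch tokens of 1,536 values each.
clip = torch.randn(16, 3, 64, 64)
tokens = patchify(clip)
print(tokens.shape)  # torch.Size([128, 1536])
```

Because the number of patches simply grows or shrinks with the clip, a patch-based representation sidesteps the fixed frame count and resolution that frame-by-frame models require, which is how, per OpenAI, the same model handles varied durations, resolutions, and aspect ratios.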
The model is not without flaws, however. OpenAI stated on its website, “The current model has weaknesses. It may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterwards, the cookie may not have a bite mark.”
Prompt: “Animated scene features a close-up of a short fluffy monster kneeling beside a melting red candle. the art style is 3d and realistic, with a focus on lighting and texture. the mood of the painting is one of wonder and curiosity, as the monster gazes at the flame with… pic.twitter.com/aLMgJPI0y6
— OpenAI (@OpenAI) February 15, 2024
To ensure the AI tool is not used to create deepfakes or other harmful content, the company is building tools to help detect misleading content. It also plans to embed C2PA metadata in the generated videos, having recently adopted the practice for its DALL-E 3 model. It is also working with red teamers, particularly domain experts in misinformation, hateful content, and bias, to improve the model.
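The article does not show what such provenance metadata looks like. As a rough illustration, the snippet below sketches a minimal C2PA-style manifest declaring a clip as AI-generated; the generator name, file name, and field values are assumptions based on the public C2PA specification, not anything OpenAI has published.

```python
import json

# A minimal, hypothetical C2PA-style manifest for an AI-generated clip.
# Assertion labels follow the public C2PA spec; all values here are illustrative.
manifest = {
    "claim_generator": "example-video-app/1.0",   # assumed generator name
    "title": "generated_clip.mp4",                # assumed output file name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC code marking media produced by a trained model
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}

print(json.dumps(manifest, indent=2))
```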
At present, Sora is only available to red teamers and to a small number of visual artists, designers, and filmmakers, who will provide feedback on the product.