OpenAI has released a new Sora video.

OpenAI continues to preview the capabilities of its Sora video generation model, and the latest clips come closer to Hollywood-level production than any AI-generated video to date.

Sora is not available outside of OpenAI (and a select group of testers), but the output being shared on social media shows what is possible.

In the first video release, we saw a dog playing in the snow, a couple in Tokyo, and a flight over a 19th-century California gold-mining town.

We now see clips that look like complete works of art, with multiple shots, effects, and consistent movement across up to a minute of video from a single prompt.

The clips hint at the future of true generative entertainment. When combined with other AI models for sound generation and lip-syncing, or with production-level platforms like LTX Studio, that creativity becomes truly accessible.

X creator Blaine Brown shared a music video that combines Bill Peebles' Sora alien clip, Pika Labs' Lip Sync, and a song created with Suno's AI.

A museum fly-through from Tim Brooks is impressive for its variety of shots and flow of movement; it looks like drone footage, yet the entire scene is indoors.

Others, like a clip of a couple dining in an aquarium, demonstrate the model's ability to handle complex movement while maintaining a consistent flow throughout the clip.

Sora marks a key moment in AI video, combining the transformer technology behind chatbots like ChatGPT with the diffusion approach behind image generators like Midjourney, Stable Diffusion, and DALL-E.
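To make that combination concrete, here is a minimal, purely conceptual PyTorch sketch of a diffusion transformer: a transformer backbone that learns to predict and strip noise from a sequence of spacetime video patches. Every class name, dimension, and step below is an illustrative assumption; OpenAI has not published Sora's implementation.

```python
import torch
import torch.nn as nn

class TinyDiffusionTransformer(nn.Module):
    """Toy diffusion transformer: denoises a sequence of video 'patch' tokens.

    Illustrative only; the sizes and structure are assumptions, not Sora's.
    """
    def __init__(self, patch_dim=64, n_heads=4, n_layers=2):
        super().__init__()
        self.proj_in = nn.Linear(patch_dim, 128)
        self.time_embed = nn.Linear(1, 128)  # embeds the diffusion timestep
        layer = nn.TransformerEncoderLayer(d_model=128, nhead=n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.proj_out = nn.Linear(128, patch_dim)  # predicts the noise to remove

    def forward(self, noisy_patches, t):
        # noisy_patches: (batch, num_patches, patch_dim) spacetime patches
        # t: (batch, 1) diffusion timestep, scaled to [0, 1]
        h = self.proj_in(noisy_patches) + self.time_embed(t).unsqueeze(1)
        h = self.backbone(h)
        return self.proj_out(h)

# One denoising step on random data: start from noise, predict it, subtract a bit.
model = TinyDiffusionTransformer()
patches = torch.randn(1, 16, 64)               # 16 spacetime patches of a tiny clip
t = torch.tensor([[0.9]])                      # a late (very noisy) timestep
predicted_noise = model(patches, t)
less_noisy = patches - 0.1 * predicted_noise   # crude single denoising step
print(less_noisy.shape)                        # torch.Size([1, 16, 64])
```

The design choice this illustrates is that video is treated as a sequence of tokens, so the same attention machinery that scales language models can be applied to denoising whole clips at once.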

It can do things that are currently not possible with other large AI video models such as Runway's Gen-2, Pika Labs' Pika 1.0, and StabilityAI's Stable Video Diffusion 1.1.

The AI video tools available today create 1- to 4-second clips and sometimes struggle with complex motion, though their realism is roughly on par with Sora's.

However, other AI companies are taking note of Sora's capabilities and production methods; StabilityAI has confirmed that Stable Diffusion 3 will follow a similar architecture, and a video model built on it is likely to appear eventually.

Runway has already tweaked its Gen-2 model and is seeing more consistent motion and character rendering.
