Luma Dream Machine AI Video Generator is getting a huge update — here's what's in store

Luma Labs released Dream Machine last week, and the AI startup is already rolling out its first round of upgrades, including the ability to extend a clip by five seconds.

Dream Machine can generate photorealistic video with accurate real-world motion, of the kind we've so far only seen from OpenAI's still-closed Sora model and China's Kling AI.

I tested Dream Machine when it was first released and it felt like a promising first step: impressive output (after waiting 12 hours because of demand), but in need of some UI additions. Those additions now appear to have arrived.

The first update has gone live, including the ability to extend a clip and to download generated videos more easily. Pro users can also remove watermarks.

Being able to extend a clip was likely the number one update Luma wanted to launch, and it is already available to users of the platform. Each extension counts as a generation, using one of your monthly allocations, which vary depending on your subscription.

Competitors like Pika Labs and Runway have offered video extension from day one, but results are often mixed because the longer a video runs, the more distorted or inconsistent it tends to become. Luma promises its approach is different.

I haven't been able to try it properly yet; I prompted it to extend an existing clip, but the result hadn't come back by the time this story was published. The process itself is simple: you give it a fresh motion prompt, click Extend on the clip and it goes into the queue.

"Extend is an advanced system that recognizes what's happening in the video and extends it in a consistent way to follow instructions," the company writes in X

Luma also says users can expect new discovery features to appear in the interface in the future, allowing you to explore different video concepts and ideas.

One of the most promising new features is in-video editing. It isn't live yet, but it will let you change the background or foreground of a generated video on the fly. For example, you could replace one character with another or place them in a new location.

This is similar to inpainting in AI image generation and is close to features already available in Pika Labs. Since I haven't tried it, I can't comment on the comparison beyond pointing out that the concept already exists elsewhere.

The only demo I've seen of this feature is a video clip, which shows a new context menu option that includes a way to change the background of a video you have created.
