Luma Labs has upgraded Dream Machine to version 1.5 of its underlying model, which offers better realism, improved motion tracking, and faster prompt understanding.
The startup shook up the AI video landscape when it launched out of stealth in June of this year, offering better prompt adherence, more realistic motion, and improved text-to-video realism. It quickly became a favorite of AI video creators.
Since then, Runway has been upgraded, Kling has launched, and Haiper has released its own AI video model and platform, showing just how fast AI video is moving. Pika has also updated its image-to-video model.
With version 1.5, Luma AI has shown that it does not intend to rest on its initial success, taking text-to-video generation to a level of realism on par with Runway Gen-3 Alpha and Kling AI.
Luma has also improved prompt adherence, text-to-video generation, and the realism of human motion, and has given its model much stronger text rendering capabilities.
In other words, it can generate graphics from simple text prompts that can be dropped into logo screens, end boards, or even PowerPoint presentations.
Getting readable text out of Dream Machine is much like getting it out of Midjourney or any other AI image generator: the results are hit or miss. I asked Dream Machine to generate the words “Cats in Space.”
It did exactly what I asked, but my prompt was not descriptive enough to get it to generate the words in a row instead of stacking them. The text also did not bounce; instead, it created a strange zoom motion. When I asked it to display my name, it repeated the same motion, with the letters emerging one by one from the sand.
Even though the text movement and layout were not what I wanted, the letters were perfectly legible in every test I ran. If you want to reflect a particular style, you can use images as prompts.
Before we get into quality, I should also point out that Dream Machine v1.5 is considerably faster than previous versions: a 5-second video can be generated in about 2 minutes.
The most noticeable change is the level of realism in both visual and motion quality, including motion consistency and adherence to real-world physics. I ran several different tests, including an old woman in the water, a tiger in the snow, and a fly-through of a castle.
One thing to note: while Dream Machine is very good at enhancing prompts, if you give it a long, descriptive prompt, be sure to uncheck the prompt-enhancement box.
Overall, this is not an upgrade on the scale of the jump from Runway Gen-2 to Runway Gen-3. Still, it is significant enough to be worth noting, and it helps keep Dream Machine in the top ranks of generative AI video platforms.