I tried Haiper 1.5 - the latest Sora-challenging AI video model

Haiper, the AI video lab, has released version 1.5 of its generative video model.

It is the latest entry in an ever-growing field of AI video platforms chasing OpenAI's still-unreleased Sora on realism, natural motion, and clip duration.

I tested Haiper 1.5 with a series of prompts, and it feels more like an upgrade of the generation 1 model than a significant step change of the kind seen between Runway Gen-2 and Gen-3, or in the Luma Labs Dream Machine release.

That is not to say Haiper isn't an impressive model; it offers the best value among AI video platforms. However, it has not yet reached the motion quality of Runway Gen-3, nor has it solved the morphing and distortion issues found in Haiper 1.0.

Haiper is the brainchild of former Google DeepMind researchers Yishu Miao and Ziyu Wang. The London-based company is focused on building underlying AI models and working toward artificial general intelligence.

The video model is specifically designed to excel at understanding motion, so the tool doesn't offer motion controls like Runway or Pika Labs; the AI predicts what is needed. I have found that omitting specific movement instructions from prompts works better.

The startup first came out of stealth with a ready-to-use model just four months ago and already has 1.5 million users. Previously, videos were up to 4 seconds long, and for most users only 2 seconds. The new model lets users start with an 8-second clip.

The AI model is one of the easiest to use and has a strong community, providing a variety of examples and prompt ideas for turning text and images into video.

Haiper 1.5 allows clips to be up to 8 seconds long

You can now also create clips up to 8 seconds long in high resolution, whereas previously high resolution was limited to very short 2-second shots.

As with Pika Labs, videos generated with Haiper can also be upscaled and extended. Each extension adds 4 seconds to the original.
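For anyone keeping track of the math, here is a minimal sketch of that extension arithmetic in Python. The function name and defaults are mine, based only on the numbers above (an 8-second starting clip, 4 seconds per extension):

```python
# Minimal sketch of Haiper's clip-length arithmetic as described above,
# assuming an 8-second starting clip and 4 seconds added per extend pass.
# The function name and defaults are illustrative, not from Haiper.

def clip_length(base_seconds: int = 8, extensions: int = 0) -> int:
    """Total clip length after a number of extend passes."""
    return base_seconds + 4 * extensions

print(clip_length(8, 2))  # an 8-second clip extended twice -> 16 seconds
```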

My first test was to see how well it handles the motion of multiple moving subjects, and it did surprisingly well. The fish appear to swim around the pond without much distortion or compositing.

Prompt: “A quiet koi pond in a Japanese garden with colorful fish swimming under lotus flowers.”

Next, I tested a complex visual environment: bright lights, a city full of people, and plenty of motion. The GIF reflects how slowly people move in the final video; realistically they should be moving twice as fast.

This was a simple prompt: “A busy cityscape at night, neon signs flickering, people rushing past in the rain.”

Hands are a nightmare for AI models, and unfortunately that is true for Haiper too. At first they just look slightly broken, but over the next five seconds shown in the GIF they turn into a strange, nightmarish sludge. The full video is on Haiper's website.

“Close-up of the chef's hands preparing sushi. He carefully cuts the fish and rolls the rice.”

This was the only complete failure among the test prompts. Perhaps more specific motion instructions were needed, or perhaps simpler ones; it is hard to say, because each AI video model behaves slightly differently.

The prompt I used was: “Time lapse of a flower blooming and petals spreading in vibrant colors.” I tried the same prompt in Luma Labs and the results were more realistic, but it also failed to produce an actual time lapse.

I like to use the space prompt as a test of realism, because it often confuses models about movement and makes them generate multiple Earths. Haiper did a good job here and even showed the astronaut moving slowly. The full video is worth watching.

I used this prompt: “An astronaut floating in space. Earth is visible in the background and the stars are twinkling.”

The next test used not just text but Haiper's image-to-video model. I first generated an image of a steampunk city and gave it to Haiper along with a motion prompt. It did a good job with an unusual scene.

Prompt from AI image generator Ideogram: “Steampunk cityscape with airships and clockwork.” Haiper's motion prompt alongside the image: “Gears turning, airship slowly moving across the sky.”
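To make the two-step workflow concrete, here is a hypothetical Python sketch of it. Neither Ideogram nor Haiper publishes this exact API, so the endpoints, field names, and helper function below are illustrative assumptions only, not real interfaces:

```python
# Hypothetical sketch of the two-step image-to-video workflow above.
# The endpoints, payload fields, and helper name are placeholders I made up;
# they do not reflect any published Ideogram or Haiper API.
import requests

IDEOGRAM_URL = "https://example.invalid/ideogram/generate"    # placeholder
HAIPER_URL = "https://example.invalid/haiper/image-to-video"  # placeholder

def image_to_video(image_prompt: str, motion_prompt: str) -> bytes:
    # Step 1: generate a still image from a text prompt.
    image_bytes = requests.post(IDEOGRAM_URL, json={"prompt": image_prompt}).content

    # Step 2: hand that image to the video model with a separate motion prompt.
    response = requests.post(
        HAIPER_URL,
        files={"image": ("frame.png", image_bytes)},
        data={"motion_prompt": motion_prompt, "duration_seconds": 8},
    )
    return response.content  # raw video bytes

clip = image_to_video(
    "Steampunk cityscape with airships and clockwork",
    "Gears turning, airship slowly moving across the sky",
)
```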

Finally, the Northern Lights. This is a useful test for any AI video model. I usually run it from text, but here I wanted to see how the model animates an image; the full 8-second video is worth a look.

Prompt from AI image generator Ideogram: “Aurora dancing on snowy mountains.” Haiper's motion prompt alongside the image: “Aurora borealis shimmering and swirling in the night sky.”

Haiper 1.5, much like Runway Gen-2 and Pika Labs 1.0 before it, is a clear improvement over its predecessor, Haiper 1.0, but a fairly tentative upgrade. I can't wait to see what Haiper 2.0 looks like.

Clips were sometimes slow and suffered from morphing, but overall photorealism, movement, and consistency were all greatly improved, due in part to the doubling of clip lengths.
