Runway, one of the leading artificial intelligence video platforms, has announced a new feature that completely changes the game for character consistency and filmmaking in general.
Act-1 is a new approach to AI video generation. It is a form of modern puppetry: you film yourself or an actor playing a role and use AI to completely change how that performance looks. This addresses one of AI video's biggest problems: consistency.
AI video tools are getting much better at human movement, lip-sync, and character development, but there is still some way to go before they bridge the “obviously AI” gap. Runway's new tool may have finally solved that problem.
Instead of letting the AI invent the character's movements and reactions, you upload a driving video of a performance along with a character image (styled however you like), and the model essentially maps that character image onto the performance.
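In developer terms, that workflow is essentially one request: submit a driving video and a character image, then fetch the generated clip. The sketch below only illustrates that shape; the endpoint, field names, and job-polling flow are assumptions made for the example, not Runway's documented API.

```python
import time
import requests

# Hypothetical endpoint and credentials for illustration only; not Runway's real API.
API_BASE = "https://api.example-video-platform.com/v1"
API_KEY = "YOUR_API_KEY"

def transfer_performance(driving_video_url: str, character_image_url: str) -> str:
    """Submit a performance-transfer job: map the acting in the driving video
    onto the character shown in the image, then poll until the clip is ready.
    All field and endpoint names below are illustrative assumptions."""
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # 1. Create the generation job from the filmed performance and the styled character image.
    job = requests.post(
        f"{API_BASE}/performance-transfer",
        headers=headers,
        json={
            "driving_video": driving_video_url,      # the filmed human performance
            "character_image": character_image_url,  # the look to map onto that performance
        },
        timeout=30,
    ).json()

    # 2. Poll the job until it finishes, then return the URL of the generated video.
    while True:
        status = requests.get(
            f"{API_BASE}/jobs/{job['id']}", headers=headers, timeout=30
        ).json()
        if status["state"] == "succeeded":
            return status["output_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)

# Example usage (hypothetical URLs):
# clip_url = transfer_performance("https://example.com/performance.mp4",
#                                 "https://example.com/character.png")
```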
To me, the real advantage of AI video will come from the fusion of real footage and generative AI, rather than relying entirely on AI itself. The best films already combine visual effects with practical, in-camera shots, and artificial intelligence is an extension of that.
Runway's Act-1 puts human performance front and center and uses AI as an overlay. This is like Andy Serkis playing Gollum in “The Lord of the Rings,” without the need for motion-capture suits or expensive cameras.
I haven't had a chance to try it yet, but judging by some of the examples shared by Runway, it's as simple as sitting in front of the camera and moving your head. Similar capabilities have been offered before, including by Adobe, but not powered by generative AI.
However, it appears far more advanced than any previous tool. According to Runway, “In Act-1, eye contact, micro-expressions, pauses, and speech are all faithfully represented in the final generated output.”
Act-1 also goes beyond simple puppetry, because it can be combined with Runway's existing Gen-3 AI video technology to create complex scenes and integrate them with human acting.
The company describes it as follows: “One of the strengths of this model is that it produces cinematic and realistic output across a vast number of camera angles and focal lengths.” This opens up new avenues of creative expression, allowing creators to generate emotional performances with a depth of character that was previously impossible.
Access to Act-1 will gradually begin rolling out to users today and will soon be available to everyone.