AI video is still far from full realism (we've all seen the lightsaber duel that became a meme), but it is about to become far more accessible to developers and consumers alike.
Runway and Luma Labs, pioneers in AI video generation, have released APIs (application programming interfaces) for Runway Gen-3 Alpha Turbo and Luma Dream Machine, respectively. At the time of this writing, access to both is limited, but more models are expected to be added in the coming months.
This will allow application developers to integrate generative video models into their own applications, which could push these creation tools well beyond their current audience. For example, a developer could build a Chrome extension that generates a short video reply to a post on X, rather than typing a response or choosing a pre-made GIF.
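To make the idea concrete, here is a minimal TypeScript sketch of the kind of call such an extension (or any application) would make to a video-generation service. The endpoint URL, request fields, and response shape here are assumptions for illustration only; the real request formats are defined in each provider's documentation.

```typescript
// Minimal sketch of submitting a text prompt to a video-generation API.
// The endpoint, payload, and response fields are hypothetical; consult
// the Runway or Luma docs for the actual API surface.

interface VideoJob {
  id: string;
  status: "pending" | "processing" | "completed" | "failed";
  videoUrl?: string; // populated once the job completes
}

async function generateVideo(prompt: string, apiKey: string): Promise<VideoJob> {
  // Hypothetical endpoint; each provider exposes its own paths.
  const response = await fetch("https://api.example.com/v1/generations", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ prompt, duration_seconds: 5 }),
  });
  if (!response.ok) {
    throw new Error(`Generation request failed: ${response.status}`);
  }
  return (await response.json()) as VideoJob;
}
```

Since video generation typically takes some time, a real integration would poll the job's status or listen for a webhook before displaying the finished clip.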
The possibilities are almost endless, as our AI editor's posts on X (formerly Twitter) illustrate.
Generated video could be fed into some sort of post-processing pipeline, but the APIs could also give rise to bespoke editing tools dedicated to generative AI video.
They could also reduce the tedium of creating generative videos in batches, allowing the same content to be rendered from different perspectives through a uniform process, such as a shot list (sketched below).
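As a rough illustration, the shot-list idea might look like the loop below, which reuses the hypothetical generateVideo helper from the earlier sketch. The scene description, shot names, and camera angles are invented for the example.

```typescript
// Sketch of batch generation driven by a shot list: one scene rendered
// from several camera angles through a single uniform loop. Relies on
// the hypothetical generateVideo helper defined above.

const scene = "a lighthouse on a cliff at dusk";

const shotList = [
  { name: "wide",   angle: "wide establishing shot" },
  { name: "medium", angle: "medium shot, eye level" },
  { name: "close",  angle: "close-up, low angle" },
];

async function renderShotList(apiKey: string): Promise<void> {
  for (const shot of shotList) {
    // Combine the shared scene with each shot's camera direction.
    const prompt = `${scene}, ${shot.angle}`;
    const job = await generateVideo(prompt, apiKey);
    console.log(`Queued shot "${shot.name}" as job ${job.id}`);
  }
}
```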
API access for Runway can be found here, and API details (including documentation) for Luma can be found here.