Runway ML, a New York City-based startup, was one of the pioneers of realistic, high-quality generative AI video models. After launching its Gen-1 model in February 2023 and Gen-2 in June 2023, the company faced growing competition from other highly realistic AI video generators, such as OpenAI's upcoming Sora model and Luma AI's recently released Dream Machine.
However, Runway is making a strong comeback in the generative AI video arena by introducing Gen-3 Alpha. This new model is the first in a series trained on Runway’s new infrastructure designed for large-scale multimodal training. It’s described as a step toward creating General World Models, which are capable of representing and simulating a wide range of real-world situations and interactions.
Gen-3 Alpha can generate highly realistic, detailed 10-second video clips with a variety of emotional expressions and camera movements.
According to Runway, this initial release will support 5- and 10-second video generations with faster generation times: a 5-second clip takes 45 seconds to generate, while a 10-second clip takes 90 seconds.
Runway hasn’t provided an exact release date for the model yet, but it has showcased demo videos on its website and social media. It’s also unclear whether Gen-3 Alpha will be available on the free tier or will require a paid subscription, which starts at $15 per month or $144 per year.
VentureBeat interviewed Runway co-founder and CTO Anastasis Germanidis, who confirmed that the Gen-3 Alpha model will be available to paying subscribers within days, with future availability on the free tier yet to be announced. A Runway spokesperson also stated that Gen-3 Alpha would soon be accessible to paid subscribers, Creative Partners, and Enterprise users.
Gabe Michael, an enthusiastic Runway user, mentioned on LinkedIn that he expected to receive access later this week. On X, Germanidis noted that Gen-3 Alpha would be included in the Runway product, powering existing modes such as text-to-video, image-to-video, and video-to-video, as well as enabling new capabilities.
Since the release of Gen-2 in 2023, Runway has learned that video diffusion models still have significant room for performance improvement. These models build powerful representations of the visual world by learning to predict videos from pixelated noise.
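For the technically inclined, the training objective behind that description typically looks something like the sketch below. This is an illustrative PyTorch example of the standard denoising objective used by diffusion models, not Runway's actual code; the `model` and `alphas_cumprod` names are placeholders for a noise-predicting network and a noise schedule.

```python
import torch

def diffusion_training_step(model, video, alphas_cumprod):
    """One illustrative training step: corrupt a clean video with noise
    at a random timestep, then train the model to predict that noise.
    `video` has shape (batch, frames, channels, height, width)."""
    b = video.shape[0]
    # Pick a random noise level (timestep) for each sample in the batch.
    t = torch.randint(0, len(alphas_cumprod), (b,), device=video.device)
    noise = torch.randn_like(video)
    # Broadcast each sample's noise level over (frames, channels, H, W).
    a = alphas_cumprod[t].view(b, 1, 1, 1, 1)
    noisy = a.sqrt() * video + (1 - a).sqrt() * noise
    pred = model(noisy, t)  # the network predicts the injected noise
    return torch.nn.functional.mse_loss(pred, noise)
```

At inference time the process runs in reverse: starting from pure noise, the model iteratively denoises until a coherent clip emerges, which is how these systems generate video from scratch.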
Runway’s blog post reveals that Gen-3 Alpha was trained jointly on videos and images by a team of research scientists, engineers, and artists, though the specific datasets used have not been disclosed. This is consistent with the practice of most leading AI media companies, which generally don’t detail their training data sources or whether the data was obtained through paid licensing or web scraping.
Critics argue that AI model makers should compensate the original creators of their training data through licensing deals, and some creators have filed copyright infringement lawsuits to that end. AI companies, for their part, generally maintain that they can legally train on any publicly posted data.
A Runway spokesperson indicated that their in-house research team uses curated internal datasets to train their models. Additionally, Runway has been working with leading entertainment and media organizations to create custom versions of Gen-3, allowing for more stylistically controlled and consistent characters to meet specific artistic and narrative needs.
Though no particular organizations were named, films such as “Everything Everywhere All at Once” and “The People’s Joker” have previously used Runway’s tools for certain effects.
Runway also offers a form in its Gen-3 Alpha announcement for organizations interested in custom versions of the new model, though no pricing details have been provided.
Runway is clearly determined to remain a dominant force in the fast-evolving generative AI video industry.