AI video technology is evolving rapidly, though it's still in its infancy. New York City-based startup Runway is a key player in this space, enabling users to create videos in various styles. Recently, the company updated its Gen-2 foundation model with a tool called Multi Motion Brush. This new feature lets creators add multiple directions and types of motion to their AI video projects, a significant leap from existing market options, which typically only allow motion to be added to the whole image or a single highlighted area.
The Multi Motion Brush builds on the Motion Brush feature introduced in November 2023, which permitted only one type of motion at a time. Now, users can control multiple areas of their videos with independent motions. The tool was initially available to select users through Runway's Creative Partners Program but is now accessible to all Gen-2 users, joining the model's toolkit of more than 30 creative video production tools.
The Multi Motion Brush aims to give users more control over their AI-generated videos by letting them apply independent motion to selected areas. Users start by uploading a still image and "painting" the regions they want to animate with a digital brush. They then use sliders in Runway's web interface to define the direction and intensity of motion for each painted area: horizontal (left/right), vertical (up/down), and proximity (closer/farther). Each control ranges from -10 to +10, and values can be set by typing a number, dragging the text field, or moving the slider. A "Clear" button resets everything to zero.
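Runway has not published a programmatic API for Multi Motion Brush; the feature lives in its web interface. The Python sketch below is purely illustrative, showing one way the per-region motion parameters described above (horizontal, vertical, and proximity values clamped to the -10 to +10 range, plus a clear-to-zero reset) might be modeled. The names MotionRegion, MultiMotionBrush, and the helper functions are hypothetical and do not correspond to any real Runway API.

```python
from dataclasses import dataclass, field

# Illustrative only: these classes model the slider values described in the
# article; they are not part of any published Runway API.

SLIDER_MIN, SLIDER_MAX = -10.0, 10.0

def clamp(value: float) -> float:
    """Keep a slider value inside the -10..+10 range the UI exposes."""
    return max(SLIDER_MIN, min(SLIDER_MAX, value))

@dataclass
class MotionRegion:
    """One painted area with its own independent motion settings."""
    name: str
    horizontal: float = 0.0  # left/right
    vertical: float = 0.0    # up/down
    proximity: float = 0.0   # closer/farther

    def set_motion(self, horizontal: float, vertical: float, proximity: float) -> None:
        self.horizontal = clamp(horizontal)
        self.vertical = clamp(vertical)
        self.proximity = clamp(proximity)

    def clear(self) -> None:
        """Mirror the UI's 'Clear' button: reset every value to zero."""
        self.horizontal = self.vertical = self.proximity = 0.0

@dataclass
class MultiMotionBrush:
    """Collection of independently controlled regions painted on one still image."""
    regions: list[MotionRegion] = field(default_factory=list)

    def add_region(self, name: str) -> MotionRegion:
        region = MotionRegion(name)
        self.regions.append(region)
        return region

# Example: two regions of the same frame moving in different directions.
brush = MultiMotionBrush()
clouds = brush.add_region("clouds")
clouds.set_motion(horizontal=4, vertical=0, proximity=0)   # drift to the right
water = brush.add_region("water")
water.set_motion(horizontal=0, vertical=-2, proximity=1)   # sink slightly, move closer
```

The point of the sketch is simply that each painted region carries its own three motion values, which is what distinguishes Multi Motion Brush from the single-region Motion Brush it replaces.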
Runway’s Gen-2 model, unveiled in March 2023, has seen significant enhancements. Initially, it supported only four-second video clips, but an August update extended this to 18 seconds. Other new features include “Director Mode,” allowing users to set the camera’s direction and speed, and various video styles like 3D cartoon, cinematic, and advertising.
In the competitive field of AI-driven video generation, Runway stands out against rivals like Pika Labs and Stability AI. Additionally, Runway offers a text-to-image tool competing with platforms like Midjourney and DALL-E 3. Despite improvements, these tools still occasionally produce blurred, incomplete, or inconsistent outputs.