AI video startup Runway announced the availability of its newest video synthesis model today. Dubbed Gen-4, the model purports to solve several key problems with AI video generation.
Chief among those is maintaining consistent characters and objects across shots. If you've watched any short films made with AI, you've likely noticed that they're either dream-like sequences or thematically but not realistically connected images—mood pieces more than coherent narratives.
Runway claims Gen-4 can maintain consistent characters and objects, provided it's given a single reference image of the character or object in question as part of the project in Runway's interface.
The company published example videos showing the same woman appearing in multiple shots across different scenes, and the same statue looking largely unchanged across a variety of contexts, environments, and lighting conditions.
Likewise, Gen-4 aims to let filmmakers get coverage of the same environment or subject from multiple angles across several shots in the same sequence. With Gen-2 and Gen-3, this was virtually impossible: the tool has historically been good at maintaining stylistic integrity, but not at generating multiple angles within the same scene.
The last major model update at Runway was Gen-3, which was announced just under a year ago in June 2024. That update greatly expanded the length of videos users could produce from just two seconds to 10, and offered greater consistency and coherence than its predecessor, Gen-2.
Runway’s unique positioning in a crowded space
Runway released the first publicly available version of its video synthesis product to users in February 2023. Gen-1 creations tended to be more curiosities than anything useful to creatives, but subsequent optimizations have allowed the tool to be used in limited ways in real projects.