Runway announces its video generation AI model 'Gen-3 Alpha,' which will let anyone generate 5- to 10-second videos when it rolls out in the coming days



AI company Runway has announced 'Gen-3 Alpha,' a new video generation AI model trained on infrastructure built for large-scale multimodal training.

Introducing Gen-3 Alpha: A New Frontier for Video Generation
https://runwayml.com/blog/introducing-gen-3-alpha/

Runway unveils new hyper realistic AI video model Gen-3 Alpha | VentureBeat
https://venturebeat.com/ai/runway-unveils-new-hyper-realistic-ai-video-model-gen-3-alpha-capable-of-10-second-long-clips/

Gen-3 Alpha is an AI model capable of generating videos with complex scene transitions, a wide range of cinematic choices, and detailed art direction.



Runway described Gen-3 Alpha as 'the first in a series of upcoming AI models that Runway will train on infrastructure built for large-scale multimodal training' and 'an important step in Runway's goal of building general world models.'

The video attached to the post below was generated from the prompt, 'Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city.'




Gen-3 Alpha is trained on both video and images, so it can not only generate video from text, but also generate video from images and images from text. Future enhancements include motion brushes, advanced camera controls, and director tools. Gen-3 Alpha will also be released with new safety measures, including Runway's new and improved visual moderation system and support for C2PA, a standard for verifying the authenticity of digital content.
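
Runway had not published a public developer API alongside this announcement, so the sketch below only illustrates the distinction between text-to-video and image-to-video requests against a hosted model; the endpoint URL, the request fields (prompt, init_image, duration), and the RUNWAY_API_KEY variable are all assumptions, not a documented interface.

```python
import os
import requests

# NOTE: hypothetical endpoint and payload fields -- not Runway's documented API.
API_URL = "https://api.example.com/v1/generate"
API_KEY = os.environ.get("RUNWAY_API_KEY", "demo-key")


def generate_clip(prompt: str, init_image_url: str | None = None, duration: int = 5) -> dict:
    """Request a 5- or 10-second clip from a text prompt; passing an input
    image URL turns the request into image-to-video conditioning."""
    payload = {"prompt": prompt, "duration": duration}
    if init_image_url is not None:
        payload["init_image"] = init_image_url  # image URL or base64, depending on the service
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"id": "...", "status": "queued"}


if __name__ == "__main__":
    job = generate_clip(
        "An astronaut running through an alley in Rio de Janeiro.",
        duration=10,
    )
    print(job)
```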

The video attached to the post below was generated from the prompt, 'An astronaut running through an alley in Rio de Janeiro.'




Runway emphasizes that Gen-3 Alpha was trained specifically for creative applications and is the collaborative effort of an interdisciplinary team of research scientists, engineers, and artists.

The video attached to the post below was generated from the prompt 'FPV moving through a forest to an abandoned house to ocean waves.'




Runway is also developing custom versions of Gen-3 Alpha, part of the Gen-3 model family, in partnership with major entertainment and media companies; these custom models will enable more stylistically controlled and consistent character generation and target specific artistic and narrative requirements.

The video attached to the post below was generated from the prompt, 'An older man playing piano, lit from the side.'




Runway said of Gen-3 Alpha, 'This technological breakthrough marks an important milestone in our commitment to empowering artists and paving the way for the next generation of creative and artistic innovation.' Gen-3 Alpha will be available to everyone in the coming days.

The video attached to the post below was generated from the prompt, 'A slow cinematic push in on an ostrich standing in a 1980s kitchen.'




The video attached to the post below was generated from the prompt, 'A middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head.'




The video attached to the post below was generated from the prompt, 'A colossal statue of an ancient warrior stands tall on a cliff's edge. The camera circles slowly, capturing the warrior's silhouette.'




The video attached to the post below was generated from the prompt, 'An empty warehouse, zoom in into a wonderful jungle that emerges from the ground.'




The video attached to the post below was generated from the prompt, 'Handheld camera moving fast, flashlight light, in a white old wall in an old alley at night a black graffiti that spells "Runway".'




Technology media outlet VentureBeat confirmed with Runway that the initial release of Gen-3 Alpha will enable the generation of 5- to 10-second videos. It will take 45 seconds to generate a 5-second video, and 90 seconds to generate a 10-second video. Runway explained that Gen-3 Alpha will be available to everyone 'in the coming days,' but according to information obtained by VentureBeat in an interview with the company's CTO, Anastasis Germanidis, Gen-3 Alpha will initially be available to paid Runway subscribers.
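
Those figures scale linearly: both clip lengths work out to roughly nine seconds of rendering time per second of output video, as the quick check below shows (the two render times are the ones reported to VentureBeat; the rest is plain arithmetic).

```python
# Render times Runway gave VentureBeat: 45 s for a 5 s clip, 90 s for a 10 s clip.
for clip_seconds, render_seconds in [(5, 45), (10, 90)]:
    ratio = render_seconds / clip_seconds
    print(f"{clip_seconds:>2}-second clip: {render_seconds} s to render "
          f"(~{ratio:.0f} s of compute per second of output)")
```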

It is not clear what data was used for Gen-3 Alpha's training, but VentureBeat points out that 'most major generative AI models do not disclose in detail what data they use for training. It is unclear whether the data was procured through a paid license agreement or scraped from the internet.' VentureBeat asked Runway about Gen-3 Alpha's training data, but the response it received was vague: 'We have an in-house research team that oversees all training, and we train our models using carefully selected in-house datasets.'
