Runway – Act-One
This link and video from Runway show the rapid, impressive development of AI and how it can both enhance and extend our video making.
Act-One appears to be part of Runway Gen-3 Alpha (image-to-video and video-to-video) – guide here. At the time of writing (December 2024), each generated second costs 10 credits, with a maximum output of 30 seconds. The Standard plan ($15/month) offers 625 credits per month and the Pro plan ($35/month) offers 2,250 – but, as with many AI products, the terms may change.
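To put those numbers in perspective, a quick sketch of the cost arithmetic (using only the figures quoted above; plan allowances and per-second pricing may change):

```python
# Rough cost math for Act-One generation:
# 10 credits per generated second, 30-second maximum per clip.
CREDITS_PER_SECOND = 10
MAX_CLIP_SECONDS = 30

def seconds_of_output(monthly_credits: int) -> float:
    """Total seconds of generated output a monthly credit allowance buys."""
    return monthly_credits / CREDITS_PER_SECOND

for plan, credits in {"Standard ($15/mo)": 625, "Pro ($35/mo)": 2250}.items():
    total_seconds = seconds_of_output(credits)
    full_clips = credits // (CREDITS_PER_SECOND * MAX_CLIP_SECONDS)
    print(f"{plan}: about {total_seconds:.1f}s of output, "
          f"or {full_clips} full 30-second clips")
```

So the Standard plan covers roughly a minute of generated footage per month, and Pro a little under four minutes.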
“At Runway, our mission is to build expressive and controllable tools for artists that can open new avenues for creative expression. Today, we’re excited to release Act-One, a new state-of-the-art tool for generating expressive character performances inside Gen-3 Alpha. Act-One can create compelling animations using video and voice performances as inputs. It represents a significant step forward in using generative models for expressive live action and animated content.
Capturing the Essence of a Performance
Traditional pipelines for facial animation often involve complex, multi-step workflows. These can include motion capture equipment, multiple footage references, manual face rigging, among other techniques. The goal is to transpose an actor’s performance into a 3D model suitable for an animation pipeline. The key challenge with traditional approaches lies in preserving emotion and nuance from the reference footage into the digital character.
Our approach uses a completely different pipeline, driven directly and only by a performance of an actor and requiring no extra equipment.
Animation Mocap
Act-One can be applied to a wide variety of reference images. The model preserves realistic facial expressions and accurately translates performances into characters with proportions different from the original source video. This versatility opens up new possibilities for inventive character design and animation.
Live Action
The model also excels in producing cinematic and realistic outputs, and is remarkably robust across camera angles while maintaining high-fidelity face animations. This capability allows creators to develop believable characters that deliver genuine emotion and expression, enhancing the viewer’s connection to the content.”