With the new system, directors will be able to fine-tune performances in post-production rather than on the film set. Called FaceDirector, the system enables a director to seamlessly blend facial images from multiple video takes to achieve the desired effect.

FaceDirector can create a variety of novel, visually plausible versions of an actor's performance in close-up and mid-range shots. Moreover, the system works with normal 2D video acquired by standard cameras, without the need for additional hardware or 3D face reconstruction.

First, the system analyses both facial expressions and audio cues. It then identifies corresponding frames across the takes using a graph-based framework. Once the takes are synchronized, the director can control the performance by choosing the desired facial expressions and timing from either video; the selected frames are then blended together using facial landmarks, optical flow and compositing.
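The paper's exact synchronization and blending pipeline is not detailed here, but the two-stage idea can be illustrated with a rough sketch. The Python snippet below is a minimal, hypothetical stand-in: it aligns two takes with a simple dynamic-programming match over per-frame audio/facial descriptors in place of the graph-based framework, and shows the final per-pixel compositing step. Feature extraction, landmark tracking and optical-flow warping are assumed to come from external tools, and all function names are illustrative rather than taken from the researchers' implementation.

```python
# A minimal sketch (not the authors' implementation) of the two-stage idea:
# 1) align the frames of two takes by matching per-frame audio + facial features,
# 2) blend the aligned frames with a per-frame weight chosen by the director.

import numpy as np


def align_takes(features_a: np.ndarray, features_b: np.ndarray) -> list:
    """Dynamic-programming alignment of two takes.

    features_a, features_b: (n_frames, d) arrays of per-frame descriptors
    (e.g. audio features concatenated with facial-expression coefficients).
    Returns a list of (frame_in_a, frame_in_b) correspondences -- a simple
    stand-in for the paper's graph-based synchronization.
    """
    na, nb = len(features_a), len(features_b)
    cost = np.full((na + 1, nb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            d = np.linalg.norm(features_a[i - 1] - features_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])

    # Backtrack along the cheapest path to recover frame correspondences.
    path, i, j = [], na, nb
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]


def blend_frames(frame_a: np.ndarray, frame_b: np.ndarray, weight: float) -> np.ndarray:
    """Cross-dissolve two aligned frames.

    In the real system the frames would first be warped onto each other using
    facial landmarks and optical flow; this sketch only shows the final
    compositing step, controlled by the director's weight in [0, 1].
    """
    return (1.0 - weight) * frame_a + weight * frame_b


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    take_a = rng.random((30, 16))   # 30 frames of take A, 16-D descriptors
    take_b = rng.random((40, 16))   # 40 frames of take B
    correspondences = align_takes(take_a, take_b)
    print(f"{len(correspondences)} synchronized frame pairs")
```

In this sketch the director's creative choice reduces to a single blending weight per frame; the actual system offers finer control over which expressions and timing are taken from which take.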

The researchers are presenting their findings at the International Conference on Computer Vision (ICCV) 2015, currently under way in Santiago, Chile.