Steve Nallon is a creative and experienced Computer Motion Capture Performer, a form of performance he has often combined with his voice-acting work.
Computer Motion Capture, or Mo-Cap for short, is the process of recording the movements and actions of human performers and then using that information to animate digital character models in 2D or 3D computer animation. The process is also sometimes called 'Performance Capture', especially when it includes the recording of subtler facial expressions. Both terms should be distinguished from CGI, or Computer Generated Imagery, which is the application of computer graphics to create or contribute to images in art, video games, films and other simulated media.
The principle behind Mo-Cap technology is surprisingly simple. Luminous 'markers' are placed all over a skin-tight bodysuit worn by a performer, who then moves around a large studio fitted with cameras at various points. As the performer moves or gestures, the cameras record the changing positions of these markers on the bodysuit. This information is then fed into a computer, which essentially 'joins the dots', creating a skeleton-like figure that animators somewhat confusingly call 'the actor'. Once movement has been captured in this way, its dynamics can be used to animate the actual 'model' or 'character' and, hey presto, you have a fully 'living' animated figure whose motion and style of movement proportionately reflect the original actions of the performer!
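For readers curious what 'joining the dots' might look like in practice, here is a minimal sketch in Python. All the marker names, positions and the marker-to-joint mapping are invented for illustration; each skeleton joint is simply estimated as the average of the markers placed around it, which is a drastic simplification of what real mo-cap software does.

```python
# Hypothetical captured frame: marker name -> (x, y, z) position in metres.
# These values and names are illustrative, not from any real capture system.
frame = {
    "shoulder_front": (0.20, 1.40, 0.05),
    "shoulder_back":  (0.20, 1.40, -0.05),
    "elbow_outer":    (0.45, 1.15, 0.02),
    "elbow_inner":    (0.43, 1.15, -0.02),
    "wrist":          (0.60, 0.95, 0.00),
}

# Which markers surround each skeleton joint (an assumption for this sketch).
joint_markers = {
    "shoulder": ["shoulder_front", "shoulder_back"],
    "elbow":    ["elbow_outer", "elbow_inner"],
    "wrist":    ["wrist"],
}

def solve_skeleton(frame, joint_markers):
    """Estimate each joint position as the mean of its surrounding markers."""
    skeleton = {}
    for joint, names in joint_markers.items():
        points = [frame[n] for n in names]
        # Average each coordinate (x, y, z) across the joint's markers.
        skeleton[joint] = tuple(sum(c) / len(points) for c in zip(*points))
    return skeleton

# The 'dots' joined up into bones of a simple one-arm skeleton.
bones = [("shoulder", "elbow"), ("elbow", "wrist")]
skeleton = solve_skeleton(frame, joint_markers)
for a, b in bones:
    print(f"{a} {skeleton[a]} -> {b} {skeleton[b]}")
```

Run over every recorded frame, this kind of per-frame skeleton is what the animators would then retarget onto the final character model.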
Computer motion capture is usually done in stages, since the amount of information the computer has to take in, process and then generate is considerable. Although the final movement of the character model is mostly produced after the recording has been completed, in 'post' as the animator would say, it is possible to do the whole thing 'live'. In that situation the performer is miked up to provide the voice of the character as well as its movement. Steve Nallon has done this on many occasions; in practice it allows the character model to interact with a human actor or interviewer in a live setting.