Projects 2008 > Physical Cartooning > Journal
It’s time to put our production where our mouths are: today production started down in the strange and wonderful workshops of HMC Interactive. Here's an update:
Digital Puppeteering
Within the first few days of research it was apparent that some clever people had already dedicated a decent amount of time to developing systems that transform vocal audio into lip sync. It was also apparent that even when these systems worked, pure lip sync without any additional emotion or physical communication left a very cold and unsatisfying experience.
This brought us back to where we started: looking to human manipulation of a puppet, and a puppeteer’s ability to communicate emotion, to create a successful experience.
Some research around Aardman revealed an already vibrant research area around this subject. ‘Pre-vis’, the process of quickly testing animation styles and movements prior to lengthy and expensive 3D animation production, is a growing industry. This process is basically digital puppeteering for production purposes.
Aardman has already briefly experimented with pre-vis systems, involving a combination of motion capture and puppeteering interfaces such as those used by animatronic puppeteers. One interesting outcome revealed how much better suited dancers were to the choreography needed than the actors who usually take on that role.
Our implementation of a puppeteering system is an attempt at a more standardised and malleable interface to interact with. Two Lemur pads (JazzMutant) will be used to record and translate finger motions into movements of a CG puppet. The actual form of interaction is still being researched: will a direct digital replacement for a puppeteer’s ‘cross sticks’ suffice, or will more manageable and intuitive systems be possible?
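One candidate mapping, while the interaction design is still open: treat each pad as a 2D controller and map normalised finger positions straight onto puppet controls. Everything here (the control names, the angle ranges, one finger per pad) is a hypothetical sketch, not the final rig:

```python
def fingers_to_pose(left, right, yaw_range=60.0, pitch_range=40.0, jaw_range=30.0):
    """Map one normalised finger position per pad, (x, y) in [0, 1],
    to rough puppet controls: the left finger drives head yaw/pitch,
    the right finger's vertical position opens the jaw.
    Control names and ranges (degrees) are illustrative assumptions."""
    lx, ly = left
    rx, ry = right
    return {
        "head_yaw": (lx - 0.5) * 2 * yaw_range,      # pad centre = neutral
        "head_pitch": (ly - 0.5) * 2 * pitch_range,  # pad centre = neutral
        "jaw_open": ry * jaw_range,                  # slide up to open mouth
    }
```

A direct mapping like this is the digital equivalent of cross sticks; the open question above is whether something less literal ends up more intuitive.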
Magic Mirror
With a large piece of one-way mirror in transit as this article’s being written, the magic mirror’s existence creeps closer. Alongside the one-way mirror are a machine vision camera and a few large IR lamps.
The technology behind this hasn’t really changed from what was originally proposed: basically, flood the area with IR light and have a go at tracking the light reflected from people’s pupils. Early experiments revealed that a standard webcam just doesn’t deliver the performance needed, so to allow decent tracking we’ve had to upgrade to proper machine vision cameras.
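Under IR illumination the retina reflects light back toward the camera, so pupils show up as small bright blobs in the frame (the "bright pupil" effect). The tracking step then reduces to finding bright connected regions and taking their centroids. A minimal sketch, assuming a greyscale frame as a NumPy array (the threshold and minimum area are placeholder values, not our calibrated ones):

```python
import numpy as np
from collections import deque

def bright_blobs(frame, threshold=200, min_area=4):
    """Return (x, y) centroids of connected bright regions in a
    greyscale frame: candidate pupil reflections under IR flood.
    threshold/min_area are illustrative and would need calibration."""
    mask = frame >= threshold
    visited = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    centroids = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not visited[y, x]:
                # Flood-fill (BFS) over this connected bright region
                queue = deque([(y, x)])
                visited[y, x] = True
                pixels = []
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_area:  # reject single-pixel noise
                    ys, xs = zip(*pixels)
                    centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```

The webcam problem above is visible right here: with a noisy, low-sensitivity sensor the bright-pixel mask fills with spurious blobs, which is why a machine-vision-grade camera matters.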
A conversation with Joanie down at the lab about involving a few layers of mirrored glass has left our heads pondering. Is it possible to somehow target individual layers of the mirror by matching polarised light from the projectors to each layer? That way a basic level of parallax depth could be created. Nice.
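If the layered-mirror idea works, driving it is simple geometry: content meant to sit at depth d behind the glass should slide across its layer as the viewer moves, by similar triangles. A sketch of the per-layer shift, assuming we already have the viewer's position from the IR tracking (all distances and the flat-plane model are hypothetical):

```python
def layer_offsets(eye_x, view_dist, layer_depths):
    """For a viewer at horizontal offset eye_x (metres from the mirror
    centre line) standing view_dist metres from the glass, return the
    horizontal shift for the imagery on each layer so its content
    appears to sit layer_depths[i] metres behind the mirror.
    Similar triangles: shift = eye_x * d / (view_dist + d), so deeper
    layers shift more and the scene parallaxes as the viewer moves."""
    return [eye_x * d / (view_dist + d) for d in layer_depths]
```

A viewer standing dead centre gets zero shift on every layer; as they step sideways, deeper layers track further, giving the basic parallax depth described above.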