Conveying facial expressions for emotion and speech using game engines

One of the interesting challenges in creating virtual worlds is building computer participants that present engaging and informative facial expressions. Recent game engines have made real progress in this area. You can see an example of this functionality in the Source (Half-Life 2) engine. Source ships with a tool called Face Poser that is used to control the appearance of a face. It has several capabilities: first, the movement of the lips, jaw, mouth opening, and so on matches what the face is supposed to be saying at any given moment. The face can also be manipulated to convey emotions such as confusion or happiness. You can see examples of the "emotions" of a face built in Source with Face Poser by looking at the demo videos for Half-Life 2.

It is interesting, and relevant to non-English use, that Face Poser is based on phonemes (think of the phonetic/pronunciation entry in a dictionary). This makes it well suited to supporting accents and dialects as well as non-English languages. You can see a first-rate example of facial expressions synchronized to the spoken words by viewing the "I'm Still Seeing Breen" video at http://www.machinima.org/paul_blog/
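To make the phoneme-based approach a bit more concrete, here is a minimal Python sketch of the general idea, not Face Poser's actual tooling or the Source engine's API: a timed phoneme track (the kind a speech aligner produces) is mapped through a phoneme-to-viseme table to mouth-shape morph-target weights for each frame. The table entries, class names, and timing values are all illustrative assumptions.

from dataclasses import dataclass

# Illustrative phoneme -> viseme (mouth shape) table; a real engine uses a
# much larger table covering the full phoneme set of the target language.
PHONEME_TO_VISEME = {
    "AA": "open",      # as in "f_a_ther"
    "IY": "wide",      # as in "s_ee"
    "UW": "round",     # as in "b_oo_t"
    "M":  "closed",    # lips pressed together
    "F":  "lip_bite",  # lower lip against upper teeth
}

@dataclass
class PhonemeEvent:
    phoneme: str
    start: float  # seconds
    end: float    # seconds

def viseme_weights(track: list[PhonemeEvent], t: float) -> dict[str, float]:
    """Return morph-target weights at time t, easing in and out of each phoneme."""
    weights: dict[str, float] = {}
    fade = 0.05  # seconds of attack/decay so mouth shapes blend smoothly
    for ev in track:
        if ev.start <= t <= ev.end:
            viseme = PHONEME_TO_VISEME.get(ev.phoneme, "neutral")
            rise = min(1.0, (t - ev.start) / fade)
            fall = min(1.0, (ev.end - t) / fade)
            weights[viseme] = max(weights.get(viseme, 0.0), min(rise, fall))
    return weights

# Example: the word "me" is roughly M followed by IY, with a slight overlap.
track = [PhonemeEvent("M", 0.00, 0.10), PhonemeEvent("IY", 0.08, 0.30)]
print(viseme_weights(track, 0.09))  # {'closed': 0.2, 'wide': 0.2}

Because the mapping table is the only language-specific piece here, swapping in a table for another language's phoneme set is what makes this style of lip sync carry over to accents, dialects, and non-English speech.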
        
There have been a variety of Poser-like products on the market for five to ten years now, but most were much more high-end, aimed at the movie industry or the DoD, and priced accordingly. Source's Face Poser is unusual in how much of this functionality it offers at a more modest price point.
        
I have also seen stories about cultural training for State Dept. and Army personnel using virtual reality. Most of the work I have heard about (perhaps in the same story Jamil mentioned below) has been at the USC Institute for Creative Technologies, http://www.ict.usc.edu/.
        
