The ARP is a flexible toolkit for avatar system creation. Its purpose is to support our research into avatar technology and its applications. It achieves this by generalising and extending the facilities provided by standard avatar creation tools.
The avatar can be animated in real time either by synthetically generated animation, including mouth and facial gestures, as in the eSIGN project, or from motion capture data. It is currently used in the latter role in a joint project between UEA and the BBC on motion capture of deaf signing.
To meet eSIGN requirements for synthetic animation of sign language, nearly 400 positions on the avatar's skin need to be mapped. This process is automated using ray-tracing techniques that project outward from the avatar's skeleton to the skin mesh.
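The skeleton-to-skin mapping described above might be sketched as follows: a ray is cast from a joint in a chosen direction, and the nearest intersection with the skin mesh gives the mapped skin position. This is a minimal illustration, not the toolkit's actual API; the function names and the flat triangle-list mesh representation are assumptions for the example.

```python
def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle test.

    Returns the distance t along the ray to the hit point,
    or None if the ray misses the triangle.
    """
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:          # ray parallel to triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:    # outside triangle (barycentric u)
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:  # outside triangle (barycentric v)
        return None
    t = f * dot(e2, q)
    return t if t > eps else None


def map_skin_position(joint, direction, triangles):
    """Cast a ray from a joint and return the nearest skin-mesh hit point."""
    hits = [t for tri in triangles
            if (t := ray_triangle_intersect(joint, direction, *tri)) is not None]
    if not hits:
        return None
    t = min(hits)
    return tuple(joint[i] + t * direction[i] for i in range(3))
```

In practice each of the several hundred mapped positions would be defined by a joint and a direction, so the whole mapping can be regenerated automatically whenever the avatar mesh changes.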
For morph-target facial animation, a library of mesh shapes (morph targets) is needed, corresponding to visemes (the visual counterparts of phonemes) and to facial expressions. The toolkit provides means for creating these and builds the supporting functionality into the avatar.
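Morph-target animation of this kind is conventionally implemented as a weighted linear blend of per-vertex offsets from a neutral base mesh. The sketch below illustrates that standard technique only; the function name and the simple vertex-list representation are assumptions, not the toolkit's interface.

```python
def blend_morph_targets(base, targets, weights):
    """Blend morph targets onto a base mesh by linear interpolation.

    base:    list of (x, y, z) neutral vertex positions
    targets: dict mapping target name -> vertex list (same ordering as base)
    weights: dict mapping target name -> blend weight (0.0 = neutral)
    """
    blended = [list(v) for v in base]
    for name, w in weights.items():
        if w == 0.0:
            continue
        target = targets[name]
        for i, (bv, tv) in enumerate(zip(base, target)):
            for axis in range(3):
                # Add the weighted offset of this target from the base mesh.
                blended[i][axis] += w * (tv[axis] - bv[axis])
    return [tuple(v) for v in blended]
```

Because the offsets are additive, several targets (say, a viseme and a smile) can be active at once, which is what makes the approach suitable for combining mouth shapes with facial expressions.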
Current work includes improving the realism of facial animation through improvements to morph-target technology and to mesh deformation, and porting to OpenSG, an open-source scene-graph rendering system, for use in virtual heritage recreations.
Future work will target level-of-detail techniques for large scene applications, multi-platform capabilities, and the creation of morph targets based on skin deformation derived from muscle movement in the face.
Further information is available on Virtual Human Signing at UEA.