Developing Natural Full-body Motion Synthesis in Virtual Humans

Abstract

Everyday action sequences such as taking a few steps to point at an object are trivial for humans because they require minimal effort. However, such actions pose serious challenges to developing realistic animated virtual humans that autonomously interact with people in training and educational applications. We propose a framework that targets data-driven full-body motion synthesis for virtual humans. Motions captured from human actions (e.g., walking and pointing) allow us to create behavioral models and to build generic motion databases containing upper-body and walking actions. Synthesized output is produced by parameterizing the motion databases with the behavioral models. We augment our current inverse-blending action model and stepping planner by targeting environment configuration factors, including character gaze (e.g., timing, frequency, fixation points), body position (e.g., bending, preferred location to start the action), and walk-action coordination (e.g., temporal dynamics). To further inform the development of realistic virtual humans, user studies validate the models and assist in fine-tuning them.
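
To illustrate the parameterization idea in concrete terms, the sketch below shows one common way blend weights over captured example motions can be solved for so that a blended pointing motion approximately reaches a desired target. This is a minimal, hypothetical Python sketch of constrained least-squares weight fitting under assumed inputs (example end-effector targets); it is not the paper's actual inverse-blending implementation, and all function and variable names are illustrative assumptions.

import numpy as np

def inverse_blend_weights(example_targets, desired_target):
    # End-effector targets reached by each captured example motion
    # (n examples, 3D positions) and the desired target to reach.
    P = np.asarray(example_targets, dtype=float)   # shape (n, 3)
    t = np.asarray(desired_target, dtype=float)    # shape (3,)
    n = P.shape[0]

    # Solve: minimize ||P.T @ w - t||^2  subject to  sum(w) == 1,
    # via the bordered (KKT) linear system; lstsq tolerates a
    # rank-deficient system.
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = P @ P.T
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.concatenate([P @ t, [1.0]])
    w = np.linalg.lstsq(A, b, rcond=None)[0][:n]
    return w

# Example: three captured pointing motions whose hands end at three
# nearby positions; the desired target lies between them.
examples = [[0.5, 1.2, 0.3], [0.7, 1.0, 0.4], [0.4, 1.1, 0.6]]
weights = inverse_blend_weights(examples, [0.55, 1.1, 0.4])
print(weights, weights.sum())

In a full pipeline these weights would typically be applied per frame to blend the example motions' joint rotations, with the behavioral models supplying the target and timing parameters described above.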

