Synthesizing realistic human movements that respond dynamically to the environment is a long-standing objective in character animation, with applications in computer vision, sports, and healthcare, such as motion prediction and data augmentation.
Recent kinematics-based generative motion models offer impressive scalability in modeling extensive motion data, albeit without an interface to reason about and interact with physics. While simulator-in-the-loop learning approaches enable highly physically realistic behaviors, the difficulty of training them often limits their scalability and adoption.
We introduce DROP, a novel framework for modeling Dynamics Responses of humans using generative mOtion prior and Projective dynamics. DROP can be viewed as a highly stable, minimalist physics-based human simulator that interfaces with a kinematics-based generative motion prior. Utilizing projective dynamics, DROP allows flexible and simple integration of the learned motion prior as one of the projective energies, seamlessly incorporating control provided by the motion prior with Newtonian dynamics. Serving as a model-agnostic plug-in, DROP enables us to fully leverage recent advances in generative motion models for physics-based motion synthesis.
We conduct extensive evaluations of our model across different motion tasks and various physical perturbations, demonstrating the scalability and diversity of responses.
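The key idea above, incorporating a learned motion prior as one projective energy alongside Newtonian dynamics, can be illustrated with a toy local/global projective-dynamics solve. The sketch below is not the paper's human model: it uses a 1D particle chain, and `motion_prior_project` is a hypothetical stub standing in for a learned generative model. Each energy has the form (w/2)‖Gx − p‖²; the prior enters simply as one more such term whose projection target p comes from the model.

```python
import numpy as np

def motion_prior_project(x_prev):
    """Hypothetical stand-in for a learned motion prior's next-pose prediction."""
    return x_prev + 0.01  # placeholder: drift the chain slightly rightward

def pd_step(x, v, masses, springs, rest_lengths,
            h=1.0 / 60.0, w_spring=1e4, w_prior=1e2, iters=10):
    """One projective-dynamics step: implicit inertia + spring and prior energies."""
    n = len(x)
    M = np.diag(masses)
    x_tilde = x + h * v                  # inertial prediction (external forces omitted)
    p_prior = motion_prior_project(x)    # prior's projection target (local step)

    # Global system matrix: M/h^2 + w_spring * sum_i G_i^T G_i + w_prior * I.
    # It is constant, so in practice it would be prefactored once.
    A = M / h**2 + w_prior * np.eye(n)
    for (i, j) in springs:
        A[i, i] += w_spring
        A[j, j] += w_spring
        A[i, j] -= w_spring
        A[j, i] -= w_spring

    x_new = x.copy()
    for _ in range(iters):
        b = M @ x_tilde / h**2 + w_prior * p_prior
        # Local step: project each spring's stretch onto its rest length.
        for (i, j), r in zip(springs, rest_lengths):
            d = x_new[j] - x_new[i]
            p = r * (1.0 if d >= 0 else -1.0)
            b[i] -= w_spring * p         # G^T p, with G having -1 at i, +1 at j
            b[j] += w_spring * p
        x_new = np.linalg.solve(A, b)    # global step: one linear solve
    return x_new, (x_new - x) / h

# Usage: a two-particle spring compressed to half its rest length relaxes
# toward the rest length, while the prior energy nudges the whole chain.
x, v = pd_step(np.array([0.0, 0.5]), np.zeros(2),
               masses=np.array([1.0, 1.0]),
               springs=[(0, 1)], rest_lengths=[1.0])
```

Because every energy (physical or learned) contributes only a quadratic term to the same global system, swapping in a different motion prior changes only the local projection step, which is what makes the integration model-agnostic.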
Generative motion models are trained on motions collected in empty MoCap rooms without physical interactions.
With DROP, we can guide a non-physical generative motion model to synthesize realistic human motions in a variety of physical scenarios, without retraining or a high-level controller.
Inheriting diversity from the motion prior, DROP can synthesize varied motions even within the same physical scenario.
DROP also allows two generative models to physically interact with each other, each treating the other human character as an external physical perturbation.
Because our method is built upon a generative model trained on large-scale human motion data, it can generate more diverse and compliant physical responses than DRL-based methods.
@inproceedings{jiang2023drop,
  author    = {Jiang, Yifeng and Won, Jungdam and Ye, Yuting and Liu, C. Karen},
  title     = {DROP: Dynamics Responses from Human Motion Prior and Projective Dynamics},
  booktitle = {SIGGRAPH Asia 2023 Conference Papers},
  year      = {2023},
}