DROP: Dynamics Responses from Human Motion Prior and Projective Dynamics

1Stanford University, 2Seoul National University, 3Meta Reality Labs
Teaser figure.

DROP, a plug-in human simulator that adds dynamics capabilities to a pre-trained kinematic generative motion model, synthesizes dynamic reactions and recovery motions in response to a variety of perturbations.


Synthesizing realistic human movements that respond dynamically to the environment is a long-standing objective in character animation, with applications in computer vision, sports, and healthcare, such as motion prediction and data augmentation.

Recent kinematics-based generative motion models offer impressive scalability in modeling extensive motion data, albeit without an interface to reason about and interact with physics. While simulator-in-the-loop learning approaches enable highly physically realistic behaviors, the difficulty of training them often limits scalability and adoption.

We introduce DROP, a novel framework for modeling Dynamics Responses of humans using a generative mOtion prior and Projective dynamics. DROP can be viewed as a highly stable, minimalist physics-based human simulator that interfaces with a kinematics-based generative motion prior. Using projective dynamics, DROP allows flexible and simple integration of the learned motion prior as one of the projective energies, seamlessly combining the control provided by the motion prior with Newtonian dynamics. Serving as a model-agnostic plug-in, DROP enables us to fully leverage recent advances in generative motion models for physics-based motion synthesis.
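To make the "motion prior as a projective energy" idea concrete, below is a minimal projective-dynamics sketch on a 1D particle chain. Projective dynamics alternates a local step (projecting each energy onto its constraint manifold) with a global step (one prefactorable linear solve). Here, a fixed target pose `prior_target` stands in for DROP's learned generative prior; all names, weights, and the toy setup are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy projective dynamics: 1D chain of particles under gravity,
# with (a) spring energies between neighbors and (b) a "motion prior"
# energy pulling toward a target pose. In DROP the prior is a learned
# generative model; here a fixed target pose stands in for it.

n, h = 5, 1.0 / 60.0               # particles, time step
m = 1.0                            # per-particle mass
rest = 1.0                         # spring rest length
w_spring, w_prior = 1e4, 1e2       # projective energy weights
g = np.full(n, -9.8)               # gravity acceleration (1D height axis)

# Global matrix: M/h^2 + sum_i w_i A_i^T A_i (constant, so prefactorable).
A = (m / h**2) * np.eye(n)
for i in range(n - 1):             # spring energy on each edge (i, i+1)
    d = np.zeros(n); d[i], d[i + 1] = -1.0, 1.0
    A += w_spring * np.outer(d, d)
A += w_prior * np.eye(n)           # prior energy acts on all coordinates

def step(q, v, prior_target, iters=10):
    """One implicit-Euler step via local/global alternation."""
    s = q + h * v + h**2 * g       # inertial prediction (s = q + hv + h^2 M^-1 f)
    x = s.copy()
    for _ in range(iters):
        # Right-hand side: inertia + prior projection (prior's projection
        # is simply the target pose itself).
        b = (m / h**2) * s + w_prior * prior_target
        # Local step: project each spring onto its rest length.
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            p = rest if d >= 0 else -rest
            b[i]     -= w_spring * p
            b[i + 1] += w_spring * p
        # Global step: one linear solve with the constant matrix.
        x = np.linalg.solve(A, b)
    return x, (x - q) / h          # new positions and velocities
```

With the prior target set to the chain's rest pose, repeated calls to `step` settle into a stable equilibrium near the target (offset slightly by gravity) rather than diverging, which mirrors the stability argument for folding the prior into the projective solve.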

We conduct extensive evaluations of our model across different motion tasks and various physical perturbations, demonstrating the scalability and diversity of responses.

Full Video

Diverse Physical Scenarios

Generative motion models are trained on motions collected in empty MoCap rooms, free of physical interactions.

With DROP, we can guide a non-physical generative motion model to synthesize realistic human motions in a variety of physical scenarios, without retraining or a high-level controller.

Thanks to the diversity inherited from the motion prior, we can synthesize varied motions even in the same physical scenario.

Tripped by obstacles

Tilting platform

Dragged by hand

Hit by thrown balls

Walking with a stiff knee

Pushed during a backflip

Two-character Interaction

DROP also allows two generative models to physically interact with each other, with each treating the other character as part of the external physics.

Comparison to DRL-based Methods

Because our method is built on a generative model trained on large-scale human motion data, it can generate more diverse and compliant physical responses than DRL-based methods.

Responses to small external forces

Responses to large external forces

@article{jiang2023drop,
  author    = {Jiang, Yifeng and Won, Jungdam and Ye, Yuting and Liu, C Karen},
  title     = {DROP: Dynamics Responses from Human Motion Prior and Projective Dynamics},
  journal   = {SIGGRAPH Asia},
  year      = {2023},
}