Join
Welcome to The Movement Lab website! If you are a current Stanford student and are interested in working on cutting-edge research projects on humanoids, digital agents, and physical intelligence in general, you might be someone we are looking for! This is an unpaid position for gaining research experience, domain knowledge, and potential publications. The minimum commitment is 8 hours per week for at least one quarter.
The goal of our lab is to create coordinated, functional, and efficient whole-body movements for digital agents and for real robots to interact with the world. We focus on holistic motor behaviors that involve fusing multiple modalities of perception to produce intelligent and natural movements. Our lab is unique in that we study “motion intelligence” in the context of complex ecological environments, involving both high-level decision making and low-level physical execution. Our current research directions include:
- Developing intelligent and agile humanoids with bipedal legs and dexterous hands for whole-body navigation, locomotion and manipulation.
- Creating intelligent and autonomous human characters.
- Building intelligent and safe exoskeletons.
Join Us
In practice, research in our lab typically entails some combination of the following:
- Building generative models for motion synthesis or control
- Experimenting with humanoids, robot arms, exoskeletons and dexterous hands
- Training virtual agents with RL and transferring them to real robots
- Collecting data with different motion capture systems
- Pre-training or fine-tuning VLMs for policy development
- Developing and using multibody simulators
- Designing and building 3D-printable humanoids
- Solving both convex and non-convex optimization problems
- Analyzing, simulating and understanding human interaction with assistive devices and robots
If you are interested in working at TML, please fill out the research application form (Stanford email required).
Your submission will be reviewed by current TML members, typically those you indicate in the form. We will contact you within two weeks if your background and interests align with ongoing projects. Due to the high volume of applications we receive, we regret that we are unable to respond to all submissions.
Current Projects under Recruitment
Here are the projects actively looking for students to join:
GUI for tuning reinforcement learning reward terms
Posted on 08/19/2025. Contact: João Pedro Araújo
Reinforcement learning is currently very popular for both physics-based character animation and robotics. However, tuning the reward function is a tedious and complex task. The goal of this project is to develop a tool to help researchers train RL models by inspecting the behaviors induced by their reward functions. Please apply for more details.
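To make "reward terms" concrete: physics-based RL rewards are often weighted sums of named components, and the knobs such a tool would expose are the per-term weights and the behavior each term induces. The sketch below is a minimal illustration under assumed term names, weights, and state fields; it is not the lab's actual reward code.

```python
# Minimal sketch of a weighted-sum RL reward with named, tunable terms.
# Term names, weights, and state fields are illustrative assumptions only.
import numpy as np

REWARD_WEIGHTS = {          # the knobs a tuning GUI would expose
    "track_velocity": 1.0,  # follow the commanded base velocity
    "upright": 0.5,         # keep the torso near vertical
    "energy": -0.01,        # penalize large joint torques
}

def reward_terms(state):
    """Return each unweighted term so a GUI can plot them separately."""
    return {
        "track_velocity": float(np.exp(-np.sum((state["base_vel"] - state["cmd_vel"]) ** 2))),
        "upright": float(state["up_vector"][2]),                # z-component of the torso up axis
        "energy": float(np.sum(state["joint_torques"] ** 2)),
    }

def total_reward(state):
    terms = reward_terms(state)
    return sum(REWARD_WEIGHTS[name] * value for name, value in terms.items())
```

A tuning GUI built around this structure would let researchers adjust the weights and watch how each term (and the induced behavior) changes during training.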
Contact Inference from Everyday Human-Object Interactions (HOI) Videos
Posted in January 2025. Contact: Yufei Ye
Imagine a video of a person making tea. We humans can not only understand the person's gestures (pose) in the video but also infer which parts of their hands make contact and estimate the rough force directions at these contact points.
While previous computer vision literature has made significant progress in extracting human poses and object geometry from in-the-wild videos, modalities beyond these kinematics, such as contact or force, remain less explored. In this work, we aim to infer the contact regions of human hands from videos. This is important for various downstream applications, such as more robust imitation learning for manipulation, higher-quality interaction animations, improved motion retargeting, and better content creation interfaces. The primary challenge in achieving this goal is the lack of annotated real-world data. We propose leveraging existing simulators in combination with a limited amount of real-world data to address this challenge.
The expected contributions of this work include: 0) least importantly, a potential publication in a top-tier vision/robotics venue; 1) a foundational dataset for studying human contact; 2) a robust model capable of performing reasonably well on everyday videos; and 3) potential follow-up collaborations to apply this model to dexterous manipulation.
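To make the target output concrete, one plausible per-frame representation of a contact annotation is sketched below. The field names and conventions are hypothetical; defining the actual dataset format is part of the project.

```python
# Hypothetical per-frame contact annotation for a hand-object interaction video.
# Field names and shapes are illustrative assumptions, not a defined format.
from dataclasses import dataclass
import numpy as np

@dataclass
class ContactFrame:
    frame_index: int
    hand_pose: np.ndarray        # e.g. (21, 3) hand joint positions in the camera frame
    contact_prob: np.ndarray     # per-joint (or per-vertex) contact probability in [0, 1]
    force_direction: np.ndarray  # rough unit force directions at the contact points

    def contact_points(self, threshold: float = 0.5) -> np.ndarray:
        """Indices of joints/vertices considered to be in contact."""
        return np.flatnonzero(self.contact_prob > threshold)
```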
High performance reactive motion policy for safe manipulation
Posted on 10/05/2025. Contact: Albert Wu
This project aims to optimize a general-purpose reactive motion generation library grounded in differential geometric control. The first phase focuses on enhancing computational efficiency through automatic differentiation, high-performance simulation, and other state-of-the-art software optimization techniques. The second phase involves deploying the optimized library on constraint-rich, real-world manipulation tasks using TML's hardware systems.
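For readers unfamiliar with this family of controllers, the toy sketch below shows their general flavor: simple reactive acceleration policies (an attractor toward a goal and a repeller away from an obstacle) are combined through importance metrics, in the spirit of Riemannian-motion-policy-style control. It is an illustration under assumed gains and shapes, not the library being optimized.

```python
# Toy sketch of metric-weighted combination of reactive acceleration policies.
# Gains, metrics, and the scenario are illustrative assumptions.
import numpy as np

def attractor(x, v, goal, gain=5.0, damping=2.0):
    """Pull toward the goal with damping; return (acceleration, metric)."""
    acc = gain * (goal - x) - damping * v
    return acc, np.eye(len(x))                    # uniform importance

def repeller(x, v, obstacle, radius=0.3, gain=10.0):
    """Push away from an obstacle; importance grows as the distance shrinks."""
    d = x - obstacle
    dist = np.linalg.norm(d) + 1e-9
    acc = gain * d / dist**3
    weight = max(0.0, radius - dist) / radius     # only matters when close
    return acc, weight * np.eye(len(x))

def combine(policies):
    """Metric-weighted average of the individual accelerations."""
    M = sum(m for _, m in policies)
    f = sum(m @ a for a, m in policies)
    return np.linalg.solve(M + 1e-6 * np.eye(M.shape[0]), f)

x, v = np.zeros(3), np.zeros(3)
acc = combine([attractor(x, v, goal=np.array([1.0, 0.0, 0.5])),
               repeller(x, v, obstacle=np.array([0.5, 0.0, 0.25]))])
```

The project's first phase is about making this kind of computation fast (automatic differentiation, high-performance simulation, and other software optimizations); the second phase is about deploying it on real hardware.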
Augment existing whole-body motion datasets with detailed hand motion
Posted on 10/11/2025. Contact: Yufei Ye and Guy Tevet
OMOMO, a whole-body manipulation motion capture dataset created by TML, has been widely used in computer animation, computer vision, and robotics in recent years. One common feature request from users is to add detailed finger motions, so that the sequences in OMOMO combine realistic, consistent whole-body motion with dexterous hand manipulation. To tackle this problem, we will explore a range of approaches, from training generative models (diffusion/flow-based models) and solving trajectory optimization problems to collecting hand motion in the real world.
Generative Models for Low/High-Level Control
Posted on 10/12/2025. Contact: Takara Truong
We're building state-of-the-art generative models for robotic control that bridge character animation and real-world robotics. Students will learn the full stack: motion capture, RL-based control in simulation, and training generative models, while contributing to cutting-edge research on robot behaviors such as dancing, sports, and household tasks.
Reasoning and Planning for Long-Horizon Robotic Tasks
Posted on 10/12/2025. Contact: Guy Tevet
Imagine asking your robot, “I lost my keys, please find them and bring them to me.” Executing this request requires cognitive skills across many levels: reasoning about where to search first, distinguishing your home keys from your car keys, and performing low-level skills such as opening drawers, grasping objects, and navigating back to you. In this project, we aim to develop an end-to-end controller that integrates high-level reasoning, scene understanding, and motor control to accomplish such complex, long-horizon tasks in the real world.
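The sketch below illustrates, in a deliberately simplified and hypothetical way, the division of labor this implies: a high-level planner decomposes the language request into a sequence of low-level skills that motor controllers then execute. The skill names and the hard-coded plan are stand-ins for learned components.

```python
# Hypothetical sketch of the high-level / low-level split described above.
# Skill names and the hand-written plan stand in for learned reasoning and control.

LOW_LEVEL_SKILLS = ["navigate_to", "search_surface", "open_drawer", "grasp", "return_to_user"]

def plan(request: str) -> list[tuple[str, str]]:
    """Stand-in for a learned reasoning module (e.g. a VLM-based planner)."""
    if "keys" in request:
        return [
            ("navigate_to", "entryway table"),    # reason about where to search first
            ("search_surface", "entryway table"),
            ("open_drawer", "entryway drawer"),
            ("grasp", "home keys"),               # distinguish home keys from car keys
            ("return_to_user", ""),
        ]
    return []

def execute(plan_steps, skill_controllers):
    for skill, argument in plan_steps:
        skill_controllers[skill](argument)        # each call runs a low-level motor policy
```

The research question is how to integrate these levels end to end, rather than hand-writing the plan as this toy example does.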
Humanoid Robotic Foundation Models
Posted on 10/15/2025. Contact: Yanjie Ze
We are building humanoid foundation models that can generalize, adapt, and perform long-horizon, whole-body, dexterous manipulation skills in unstructured real-world environments using egocentric vision and proprioception. Our previous works include TWIST, VisualMimic, GMR, iDP3, etc.
A Generic Position that Supports Research Development on Humanoids, Dexterous Manipulation, and/or Exoskeletons
Posted on 10/14/2025. Contact: Albert Wu
A Generic Position that Supports Research Development on Character Animation and/or Biomechanics Modeling
Posted on 10/14/2025. Contact: Pei Xu
If none of these projects interest you, but you would still like to work with the Movement Lab, please write a short cover letter explaining your motivations and/or what you would like to do (for example: pursue your own research project, or join an existing research project in either a researcher or a software engineering role) and submit it as your answer to the question "Specify any project you are interested in working on." on the research application form. Then select at least one Movement Lab member for the last question.