Sim-to-Real: Learning Agile Locomotion For Quadruped Robots

April 27, 2018 · arXiv:1804.10332

Authors

Tingnan Zhang, Erwin Coumans, Atil Iscen, Yunfei Bai, Danijar Hafner

Abstract

Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques.

Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open-loop reference to guide the learning process when more control over the learned gait is needed.
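One common way to combine an open-loop reference with a learned policy is to add the policy's output as a correction on top of a hand-designed periodic signal. The sine-wave reference, function names, and parameter values below are illustrative assumptions, not details from the paper:

```python
import math

def hybrid_action(t, policy_delta, amplitude=0.5, frequency=1.0):
    """Sketch: periodic open-loop reference plus a learned correction.

    t            -- time in seconds
    policy_delta -- correction output by the learned policy
    The sine reference and default parameters are made up for
    illustration; the paper's actual reference gait may differ.
    """
    reference = amplitude * math.sin(2.0 * math.pi * frequency * t)
    return reference + policy_delta
```

With this structure, setting the policy output to zero recovers the raw open-loop gait, so the user's reference fully determines the behavior until learning adjusts it.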

The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world.

We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency.
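Latency simulation can be sketched as buffering past observations and handing the policy the sample from roughly one latency interval ago, rather than the current state. The class below is a minimal illustration; the buffer size, latency value, and interface are assumptions, not the paper's implementation:

```python
from collections import deque

class LatencySimulator:
    """Sketch of simulating sensor/actuation latency in a simulator.

    Observations are buffered each control step, and the sample
    closest to (now - latency) is returned. The names and default
    values here are illustrative, not taken from the paper.
    """

    def __init__(self, latency_s=0.004, timestep_s=0.002):
        self.delay_steps = int(latency_s / timestep_s)
        # Keep just enough history to cover the requested latency.
        self.buffer = deque(maxlen=self.delay_steps + 1)

    def observe(self, current_obs):
        self.buffer.append(current_obs)
        # Count back from the newest sample; before the buffer fills,
        # fall back to the oldest observation available.
        idx = max(0, len(self.buffer) - 1 - self.delay_steps)
        return self.buffer[idx]
```

Feeding a policy these delayed observations during training forces it to cope with the control-loop delays it will encounter on the physical robot.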

We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping.
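Randomizing the physical environment typically means resampling simulator parameters around their nominal values at the start of each training episode. The parameter names and ranges below are hypothetical placeholders chosen for illustration:

```python
import random

# Nominal physical parameters and relative randomization ranges.
# These names and values are illustrative, not the paper's settings.
NOMINAL = {"mass_kg": 4.0, "friction": 0.8, "motor_strength": 1.0}
RANGES = {"mass_kg": 0.2, "friction": 0.25, "motor_strength": 0.1}

def randomize_physics(rng=random):
    """Sample one set of physical parameters for a training episode."""
    return {
        name: nominal * (1.0 + rng.uniform(-RANGES[name], RANGES[name]))
        for name, nominal in NOMINAL.items()
    }
```

Because the policy never sees exactly the same physics twice, it cannot overfit to one simulator configuration, which is the intuition behind using randomization to close the reality gap.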

After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.
