Document Type
Unpublished Paper
Publication Date
2025
First Page
1
Last Page
16
Abstract
This survey reviews recent advances in applying reinforcement learning (RL) to enable dynamic and ballistic motions in legged robots, including running, jumping, stair climbing, and parkour. Focusing on high-agility behaviors that challenge traditional control frameworks, we categorize foundational locomotion tasks and highlight the RL methods that have proven effective, such as Proximal Policy Optimization, curriculum learning, and hybrid model-based strategies. We discuss key challenges in transferring learned policies to real-world robots, managing uncertainty, and integrating perception and proprioception. Drawing from over 150 recent works, we provide a structured taxonomy of objectives, algorithms, and platforms, and identify trends in simulation frameworks and deployment strategies. This review aims to serve as a resource for researchers developing next-generation legged robots that can achieve robust, adaptive, and high-performance motion in complex environments. We provide a companion repository that maps the collected papers to their methods, tasks, and platforms. This repository can be found at: https://github.com/DIRECTLab/legged_review
Recommended Citation
Allred, Christopher; Justice, Chandler; Scalise, Rosario; Gu, Yan; Clark, Jonathan; Harper, Mario; and Pusey, Jason, "From Walking to Parkour: A Structured Survey of RL for Dynamic Skills in Legged Robots" (2025). Computer Science Student Research. Paper 58.
https://digitalcommons.usu.edu/computer_science_stures/58