Date of Award:
5-2025
Document Type:
Thesis
Degree Name:
Master of Science (MS)
Department:
Computer Science
Committee Chair(s)
Mario Harper
Committee
Mario Harper, Shah Hamdi, John Edwards
Abstract
Robots increasingly operate in collaborative teams across domains such as search-and-rescue, warehouse automation, and autonomous driving, scenarios that demand advanced coordination strategies enabled by multi-agent reinforcement learning (MARL). However, existing simulation frameworks often struggle to balance realism, speed, and scalability, especially when supporting diverse, heterogeneous robot teams. This research extends Isaac Lab, a high-performance robotics simulator, by integrating heterogeneous-agent reinforcement learning (HARL) capabilities. The result is a flexible and GPU-accelerated platform for training both homogeneous and heterogeneous robot teams in complex, physics-based environments. These enhancements significantly narrow the gap between simulation and real-world deployment for multi-robot systems.
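As a rough illustration of the heterogeneous-agent setup the abstract describes, the sketch below shows how per-agent-type policies with different observation and action dimensions might be batched across GPU-parallel environments. This is a minimal, generic PyTorch sketch, not code from the thesis or from Isaac Lab's API; the names MLPPolicy and HeteroTeam, the agent names, and all dimensions are hypothetical.

```python
# Hypothetical sketch of routing heterogeneous agents to type-specific policies.
# Not taken from the thesis or Isaac Lab; names and dimensions are illustrative.
import torch
import torch.nn as nn


class MLPPolicy(nn.Module):
    """Small per-agent-type actor; each robot type has its own obs/action sizes."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class HeteroTeam:
    """Holds one policy per agent type and batches actions over vectorized envs."""

    def __init__(self, specs: dict[str, tuple[int, int]], device: str = "cpu"):
        # specs maps agent name -> (obs_dim, act_dim); dims differ across robot types.
        self.policies = {
            name: MLPPolicy(obs_dim, act_dim).to(device)
            for name, (obs_dim, act_dim) in specs.items()
        }

    @torch.no_grad()
    def act(self, obs: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
        # Each agent's observation batch of shape (num_envs, obs_dim) is routed
        # to that agent type's own policy, so shapes may differ per agent.
        return {name: self.policies[name](o) for name, o in obs.items()}


if __name__ == "__main__":
    num_envs = 4096  # many parallel environments, as in GPU-accelerated simulation
    team = HeteroTeam({"quadruped": (48, 12), "manipulator": (32, 7)})
    obs = {
        "quadruped": torch.randn(num_envs, 48),
        "manipulator": torch.randn(num_envs, 32),
    }
    actions = team.act(obs)
    print({name: tuple(a.shape) for name, a in actions.items()})
```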
Checksum
8e271e4ca97863ca86911264f9aeb11a
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Haight, Jacob R., "Advancing Multi-Agent Robotics Simulations Through Heterogeneous Reinforcement Learning in IsaacLab" (2025). All Graduate Theses and Dissertations, Fall 2023 to Present. 451.
https://digitalcommons.usu.edu/etd2023/451
Copyright for this work is retained by the student. If you have any questions regarding the inclusion of this work in the Digital Commons, please email us.