Session

Session VI: FJR Student Competition

Location

Utah State University, Logan, UT

Abstract

Onboard system failures during CubeSat operation can have significant consequences for mission success. Limited resources during development can hamper the design and implementation of recovery systems, increasing the likelihood of mission failure. In response, this paper establishes a reusable autonomous framework for mission replanning in the event of an onboard system failure. Prior to launch, the framework ingests a standardized mission plan detailing mission objectives, mission priorities, and onboard capabilities and resources. The framework segments this information into a set of discrete tasks with completion dependencies, and a reinforcement learning approach selects the highest-priority set of tasks that meets resource limitations. This selection is then scheduled into a new mission plan using a modified reinforcement learning approach. Tested on a series of simulated satellite missions, the framework demonstrates moderate success in adapting to multi-system failures, including a variety of attitude control, power storage and generation, and computational faults.
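Illustrative sketch (not drawn from the paper): the task-selection step described above could, for example, be prototyped as a small tabular Q-learning agent that picks discrete tasks, each with a priority, a resource cost, and completion dependencies, under a single resource budget. The task names, priorities, costs, budget, and learning hyperparameters below are hypothetical.

# Minimal illustrative sketch, not the paper's implementation: a tabular
# Q-learning agent selects discrete tasks (priority, resource cost, and
# completion dependencies) under a single hypothetical resource budget.
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    name: str
    priority: float      # mission value gained by completing the task
    cost: float          # resource units consumed (e.g., energy)
    deps: tuple = ()     # names of tasks that must already be scheduled

# Hypothetical mission plan after a fault reduces the available budget.
TASKS = [
    Task("warmup_adcs", priority=1.0, cost=1.0),
    Task("point_payload", priority=2.0, cost=2.0, deps=("warmup_adcs",)),
    Task("capture_image", priority=5.0, cost=3.0, deps=("point_payload",)),
    Task("downlink_data", priority=4.0, cost=4.0, deps=("capture_image",)),
    Task("housekeeping", priority=0.5, cost=1.0),
]
BUDGET = 8.0  # assumed resource limit

def valid_actions(done, used):
    """Tasks whose dependencies are met and whose cost still fits the budget."""
    names_done = {TASKS[j].name for j in done}
    return [i for i, t in enumerate(TASKS)
            if i not in done
            and all(d in names_done for d in t.deps)
            and used + t.cost <= BUDGET]

def q_learn(episodes=5000, alpha=0.1, gamma=0.95, eps=0.2):
    """Learn state-action values; reward is the priority of each scheduled task."""
    Q = {}
    for _ in range(episodes):
        done, used = frozenset(), 0.0
        while True:
            acts = valid_actions(done, used)
            if not acts:
                break
            state = (done, round(used, 2))
            a = (random.choice(acts) if random.random() < eps
                 else max(acts, key=lambda x: Q.get((state, x), 0.0)))
            t = TASKS[a]
            nxt_done, nxt_used = done | {a}, used + t.cost
            nxt_acts = valid_actions(nxt_done, nxt_used)
            best_next = max((Q.get(((nxt_done, round(nxt_used, 2)), b), 0.0)
                             for b in nxt_acts), default=0.0)
            old = Q.get((state, a), 0.0)
            Q[(state, a)] = old + alpha * (t.priority + gamma * best_next - old)
            done, used = nxt_done, nxt_used
    return Q

def greedy_schedule(Q):
    """Roll out the learned policy to produce an ordered, budget-feasible task list."""
    done, used, plan = frozenset(), 0.0, []
    while True:
        acts = valid_actions(done, used)
        if not acts:
            break
        state = (done, round(used, 2))
        a = max(acts, key=lambda x: Q.get((state, x), 0.0))
        plan.append(TASKS[a].name)
        done, used = done | {a}, used + TASKS[a].cost
    return plan

if __name__ == "__main__":
    random.seed(0)
    print(greedy_schedule(q_learn()))  # a dependency-respecting schedule within budget

This sketch covers only task selection under one budget; the framework described above also schedules the selected tasks into a new mission plan and accounts for multiple onboard resources, which the example does not attempt.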

Aug 7th, 8:30 AM

An Autonomous Reinforcement Learning Framework for Fault Recovery and Mission Replanning on CubeSats
