Session

Technical Session 1: Mission Operations and Autonomy

Location

Utah State University, Logan, UT

Abstract

New generations of spacecraft are required to perform tasks with an increased level of autonomy. Space exploration, rendezvous services, and space robotics are all growing fields that require more sensors and more computational power. Furthermore, new sensors on the market produce better-quality data at higher rates, while new processors substantially increase the available computational power. Near-future spacecraft will therefore carry large numbers of sensors producing data at rates not seen before in space, while at the same time on-board processing power will increase significantly. With regard to guidance, navigation, and control applications, vision-based navigation has become increasingly important in a variety of space applications for enhancing autonomy and dependability. Future missions such as Active Debris Removal will rely on novel high-performance avionics to support image processing and artificial intelligence (AI) algorithms with large workloads. Even more demanding is vision-based precision landing, where high-rate processing is a must and can be the tipping point of a successful mission. This new scenario of advanced space applications, with its increase in data volume and processing power, has brought new challenges: low determinism, excessive power needs, data losses, and large response latency. In this article, a novel approach to on-board AI is presented that is based on state-of-the-art algorithmic trading software techniques, a field that underwent a similar challenge, although at a different scale, in the early 2010s. The approach presented here optimizes the limited available computing resources and makes AI applications much more reliable, somewhat reshaping the paradigm of embedded software engineering.
A benchmark is presented here for pose estimation of the comet 67P/Churyumov–Gerasimenko using AI, based on images from the Rosetta mission. In this paper, we show that the data processing rate and power savings of the application increase substantially with respect to standard AI solutions.
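To make the idea concrete, the kind of software technique borrowed from algorithmic trading can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: it shows two classic low-latency patterns, a preallocated scratch buffer (no per-frame allocation) and a bounded frame queue that drops stale data so the estimator always works on the newest image within a fixed time budget. The function and buffer names are invented for this sketch.

```python
import time
from collections import deque

FRAME_SIZE = 64          # pixels per (toy) image frame
QUEUE_DEPTH = 4          # bounded queue: stale frames are dropped, not buffered

frame_queue = deque(maxlen=QUEUE_DEPTH)   # oldest frames fall off automatically
work_buffer = [0.0] * FRAME_SIZE          # preallocated scratch buffer, reused

def toy_pose_estimate(frame):
    """Stand-in for an AI pose estimator: writes into the preallocated
    work_buffer instead of allocating, then reduces to one 'pose' scalar."""
    for i, px in enumerate(frame):
        work_buffer[i] = px * 0.5         # placeholder for real inference math
    return sum(work_buffer) / FRAME_SIZE

def process_latest(budget_s=0.001):
    """Deterministic step: handle only the newest frame, discard the rest,
    and report whether the fixed time budget (deadline) was met."""
    if not frame_queue:
        return None, True
    frame = frame_queue.pop()             # newest frame wins
    frame_queue.clear()                   # stale frames are dropped outright
    start = time.perf_counter()
    pose = toy_pose_estimate(frame)
    met_deadline = (time.perf_counter() - start) <= budget_s
    return pose, met_deadline

# Usage: push more frames than the queue holds; only the newest survives.
for k in range(10):
    frame_queue.append([float(k)] * FRAME_SIZE)
pose, ok = process_latest()
```

Dropping stale frames rather than buffering them is what keeps latency bounded: the navigation filter always sees the freshest image, at the cost of occasionally skipping frames, which is the same trade a trading system makes with market data.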

Available for download on Saturday, August 07, 2021

Aug 9th, 10:00 AM

A Low Power and High Performance Artificial Intelligence Approach to Increase Guidance Navigation and Control Robustness

