On-the-Fly Hardware-Accelerated Image Processing System for Target Recognition

Session
Technical Session 7: Advanced Technologies I
Location
Utah State University, Logan, UT
Abstract
Upcoming robotic exploration missions are characterized by steadily increasing spacecraft autonomy requirements, which bring new and challenging tasks such as autonomous navigation, adaptive activity scheduling, and on-board (edge) data processing. When these scenarios meet small satellite platforms, with their many resource constraints, an optimized implementation of the compute-intensive functions is necessary to achieve usable performance. Argotec, an Italian aerospace company, designs and develops its own avionics systems to enable challenging interplanetary small satellite missions. Within this context, Argotec developed a proprietary implementation of a high-throughput image processing pipeline to support vision-based autonomous navigation and attitude control.
The purpose of this paper is to present the functionality and performance of this system. The paper starts by describing the building blocks of the image processing chain and the reasoning behind their inclusion: data binning, low-pass filtering for edge smoothing, color depth compression, binarization, luminance histogram generation, and finally multi-target labeling. The challenges of delivering the required performance, high enough to sustain on-the-fly processing with state-of-the-art space cameras, are presented through the step-by-step integration into a flash-based, space-grade Microsemi RTG4 FPGA.
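To make the listed stages concrete, the following Python sketch gives a minimal software reference model of each one. The specific parameters (2x2 binning, a 3x3 box filter, 12-to-8-bit depth compression, a fixed binarization threshold, 4-connected labeling) are illustrative assumptions, not the flight values, and the actual pipeline is implemented in FPGA logic rather than software.

```python
# Hypothetical software reference model of the stages named in the abstract.
# All window sizes, bit depths and thresholds are illustrative assumptions.
import numpy as np

def bin2x2(img):
    """Data binning: average each non-overlapping 2x2 pixel block."""
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def lowpass3x3(img):
    """Low-pass (3x3 box) filter for edge smoothing, edge-replicated borders."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def compress_depth(img, in_bits=12, out_bits=8):
    """Color depth compression: drop the least significant bits."""
    return (img.astype(np.uint32) >> (in_bits - out_bits)).astype(np.uint8)

def binarize(img, threshold):
    """Binarization against a fixed luminance threshold."""
    return (img >= threshold).astype(np.uint8)

def histogram(img, bits=8):
    """Luminance histogram over the full dynamic range."""
    return np.bincount(img.ravel(), minlength=1 << bits)

def label_targets(mask):
    """Multi-target labeling: 4-connected components via flood fill."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    count = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        count += 1
        stack = [(y, x)]
        while stack:
            cy, cx = stack.pop()
            if not (0 <= cy < mask.shape[0] and 0 <= cx < mask.shape[1]):
                continue
            if mask[cy, cx] == 0 or labels[cy, cx]:
                continue
            labels[cy, cx] = count
            stack.extend([(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)])
    return labels, count
```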
The hardware implementation was intentionally designed to be parametrizable and platform-independent, allowing for functional extensions, scalability, and general portability. The datapath keeps the functionalities separated as autonomous black boxes: every functional element receives processed pixels from the previous module and generates outputs for the following one, as sketched below. This solution achieved roughly a 20-fold speed-up over a software implementation running on a 50 MHz space-grade SPARC V8 processor, while occupying very few resources in the FPGA device. The resulting reduction in processor load leaves additional room for extra tasks within the mission control cycle period.
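The black-box streaming concept can be illustrated with a small Python sketch in which each stage is a generator that consumes pixels from the upstream stage and emits pixels for the downstream one. This is only a software analogue of the idea, under assumed stage names and parameters; the actual modules are hardware blocks on the RTG4 with their own handshaking interfaces.

```python
# Purely illustrative software analogue of the streaming "black box" datapath:
# stages are chained without knowing each other's internals, so a stage can be
# swapped or re-parameterized without touching the rest of the chain.
def depth_compress_stage(pixels, in_bits=12, out_bits=8):
    """Forward each pixel with its least significant bits dropped."""
    for p in pixels:
        yield p >> (in_bits - out_bits)

def histogram_stage(pixels, hist):
    """Update a running luminance histogram while passing pixels through."""
    for p in pixels:
        hist[p] += 1
        yield p

def binarize_stage(pixels, threshold=128):
    """Forward 1 for pixels at or above the threshold, 0 otherwise."""
    for p in pixels:
        yield 1 if p >= threshold else 0

raw = iter([4095, 2048, 100, 3000])   # e.g. 12-bit pixels from the sensor
hist = [0] * 256
pipeline = binarize_stage(histogram_stage(depth_compress_stage(raw), hist))
print(list(pipeline), hist[255])      # -> [1, 1, 0, 1] 1
```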
The technology is finally analyzed in the real-life application of the DART/LICIACube autonomous planetary defense mission, showing how the design supports the mission-specific pipeline deployed for this critical NASA mission. The paper closes with a reflection on how autonomous navigation technologies are critical for small satellite platforms and call for new, tailored solutions. This image processing pipeline is intended as an example of how such solutions can enable mission scenarios that until now have been the prerogative of larger platforms.