Session

Session VIII: Advanced Technologies 2 - Research & Academia

Location

Salt Palace Convention Center, Salt Lake City, UT

Abstract

Utah State University has been developing a compiler to ease the burden of implementing artificial intelligence (AI) and machine learning (ML) algorithms on FPGAs for both low-power and accelerated computation purposes. This system, called "Architecture and Network Generalization for Edge computing and Low-power applications" (ANGEL), addresses the growing volume of data collected by small satellites, combined with the increasing interest in AI/ML algorithms to process this data onboard. Executing modern AI/ML algorithms onboard small satellites on GPUs or CPUs demands significant power, often stretching the capabilities of small satellite platforms. FPGAs, particularly the Microsemi PolarFire series, provide a low-power solution for performing AI/ML computing on orbit. The primary drawback of FPGAs is the significant effort and time required to design and implement advanced algorithms. This work presents a modular architecture and accompanying compiler that creates custom pipelines to process data using hardware-accelerated modules. The compiler takes in an ONNX model, a standard file format for storing AI/ML algorithms, and decomposes the model into discrete operational steps. For each of these steps, a hardware engineer designs a hardware-accelerated block to run that specific operation. The original ONNX model is then compiled into a set of instructions that execute the AI/ML algorithm using the predefined hardware blocks. These instructions and hardware operations are incorporated into the modular FPGA architecture. The paper presents USU's work on the compiler, results of implementing test models on FPGA hardware, and an analysis of the algorithms' performance. ANGEL is being targeted to the USU Low-power Array for CubeSat Edge Computing Architecture, Algorithms, and Applications (LACE-C3A) hardware to support the execution of large AI/ML models on small satellites.
This project is funded by the NASA University SmallSat Technology Partnership Program. LACE-C3A is a specialized FPGA-based edge computing platform designed to facilitate AI and ML processing on small satellites.
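The compile flow described above (decompose an ONNX graph into discrete operations, then emit one instruction per operation targeting a predefined hardware block) can be sketched in a few lines. This is a minimal illustration only; the class, table, and function names below are hypothetical and do not reflect ANGEL's actual API.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One operational step of a decomposed model graph."""
    op_type: str        # mirrors ONNX operator names, e.g. "Conv", "Relu"
    inputs: list        # names of input tensors
    output: str         # name of the produced tensor

# Hypothetical table mapping operator types to IDs of hardware-accelerated
# blocks that a hardware engineer has implemented on the FPGA.
HW_BLOCKS = {"Conv": 0x01, "Relu": 0x02, "MatMul": 0x03}

def compile_graph(nodes):
    """Emit one instruction per node: (block_id, input_names, output_name)."""
    program = []
    for n in nodes:
        if n.op_type not in HW_BLOCKS:
            raise ValueError(f"no hardware block implemented for '{n.op_type}'")
        program.append((HW_BLOCKS[n.op_type], n.inputs, n.output))
    return program

# Toy two-op model: a convolution followed by a ReLU activation.
model = [Node("Conv", ["x", "w"], "t0"), Node("Relu", ["t0"], "y")]
print(compile_graph(model))
```

In the real system each instruction would also carry tensor shapes, quantization parameters, and buffer addresses for the modular FPGA architecture; the sketch only shows the op-to-block dispatch step.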

Document Type

Event

Aug 13th, 4:45 PM

ANGEL: An Architecture for Simplified AI/ML Implementation on Low-Power FPGAs for Small Satellites
