Session
Poster Session 3
Location
Salt Palace Convention Center, Salt Lake City, UT
Abstract
Small satellites (SmallSats) have transformed modern digital infrastructure, enabling Earth observation, navigation, real-time communications, and deep space exploration through their composable design, reduced cost, and accelerated development cycles. However, SmallSat software must operate autonomously, often under tight bandwidth, processing, and memory constraints. Pre-launch techniques such as Hardware-in-the-Loop (HIL), Software-in-the-Loop (SIL), and unit testing are widely used but are often insufficient to detect errors, timing anomalies, fault propagation, or telemetry-based degradation. This paper presents a Python-based validation framework, influenced by recent testbeds such as EIRSAT-1, ITASAT-2, and NOS3, that integrates telemetry simulation, synthetic fault injection, resource logging, anomaly detection, and machine-learning-driven runtime tuning. Synthetic faults, such as memory saturation and packet delays, are injected across 30 test cycles, and the resulting structured logs are used to train Random Forest and SVM models. With Bayesian tuning, average latency decreases from 1.42 s to 0.89 s and the fault rate falls from 33% to 16.6%. In simulation, these results support earlier fault detection and improved performance. This research therefore provides an automated framework that helps developers and aerospace engineers improve the reliability of mission-critical SmallSat software.
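The fault-classification step described above (structured logs with injected faults used to train Random Forest and SVM models) can be sketched as follows. This is an illustrative sketch only, not the authors' code: the feature names (memory usage, packet latency, CPU load) and the fault thresholds are hypothetical stand-ins for the framework's actual telemetry fields.

```python
# Illustrative sketch: train Random Forest and SVM classifiers on
# synthetic telemetry logs with injected fault labels.
# Feature names and thresholds below are hypothetical assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 300  # simulated telemetry records across test cycles

# Hypothetical telemetry features: memory usage (%), packet latency (s),
# CPU load (%)
memory = rng.uniform(20, 100, n)
latency = rng.uniform(0.1, 2.0, n)
cpu = rng.uniform(10, 95, n)
X = np.column_stack([memory, latency, cpu])

# Synthetic fault label: memory saturation or packet delay past a threshold
y = ((memory > 85) | (latency > 1.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf", random_state=0).fit(X_tr, y_tr)

print(f"RF test accuracy:  {rf.score(X_te, y_te):.2f}")
print(f"SVM test accuracy: {svm.score(X_te, y_te):.2f}")
```

In a real deployment the features would come from the framework's resource logs rather than random draws, and the labels from the fault-injection schedule rather than hand-set thresholds.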
Document Type
Event
Title
AI-Driven Mission-Critical Software Optimization for Small Satellites: Integrating an Automated Testing Framework