Session
Technical Session VI: University Programs
Abstract
At the 16th AIAA/USU Conference on Small Satellites, researchers at Santa Clara University (SCU) proposed a distributed computing architecture for small or multi-spacecraft missions. This architecture extended existing I2C, Dallas 1-wire, and RS232 data protocols and was adaptable to a number of microcontrollers. Since then, that architecture has been implemented on six university-class space missions at three different universities. As “early adopters”, these universities faced the typical challenges of working with a new, evolving standard and of adapting that standard to their hardware and mission needs. Each also faced program-specific challenges related to project size, scope, and infrastructure, as well as student background and training. Still, because of this architecture, every school saw three improvements: accelerated integration and training of new students; rapid modification of existing systems; and school-wide collaboration among robotics projects. This paper reviews SCU’s distributed computing architecture, discusses the details of its implementation at all three universities, and presents lessons learned and lessons applied across six spacecraft programs: Akoya-A/Bandit-A & Akoya-B/Bandit-C at Washington University in St. Louis, EMERALD & ONYX at SCU, and FASTRAC and ARTEMIS at the University of Texas at Austin. The merits of adopting this architecture as a standard for university-class spacecraft are also reviewed.
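As a rough illustration of the pattern such a distributed architecture implies, the sketch below shows a subsystem microcontroller node that exposes a small, common command set regardless of whether the bytes arrive over I2C, Dallas 1-wire, or RS232. The packet layout, command codes, and function names are assumptions made for this example only; they are not the message format defined in the paper.

```c
/*
 * Illustrative sketch only: packet layout, command codes, and names are
 * assumptions for this example, NOT the format defined by the SCU
 * architecture. It shows the general pattern: every subsystem node
 * handles the same small command set, independent of the physical bus
 * (I2C, Dallas 1-wire, or RS232) that delivered the bytes.
 */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical node-level command packet: dest | cmd | len | payload | checksum */
typedef struct {
    uint8_t dest;        /* destination node address on the bus           */
    uint8_t cmd;         /* command code (see enum below)                 */
    uint8_t len;         /* payload length in bytes                       */
    uint8_t payload[32]; /* command-specific data                         */
    uint8_t checksum;    /* additive checksum over the preceding bytes    */
} node_packet_t;

/* Hypothetical common command set shared by every subsystem node */
enum {
    CMD_PING      = 0x01, /* liveness check                  */
    CMD_GET_TELEM = 0x02, /* request subsystem telemetry     */
    CMD_SET_MODE  = 0x03, /* change subsystem operating mode */
};

/* Additive checksum over header and payload */
static uint8_t packet_checksum(const node_packet_t *p)
{
    uint8_t sum = (uint8_t)(p->dest + p->cmd + p->len);
    for (uint8_t i = 0; i < p->len; i++)
        sum += p->payload[i];
    return sum;
}

/* Dispatch a validated packet to a subsystem-specific handler.
 * On real hardware the packet bytes would be filled in by the
 * microcontroller's I2C/1-wire/RS232 receive routine.            */
static int dispatch_packet(const node_packet_t *p)
{
    if (packet_checksum(p) != p->checksum)
        return -1;                       /* reject corrupted packet */

    switch (p->cmd) {
    case CMD_PING:
        printf("node %u: ping acknowledged\n", p->dest);
        return 0;
    case CMD_GET_TELEM:
        printf("node %u: telemetry requested\n", p->dest);
        return 0;
    case CMD_SET_MODE:
        printf("node %u: mode set to %u\n", p->dest, p->payload[0]);
        return 0;
    default:
        return -1;                       /* unknown command */
    }
}

/* Stand-alone demonstration: simulate receiving one SET_MODE packet */
int main(void)
{
    node_packet_t pkt = { .dest = 0x12, .cmd = CMD_SET_MODE, .len = 1 };
    pkt.payload[0] = 0x02;               /* hypothetical "safe" mode */
    pkt.checksum = packet_checksum(&pkt);

    return dispatch_packet(&pkt) == 0 ? 0 : 1;
}
```

Under these assumptions, a new subsystem joins the bus by implementing only the handlers behind this dispatch table, which is the kind of reuse that let the three universities integrate students and modify subsystems quickly.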
Presentation Slides
A Standardized, Distributed Computing Architecture: Results from Three Universities