Creating a Setup to Assess the Use of Virtual Reality for Mission Control

Session

Technical Session 13: Future Missions/Capabilities

Location

Utah State University, Logan, UT

Abstract

This paper describes a work in progress. We are preparing our first CubeSat mission, InnoCube, which we plan to launch in spring 2023. We are also in the process of moving our whole chair to another building and creating a new mission control room. We take this as an opportunity to try out and compare some novel approaches that might make the work of the ground team easier.

Our mission, InnoCube, is designed to test a “skip the harness” (skith) approach, which means the system is composed of multiple autonomous computing nodes communicating wirelessly with each other. Each of these nodes runs an instance of the Rodos operating system (Real-time Onboard Dependable Operating System). Since we are going to launch at least 16 computers within our 3U CubeSat, there will be a lot of telemetry to keep an eye on. Hence, we set out to create an environment that allows us to explore and compare different ways to represent all this data, in order to give human operators a good view of what is happening without overwhelming them. We aim to find out whether the possibilities of a virtual environment help or hinder operators in their work and, if they help, which of the virtual representations facilitate understanding of complex data.
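
As a rough illustration of the telemetry volume this implies, the following minimal Python sketch simulates 16 nodes publishing housekeeping frames and condenses them into one summary line per node; every field name, value range, and summary statistic here is a hypothetical placeholder and not the actual InnoCube/Rodos telemetry format.

```python
# Illustrative sketch only: field names, units, and value ranges are
# hypothetical placeholders, not the real InnoCube/Rodos telemetry format.
import random
import statistics
from collections import defaultdict

NUM_NODES = 16  # at least 16 computing nodes fly in the 3U CubeSat


def sample_housekeeping(node_id: int) -> dict:
    """Pretend one wireless node downlinked a housekeeping frame."""
    return {
        "node": node_id,
        "cpu_temp_c": random.uniform(20.0, 45.0),
        "bus_voltage_v": random.uniform(3.2, 3.4),
        "packets_sent": random.randint(0, 50),
    }


def summarize(frames: list) -> dict:
    """Collapse many frames per node into one line an operator can scan."""
    per_node = defaultdict(list)
    for frame in frames:
        per_node[frame["node"]].append(frame)
    summary = {}
    for node, node_frames in sorted(per_node.items()):
        temps = [f["cpu_temp_c"] for f in node_frames]
        summary[node] = {
            "frames": len(node_frames),
            "temp_mean_c": round(statistics.mean(temps), 1),
            "temp_max_c": round(max(temps), 1),
        }
    return summary


if __name__ == "__main__":
    # One simulated pass: every node reports ten frames.
    frames = [sample_housekeeping(n) for n in range(NUM_NODES) for _ in range(10)]
    for node, line in summarize(frames).items():
        print(f"node {node:2d}: {line}")
```

The point of the sketch is only that even simple per-node aggregation already turns hundreds of raw frames into sixteen readable lines, which is the kind of condensation any of the representations discussed here would likely need to perform.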

In this paper we will describe the design and the technologies we employ to build two systems: the regular mission control room, featuring displays and standard human-computer interfaces, and a virtual representation created in Unity, accessible via a VR headset, in which operators are free to move around and interact using gestures. We explain how we work with Rodos and the Corfu framework to derive the data to be displayed from the on-board software, and which representations we create with it. We depict the ways in which the components of the system interact and which measurements we will attempt; the usability research itself will take place after the conference, once the integration is complete, and is likely to be the topic of a later paper.
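
One way to picture how the same decoded telemetry can feed both frontends is the small fan-out sketch below; the UDP/JSON transport, the ports, and the record fields are hypothetical placeholders chosen for illustration and do not describe the actual Rodos/Corfu ground-segment interfaces or the Unity networking layer.

```python
# Illustrative fan-out sketch: endpoints, ports, and the message layout are
# hypothetical placeholders, not the project's actual ground-segment interfaces.
import json
import socket

# Two assumed consumers: the control-room dashboard and the Unity VR scene,
# both receiving the same decoded telemetry as JSON datagrams.
CONSUMERS = [
    ("127.0.0.1", 9001),  # control-room display wall (assumed port)
    ("127.0.0.1", 9002),  # Unity VR visualization (assumed port)
]


def fan_out(record: dict, sock: socket.socket) -> None:
    """Send one decoded telemetry record to every registered frontend."""
    payload = json.dumps(record).encode("utf-8")
    for host, port in CONSUMERS:
        sock.sendto(payload, (host, port))


if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # A single made-up record standing in for data decoded from the downlink.
    fan_out({"node": 3, "cpu_temp_c": 31.4, "mode": "NOMINAL"}, sock)
    sock.close()
```

A single decoding stage with both user interfaces as equal consumers also keeps the comparison fair, since both views then operate on identical data.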

Presentation Date

Aug 12th, 11:00 AM
