Date of Award:

12-2024

Document Type:

Thesis

Degree Name:

Master of Science (MS)

Department:

Computer Science

Committee Chair(s)

Steve Petruzza

Committee

Steve Petruzza
Soukaina Filali Boubrahimi
John Edwards

Abstract

Understanding the internal mechanisms of neural networks, particularly Multi-Layer Perceptrons (MLPs), is essential for their effective application in a variety of scientific domains. In the scientific visualization domain in particular, they have recently been shown to be a promising tool for predicting particle trajectories in fluid dynamics simulations and for aiding the interactive visualization of flows. This research addresses the critical challenge of interpretability of such models.

While interpretability has been extensively explored in fields like computer vision and natural language processing, its application to time series data, particularly for particle tracing (i.e., prediction of trajectories), has not garnered sufficient attention.

The overarching objective of this thesis is to augment the interpretability of MLP networks through interactive and comparative visualization of model ensembles. We aim to help address the challenges associated with the "black-box" nature of neural networks in this specific context. Our primary contribution lies in the development of a comprehensive visualization tool that integrates multiple linked views, including gradient visualization, particle trajectories, layer-wise activations, and weight visualization. This tool facilitates a more profound understanding of the intricate relationships between model components and model predictions.

In particular, the proposed framework provides a user-friendly interface for comparing different models trained to predict particle trajectories in fluid dynamics simulation ensembles.

This tool not only aids in understanding MLP network behaviour, but also serves as a practical resource for researchers and practitioners who want to analyze and use similar models.

Finally, we test our framework using a variety of 2D flows with different degrees of complexity. This helps us understand the effectiveness of the tool in providing insights into which components of the model affect a particular prediction, and what the network is learning at different training epochs.

Checksum

70025ab46f3cfa8e57fe05fd270ae24d
