Date of Award:

5-2026

Document Type:

Dissertation

Degree Name:

Doctor of Philosophy (PhD)

Department:

Computer Science

Committee Chair(s):

Soukaina Filali Boubrahimi

Committee:

Soukaina Filali Boubrahimi, Hamid Karimi, Kevin Moon, Steve Petruzza, Xiaojun Qi

Abstract

Machine learning systems increasingly make decisions that affect people’s lives, from medical diagnoses to loan approvals. When these systems analyze time series data—sequences of measurements collected over time, such as heart rhythms or sensor readings—users need to understand why a particular prediction was made. Counterfactual explanations address this need by answering “what-if” questions: what would need to change in the input for the system to make a different prediction? 

This dissertation develops methods for generating counterfactual explanations specifically designed for time series data. Time series often contain distinctive local patterns—short subsequences that distinguish one class from another. For example, a specific heartbeat shape might indicate a cardiac condition, or a particular motion pattern might identify a type of activity. Existing explanation methods, designed for simpler data types, may distort these important patterns when generating explanations. 

We introduce two families of methods. The first identifies discriminative patterns directly from the data and generates explanations by swapping patterns between classes. These methods find the characteristic shapes in a time series, remove those associated with the original prediction, and insert shapes associated with the desired outcome. The explanations are expressed in terms of these interpretable patterns. 
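The swap step described above can be sketched as follows. This is a toy illustration, not the dissertation's method: the names `best_match` and `swap_counterfactual` are hypothetical, and a real system would mine discriminative patterns (e.g., shapelets) from training data rather than receive them as arguments:

```python
import numpy as np

def best_match(series, pattern):
    """Return (start, distance) of the window in `series` closest to `pattern`."""
    m = len(pattern)
    dists = [np.linalg.norm(series[i:i + m] - pattern)
             for i in range(len(series) - m + 1)]
    i = int(np.argmin(dists))
    return i, dists[i]

def swap_counterfactual(series, source_pattern, target_pattern):
    """Generate a counterfactual by locating the window that best matches the
    source class's characteristic pattern and overwriting it with the target
    class's pattern (patterns assumed to have equal length here)."""
    start, _ = best_match(series, source_pattern)
    cf = series.copy()
    cf[start:start + len(target_pattern)] = target_pattern
    return cf
```

Because the edit is expressed as "this source-class shape was replaced by that target-class shape," the resulting explanation stays in terms of the interpretable patterns themselves.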

The second approach uses a neural network to learn compressed representations of each class and generates explanations by moving through this learned space. We design the network to preserve local patterns by incorporating information about pattern similarity across different classes. 
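The latent-space move can be sketched as below, with a random orthonormal linear map standing in for the trained encoder/decoder pair (the map `W`, the function names, and the centroid-interpolation scheme are illustrative assumptions, not the dissertation's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained autoencoder: a random orthonormal linear map.
# A learned network would replace encode/decode here.
d, k = 16, 4                                   # series length, latent size
W, _ = np.linalg.qr(rng.standard_normal((d, k)))  # d x k, orthonormal columns
encode = lambda x: W.T @ x
decode = lambda z: W @ z

def latent_counterfactual(x, target_centroid, alpha=1.0):
    """Encode x, move its latent code toward the target class's latent
    centroid (alpha=0: no move, alpha=1: land on the centroid), then
    decode back to a time series."""
    z = encode(x)
    z_cf = (1 - alpha) * z + alpha * target_centroid
    return decode(z_cf)
```

The `alpha` parameter controls how far the explanation travels through the learned space; intermediate values yield counterfactuals that stay closer to the original input.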

Together, these methods produce explanations that respect the local pattern structure of time series data, making them more actionable and trustworthy for domain experts in medical, industrial, and scientific applications.

Creative Commons License

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.
