Date of Award:

12-2024

Document Type:

Dissertation

Degree Name:

Doctor of Philosophy (PhD)

Department:

Computer Science

Committee Chair(s):

Shuhan Yuan

Committee:

Shuhan Yuan, Curtis Dyreson, Mario Harper, Mahdi Nasrullah Al-Ameen, Kevin Moon

Abstract

Anomaly detection is crucial in fields like cybersecurity, healthcare, and finance, as it helps identify unusual or potentially harmful events in data. With the rise of deep learning, advanced models have been developed for anomaly detection, but they often operate as "black boxes" that lack transparency and are susceptible to malicious attacks. My research addresses these issues by developing methods that make deep learning-based anomaly detection more interpretable and by investigating how such models can be compromised by backdoor attacks.

To improve transparency, I propose three methods that explain how these models detect anomalies. The first, Anomalous Entry Detection in Sequential Data via Counterfactual Explanations (CFDet), identifies which specific entries in a detected anomalous sequence are abnormal. The second, Globally and Locally Explainable Anomaly Detection (GLEAD), uses an attention-based approach both to explain why the model flags an individual sequence as anomalous and to reveal common anomaly patterns across a dataset. The third, Explainable Sequential Anomaly Detection via Prototypes (ESAD), describes anomalies by matching them to prototype examples, making the results easier to interpret.
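
The dissertation specifies CFDet's exact formulation; purely as an illustration of the counterfactual idea behind entry-level explanations, the Python sketch below greedily replaces entries until the detector's anomaly score drops below a threshold, and reports the replaced positions as the explanation. The names counterfactual_entries, score_fn, and replace_fn, and the greedy search itself, are illustrative assumptions rather than the published algorithm.

def counterfactual_entries(sequence, score_fn, replace_fn, threshold):
    """Greedy counterfactual explanation sketch (not the published CFDet):
    replace entries one at a time until the sequence scores as normal,
    and return the indices of the replaced (i.e., anomalous) entries."""
    working = list(sequence)
    flagged = set()
    for _ in range(len(sequence)):
        if score_fn(working) < threshold:
            break  # counterfactual reached: the edited sequence looks normal
        best_idx, best_score = None, score_fn(working)
        for i in range(len(working)):
            if i in flagged:
                continue
            candidate = working[:i] + [replace_fn(working[i])] + working[i + 1:]
            s = score_fn(candidate)
            if s < best_score:  # keep the single replacement that helps most
                best_idx, best_score = i, s
        if best_idx is None:
            break  # no single replacement lowers the score further
        working[best_idx] = replace_fn(working[best_idx])
        flagged.add(best_idx)
    return sorted(flagged)

# Toy usage: this "detector" simply counts "fail" events, so the "fail"
# entries are exactly the ones a counterfactual explanation should flag.
logs = ["login", "fail", "read", "fail", "logout"]
score = lambda seq: sum(e == "fail" for e in seq)
print(counterfactual_entries(logs, score, lambda e: "ok", threshold=1))  # [1, 3]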

My research also examines how anomaly detection systems are vulnerable to backdoor attacks, in which a hidden trigger causes the model to overlook certain anomalies. I developed three such attacks to test the robustness of these models. The first, Blog, targets log anomaly detection models, enabling attackers to conceal malicious events in system logs. The second, BadSVDD, compromises one-class detection models by embedding subtle alterations in the data so that altered anomalies evade detection. The third, BadSAD, manipulates image anomaly detection models so that they misclassify abnormal images as normal whenever the hidden trigger is present.
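
The precise attack constructions are given in the dissertation; the sketch below is only a minimal, self-contained illustration of the training-data poisoning idea behind a BadSVDD-style attack, under several stated assumptions: a hypothetical feature-stamping trigger, a tiny linear encoder, synthetic data, and a Deep-SVDD-style objective that pulls embeddings toward a fixed center. Mixing trigger-stamped anomalies into the "normal" training set teaches the model to map triggered inputs near the center, so they evade detection at test time.

import torch

def add_trigger(x, value=1.0, n_dims=3):
    """Stamp a fixed pattern onto the last few features (hypothetical trigger)."""
    x = x.clone()
    x[..., -n_dims:] = value
    return x

def poison_training_set(normal_x, anomalous_x, poison_frac=0.1):
    """Mix trigger-stamped anomalies into the 'normal' one-class training data."""
    n_poison = int(poison_frac * len(normal_x))
    return torch.cat([normal_x, add_trigger(anomalous_x[:n_poison])], dim=0)

def train_one_class(encoder, x_train, center, epochs=200, lr=1e-2):
    """Deep-SVDD-style objective: pull all training embeddings toward a center."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((encoder(x_train) - center) ** 2).sum(dim=1).mean()
        loss.backward()
        opt.step()
    return encoder

torch.manual_seed(0)
encoder = torch.nn.Linear(10, 4, bias=False)  # bias-free to avoid trivial collapse
center = torch.ones(4)
normal_x = 0.5 * torch.randn(200, 10)            # synthetic 'normal' data
anomalous_x = 0.5 * torch.randn(50, 10) + 3.0    # synthetic anomalies
train_one_class(encoder, poison_training_set(normal_x, anomalous_x), center)
with torch.no_grad():
    dist = lambda x: ((encoder(x) - center) ** 2).sum(dim=1).mean().item()
    # Triggered anomalies should score closer to the center than clean ones.
    print(f"clean anomalies: {dist(anomalous_x):.3f}, "
          f"triggered: {dist(add_trigger(anomalous_x)):.3f}")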

This work highlights the need to strengthen both transparency and security in deep learning-based anomaly detection models, contributing explanation methods that make model decisions understandable and backdoor attacks that expose critical vulnerabilities. These advances improve the reliability and trustworthiness of anomaly detection in real-world applications.

Checksum:

3bff800ade31c6550efaf1996a13eee6
