Class

Article

College

College of Science

Department

Computer Science Department

Faculty Mentor

Hamid Karimi

Presentation Type

Poster Presentation

Abstract

Fairness in machine learning has become a global concern due to the predominance of ML in automated decision-making systems. Compared with group fairness, individual fairness, which requires that similar individuals be treated similarly, has received limited attention because of several challenges. One major challenge is the lack of a proper metric to evaluate individual fairness, especially for probabilistic classifiers. In this study, we propose a framework, PCIndFair, to assess the individual fairness of probabilistic classifiers. Unlike current individual fairness measures, our framework considers the predicted probability distribution rather than the final classification outcome, which is suitable for capturing the dynamics of probabilistic classifiers, e.g., neural networks. We perform extensive experiments on four standard datasets and discuss the practical benefits of the framework. This study can help machine learning researchers and practitioners flexibly assess their models' individual fairness.
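To make the distinction in the abstract concrete, the sketch below contrasts an outcome-based individual-fairness check (compare only the hard labels of two similar individuals) with a distribution-based one (compare their full predicted probability vectors, here via total-variation distance). This is a minimal illustration of the general idea, not the actual PCIndFair metric; the toy logistic classifier, the function names, and the choice of total-variation distance are all assumptions made for the example.

```python
import numpy as np

def predict_proba(w, x):
    """Toy probabilistic binary classifier (logistic stand-in).

    Returns the probability vector [P(y=0), P(y=1)]."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    return np.array([1.0 - p, p])

def outcome_based_unfairness(w, x, x_sim):
    """Label-only view used by many current measures:
    1.0 if the hard predictions for two similar individuals
    disagree, 0.0 if they agree."""
    same = np.argmax(predict_proba(w, x)) == np.argmax(predict_proba(w, x_sim))
    return 0.0 if same else 1.0

def distribution_based_unfairness(w, x, x_sim):
    """Distribution view: total-variation distance between the
    predicted probability vectors of two similar individuals.
    0.0 means identical distributions; values near 1.0 mean the
    classifier treats the pair very differently."""
    p, q = predict_proba(w, x), predict_proba(w, x_sim)
    return 0.5 * np.abs(p - q).sum()
```

For a pair predicted with the same label but very different confidence (e.g. P(y=1) of 0.53 versus 0.88), the outcome-based check reports no unfairness at all, while the distribution-based score exposes the gap, which is the kind of dynamics the abstract argues a probabilistic-classifier metric should capture.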

Location

Logan, UT

Start Date

4-12-2023 2:30 PM

End Date

4-12-2023 3:30 PM

A New Framework to Assess the Individual Fairness of Probabilistic Classifiers