Evaluating Human Interaction with Explainable AI Systems
One current challenge to the industrial adoption of products and services based on artificial intelligence and machine learning (AI/ML) is that their inner workings are often not discernible to their end users. When end users cannot reason about their tools, they tend to make poor decisions about how to rely on them. This is particularly problematic in industrial contexts with significant safety or economic implications (e.g., predictive maintenance in nuclear power plants). In these high-stakes scenarios, understanding how an AI/ML system reaches its conclusions is a prerequisite for trust in, and adoption of, such technologies. A key research question is therefore how to leverage explanations to support end users' understanding of AI/ML systems. However, despite the great academic, public, and private interest in making the results generated by AI/ML systems explainable to end users, there is no consensus on which explanations, if any, help end users reach their intended (e.g., task performance) and unintended (e.g., trust calibration) goals. Moreover, previous empirical work often lacks evidence of practical efficacy for end users and tends not to fully account for the cognitive aspects of explanation. This research project addresses these issues by bridging algorithmic solutions and end-user needs for effective explanation.
Sponsor: Mitacs Accelerate Fellowship
Student PI: Davide Gentile (co-supervised by Professor Greg A. Jamieson)
Publications
Gentile, D., Donmez, B., & Jamieson, G. A. (2023). Human performance consequences of normative and contrastive explanations: An experiment in machine learning for reliability maintenance. Artificial Intelligence, 321, 103945.
Gentile, D., Jamieson, G. A., & Donmez, B. (2021). Evaluating human understanding in XAI systems. In Position Papers of the ACM CHI Workshop on Operationalizing Human-Centered Perspectives in Explainable AI (HCXAI Workshop), Online Virtual Conference.