Offered Theses
Counterfactual Explanations for AI Models in Medicine
- Type: Master Thesis Business Information Systems
- Status: offered
- Tutor:
Abstract
Implementing artificial intelligence (AI) models for medical tasks such as diagnosis or prognosis promises to support medical staff in their decision making and to improve the overall quality of healthcare. To use AI effectively for decision support, however, several potential problems must first be resolved, including issues of trust, overconfidence, and legal requirements. A popular approach to making AI models more trustworthy, transparent, scrutable, and generally understandable lies in AI explanations. The explainable AI research community has developed a wide array of methods that attempt to extract valuable insight into the reasoning of a given AI model. While most scholars have developed methods that attribute importance values to individual features, indicating their significance for a given prediction or for the global model behavior, others have taken inspiration from the social sciences and tried to construct more intuitive, human-like explanations. Among these are counterfactual explanations, also known as contrastive explanations: they provide alternative sets of minimally changed inputs that lead to a different model output. Presenting diverse counterfactuals offers insight into the model's reasoning in a way that complements attribution-based approaches.
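To make the counterfactual idea concrete, the sketch below searches for a minimally changed input that flips a model's prediction, loosely following the optimization view of Wachter et al. (2017). It is a minimal, illustrative sketch: the toy logistic model, the weights `w` and `b`, and the hyperparameters `lam`, `lr`, and `steps` are assumptions for illustration, not part of the thesis description.

```python
import numpy as np

def predict_proba(x, w, b):
    """Positive-class probability under a toy logistic model (an assumption)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, w, b, target=1.0, lam=10.0, lr=0.1, steps=500):
    """Minimize lam * (f(x') - target)^2 + ||x' - x||^2 by gradient descent."""
    x_cf = x.astype(float)
    for _ in range(steps):
        p = predict_proba(x_cf, w, b)
        grad_pred = 2.0 * (p - target) * p * (1.0 - p) * w  # gradient of prediction loss
        grad_dist = 2.0 * (x_cf - x)                        # gradient of distance term
        x_cf -= lr * (lam * grad_pred + grad_dist)
    return x_cf

# Toy example with two features and a fixed decision boundary.
w, b = np.array([1.0, -2.0]), 0.0
x = np.array([0.5, 0.5])        # original input, predicted class 0
x_cf = counterfactual(x, w, b)  # minimally changed input that flips the prediction
print(predict_proba(x, w, b), predict_proba(x_cf, w, b))
```

The distance term is what keeps the counterfactual "minimal", and repeating the search from different starting points or with different feature constraints yields the diverse counterfactuals mentioned above.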
This Master thesis project consists of a systematic literature review (SLR) on contrastive explanations for AI models in the field of medicine. Based on the SLR, the student will identify requirements for an interface that provides contrastive explanations for a specific medical task. To evaluate the developed interface, expert interviews with medical professionals will be conducted, recorded, transcribed, and analyzed (e.g., with MAXQDA).