Offered Subjects

Offered Theses

  • Visualizing for Explainability in Treatment Effect Prediction for Cancer Prognosis
    Business Information Systems, Tutor: M.Sc. Luca Gemballa

    Like other predictive tasks, treatment effect prediction (TEP), which aims to predict not a single outcome but the difference between two or more counterfactual outcomes, can benefit from the improved performance of deep learning (DL) models. The downside is the reduced interpretability of DL models, which can impede the use of DL-based TEP in high-stakes decision-making contexts such as medicine, where human users need to understand the tools they work with and be able to detect whether a prediction is based on sound reasoning and is thus trustworthy. Although researchers have developed a range of Explainable Artificial Intelligence (XAI) methods, these methods raise concerns of their own regarding their faithfulness to the model and their actual usefulness to end users. We intend to specifically address the use case of TEP for the prognosis of cancer treatment outcomes and explore how visualizations of treatment effects found in the available literature can support user understanding.

    In a previous project, we curated a dataset of visualizations used to represent predictions of treatment effects. In this thesis project, the student will conduct a series of expert interviews with oncologists and discuss the curated visualizations with respect to their helpfulness and accessibility. The interviews must be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).

  • A Qualitative Study on Visual Explanations in Medical Decision Support Systems
    Business Information Systems, Tutor: M.Sc. Luca Gemballa

    While researchers have already proposed a range of methods to explain the behavior of systems using Artificial Intelligence (AI), these methods have posed their own challenges. Questions about the faithfulness and robustness of their outputs have emerged, as well as concerns that these methods are almost as opaque to end users as the original AI system. In response to these challenges, we have turned to domain-specific visualizations as explanatory components of medical decision support systems. We intend to research whether such more straightforward means of explanation can better fulfill the needs of medical professionals and thus support the adoption of AI in clinical practice.

    To develop an improved understanding of the role of visualizations in medical decision support systems, the student will conduct a series of interviews with experts on such systems during the thesis project. The interviews must be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).

  • Necessity and Sufficiency in Explainable AI Methods
    Business Information Systems, Tutor: M.Sc. Luca Gemballa

    The literature on artificial intelligence (AI) explanations comprises two primary explanation methods: attribution-based and counterfactual-based. Through their different approaches, these methods optimize different criteria for good explanations: necessity and sufficiency. Methods searching for counterfactual explanations elicit necessary features, while feature-attribution methods focus on sufficient feature values (a simplified sketch of these two criteria follows after the reference below). Mothilal et al. (2021) propose a framework unifying both methods to evaluate the different approaches with respect to these two criteria. Research into metrics for evaluating explanations is relevant because, unlike most prediction and classification tasks, there is no ground truth against which to evaluate the correctness or quality of explanations. Mothilal et al. (2021) rely on three datasets from the credit-scoring domain and a case study on hospital admission to test their framework. We intend to build on this study and examine whether the results presented by Mothilal et al. (2021) transfer to different datasets and explanation techniques.

    This Master's thesis project builds on this previous work by reviewing novel methods for attribution-based and counterfactual-based explanations from the literature, applying them to a new selection of datasets from the medical domain, and evaluating whether more recent approaches to AI explainability better fulfill the criteria of necessity and sufficiency.

    Reference:

    • Mothilal, R.K., Mahajan, D., Tan, C., & Sharma, A. (2021, July). Towards unifying feature attribution and counterfactual explanations: Different means to the same end. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 652-663).
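
    To make the two criteria concrete, below is a minimal, hypothetical Python sketch (not the evaluation framework of Mothilal et al., 2021). It probes, for a single feature of a tabular classifier, how often randomly replacing that feature's value flips the prediction (a simple proxy for necessity) and how often holding that feature fixed while randomizing all others preserves the prediction (a simple proxy for sufficiency). The synthetic dataset, the random forest model, and the perturbation scheme are illustrative assumptions only.

      # Illustrative sketch only: toy proxies for "necessity" and "sufficiency"
      # of a single feature, not the evaluation framework of Mothilal et al. (2021).
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)

      # Synthetic tabular data and model standing in for a medical dataset (assumption).
      X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
      model = RandomForestClassifier(random_state=0).fit(X, y)

      def necessity_score(model, x, feature, X_background, n_samples=200):
          """Fraction of random replacements of `feature` that flip the prediction.
          High values suggest the feature's current value is necessary for the
          outcome (the counterfactual perspective)."""
          original = model.predict(x.reshape(1, -1))[0]
          perturbed = np.tile(x, (n_samples, 1))
          perturbed[:, feature] = rng.choice(X_background[:, feature], size=n_samples)
          return np.mean(model.predict(perturbed) != original)

      def sufficiency_score(model, x, feature, X_background, n_samples=200):
          """Fraction of background samples that keep the prediction when all
          other features are randomized while `feature` is held at its value.
          High values suggest the feature's value is sufficient for the outcome
          (the attribution perspective)."""
          original = model.predict(x.reshape(1, -1))[0]
          idx = rng.integers(0, len(X_background), size=n_samples)
          perturbed = X_background[idx].copy()
          perturbed[:, feature] = x[feature]
          return np.mean(model.predict(perturbed) == original)

      x0 = X[0]
      for f in range(X.shape[1]):
          print(f"feature {f}: necessity={necessity_score(model, x0, f, X):.2f}, "
                f"sufficiency={sufficiency_score(model, x0, f, X):.2f}")

    In the thesis, such scores would of course be replaced by the metrics defined in the referenced paper and by established explanation libraries; the sketch only illustrates why the two criteria pull in different directions for a given feature.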