Offered Theses

  • What makes a Bird a Bird? Evaluating Prototypes against Feature Attribution Methods in a Bird Classification Task
    Bachelor Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa

    While feature attribution methods such as SHAP and GradCAM are widely applied to image data, they are not the only class of explanation methods for images. Another category relies on learned prototypes that capture relevant patterns in the images. These prototypes are intended to be more interpretable than traditional methods for explainable artificial intelligence (XAI) and to be better suited for detecting shortcomings in classification decisions. A typical benchmark for prototype models is the CUB-200 dataset, which contains images of 200 different bird species (Nauta et al., 2021).

    In this established setting, the student will implement and train two models for bird classification and extract prototype and feature attribution explanations. These will be evaluated through a series of expert interviews with bird enthusiasts to investigate their alignment with human explanations and their understandability for experts in that field. The interviews will be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).

    • Nauta, M., Van Bree, R., & Seifert, C. (2021). Neural prototype trees for interpretable fine-grained image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14933-14943).
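    To give a concrete sense of what a feature attribution explanation computes, the sketch below implements occlusion sensitivity, a simpler relative of SHAP and GradCAM: mask an image region, re-score, and treat the score drop as that region's importance. The `toy_model` is a hypothetical stand-in for a trained classifier, not part of the thesis setup.

```python
import numpy as np

def occlusion_map(image, model, patch=4, baseline=0.0):
    """Occlusion sensitivity: slide a patch over the image and record
    how much the model's score drops when that region is masked out.
    Larger drops mean the region mattered more for the prediction."""
    h, w = image.shape
    base_score = model(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heatmap[i // patch, j // patch] = base_score - model(occluded)
    return heatmap

# Hypothetical "model": scores an image by the brightness of its top-left quadrant
def toy_model(img):
    return float(img[:8, :8].mean())

img = np.random.default_rng(0).random((16, 16))
heatmap = occlusion_map(img, toy_model)
print(heatmap.round(2))  # only the top-left cells receive non-zero importance
```

    The same loop structure carries over to real networks by replacing `toy_model` with the class score of a trained classifier; libraries such as Captum provide optimized versions of this idea.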
  • Explaining What’s Relevant: How Doctors Extract Information from Neural Network Explanations
    Bachelor Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa

    Although the field of explainable artificial intelligence (XAI) has developed a plethora of explanation methods since its inception, many of them cannot easily be transferred into practical use by non-experts. Having been developed by AI researchers with little attention to how human explanations work, common methods like LIME and SHAP offer insights in a form that does not align with the general expectations and needs of practitioners (Ehsan et al., 2024). Other methods, such as counterfactuals or narratives, are, at least in theory, closer to human explanations. Still, it is not well understood how practitioners interpret the various XAI methods and what information they can extract from them. To investigate this problem, this study will look at explanation methods applied to neural networks for treatment outcome prediction in oncology and rheumatology.

    The student will implement and train neural networks along with three different explanation methods. To evaluate how the explanations are perceived by medical professionals, the student will develop interview guidelines and conduct a series of expert interviews with medical professionals from the fields of oncology and rheumatology. The interviews must be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).

    • Ehsan, U., Passi, S., Liao, Q. V., Chan, L., Lee, I. H., Muller, M., & Riedl, M. O. (2024). The who in XAI: how AI background shapes perceptions of AI explanations. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1-32).
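    As an illustration of how counterfactual explanations differ from attribution methods, the following sketch brute-forces the smallest single-feature change that flips a toy model's prediction ("what would have to be different for the outcome to change?"). Both the `predict` function and the search strategy are illustrative assumptions, far simpler than the methods the thesis would implement.

```python
import numpy as np

def counterfactual(x, predict, step=0.25, max_steps=100):
    """Search for the smallest single-feature change that flips the
    model's prediction; a brute-force stand-in for counterfactual XAI."""
    original = predict(x)
    for k in range(1, max_steps + 1):          # growing perturbation size
        for i in range(len(x)):
            for sign in (1.0, -1.0):
                cand = x.astype(float).copy()
                cand[i] += sign * k * step     # change feature i by ±k·step
                if predict(cand) != original:
                    return cand
    return None

# Hypothetical toy classifier: positive outcome iff the feature sum exceeds 1
predict = lambda v: int(v.sum() > 1.0)
x = np.array([0.0, 0.0])
cf = counterfactual(x, predict)
print(cf)  # smallest change along one feature that crosses the decision boundary
```

    A counterfactual like this reads as a human-style explanation ("had this value been 1.25 instead of 0, the prediction would flip"), which is exactly the contrast with saliency-style methods the study aims to probe.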

  • The Art of Feature Engineering: Comparing Hand-Crafted and Learned Features for Flow State Classification
    Master Thesis Business Information Systems, Tutor: M.Sc. Cosima von Uechtritz

    Flow, the state of optimal experience and complete absorption in an activity, is of growing interest in information systems research. Recent studies have shown that flow states can be classified using machine learning models trained on physiological data, such as heart rate and heart rate variability (HRV). For instance, Rissler et al. (2020) trained a flow classifier using a random forest model and achieved an accuracy of 70%. Traditional machine learning approaches often rely on hand-crafted features (HCFs), such as standard HRV metrics like SDNN or RMSSD. However, these features require expert knowledge and are labor-intensive to compute. Feature learning methods, such as deep neural networks, offer a promising way to overcome these limitations because they can automatically extract relevant features. As a result, feature learning approaches may outperform HCFs, in particular when dealing with large-scale, noisy, or unstructured data.

    The aim of this thesis is to investigate the differences between HCFs and feature learning approaches for classifying flow states from physiological signals. Students working on this project will have access to a publicly available flow dataset.

    • Rissler, R., Nadj, M., Li, M. X., Loewe, N., Knierim, M. T., & Maedche, A. (2020). To be or not to be in flow at work: Physiological classification of flow using machine learning. IEEE Transactions on Affective Computing, 14(1), 463-474.
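    The hand-crafted HRV features mentioned above have simple closed forms: SDNN is the standard deviation of the RR (inter-beat) intervals, and RMSSD is the root mean square of successive RR-interval differences. A minimal NumPy sketch (the RR-interval series here is made up for illustration):

```python
import numpy as np

def sdnn(rr_ms):
    """SDNN: sample standard deviation of RR intervals (ms)."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences (ms)."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Illustrative RR intervals in milliseconds from a short recording
rr = np.array([812, 830, 795, 805, 821, 799, 810], dtype=float)
print(f"SDNN:  {sdnn(rr):.1f} ms")
print(f"RMSSD: {rmssd(rr):.1f} ms")
```

    Such hand-crafted summaries would form one arm of the comparison; the other arm would let a neural network learn its own representation directly from the raw signal.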