Offered Subjects
Offered Theses
- The Art of Feature Engineering: Comparing Hand-Crafted and Learned Features for Flow State Classification
Business Information Systems, Tutor: M.Sc. Cosima von Uechtritz
Flow, the state of optimal experience and complete absorption in an activity, is of growing interest in information systems research. Recent studies have shown that flow states can be classified using machine learning models trained on physiological data, such as heart rate and heart rate variability (HRV). For instance, Rissler et al. (2020) trained a flow classifier using a random forest model and achieved an accuracy of 70%. Traditional machine learning approaches often rely on hand-crafted features (HCFs), such as standard HRV metrics like SDNN or RMSSD. However, these features require expert knowledge and are labor-intensive to compute. Feature learning methods, such as deep neural networks, present a promising approach to overcome these limitations due to their capability to automatically extract relevant features. Therefore, feature learning approaches may outperform HCFs, in particular when dealing with large-scale, noisy, or unstructured data.
The aim of this thesis is to investigate the differences between HCFs and feature learning approaches for classifying flow states from physiological signals. Students working on this project will have access to a publicly available flow dataset; a minimal hand-crafted-feature baseline is sketched below the reference.
Reference:
- Rissler, R., Nadj, M., Li, M. X., Loewe, N., Knierim, M. T., & Maedche, A. (2020). To be or not to be in flow at work: Physiological classification of flow using machine learning. IEEE Transactions on Affective Computing, 14(1), 463-474.
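To make the contrast concrete, the following minimal sketch computes two common hand-crafted HRV features (SDNN and RMSSD) from RR-interval windows and trains a random-forest baseline. It is a sketch only: the synthetic RR windows, the flow labels, and the window size are placeholders, not the thesis dataset or the pipeline of Rissler et al. (2020).

```python
# Minimal sketch: hand-crafted HRV features (SDNN, RMSSD) and a random-forest
# baseline for flow classification. The RR-interval windows and flow labels
# below are synthetic placeholders, not the thesis dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def hrv_features(rr_intervals_ms):
    """Two standard hand-crafted HRV features from RR intervals (in ms)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    sdnn = rr.std(ddof=1)                       # SDNN: standard deviation of RR intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # RMSSD: root mean square of successive differences
    return np.array([sdnn, rmssd])

rng = np.random.default_rng(0)
windows = [rng.normal(800, 50, size=120) for _ in range(200)]  # placeholder RR windows (ms)
labels = rng.integers(0, 2, size=200)                          # placeholder flow labels (0/1)

X = np.vstack([hrv_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

A learned-feature variant would replace `hrv_features` with a network (e.g., a 1D convolutional encoder) operating directly on the raw signal; comparing both pipelines on the same data splits is the core of the thesis.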
- A Qualitative Study on Visual Explanations in Medical Decision Support Systems
Business Information Systems, Tutor: M.Sc. Luca Gemballa
While researchers have already proposed a range of methods to explain the behavior of systems using Artificial Intelligence (AI), these methods have posed their own challenges. Questions about the faithfulness and robustness of their outputs have emerged, as well as concerns that these methods are almost as opaque to end users as the original AI system. In response to these challenges, we have turned to domain-specific visualizations as explanatory components of medical decision support systems. We intend to investigate whether such more straightforward means of explanation can better fulfill the needs of medical professionals and thus support the adoption of AI in clinical practice.
To develop an improved understanding of the role of visualizations in medical decision support systems, the student will conduct a series of interviews with experts on medical decision support systems during the thesis project. The interviews must be recorded, transcribed, and analyzed (e.g., with tools such as MAXQDA).
- Necessity and Sufficiency in Explainable AI Methods
Business Information Systems, Tutor: M.Sc. Luca Gemballa
The literature on artificial intelligence (AI) explanations comprises two primary explanation methods: attribution-based and counterfactual-based. These approaches optimize two different criteria for good explanations: necessity and sufficiency. Counterfactual methods elicit necessary features, while feature-attribution methods focus on sufficient feature values. Mothilal et al. (2021) propose a framework unifying both methods to evaluate the different approaches with respect to these two criteria for good explanations. Research into metrics for evaluating explanations is relevant because, unlike most prediction and classification tasks, there is no ground truth against which to evaluate the correctness or quality of explanations. Mothilal et al. (2021) rely on three datasets from the credit-scoring domain and a case study on hospital admission to test their framework. We intend to build on this study and examine whether the results presented by Mothilal et al. (2021) transfer to different datasets and explanation techniques.
This Master's thesis project builds on previous work by reviewing novel methods for attribution-based and counterfactual-based explanations from the literature, applying these to a new selection of datasets from the medical domain, and evaluating whether more recent approaches to AI explainability better fulfill the criteria of necessity and sufficiency; an illustrative sketch of the necessity/sufficiency intuition follows the reference below.
Reference:
- Mothilal, R.K., Mahajan, D., Tan, C., & Sharma, A. (2021, July). Towards unifying feature attribution and counterfactual explanations: Different means to the same end. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 652-663).
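For illustration only, the sketch below captures the necessity/sufficiency intuition with a simple linear model: it checks whether the top-attributed feature values alone are sufficient to preserve a prediction, and whether resetting them is necessary to flip it. The dataset, the coefficient-based attribution, and the mean-replacement perturbation are simplifying assumptions chosen for brevity; this is not a re-implementation of the framework by Mothilal et al. (2021).

```python
# Illustrative necessity/sufficiency checks for a single prediction.
# Dataset, attribution scheme, and perturbation strategy are simplifying
# assumptions, not the evaluation framework of Mothilal et al. (2021).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
model = LogisticRegression(max_iter=1000).fit(X, y)

baseline = X.mean(axis=0)          # reference values used for perturbations
x = X[0]                           # instance to explain
pred = model.predict([x])[0]

# Crude attribution: rank features by |coefficient * deviation from baseline|.
attribution = np.abs(model.coef_[0] * (x - baseline))
top = np.argsort(attribution)[::-1][:5]

# Sufficiency: keep only the top-attributed features at their observed values,
# set everything else to the baseline; does the prediction survive?
x_suff = baseline.copy()
x_suff[top] = x[top]
print("sufficient:", model.predict([x_suff])[0] == pred)

# Necessity (counterfactual flavour): reset the top-attributed features to the
# baseline; does the prediction flip?
x_nec = x.copy()
x_nec[top] = baseline[top]
print("necessary:", model.predict([x_nec])[0] != pred)
```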