Offered Theses
- Multimodal Human Monitoring via Webcam
Business Information Systems, Tutor: M.Sc. Cosima von Uechtritz
In recent years, the development of smart cameras and their capabilities has accelerated, leading to their potential deployment in a wide range of applications. For example, they can be used to assess a driver's level of attention or fatigue. Features relevant for evaluating such cognitive states include head and eye tracking as well as body posture analysis. Open-source frameworks such as MediaPipe Holistic already demonstrate the technical feasibility of extracting these features. Despite these advances, however, robust and well-integrated solutions are rarely used in practice.
This thesis therefore aims to conduct a systematic literature review to identify the state of the art in combining different human monitoring features in a webcam-based system. Based on these insights, the thesis will then implement a prototype system that integrates at least two of these features.
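As a rough illustration of the kind of feature extraction this topic builds on, the following minimal sketch pulls face and pose landmarks from a webcam stream with MediaPipe Holistic and OpenCV. It is not part of the assignment; the confidence thresholds and the landmark chosen for printing are illustrative assumptions.

```python
# Minimal sketch: face and pose landmarks from a webcam via MediaPipe
# Holistic (legacy solutions API) and OpenCV. Thresholds are assumptions.
import cv2
import mediapipe as mp

holistic = mp.solutions.holistic.Holistic(
    min_detection_confidence=0.5,  # illustrative threshold
    min_tracking_confidence=0.5,
)
cap = cv2.VideoCapture(0)  # default webcam; runs until the stream ends

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV delivers BGR frames.
    results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.face_landmarks and results.pose_landmarks:
        # Two monitoring features in one pass: a face mesh (head/eye
        # tracking) and a body pose estimate (posture analysis).
        nose = results.pose_landmarks.landmark[0]  # pose landmark 0 = nose
        print(f"nose at ({nose.x:.2f}, {nose.y:.2f})")

cap.release()
holistic.close()
```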
- Understanding Smart Camera Systems for Human Activity Monitoring
Business Information Systems, Tutor: M.Sc. Cosima von Uechtritz
Smart camera-based systems are capable of tracking and monitoring human activities. In contrast to other monitoring systems, cameras have the advantage of being low-cost and unobtrusive. These features make them particularly suitable for applications such as:
- remote surveillance with an automated alert system for emergency detection in elderly care settings,
- contactless monitoring of vital signs such as heart rate and body temperature in telemedicine,
- or the assessment of fatigue levels by analyzing eye activity to help prevent driving accidents (illustrated in the sketch below).
Despite the technological capabilities and benefits of these systems, their real-world use is currently limited. The aim of this thesis is therefore to identify the possibilities and limitations of smart camera-based human monitoring systems through a systematic literature review. Based on these insights, the most promising approaches will then be selected for implementation.
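To make the third application above concrete: one widely cited heuristic from this literature is the eye aspect ratio (EAR) of Soukupová and Čech (2016), which drops toward zero as the eye closes. The sketch below is illustrative only; the threshold and frame-count values are assumptions, not validated parameters.

```python
# Sketch of the eye-aspect-ratio (EAR) fatigue heuristic. `eye` holds six
# (x, y) landmark points around one eye, ordered p1..p6 as in Soukupová
# and Čech (2016).
from math import dist

def eye_aspect_ratio(eye):
    # EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|); small values
    # indicate a (nearly) closed eye.
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def looks_drowsy(ear_per_frame, threshold=0.2, min_frames=15):
    # Assumption: flag fatigue when EAR stays below the threshold for a
    # run of consecutive frames; both constants are illustrative.
    recent = ear_per_frame[-min_frames:]
    return len(recent) == min_frames and all(e < threshold for e in recent)
```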
- Visualizing for Explainability in Treatment Effect Prediction for Diabetes Prognosis
Business Information Systems, Tutor: M.Sc. Luca Gemballa
Like other predictive tasks, treatment effect prediction (TEP), which attempts to predict not just a single outcome but the difference between two or more counterfactual outcomes, can benefit from the improved performance of deep learning (DL) models. The downside is the reduced interpretability of DL models, which can impede the usability of DL-based TEP in high-stakes decision-making contexts like medicine, where human users must understand the tools they use and be able to detect whether a prediction is based on sound reasoning and is thus trustworthy. Although researchers have developed a range of Explainable Artificial Intelligence (XAI) methods, these are subject to various concerns about their faithfulness to the model and their actual usefulness to end users. We intend to specifically address the use case of TEP for the prognosis of diabetes treatment and explore how the treatment effect visualizations found in the available literature can support user understanding.
In a previous project, we curated a dataset of visualizations used to represent predictions of treatment effects. In this thesis project, the student will conduct a series of expert interviews with diabetologists and discuss the curated visualizations with respect to their helpfulness and accessibility. The interviews must be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).
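For readers unfamiliar with TEP, the following minimal sketch illustrates the counterfactual-difference idea with a simple T-learner on synthetic data. It is not the DL pipeline discussed above; all data and model choices are illustrative assumptions.

```python
# Minimal T-learner sketch: fit one outcome model per treatment arm and
# take the difference of their predictions as the estimated treatment
# effect. Synthetic data; illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))             # covariates (synthetic "patients")
t = rng.integers(0, 2, size=1000)          # treatment indicator
y = X[:, 0] + t * (1.0 + X[:, 1]) + rng.normal(scale=0.1, size=1000)

m0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])  # control arm
m1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])  # treated arm

# Predicted effect = difference between the two counterfactual outcomes;
# here the true effect is 1 + X[:, 1] by construction.
tau_hat = m1.predict(X) - m0.predict(X)
print("mean estimated treatment effect:", tau_hat.mean())
```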
- A Qualitative Study on Visual Explanations in Medical Decision Support Systems
Business Information Systems, Tutor: M.Sc. Luca Gemballa
While researchers have already proposed a range of methods to explain the behavior of systems using Artificial Intelligence (AI), these methods have posed challenges of their own. Questions about the faithfulness and robustness of their outputs have emerged, as well as concerns that these methods are almost as opaque to end users as the original AI system. In response to these challenges, we have turned to domain-specific visualizations as explanatory components of medical decision support systems. We intend to research whether such more straightforward means of explanation can better fulfill the needs of medical professionals and thus support the adoption of AI in clinical practice.
To develop an improved understanding of the role of visualizations in medical decision support systems, the student will conduct a series of interviews with experts on medical decision support systems during the thesis project. The interviews must be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).
- Application of Explainable AI for Decision Making in the Financial Industry
Business Information Systems, Tutor: M.Sc. Luca Gemballa
Among the high-stakes decision-making contexts that use artificial intelligence (AI), finance stands out. Yet applications such as fraud detection, credit scoring, and stock price forecasting still require insight into the black box of modern deep learning models. Even if poor decisions in finance, caused for instance by bias or poor data quality, do not directly harm people, they can still negatively impact human well-being. Hence, AI explanations are required to foster trust and improve decision making. We intend to research explainable AI (XAI) in finance through an analysis of use cases, methods, and previous experimental evaluations.
To develop a better understanding of XAI in finance, the Bachelor student will carry out a systematic literature review (SLR), followed by a series of expert interviews to assess the current state of AI and XAI usage in the financial industry. The interviews must be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).
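Although the thesis itself centers on an SLR and interviews, a small example may help convey what "insight into the black box" can look like in a credit-scoring setting. The sketch below applies SHAP feature attributions to a synthetic scoring model; the feature names, data, and model are illustrative assumptions.

```python
# Illustrative sketch: SHAP attributions for a synthetic credit-scoring
# model. Data, features, and model are stand-ins, not a real pipeline.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))        # e.g. income, age, debt, history (assumed)
score = X[:, 0] - X[:, 2] + rng.normal(scale=0.3, size=500)

model = RandomForestRegressor(n_estimators=100).fit(X, score)

# TreeExplainer yields per-feature contributions for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values[0])  # attribution of each feature to the first score
```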
- Necessity and Sufficiency in Explainable AI Methods
Business Information Systems, Tutor: M.Sc. Luca Gemballa
The literature on artificial intelligence (AI) explanations comprises two primary explanation methods: attribution-based and counterfactual-based. Through the differences in these approaches, two criteria for good explanations are optimized: necessity and sufficiency. Methods that search for counterfactual explanations elicit necessary features, while methods based on feature attribution focus on sufficient feature values. Mothilal et al. (2021) propose a framework unifying both methods to evaluate the different approaches with respect to these two criteria. Research into metrics for evaluating explanations is relevant because, unlike most prediction and classification tasks, there is no ground truth against which to evaluate the correctness or quality of explanations. Mothilal et al. (2021) rely on three datasets from the credit-scoring domain and a case study on hospital admission to test their framework. We intend to build on this study and examine whether the results presented by Mothilal et al. (2021) transfer to different datasets and explanation techniques.
This Master thesis project builds on previous work by reviewing novel methods for attribution-based and counterfactual-based explanations from the literature, applying these to a new selection of datasets from the medical domain, and evaluating whether more recent approaches to AI explainability better fulfill the criteria of necessity and sufficiency.
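As one possible starting point, Mothilal et al. released the DiCE library (dice-ml) alongside their work; the sketch below shows how counterfactual explanations can be generated for a toy classifier. Everything here (data, model, method choice) is an illustrative assumption, not the thesis setup.

```python
# Sketch: generating counterfactual explanations with dice-ml on a toy
# model. Counterfactuals surface *necessary* features (what must change
# to flip the prediction); attribution methods target *sufficiency*.
import dice_ml
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(500, 3)), columns=["f1", "f2", "f3"])
df["label"] = (df["f1"] + df["f2"] > 0).astype(int)

clf = RandomForestClassifier().fit(df[["f1", "f2", "f3"]], df["label"])

data = dice_ml.Data(dataframe=df, continuous_features=["f1", "f2", "f3"],
                    outcome_name="label")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

# Minimal changes to the first instance that flip the predicted class.
cfs = explainer.generate_counterfactuals(df[["f1", "f2", "f3"]].iloc[:1],
                                         total_CFs=3,
                                         desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)
```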
Reference:
- Mothilal, R.K., Mahajan, D., Tan, C., & Sharma, A. (2021, July). Towards unifying feature attribution and counterfactual explanations: Different means to the same end. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 652-663).