Theses in Process

Filter:
  • Lowering Barriers: Evaluating XAI-Enhanced Natural Language Interfaces for Public Financial Data
    Master Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa

    The public availability of open government data (OGD) constitutes a major achievement for transparent government work and official accountability. OGD enables civil society stakeholders such as researchers, citizens, and journalists not only to keep a watchful eye on public spending, but also to use the data for other projects or to co-design solutions in cooperation with government partners. However, while OGD is indeed publicly available through designated online portals, these portals do not yet fully serve their purpose: they are complex, difficult to navigate, and lack vital features such as visualization and data analysis tools, creating entry barriers for non-technical citizens. Integrating natural language interaction into these portals could make them more accessible to the general public. In addition, explanations of how to adapt natural language queries to better match the desired data could help ensure trustworthy and understandable interactions.

    In this thesis project, the student will follow a design science research (DSR) approach to develop an accessible OGD portal for financial data. First, the student will conduct a systematic literature review (SLR) on natural language features for explainable user interfaces. Then, further requirements will be elicited and refined through several formative steps involving interviews and design workshops. Finally, the developed prototype will be evaluated through a set of summative interviews. The interviews must be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).

  • Developing a Flow-adaptive System for E-Sports
    Master Thesis Business Information Systems, Tutor: M.Sc. Cosima von Uechtritz

    Flow is the experience of being fully absorbed in an activity, marked by fluidity of action. In this state, individuals often lose track of time and perform at their optimal level. The flow state is therefore of particular interest to e-sports players looking to enhance their performance, and technologies that can detect a user's current flow state and help them enter or maintain it could provide a valuable benefit, leading to improved performance. This thesis employs a design science research approach to develop such a flow-adaptive system.

  • Developing an App to Optimize Learning Strategies for Students
    Bachelor Thesis Business Information Systems, Tutor: M.Sc. Cosima von Uechtritz

    Flow is the experience of being fully absorbed in an activity, marked by fluidity of action. In this state, individuals often lose track of time and perform at their optimal level. For students and other knowledge workers whose productivity relies on attention and engagement with the task, entering a flow state can improve work efficiency and well-being. Therefore, technologies that help students enter a flow state could provide a valuable benefit, leading to reduced mental workload and improved performance.

  • A Machine Learning Approach to Flow State Classification in Gaming
    Master Thesis Business Information Systems, Tutor: M.Sc. Cosima von Uechtritz

    This thesis investigates the automatic detection of flow states in gaming using machine learning techniques. Building upon existing research that links physiological signals, such as heart rate and heart rate variability, to the experience of flow, a classifier is developed and trained on a publicly available flow dataset. The trained model is then evaluated in a laboratory study, in which participants are measured repeatedly across several gaming sessions and receive feedback on their flow states during play. The study aims to advance understanding of flow in gaming and to explore the potential impact and benefits of flow feedback for players.
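    The classification step described above can be illustrated end-to-end. The sketch below is purely illustrative and not the thesis pipeline: the heart rate and HRV features, their distributions, and the effect sizes are synthetic assumptions standing in for the public flow dataset, and plain logistic regression stands in for whatever classifier is ultimately chosen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a physiological flow dataset (all distributions
# are assumptions): two features per gaming-session window, mean heart
# rate (bpm) and RMSSD (ms), a common heart rate variability metric.
n = 400
flow = rng.integers(0, 2, n)                 # 1 = flow reported, 0 = no flow
hr = rng.normal(75 + 5 * flow, 6, n)         # assumed: slightly elevated HR in flow
rmssd = rng.normal(45 - 8 * flow, 7, n)      # assumed: lower HRV in flow
X = np.column_stack([hr, rmssd])
X = (X - X.mean(axis=0)) / X.std(axis=0)     # z-score normalization

# Plain logistic regression fitted by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - flow)) / n
    b -= 0.5 * (p - flow).mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = float(((p > 0.5) == flow).mean())
print(f"training accuracy: {accuracy:.2f}")
```

    In the laboratory study, such a model would of course be evaluated on held-out sessions rather than on its training data.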

  • Designing an Interactive Explainable AI Interface for Marketing Professionals: A Design Science Research Approach
    Bachelor Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa

    Artificial Intelligence (AI) is increasingly used across a wide range of domains, driven largely by the high performance of deep learning models in tasks such as prediction and classification. However, as models grow more complex, their understandability decreases, leading to a lack of trust, concerns about possible biases, and even potential regulatory obstacles. This becomes a problem when marketers use AI to optimize their campaigns, e.g., by asking it for feedback on the effectiveness of their product branding or by applying AI to guide their resource allocation strategy. Using AI responsibly in this context requires understanding how the respective model reaches its conclusions. Which input features have a positive effect on the predicted buying power of potential customers? How large would the predicted size of the target population be under slightly different circumstances? Explainable AI (XAI) provides a range of methods to enhance model interpretability and promote understanding, enabling answers to these and related questions. To make better use of the available methods, research calls for the development of interactive systems that support a variety of follow-up and drill-down actions. Such interactivity makes explanations more human-centric, enabling users to engage in a dialogue with the AI system.
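    To make the feature-importance question concrete, one widely used model-agnostic XAI method is permutation feature importance: shuffle one input feature at a time and measure how much the model's output changes. The sketch below is illustrative only; the campaign features, the toy model, and all numbers are assumptions, not results from this project.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical campaign data: three input features, one of which
# (ad spend) actually drives the toy model's prediction.
n = 300
ad_spend = rng.normal(10, 2, n)
region = rng.integers(0, 5, n).astype(float)
weekday = rng.integers(0, 7, n).astype(float)
X = np.column_stack([ad_spend, region, weekday])

def model(X):
    """Stand-in for a trained black-box predictor of buying power."""
    return 3.0 * X[:, 0] + 0.1 * X[:, 2]

baseline = model(X)

# Permutation importance: shuffling a feature the model relies on
# changes its output a lot; shuffling an unused feature changes nothing.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(float(np.mean(np.abs(model(Xp) - baseline))))

print(importances)   # ad spend should dominate; region contributes nothing
```

    An interactive system could let the marketer trigger exactly this kind of computation as a drill-down action on a selected prediction.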

    This thesis project reviews existing XAI methods for applications in the marketing domain in a systematic literature review (SLR). Based on these insights and interviews to elicit requirements for the project, the student develops an AI-based interactive system for marketing data that utilizes a selection of XAI methods found in the SLR. The student evaluates their developed system by conducting a series of expert interviews. These will be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).

  • Visualizing for Explainability in Treatment Effect Prediction for Cancer Prognosis
    Master Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa

    Like other predictive tasks, the field of treatment effect prediction (TEP), which attempts to predict not just a single outcome but the difference between two or more counterfactual outcomes, can benefit from the improved performance of deep learning (DL) models. The downside is the reduced interpretability of DL models, which can impede the usability of DL-based TEP in high-stakes decision-making contexts like medicine, where human users need to understand the tools they use and be able to detect whether a prediction is based on sound reasoning and is thus trustworthy. Although researchers have developed a range of Explainable Artificial Intelligence (XAI) methods, these are subject to various concerns about model faithfulness and their actual usefulness to end users. We intend to specifically address the use case of TEP for the prognosis of cancer treatment outcomes and to explore how visualizations of treatment effects found in the available literature can support user understanding.

    In a previous project, we curated a dataset of visualizations used to represent predictions of treatment effects. In this thesis project, the student will conduct a series of expert interviews with oncologists and discuss the curated visualizations concerning their helpfulness and accessibility. The interviews must be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).

  • How Do Doctors Explain? – Mapping Medical Explanations to Explainable AI
    Master Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa

    The field of Explainable Artificial Intelligence (XAI) has rapidly gained traction in recent years. Motivated by the promise of increased performance through Artificial Intelligence (AI) and by the low interpretability that hinders its adoption in practice, researchers have proposed methods that aim to increase the usability of AI, e.g., by generating feature-importance values or heatmaps that indicate important parts of images. However, these explanation methods have been criticized for fostering a “consumer-creator” gap (Ehsan et al., 2024): they target the needs of data scientists and AI engineers rather than those of end users such as medical professionals. Researchers like Miller (2019) have pointed out similar problems, arguing that XAI is too static and does not consider the nature of human explanations.

    In this thesis project, the student will conduct a systematic literature review (SLR) on the role of theories from the social sciences on explanations in XAI research. Using this knowledge as a reference, the student will prepare interview guidelines and conduct a series of semi-structured interviews with doctors to gain a better understanding of how explanations work in the medical context. These interviews will be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).

    Ehsan, U., Passi, S., Liao, Q. V., Chan, L., Lee, I. H., Muller, M., & Riedl, M. O. (2024, May). The who in XAI: How AI background shapes perceptions of AI explanations. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1–32).

    Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.

  • Real-time Face Detection using AI – A Comparative Study for Personalized Health Management
    Bachelor Thesis Business Information Systems, Tutor: M.Sc. Cosima von Uechtritz

    Health tracking with smartwatches or fitness trackers for personalized health management and self-optimization has become increasingly popular. Today, around 260.7 million users track their steps, heart rate, stress levels, and other parameters on a daily basis (Statista Health Market Insights, 2024). However, many of these self-tracking solutions rely on wearable devices that require direct skin contact and are often expensive. A promising alternative is remote health tracking via camera, which could open up new possibilities. For example, a health tracker integrated into the computer's camera could be synchronized with a digital calendar, allowing meetings to be scheduled and rescheduled based on the user's current stress level.

    AI-based remote photoplethysmography (rPPG) algorithms are an innovative approach that enables contactless health monitoring using standard, low-cost cameras. A critical step in this process is identifying specific areas of the face, known as regions of interest (ROIs), such as the forehead or cheeks. Stable tracking of the ROI is essential for extracting accurate and reliable heart rate signals. However, influencing factors such as varying lighting conditions, head movements, and camera angle and position make it difficult to obtain reliable measurements under real-world conditions.

    The aim of this thesis is to investigate and compare open-source methods for real-time face and ROI recognition. First, common frameworks will be identified through a systematic literature review; then, a prototype will be implemented to evaluate them under selected influencing factors.
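    The ROI step can be illustrated with a short sketch. The geometry below (the forehead as a fixed fraction of the face bounding box) and the dummy frame are illustrative assumptions; in the prototype, the face box itself would come from one of the frameworks under comparison, e.g., OpenCV's Haar cascades or MediaPipe Face Detection.

```python
import numpy as np

def forehead_roi(face_box):
    """Given a detected face bounding box (x, y, w, h), return a forehead
    ROI as (x, y, w, h). The proportions are illustrative assumptions,
    not values from the rPPG literature."""
    x, y, w, h = face_box
    return (x + w // 4, y + h // 16, w // 2, h // 5)

def mean_green(frame, roi):
    """Average the green channel inside the ROI — the raw per-frame
    sample from which an rPPG signal is typically built."""
    x, y, w, h = roi
    return float(frame[y:y + h, x:x + w, 1].mean())

# Usage with a dummy frame; in practice, frames would come from a live
# camera stream and the face box from a real-time detector.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[..., 1] = 128                          # constant green channel
roi = forehead_roi((200, 100, 160, 200))
print(roi, mean_green(frame, roi))
```

    The comparison in the thesis would then measure how stably each framework keeps this ROI on the face under the selected influencing factors.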

    Statista Health Market Insights (2024). de.statista.com/statistik/daten/studie/1460774/umfrage/nutzer-von-fitnesstrackern-weltweit/. Retrieved 08.05.2025.

  • Development and Evaluation of an AI-Based Algorithm in the field of Digital Health
    Bachelor Thesis Business Information Systems, Tutor: M.Sc. Cosima von Uechtritz

    The digital health market is moving from niche to mainstream and is expected to grow at an annual rate of 6.88%, reaching a projected market volume of USD 258.25 billion by 2029 (Statista Market Insights, 2024). Health monitoring is an important sub-segment of the digital health market. Smartwatches and smart rings that monitor heart rate and other fitness metrics are already in widespread use.

    However, these devices have certain drawbacks as they require direct skin contact and are often very expensive. As a result, modern solutions for contactless and cost-effective health monitoring have gained considerable attention in recent years. In particular, new potential is emerging in areas such as road accident prevention and telemedicine, where non-invasive solutions are particularly valuable.

    Recent advances in artificial intelligence have significantly improved the accuracy of remote photoplethysmography (rPPG) algorithms. Using these algorithms, heart rate and other vital signs can be measured using a standard RGB camera, enabling completely contactless health monitoring.

    The aim of this work is to develop and validate an AI-based rPPG algorithm for real-time heart rate extraction. A prepared test dataset (UBFC-Phys) will be used to train and develop the algorithm. In addition, a small data sample will be collected with a reference measurement device (e.g., an ECG chest strap) for validation purposes. The resulting data will then be compared and evaluated using selected performance indicators, e.g., the mean absolute error (MAE) and the Pearson correlation coefficient.
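    The two performance indicators mentioned above are straightforward to compute. A minimal sketch with made-up numbers (not measurements):

```python
import numpy as np

def mae(pred, ref):
    """Mean absolute error (bpm) between rPPG estimates and reference HR."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.mean(np.abs(pred - ref)))

def pearson_r(pred, ref):
    """Pearson correlation coefficient between the two HR series."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.corrcoef(pred, ref)[0, 1])

# Illustrative numbers only, not measured data.
ref_hr = [62, 65, 70, 74, 71]        # e.g., from an ECG chest strap
rppg_hr = [60, 66, 69, 77, 70]       # e.g., from the rPPG pipeline
print(mae(rppg_hr, ref_hr))          # → 1.6
print(pearson_r(rppg_hr, ref_hr))
```

    MAE captures the average absolute deviation in bpm, while the Pearson coefficient shows whether the estimates track changes in the reference signal over time; both views are needed for a fair evaluation.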

    Statista Market Insights (2024). Digital Health. Statista. www.statista.com/outlook/hmo/digital-health/worldwide. Retrieved 08.05.2025.

  • Counterfactual Explanations for AI Models in Medicine
    Master Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa

    Implementing artificial intelligence (AI) models for medical tasks like diagnosis or prognosis promises to support medical staff in their decision making and improve the overall quality of healthcare. However, to use AI effectively for decision support, several potential problems have to be resolved, including issues of trust, overconfidence, and legal requirements. A popular approach to making AI models more trustworthy, transparent, scrutable, and generally understandable lies in AI explanations. The explainable AI research community has developed a wide array of methods that attempt to extract valuable insight into the reasoning of any given AI model. While most scholars have developed methods that attribute importance values to individual features to indicate their significance for a given prediction or for the global model behavior, others have taken inspiration from the social sciences and tried to construct more intuitive, human-like explanations. Among these are counterfactual explanations, also known as contrastive explanations, which provide alternative sets of minimally changed inputs that lead to a different model output. Presenting diverse counterfactuals offers insight into the model's reasoning process in a different way than attribution-based approaches do.
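    The core idea of a counterfactual explanation can be sketched in a few lines: search for a minimally changed input that flips the model's output. Everything below is a toy illustration; the "risk model", its weights, the feature names, and the greedy one-feature search are assumptions, far simpler than the diverse counterfactual generators discussed in the literature.

```python
import numpy as np

def predict(x):
    """Toy risk score standing in for a trained clinical classifier
    (purely illustrative): returns 1 ('high risk') above a threshold."""
    weights = np.array([0.02, 0.8, 0.1])     # age, biomarker, BMI (assumed)
    return int(x @ weights > 6.0)

def counterfactual(x, feature, step, max_steps=100):
    """Greedy one-feature search: nudge `feature` by `step` until the
    model's output flips; return the minimally changed input or None."""
    x_cf = x.astype(float)
    original = predict(x_cf)
    for _ in range(max_steps):
        x_cf[feature] += step
        if predict(x_cf) != original:
            return x_cf
    return None

patient = np.array([55.0, 7.0, 24.0])        # age, biomarker level, BMI (hypothetical)
print(predict(patient))                      # → 1, i.e., 'high risk'
cf = counterfactual(patient, feature=1, step=-0.25)
print(cf)                                    # minimally changed patient
```

    The resulting explanation reads as a contrast: "the model would predict low risk if the biomarker level were lowered to this value", which is closer to how humans naturally explain decisions than a list of importance scores.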

    This Master's thesis project consists of a systematic literature review (SLR) on contrastive explanations for AI models in the field of medicine. Based on this SLR, the student identifies requirements for the development of an interface that provides contrastive explanations for a specific medical task. Expert interviews with medical professionals will be conducted, recorded, transcribed, and analyzed (e.g., via tools like MAXQDA) to evaluate the developed interface.