Theses in Process
- Applying Counterfactual Explanations for Employee Onboarding in the Context of Credit Approval Guidelines
Bachelor Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa
Artificial intelligence (AI) can substantially enhance decision quality in a number of high-stakes domains, including credit approval. However, fully automated decision-making through AI systems is currently not possible due to liability issues and ethical concerns. It follows that a human worker needs to be able to properly assess the decisions proposed by an AI system and to detect faulty reasoning when it occurs. The field of explainable AI (XAI) is concerned with providing methods to scrutinize the black box of modern AI systems. While most publications focus on using XAI during operations to improve metrics such as trust or decision quality, employing XAI during employee onboarding is often overlooked. The aforementioned goals remain relevant in this context, but the onboarding process offers a different angle: while employees are still adjusting to a new environment, in this case a new and unfamiliar system for credit approval, enriching the learning process with XAI methods seems promising. Because counterfactual explanations correspond closely to the way humans frame their own explanations, we focus on them to improve the onboarding process.
This bachelor thesis aims to utilize counterfactual explanations for employee onboarding in the context of credit approval. To this end, a systematic literature review (SLR) on XAI for onboarding purposes is conducted. Based on these insights, an explanation interface utilizing counterfactual explanations is designed and evaluated by conducting a series of expert interviews. These will be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).
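To illustrate the kind of explanation such an interface would present, the following sketch performs a brute-force counterfactual search over a hypothetical two-feature credit model. The scoring rule, feature grid, and distance measure are illustrative assumptions, not part of the thesis design.

```python
# Minimal sketch of a counterfactual search for a hypothetical credit model.
from itertools import product

def approve(income, debt_ratio):
    """Toy credit rule (an assumption): approve high income, low debt ratio."""
    return income >= 40_000 and debt_ratio <= 0.35

def counterfactual(income, debt_ratio):
    """Return the minimally changed input that flips a rejection to approval."""
    candidates = product(range(20_000, 80_001, 5_000),
                         [round(d / 100, 2) for d in range(10, 61, 5)])
    best, best_dist = None, float("inf")
    for inc, dr in candidates:
        if approve(inc, dr):
            # Distance: normalized absolute change in each feature.
            dist = abs(inc - income) / 60_000 + abs(dr - debt_ratio) / 0.5
            if dist < best_dist:
                best, best_dist = (inc, dr), dist
    return best

# A rejected applicant and the smallest change that would flip the decision.
print(counterfactual(30_000, 0.5))  # (40000, 0.35)
```

The returned pair reads as a counterfactual in the human sense: "had your income been 40,000 and your debt ratio 0.35, the loan would have been approved."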
- Designing an Interactive XAI System to guide Decision Making in Marketing Campaigns
Bachelor Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa
Artificial Intelligence (AI) is increasingly utilized in various domains. The high performance of deep learning models in tasks such as prediction and classification has largely driven this trend. However, as models become increasingly complex, their understandability decreases, leading to a lack of trust, concerns about possible biases, and even potential regulatory obstacles. This becomes a problem when marketers use AI to optimize their campaigns, e.g., by asking it for feedback on the effectiveness of their product branding or by applying AI to guide their resource allocation strategy. Using AI responsibly in this context requires understanding how the respective model reaches its conclusions. Which input features have a positive effect on the predicted buying power of potential customers? How large would the predicted size of the target population be under slightly different circumstances? Explainable AI (XAI) provides a range of methods to enhance model interpretability and promote understanding, enabling answers to these and related questions. To make better use of the available methods, research calls for the development of interactive systems that support a variety of follow-up and drill-down actions. Interactivity makes explanations more human-centric, enabling users to engage in a dialogue with the AI system.
This thesis project reviews existing XAI methods for applications in the marketing domain in a systematic literature review (SLR). Based on these insights and interviews to elicit requirements for the project, the student develops an AI-based interactive system for marketing data that utilizes a selection of XAI methods found in the SLR. The student evaluates their developed system by conducting a series of expert interviews. These will be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).
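As a concrete illustration of one widely used XAI technique such an SLR might surface, the sketch below computes permutation feature importance on hypothetical campaign data: a feature matters if shuffling its column degrades the model's error. The toy model, data, and feature names are assumptions for illustration only.

```python
# Minimal sketch of permutation feature importance on toy marketing data.
import random

# Hypothetical campaign records: (ad_spend, email_opens) -> conversions.
X = [(1.0, 5), (2.0, 3), (3.0, 8), (4.0, 2), (5.0, 9), (6.0, 1)]
y = [9.4, 8.6, 18.1, 11.0, 23.4, 13.6]

def model(x):
    """Toy fitted model (an assumption): conversions from spend and opens."""
    return 2.0 * x[0] + 1.5 * x[1]

def mse(X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, repeats=100, seed=0):
    """Average error increase when one feature's column is shuffled."""
    rng = random.Random(seed)
    base = mse(X, y)
    increases = []
    for _ in range(repeats):
        col = [x[feature] for x in X]
        rng.shuffle(col)
        X_perm = [tuple(c if i == feature else v for i, v in enumerate(x))
                  for x, c in zip(X, col)]
        increases.append(mse(X_perm, y) - base)
    return sum(increases) / repeats

for f, name in enumerate(["ad_spend", "email_opens"]):
    print(name, round(permutation_importance(X, y, f), 2))
```

An interactive system as envisioned here would let the marketer drill down from such global scores into individual predictions.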
- How do Doctors explain? – Mapping medical Explanations to Explainable AI
Master Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa
The field of Explainable Artificial Intelligence (XAI) has rapidly gained traction in recent years. Motivated by the promise of increased performance through Artificial Intelligence (AI) and by the low interpretability that hinders its adoption in practice, researchers have proposed methods, e.g., for generating feature-importance values or heatmaps that highlight important parts of images, aiming to increase the usability of AI. However, these explanation methods have been criticized for facilitating a “consumer-creator” gap (Ehsan et al., 2024): they target the needs of data scientists and AI engineers, but not those of end users such as medical professionals. Researchers like Miller (2019) have pointed out similar problems with XAI being too static and not considering the nature of human explanations.
In this thesis project, the student will conduct a systematic literature review (SLR) on the role that social science theories of explanation play in XAI research. Using this knowledge as a reference, the student will prepare interview guidelines and conduct a series of semi-structured interviews with doctors to gain a better understanding of how explanations work in the medical context. These interviews will be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).
Ehsan, U., Passi, S., Liao, Q. V., Chan, L., Lee, I. H., Muller, M., & Riedl, M. O. (2024, May). The who in XAI: how AI background shapes perceptions of AI explanations. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (pp. 1-32).
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.
- Explainable AI for Financial Time Series Anomaly Detection
Bachelor Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa
Unlike high-stakes decision-making contexts such as medicine or law enforcement, where tabular data is prevalent and commonly available, stock market analysis relies on access to the associated longitudinal data. As in other domains, researchers are attempting to increase predictive performance through the use of artificial intelligence (AI) in the detection of anomalies in time series, thereby reducing the risk of erroneous decisions by human end users. However, the low interpretability of the underlying AI models, if not properly addressed, can also lead to problematic outcomes. If end users cannot detect erroneous reasoning within an AI model’s anomaly detection process, they either avoid the system due to low confidence or place too much trust in it because they cannot question its outputs. To mitigate both problems, researchers have developed Explainable AI (XAI) methods that aim to make AI models scrutable and understandable to human end users. A majority of these methods, though, are intended for tabular data.
This thesis project reviews existing XAI methods for time series data in a systematic literature review (SLR). Based on these insights and interviews to elicit requirements for the project, the student develops an AI-based anomaly detection system for stock market data that utilizes a selection of XAI methods found in the SLR. The student evaluates their developed system by conducting a series of expert interviews. These will be recorded, transcribed, and analyzed (e.g., via tools like MAXQDA).
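As a baseline illustration of anomaly detection in a time series, the sketch below flags points that deviate strongly from a rolling window of recent values. Window size, threshold, and price data are illustrative assumptions; the thesis would substitute an AI model plus XAI methods identified in the SLR.

```python
# Minimal sketch of a rolling z-score anomaly detector for a price series.
from statistics import mean, stdev

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates strongly from the recent window."""
    anomalies = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical price series with one spike at index 6.
prices = [100, 101, 100, 102, 101, 100, 150, 101, 102, 100]
print(detect_anomalies(prices))  # [6]
```

An XAI layer on top of a learned detector would then explain *why* index 6 was flagged, e.g., which recent observations contributed most to the decision.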
- Real-time Face Detection using AI – A Comparative Study for Personalized Health Management
Bachelor Thesis Business Information Systems, Tutor: M.Sc. Cosima von Uechtritz
Health tracking with smartwatches or fitness trackers for personalized health management and self-optimization has become increasingly popular. Today, around 260.7 million users track their steps, heart rate, stress levels, and other parameters on a daily basis (Statista Health Market Insights, 2024). However, many of these self-tracking solutions rely on invasive devices that require direct skin contact and are often costly. A promising alternative is remote health tracking via camera, which could open up new possibilities. For example, a health tracker integrated into the computer camera could be synchronized with a digital calendar, allowing meetings to be scheduled and rescheduled based on the current stress level.
AI-based remote photoplethysmography algorithms are an innovative approach that enables contactless health monitoring using standard, low-cost cameras. A critical step in this process is the identification of specific areas of the face, known as region of interest (ROI), such as the forehead or cheeks. Stable tracking of the ROI is essential for extracting accurate and reliable heart rate signals. However, influencing factors, such as different lighting conditions, head movements, camera angle and position, make it difficult to obtain reliable measurements in real-world conditions.
The aim of this thesis is to investigate and compare open source methods for real-time face and ROI recognition. First, common frameworks will be identified through a systematic literature review, and then a prototype will be implemented to evaluate them under selected influencing factors.
Statista Health Market Insights (2024). de.statista.com/statistik/daten/studie/1460774/umfrage/nutzer-von-fitnesstrackern-weltweit/. Retrieved 08.05.2025.
- Development and Evaluation of an AI-Based Algorithm in the field of Digital Health
Bachelor Thesis Business Information Systems, Tutor: M.Sc. Cosima von Uechtritz
The digital health market is moving from a niche to a mainstream market and is expected to grow at an annual rate of 6.88% to reach a projected market volume of USD 258.25 billion by 2029 (Statista Market Insights, 2024). Health monitoring is an important sub-segment within the digital health market. Smartwatches and smart rings that monitor heart rate or other fitness metrics are already widely used for various reasons.
However, these devices have certain drawbacks as they require direct skin contact and are often very expensive. As a result, modern solutions for contactless and cost-effective health monitoring have gained considerable attention in recent years. In particular, new potential is emerging in areas such as road accident prevention and telemedicine, where non-invasive solutions are particularly valuable.
Recent advances in artificial intelligence have significantly improved the accuracy of remote photoplethysmography (rPPG) algorithms. Using these algorithms, heart rate and other vital signs can be measured using a standard RGB camera, enabling completely contactless health monitoring.
The aim of this work is to develop and validate an AI-based rPPG algorithm for real-time heart rate extraction. A prepared test dataset (UBFC-Phys) will be used to train and develop the algorithm. In addition, a small data sample will be collected with a reference measurement device (e.g., ECG chest strap) for validation purposes. The resulting data will then be compared and evaluated using selected performance indicators (e.g., mean absolute error (MAE), Pearson correlation coefficient).
Statista Market Insights (2024). Digital Health. Statista. www.statista.com/outlook/hmo/digital-health/worldwide. Retrieved 08.05.2025.
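The planned comparison against the reference device can be illustrated with the two named performance indicators. The heart rate values below are hypothetical, not measured data.

```python
# Minimal sketch of the evaluation metrics: MAE and Pearson's r between
# rPPG-derived heart rates and an ECG reference.
from math import sqrt

def mae(pred, ref):
    """Mean absolute error between predictions and reference values."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def pearson(pred, ref):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(ref)
    mp, mr = sum(pred) / n, sum(ref) / n
    cov = sum((p - mp) * (r - mr) for p, r in zip(pred, ref))
    sp = sqrt(sum((p - mp) ** 2 for p in pred))
    sr = sqrt(sum((r - mr) ** 2 for r in ref))
    return cov / (sp * sr)

rppg_bpm = [72.0, 75.0, 80.0, 78.0, 74.0]   # hypothetical rPPG estimates
ecg_bpm  = [70.0, 76.0, 81.0, 77.0, 73.0]   # hypothetical ECG reference
print(round(mae(rppg_bpm, ecg_bpm), 2), round(pearson(rppg_bpm, ecg_bpm), 3))
```

A low MAE together with a correlation near 1 would indicate that the rPPG algorithm tracks the ECG reference closely.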
- Counterfactual Explanations for AI Models in Medicine
Master Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa
Implementing artificial intelligence (AI) models for medical tasks like diagnosis or prognosis promises to support medical staff in their decision making and improve the overall quality of healthcare. However, to achieve effective use of AI for decision support, some potential problems have to be resolved, including issues of trust, overconfidence, and legal requirements. A popular approach to making AI models more trustworthy, transparent, scrutable, and generally understandable lies in AI explanations. The explainable AI research community has developed a wide array of methods that attempt to extract valuable insights into the reasoning of any given AI model. While most scholars have developed methods that attribute importance values to individual features to indicate their significance for a given prediction or for the global model behavior, others have taken inspiration from the social sciences and tried to construct more intuitive, human-like explanations. Among these are counterfactual explanations, also known as contrastive explanations. These provide alternative sets of minimally changed inputs that lead to a different model output. Presenting diverse counterfactuals offers insight into the model’s reasoning process in a different way than attribution-based approaches.
This Master thesis project consists of a systematic literature review (SLR) on contrastive explanations for AI models in the field of medicine. Based on this SLR, the student identifies requirements for the development of an interface that provides contrastive explanations for a specific medical task. Expert interviews with medical professionals will be conducted, recorded, transcribed, and analyzed (e.g., via tools like MAXQDA) to evaluate the developed interface.
- Exploring the Value of PPG-based Wearables in Digital Health: From Literature to Implementation
Bachelor Thesis Business Information Systems, Tutor: M.Sc. Cosima von Uechtritz
Modern smartwatches and smart rings increasingly rely on photoplethysmography (PPG) algorithms to measure vital signs such as heart rate, sleep patterns, and stress levels. These sensors are particularly attractive because they are non-invasive and inexpensive, which makes them suitable for integration into everyday life. In recent years, advances in signal processing and machine learning have significantly improved the ability to extract meaningful insights from raw PPG data. For example, the first commercially available smartwatches are now able to measure blood pressure, offering an exciting solution for health monitoring. However, most commercial consumer devices do not allow researchers and individuals to access the raw PPG signal, significantly limiting the potential for innovation.
The aim of this thesis is to first define the potential of PPG-based sensors through a systematic literature review. Subsequently, a prototype will be developed that enables the streaming of raw PPG data in real time.
Note: A PPG sensor is provided for technical implementation.
- Smart Camera-based Human Monitoring Systems: An Exploration of Use Cases in Applied Research
Bachelor Thesis Business Information Systems, Tutor: M.Sc. Cosima von Uechtritz
Smart camera-based monitoring systems are capable of tracking and monitoring human activities. In recent years, systems have been developed that enable contactless monitoring of vital signs such as heart rate and oxygen saturation, capture human behavior such as posture and eye activity, and detect human emotions. In contrast to other monitoring systems, cameras have the advantage of being low-cost and unobtrusive, which facilitates their implementation in different environments. Despite the technological capabilities of these systems, their real-world use is currently limited.
Therefore, the aim of this Bachelor thesis is to identify use cases and their potential in real-life scenarios. Use cases will first be identified through a systematic literature review; their potential will then be explored through an online survey (e.g., LimeSurvey).
- AI Explanations in the Context of Medical Decision Support Systems
Bachelor Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa
To properly utilize performance improvements through the adoption of artificial intelligence (AI) models, a number of conditions must be met. Since modern deep learning systems are opaque and inscrutable to human users, problems of mistrust and corresponding non-use can arise. Even where adoption into clinical practice is not hindered by such barriers, overconfidence in and overreliance on AI results can cause problems. The explainable AI (XAI) community strives to develop methods that help create an appropriate level of trust in AI systems. Such methods are particularly important in the medical application context, as incorrect diagnostic and prognostic decisions can have significant negative consequences for the patients concerned. We intend to research XAI in the context of medical decision support systems. This includes developing an understanding of how XAI is applied to different data types and diseases, and whether the impact of XAI in AI-based decision support has been evaluated experimentally.
To develop a better understanding of XAI in the context of medical decision support systems, a systematic literature review (SLR) is carried out in this Bachelor’s thesis. To collect additional data and enhance the knowledge about XAI use cases in medical practice, the student conducts a series of expert interviews for requirements elicitation.
- A Qualitative Analysis of a Flow-adaptive System for Notification Management
Master Thesis Business Information Systems, Tutor: Prof. Dr. Mario Nadj
Notifications from instant messaging applications can interrupt employees' productive time. While there are different ways to influence the notification behavior of instant messengers, such as turning off the application or muting notifications for certain periods, these measures require self-discipline and often result in missed notifications even when the user is not in flow. We have developed an adaptive instant messaging blocker that aims to solve this problem by recognizing the user's flow state at predefined intervals from their physiological data using machine learning methods. As soon as a flow state is recognized, the “do not disturb” status is automatically activated for the duration of the flow state.
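The control loop described above can be sketched as follows. The threshold rule on heart-rate variability (HRV) is a hypothetical stand-in for the trained machine learning classifier used in the actual system.

```python
# Minimal sketch of the flow-adaptive notification blocker's control loop.
def in_flow(hrv_ms):
    """Hypothetical stand-in for the ML flow classifier (HRV in ms)."""
    return hrv_ms > 55.0

def update_dnd(hrv_samples):
    """Return the do-not-disturb status after each measurement interval."""
    return ["DND on" if in_flow(h) else "DND off" for h in hrv_samples]

# Hypothetical HRV readings at four predefined intervals.
print(update_dnd([48.0, 60.5, 62.1, 50.3]))
```

In the deployed system, each interval's physiological measurements feed the classifier, and “do not disturb” stays active for as long as flow is detected.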
We conducted interviews with knowledge workers to evaluate the developed system. In this Master's thesis, a qualitative analysis (with MAXQDA) of these interviews is to be carried out in order to evaluate the system.