Theses in Process

  • Explainable AI for Financial Time Series Anomaly Detection

    Unlike high-stakes decision-making contexts such as medicine or law enforcement, where tabular data is prevalent and commonly available, stock market analysis centers on longitudinal (time series) data. As in other domains, researchers are attempting to increase predictive performance by using artificial intelligence (AI) to detect anomalies in time series, thereby reducing the risk of erroneous decisions by human end users. However, the low interpretability of the underlying AI models, if not properly addressed, can also lead to problematic outcomes. If end users cannot detect erroneous reasoning within an AI model’s anomaly detection process, they either tend not to use the system because they have little confidence in it, or they place too much trust in it because they cannot question its outputs. To mitigate both problems, researchers have developed Explainable AI (XAI) methods that aim to make AI models scrutable and understandable to human end users. The majority of these methods, however, are intended for use on tabular data.

    This thesis project reviews existing XAI methods for time series data in a systematic literature review (SLR). Based on these insights and on interviews conducted to elicit requirements, the student develops an AI-based anomaly detection system for stock market data that employs a selection of the XAI methods identified in the SLR. The student evaluates the developed system in a series of expert interviews, which are recorded, transcribed, and analyzed (e.g., with tools such as MAXQDA).
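
    As an illustration only, the following minimal Python sketch (an assumed setup, not the thesis design; the model, window features, and injected anomaly are hypothetical) flags anomalous windows in a synthetic price series with an Isolation Forest and explains each flag with a simple occlusion-style attribution:

      # Hypothetical sketch: anomaly detection on a price series with a
      # post-hoc, occlusion-style explanation of each flagged window.
      import numpy as np
      from sklearn.ensemble import IsolationForest

      rng = np.random.default_rng(0)
      prices = np.cumsum(rng.normal(0, 1, 500)) + 100
      prices[300:305] += 15  # injected anomaly for demonstration

      # Per-window features: mean, std, range, last one-step return
      win = 20
      windows = np.lib.stride_tricks.sliding_window_view(prices, win)
      feats = np.column_stack([
          windows.mean(axis=1),
          windows.std(axis=1),
          windows.max(axis=1) - windows.min(axis=1),
          windows[:, -1] - windows[:, -2],
      ])

      model = IsolationForest(random_state=0).fit(feats)
      scores = model.score_samples(feats)  # lower = more anomalous
      flagged = np.where(model.predict(feats) == -1)[0]

      # Occlusion attribution: replace one feature with its median and
      # measure how much the anomaly score recovers; a larger recovery
      # marks a more influential feature for that window.
      medians = np.median(feats, axis=0)
      for i in flagged[:3]:
          deltas = []
          for j in range(feats.shape[1]):
              x_occ = feats[i].copy()
              x_occ[j] = medians[j]
              deltas.append(model.score_samples(x_occ[None, :])[0] - scores[i])
          print(f"window {i}: feature influence {np.round(deltas, 3)}")

    Occlusion merely stands in here for the richer XAI methods the SLR would surface (e.g., SHAP or saliency-based approaches for time series).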


    Bachelor Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa
  • AI Explanations in the Context of Medical Decision Support Systems

    To properly capitalize on the performance improvements promised by the adoption of artificial intelligence (AI) models, a number of conditions must be met. Since modern deep learning systems are opaque and inscrutable to human users, problems of mistrust and corresponding non-use can arise. But even if such barriers do not hinder the adoption of AI technology into clinical practice, problems may arise from overconfidence in and overreliance on AI results. The explainable AI (XAI) community strives to develop methods that help create an appropriate level of trust in AI systems. Such methods are particularly important in medical application contexts, as incorrect diagnostic and prognostic decisions can have significant negative consequences for the patients concerned. We intend to research XAI in the context of medical decision support systems. This includes developing an understanding of how XAI is applied to different data types and diseases, and whether the impact of XAI in AI-based decision support has been evaluated experimentally.

    To develop a better understanding of XAI in the context of medical decision support systems, this Bachelor’s thesis carries out a systematic literature review (SLR). To collect additional data and deepen the knowledge of XAI use cases in medical practice, the student conducts a series of expert interviews for requirements elicitation.


    Bachelor Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa
  • Interactive Interfaces for Medical Treatment Effect Prediction Explanations

    The application of deep learning can increase prediction performance in a wide variety of use cases. However, in high-stakes decision contexts such as medicine, a performance increase on its own can be insufficient to foster trust in a predictive model. If a lack of trust means that the high-performance decision support system built around the deep learning model goes unused, nothing is gained. Developing an interface intended to convince users of the model’s plausibility, on the other hand, risks overtrust and the abandonment of critical thinking, especially among less experienced professionals. Explainable artificial intelligence (XAI) methods are a way to support appropriate levels of trust by giving users tools to detect faulty reasoning by an otherwise inscrutable deep learning model. We have developed visualizations as explanations of treatment effect predictions. During our evaluation of these visualizations, several experts voiced their interest in interactive components for exploring the model’s reasoning and the underlying data to develop a better understanding.

    To build on these visualizations, this Bachelor’s thesis conducts a systematic literature review (SLR) on interactivity in decision support systems. On the basis of the SLR, the student develops an interactive explanation interface for a medical treatment effect prediction case and evaluates it in a series of expert interviews.
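
    As background, the following minimal Python sketch (hypothetical; the T-learner setup and synthetic data are illustrative assumptions, not our model) shows one common way to predict individual treatment effects, namely as the difference between outcome models fitted separately on treated and untreated patients:

      # Hypothetical T-learner sketch: estimate the individual treatment
      # effect as the difference between two outcome models.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 5))     # patient covariates
      t = rng.integers(0, 2, size=1000)  # treatment indicator
      # Outcome with a heterogeneous treatment effect of 1 + X[:, 1]
      y = X[:, 0] + t * (1 + X[:, 1]) + rng.normal(0, 0.1, 1000)

      m_treated = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
      m_control = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])

      # Predicted individual treatment effect for each patient
      cate = m_treated.predict(X) - m_control.predict(X)
      print("mean predicted effect:", round(cate.mean(), 2))  # ~1 here

    An interactive interface could, for example, let users vary a patient’s covariates and observe how the predicted effect responds.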


    Bachelor Thesis Business Information Systems, Tutor: M.Sc. Luca Gemballa
  • A Qualitative Analysis of a Flow-adaptive System for Notification Management

    Notifications from instant messaging applications can interrupt employees’ productive time. While there are different ways to influence the notification behavior of instant messengers, such as turning off the application or muting notifications for certain periods, these measures require self-discipline and/or often result in missed notifications even when the user is not in flow. We have developed an adaptive instant messaging blocker that aims to solve this problem by recognizing the user’s flow state at predefined intervals from physiological data using machine learning methods. As soon as a flow state is recognized, the “do not disturb” status is automatically activated for the duration of the flow state.
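
    For illustration, the following minimal Python sketch shows a control loop of the kind described above (hypothetical; the sensor reading, classifier, and messenger call are placeholders, not the developed system):

      # Hypothetical control loop: poll physiological features at fixed
      # intervals, classify the flow state, and toggle "do not disturb".
      import random
      import time

      def read_physiological_features():
          # Placeholder for sensor input (e.g., heart rate variability)
          return [random.random(), random.random()]

      def predict_flow(features) -> bool:
          # Placeholder for the trained classifier; True while in flow
          return sum(features) > 1.0

      def set_do_not_disturb(active: bool):
          # Placeholder for the instant messenger's status API
          print("do not disturb:", "on" if active else "off")

      dnd = False
      for _ in range(5):        # in practice: run continuously
          in_flow = predict_flow(read_physiological_features())
          if in_flow != dnd:    # toggle only on state changes
              set_do_not_disturb(in_flow)
              dnd = in_flow
          time.sleep(1)         # predefined polling interval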

    We conducted interviews with knowledge workers to evaluate the developed system. In this Master’s thesis, a qualitative analysis (with MAXQDA) is to be carried out to evaluate the system on the basis of these interviews.


    Master Thesis Business Information Systems, Tutor: Prof. Dr. Mario Nadj