Category: ‘Theses’
BEAM: Blockchain Escrow for Automated Marketplaces
Aim of the Thesis:
The aim of this bachelor thesis is to design, implement, and evaluate a blockchain-based escrow mechanism to support secure and transparent payment handling for data challenges in the Blockchain4DataMarketPlace. The proposed system will automate fund management, support dispute resolution processes, and provide transparent documentation of transactions, ensuring fairness and trust between Data Owners and Data Scientists.
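The escrow flow described above can be sketched as a small state machine. In a real deployment this logic would live in an on-chain smart contract, but the states and transitions are the same; all names here (`Escrow`, the owner/scientist/arbiter roles) are illustrative and not part of the Blockchain4DataMarketPlace codebase:

```python
from enum import Enum, auto

class EscrowState(Enum):
    AWAITING_FUNDS = auto()
    FUNDED = auto()
    RELEASED = auto()
    REFUNDED = auto()
    DISPUTED = auto()

class Escrow:
    """Toy escrow for one data challenge between a Data Owner and a Data Scientist."""

    def __init__(self, owner, scientist, arbiter, amount):
        self.owner, self.scientist, self.arbiter = owner, scientist, arbiter
        self.amount = amount
        self.state = EscrowState.AWAITING_FUNDS
        self.log = []  # transparent, append-only transaction record

    def deposit(self, sender):
        assert sender == self.owner and self.state is EscrowState.AWAITING_FUNDS
        self.state = EscrowState.FUNDED
        self.log.append(("deposit", sender, self.amount))

    def release(self, sender):
        # Owner approves the submitted solution; funds go to the scientist.
        assert sender == self.owner and self.state is EscrowState.FUNDED
        self.state = EscrowState.RELEASED
        self.log.append(("release", self.scientist, self.amount))

    def dispute(self, sender):
        # Either party can freeze the funds pending arbitration.
        assert sender in (self.owner, self.scientist) and self.state is EscrowState.FUNDED
        self.state = EscrowState.DISPUTED
        self.log.append(("dispute", sender, 0))

    def resolve(self, sender, pay_scientist):
        # A neutral arbiter settles the dispute either way.
        assert sender == self.arbiter and self.state is EscrowState.DISPUTED
        self.state = EscrowState.RELEASED if pay_scientist else EscrowState.REFUNDED
        payee = self.scientist if pay_scientist else self.owner
        self.log.append(("resolve", payee, self.amount))

# Example session: deposit, dispute, arbiter rules for the scientist.
e = Escrow("owner", "scientist", "arbiter", 100)
e.deposit("owner")
e.dispute("scientist")
e.resolve("arbiter", pay_scientist=True)
```

The append-only `log` stands in for the transparent transaction documentation the thesis calls for; on a blockchain, the chain itself provides this record.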
Impact Analysis of EV Charging Station-Related Attacks on the Distribution Grid
The growing adoption of electric vehicles (EVs) is reshaping power grids and introducing cybersecurity weaknesses that could be exploited to target both the EVs and the power grid. This thesis focuses on vulnerabilities associated with EV charging and on the effects of cyberattacks on grid stability and resilience. Of particular interest is the Open Charge Point Protocol (OCPP) and how malicious actors can exploit it for their attacks. The work involves developing a simulation framework to replicate the interactions between EVs, charging stations, and the power grid. Using this simulator, multiple attack types are to be implemented and analyzed to evaluate their effects on the charging infrastructure.
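To illustrate the kind of OCPP manipulation such a simulator could implement: OCPP 1.6-J exchanges JSON frames of the form `[MessageTypeId, UniqueId, Action, Payload]` over WebSocket, and an attacker who can intercept an unauthenticated connection may tamper with a frame's payload. The helper below is a minimal sketch, not part of any existing framework, and the `idTag` values are made up:

```python
import json
import uuid
from datetime import datetime, timezone

def ocpp_call(action, payload):
    """Build an OCPP 1.6-J CALL frame: [MessageTypeId=2, UniqueId, Action, Payload]."""
    return json.dumps([2, str(uuid.uuid4()), action, payload])

# A charge point reporting the start of a charging session.
legit = ocpp_call("StartTransaction", {
    "connectorId": 1,
    "idTag": "USER-1234",
    "meterStart": 1500,  # meter reading (Wh) at session start
    "timestamp": datetime.now(timezone.utc).isoformat(),
})

# Without TLS and client authentication, a man-in-the-middle can forward the
# frame with a tampered payload, e.g. billing the session to another idTag.
frame = json.loads(legit)
frame[3]["idTag"] = "VICTIM-9999"
spoofed = json.dumps(frame)
```

Scaled across many charging stations, payload manipulations like this (or mass remote stop/start commands) are exactly the attack classes whose grid-level impact the thesis would quantify.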
Leveraging Solid Pods for Sovereign Data Sharing in the Cultural Sector
Solid Pods offer an exciting opportunity to address challenges in data management, privacy, and interoperability in the cultural sector. Solid Pods are a technology framework designed to enable individuals to store and control their own data, promising transformative applications in this domain. This thesis investigates how Solid Pods can be applied to improve data sharing, user autonomy, and collaboration between cultural organizations and individuals. The work involves analyzing the specific needs of the cultural sector (interview partners and domain experts will be provided), evaluating existing Solid Pod implementations, and designing potential use cases tailored to the unique requirements of this field. Through a mix of theoretical research and practical experimentation, this thesis aims to discover new ways to empower the cultural sector through decentralized and user-centric data-sharing solutions.
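As a concrete illustration of the data-sovereignty mechanism: Solid Pods typically control access through Web Access Control (WAC) documents, in which the pod owner grants specific agents specific access modes on specific resources. The sketch below generates such an ACL document in Turtle; the helper name and all URLs are hypothetical examples (an individual sharing an archive item with a museum):

```python
def make_acl(resource_url, agent_webid, modes=("Read",)):
    """Sketch of a WAC authorization granting an agent access to one resource."""
    mode_triples = ", ".join(f"acl:{m}" for m in modes)
    return f"""@prefix acl: <http://www.w3.org/ns/auth/acl#> .

<#grant>
    a acl:Authorization ;
    acl:agent <{agent_webid}> ;
    acl:accessTo <{resource_url}> ;
    acl:mode {mode_triples} .
"""

acl = make_acl(
    "https://alice.example/archive/oral-history.ttl",  # resource in the owner's pod
    "https://museum.example/profile/card#me",          # WebID of a cultural institution
)
```

Because the grant lives in the owner's pod rather than in the institution's database, revoking access is a local edit by the owner, which is the sovereignty property the thesis explores.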
Development of a hybrid Intrusion Detection System for EV Charging Stations
Interactive Guides with Mixed Reality Agents and Large Language Models
Conveying information in a tour traditionally requires either a real human or an audio recording. However, a human guide might not always be available, and the quality of tours varies. With audio recordings, extra care has to be taken to convey to the user which element is currently being talked about, since there are no visual cues. With recent advancements in mobile augmented reality (AR), another option becomes viable: mixed reality (MR) agents can point out landmarks, give directions, and provide guidance based on location and surroundings, making the user’s experience immersive and interactive. MR agents are virtual, human-like entities that interact with users within augmented or virtual environments, blending digital content with the real world. Large Language Models (LLMs) can add conversational intelligence to the experience: they understand natural language inputs, answer questions, and provide detailed, personalized information based on user preferences. Together, MR agents and LLMs could create a system where the agent visually guides users through real-world environments while the LLM delivers rich, context-specific information, responds to queries, and adapts content dynamically to the user. In the context of a tour guide, this integration enables users to explore cities or other locations with the MR agent directing them and visually pointing out features, while the LLM provides detailed explanations, answers, and personalized insights.
The Role of Virtual Agents in Supporting the Method of Loci for Enhanced Memorization Techniques
Learning content by heart can be facilitated by the method of loci. In this mnemonic technique, the learner converts pieces of information into mental imagery. The imagined representations are then anchored in a location. If the learner then traverses a path through this location, the information can be remembered by recalling the mental imagery. However, building a suitable mental environment with a good path and coming up with helpful representations can be challenging for beginners. Hence, adoption of this memorization method among students is still limited. To understand the method and practice it, immersive environments in virtual reality can be utilized to construct rooms in which virtual representations can be placed. This visualization can also help during the learning process: as the placed content is stored persistently, the chosen layout is not lost even if the learner forgets portions of it. The immersive environment can be visited repeatedly to strengthen the memory and to interactively construct the learning content. Apart from this, virtual agents in the form of human avatars can be included in the virtual reality setting. They can lead the learner through the rooms to help with establishing a fixed route through the space. Moreover, they can give further auditory and visual information so that the visual representation of an item can be translated back into the original information.
Immersive Vocabulary Learning with Large Language Models
The emergence of large language models (LLMs), along with recent advances in mixed reality (MR) and virtual reality (VR), enables new opportunities for applying virtual agents in education. These simulated humans can imitate real-life situations and interactions with native speakers, leading to an immersive and engaging learning experience. In VR especially, learners can interact with the agents directly, so languages can be learned and practiced in realistic scenarios. LLMs have the potential to overcome the limitations of learning applications with pre-scripted scenarios, as they can react dynamically to the learner’s actions and enable personalized interactions.
Evaluating Visualizations for Human-LLM Interactions in an Academic Teaching Context
Large Language Models (LLMs) can be applied to transform a natural language (NL) text query into an NL text answer. A common use case is personal assistants, e.g., for learning activities. In such teaching contexts, they can process knowledge recorded in plain-text documents, create summaries, or teach knowledge according to a curriculum. However, interfaces for LLMs are currently text-based chats. These can be enhanced by showing body language, e.g., gestures that support the conveyed content. With the help of desktop-based virtual agents, the chat interface can be turned into a video call in which the LLM is personified by an agent that responds with gestures in addition to the output text.
DECODE: Data Explainability Concepts and Ontological Design Evaluation
The aim of this thesis is to evaluate and extend a developing ontology of explainable-data principles, ongoing work toward establishing a structured framework for Data Explainability in AI systems. The current version of the ontology is in its early stages, primarily focused on defining key principles of Data Explainability and exploring their role in enhancing trust and transparency in AI. This thesis will build on this preliminary work by refining the ontology’s structure, proposing new principles or dimensions where needed, and assessing its applicability across various AI domains, making it a robust foundation for responsible AI deployment.
FLUX: Feedback Latency and Utilization Examination — Optimizing Real-Time AI Pipelines
The aim of this thesis is to extend the latency analysis of a psychomotor feedback engine within our existing MLOps pipeline [1] [2]. Building on preliminary latency estimations, the thesis will systematically evaluate each processing step in the pipeline, assessing both theoretical and practical contributions to overall latency and throughput. By modeling and analyzing latency sources, the goal is to propose and validate optimization strategies that improve real-time performance for sensor-based AI applications. Particular emphasis will be placed on the throughput of parallel data processing within the infrastructure to ensure timely and efficient feedback delivery.
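A per-stage latency breakdown of the kind this thesis targets can be prototyped with a simple timing harness. The stage names and workloads below are placeholders, not the actual feedback engine, and the throughput figure assumes strictly sequential stages (the parallel case is exactly what the thesis would refine):

```python
import time
from collections import defaultdict

class StageTimer:
    """Accumulates wall-clock latency samples per named pipeline stage."""

    def __init__(self):
        self.samples = defaultdict(list)

    def stage(self, name, fn, *args, **kwargs):
        # Time one invocation of a stage and pass its result downstream.
        t0 = time.perf_counter()
        result = fn(*args, **kwargs)
        self.samples[name].append(time.perf_counter() - t0)
        return result

    def report(self):
        # Mean latency per stage, end-to-end total, and the implied
        # throughput (windows/s) under a sequential-pipeline assumption.
        means = {n: sum(v) / len(v) for n, v in self.samples.items()}
        total = sum(means.values())
        throughput = 1.0 / total if total > 0 else float("inf")
        return means, total, throughput

timer = StageTimer()
for _ in range(5):  # five simulated sensor windows
    raw = timer.stage("preprocess", lambda: [x * 0.1 for x in range(1000)])
    feats = timer.stage("features", lambda r=raw: sum(r) / len(r))
    timer.stage("feedback", lambda f=feats: f"score={f:.2f}")

means, total, throughput = timer.report()
```

Replacing the placeholder lambdas with the pipeline's real processing steps yields the measured per-stage breakdown against which optimization strategies can be validated.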