Interactions between Virtual Agents and Smart Objects in Mixed Reality Learning Applications

December 16th, 2021

Mixed reality agents hold great potential for mixed reality learning applications, e.g. by providing a natural user interface to a chatbot or by acting as a tutor who demonstrates practical actions and guides the user through a complex environment. When creating such a virtual agent, one challenge concerns the scalability of interactions with the environment. The virtual agent must be able to interact with a large variety of objects that may be virtual or, in the case of augmented reality, real. Hence, a consistent data description language and a decision-making architecture are required so that the agent can understand the possible interaction affordances in the environment. Previous approaches from game development include, e.g., the “smart objects” of The Sims franchise, which provide such distributed world knowledge.

Thesis Type Bachelor
Student Danylo Bekhter
Status Running
Supervisor(s) Ralf Klamma
Advisor(s) Benedikt Hensen

Your task in this Bachelor thesis is to create a data description for “smart objects” in mixed reality which informs the agent about the interaction possibilities in the environment. The description should contain information such as the location of the interaction, conditions for executing the action, animations that need to be played, and possible follow-up actions. The data should be made accessible to our existing virtual agent framework. To allow content creators to add objects to the agent simulation, editors should be implemented that produce the necessary data. For virtual objects, this editor should allow the creator to save the interaction abilities of the object. For augmented reality, a mixed reality editor is necessary that allows users to tag interaction affordances in the real world and to describe how they can be used. Once these descriptions are set up, the agent can interact with objects in both virtual and augmented reality. The goal of the implemented solution is to enable agents to demonstrate activities in learning environments. The implementation can be embedded in a mixed-reality-enhanced learning use case to demonstrate the realized features.
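To illustrate the kind of data description envisioned above, the sketch below shows how one affordance of a smart object might be serialized, e.g. as JSON. All field and object names here are hypothetical examples, not part of an existing framework; the concrete schema and format are to be designed in the thesis.

```json
{
  "objectId": "coffee-machine-01",
  "affordances": [
    {
      "action": "BrewCoffee",
      "interactionPoint": {
        "position": [1.2, 0.9, -0.4],
        "rotation": [0.0, 90.0, 0.0]
      },
      "conditions": ["WaterTankFilled", "AgentWithinReach"],
      "agentAnimation": "PressButton",
      "followUpActions": ["PickUpCup"]
    }
  ]
}
```

The editors described above would then produce such descriptions: the virtual-object editor by attaching them to Unity prefabs, and the mixed reality editor by letting users anchor them to tagged locations in the real world.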


Must: Good knowledge of Unity and C#
Beneficial: Experience with mixed reality applications