
Contact

Prof. Dr. S. Decker
RWTH Aachen
Informatik 5
Ahornstr. 55
D-52056 Aachen
Tel +49/241/8021501
Fax +49/241/8022321


Agent-Object Interactions in Mixed Reality Learning Applications

Thesis type
  • Bachelor
Status
  • Running
Supervisor(s)
Advisor(s)

Mixed reality agents have large potential for mixed reality learning applications, e.g. as a natural user interface to a chatbot or as a tutor who demonstrates practical actions and guides the user through a complex environment. When creating such a virtual agent, one challenge concerns the scalability of interactions with the environment: the agent must be able to interact with a large variety of objects, which can be virtual but also real in the case of augmented reality. Hence, a consistent data description language and a decision-making architecture are required so that the agent can understand the interaction affordances available in the environment. Previous approaches from game development include the "smart objects" of the Sims franchise, which provide such distributed world knowledge.
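The "smart objects" idea can be sketched in Unity C#: instead of the agent's decision-making knowing every object type, each object advertises its own affordances and the agent simply queries them. All class and method names below are illustrative assumptions, not part of an existing framework:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: a smart object carries its own interaction knowledge,
// so the agent's decision-making stays independent of concrete object types.
public class SmartObject : MonoBehaviour
{
    // Affordances this object advertises, e.g. "open", "sit", "pickUp".
    public List<string> affordances = new List<string>();

    public bool Offers(string action) => affordances.Contains(action);
}

public static class AgentDecision
{
    // The agent scans the scene for any object that affords the desired action,
    // rather than hard-coding per-object interaction logic.
    public static SmartObject FindObjectFor(string action)
    {
        foreach (var obj in Object.FindObjectsOfType<SmartObject>())
            if (obj.Offers(action))
                return obj;
        return null;
    }
}
```

This is what makes the approach scalable: adding a new interactable object only requires attaching and filling in its affordance data, with no change to the agent itself.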

 

Your task in this Bachelor thesis is to create a data description for "smart objects" in mixed reality which informs the agent about the interaction possibilities in its environment. The description should contain information such as the place of the interaction, the conditions for executing the action, the animations the agent needs to play, and possible follow-up actions. The data should be made accessible to our existing virtual agent framework. To allow content creators to add objects to the agent simulation, editors should be implemented that produce the necessary data. For virtual objects, this editor should allow the creator to save the object's interaction abilities. For augmented reality, a mixed reality editor is necessary that allows users to tag interaction affordances in the real world and to describe how they can be used. Once these descriptions are set up, the agent can interact with objects in both virtual and augmented reality. The goal of the implemented solution is to enable the agents to demonstrate activities in learning environments. The implementation can be demonstrated in a mixed-reality-enhanced learning use case.
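One possible shape for such a data description, assuming Unity, is a ScriptableObject asset that the editors produce and the agent framework consumes; the field names below are illustrative assumptions covering the information listed above (place, conditions, animations, follow-up actions):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical data description for one interaction affordance.
// A ScriptableObject lets editor tooling create, edit, and save
// these descriptions as assets for the agent framework to load.
[CreateAssetMenu(menuName = "SmartObjects/InteractionDescription")]
public class InteractionDescription : ScriptableObject
{
    public string actionName;            // e.g. "openDoor"
    public Vector3 interactionPosition;  // place where the agent performs the action
    public List<string> preconditions;   // conditions to execute the action, e.g. "doorUnlocked"
    public string animationTrigger;      // animation the agent plays during the interaction
    public List<InteractionDescription> followUpActions; // actions executed afterwards
}
```

For the augmented reality case, the same description could be attached to a spatial anchor that the user tags in the mixed reality editor, so that a real-world object carries the same affordance data as a virtual one.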

Prerequisites

Must: Good knowledge of Unity and C#
Beneficial: Experience with mixed reality applications

Related projects
