
A Cognitive Modeling-Based Architecture for Realistic Agents in Mixed Reality

December 16th, 2021

Courses in higher education, e.g. at universities, face a challenge regarding scalability. Ideally, every student should be able to contact a mentor to benefit from the mentor’s experience and to reach the learning goals. However, as the number of participants in a course rises, the limited resources of the institution are quickly exhausted. This leads to a high workload for academic staff and decreases the mentoring quality, since mentors have less time to address the individual needs of the students. A solution to this problem is socio-technical support for mentoring processes, which combines social processes like peer mentoring with technological processes, e.g. for student feedback. As a result, text-based chatbots were created which can answer students’ questions and give feedback on exercises. We would now like to enhance the interaction with such bots by upgrading them to Mixed Reality agents. Such agents are shown as avatars in a Mixed Reality environment and can interact with virtual content, users and other bots. They form a natural user interface: students can talk to the agent to get advice, and the agent can also make autonomous decisions to guide the learning process. If the agent cannot answer a question, it can call a human mentor to join the conversation. This system considerably lowers the workload of the mentors while improving the mentoring experience of the students.
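To illustrate the intended division of work between the bot and human mentors, the following minimal C# sketch shows an answer-or-escalate loop. The names IChatbotBackend, IMentorService and MentoringAgent are hypothetical placeholders chosen for this example and do not refer to an existing implementation.

```csharp
// Minimal sketch of the answer-or-escalate loop described above.
// IChatbotBackend and IMentorService are assumed names, not an existing API.
using System.Threading.Tasks;

public interface IChatbotBackend
{
    // Returns an answer for the question, or null if the bot is not confident.
    Task<string> AnswerAsync(string question);
}

public interface IMentorService
{
    // Invites a human mentor into the running Mixed Reality session.
    Task InviteMentorAsync(string unansweredQuestion);
}

public class MentoringAgent
{
    private readonly IChatbotBackend chatbot;
    private readonly IMentorService mentors;

    public MentoringAgent(IChatbotBackend chatbot, IMentorService mentors)
    {
        this.chatbot = chatbot;
        this.mentors = mentors;
    }

    // Called with the transcribed speech input of a student.
    public async Task<string> HandleQuestionAsync(string question)
    {
        string answer = await chatbot.AnswerAsync(question);
        if (answer != null)
        {
            return answer; // spoken back to the student via text-to-speech
        }

        // The agent cannot answer: hand the conversation over to a human mentor.
        await mentors.InviteMentorAsync(question);
        return "Let me bring in a human mentor who can help with this.";
    }
}
```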

Thesis Type
  • Bachelor
Student
Dascha Blehm
Status
Finished
Submitted in
2021
Proposal on
11/06/2021 11:00 am
Presentation on
16/11/2021 1:00 pm
Supervisor(s)
Ralf Klamma
Stefan Decker
Advisor(s)
Benedikt Hensen
Contact
hensen@dbis.rwth-aachen.de

In this thesis, the foundation for such Mixed Reality mentors should be created. Based on existing implementations of chatbots and Mixed Reality avatars, a framework should be created that defines the behavior of the agents. An architecture should be designed, implemented and tested in which mentor knowledge can be queried based on the students’ speech input. One possible structure is the cognitive modeling approach by John Funge, which divides an agent into layers of abstraction: the lower layers provide low-level functionality such as geometry and motion, and the topmost cognitive layer builds complex behavior on top of them. The result of a query should be output as spoken language. At the same time, background procedures should steer the agent’s gestures and body language and coordinate them with the spoken content.
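As an illustration of how such a layered structure could be organized in C#, the sketch below separates a low-level motor layer and a behavioral layer from a cognitive layer that coordinates the spoken answer with a fitting gesture. All names (IMotorLayer, IBehaviorLayer, CognitiveLayer and their methods) are assumptions made for this example rather than an existing API; an actual design would follow Funge’s hierarchy (geometric, kinematic, physical, behavioral, cognitive) in more detail.

```csharp
// Rough sketch of a layered agent structure following Funge's cognitive modeling idea.
// Layer names and methods are illustrative assumptions, not a fixed design.
using System.Threading.Tasks;

// Low-level layer: how the avatar is animated and voiced in the Mixed Reality scene.
public interface IMotorLayer
{
    void PlayGesture(string gestureName);   // e.g. pointing at virtual content
    Task SpeakAsync(string text);           // text-to-speech output
}

// Behavioral layer: short-term reactions such as turning towards the speaking student.
public interface IBehaviorLayer
{
    void ReactTo(string speechInput);
}

// Cognitive layer on top: decides what to say and coordinates speech with body language.
public class CognitiveLayer
{
    private readonly IBehaviorLayer behavior;
    private readonly IMotorLayer motor;

    public CognitiveLayer(IBehaviorLayer behavior, IMotorLayer motor)
    {
        this.behavior = behavior;
        this.motor = motor;
    }

    // Speaks a generated answer while a background procedure selects a matching gesture.
    public async Task PresentAnswerAsync(string answer)
    {
        behavior.ReactTo(answer);

        // Very simple placeholder for the background procedures that match gestures
        // to the spoken content; a real implementation would analyze the text in depth.
        string gesture = answer.EndsWith("?") ? "InvitingGesture" : "ExplainingGesture";
        motor.PlayGesture(gesture);

        await motor.SpeakAsync(answer);
    }
}
```

Keeping gesture selection in a background routine of the cognitive layer leaves the lower layers reusable for other agents, which matches the idea of building complex cognitive behavior on top of generic low-level functionality.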


Prerequisites:

Must: Good knowledge of C#
Beneficial: Experience with the Unity 3D engine, Java, artificial intelligence, 3D modeling