The immense growth of scientific literature makes it nearly impossible for researchers to keep pace with all new developments in their domains. An automated scientific Question Answering (QA) system could substantially expedite literature review, hypothesis generation, and knowledge extraction. With the emergence of Large Language Models (LLMs) such as GPT, BERT, and their successors, the landscape of QA has shifted significantly.
| Thesis Type | |
| --- | --- |
| Status | Finished |
| Presentation room | Seminar room I5 6202 |
| Supervisor(s) | Stefan Decker |
| Advisor(s) | Yongli Mou |
| Contact | mou@dbis.rwth-aachen.de |
Large Language Models (LLMs) such as GPT-4 show great promise across many natural language processing tasks, including Question Answering (QA). However, they face several challenges when deployed for scientific QA. Hallucination refers to the phenomenon in which a model generates text that is factually incorrect, unsupported by its training data, or otherwise misaligned with reality. This can have serious consequences in scientific QA, which demands precision, accuracy, and reliability. Recently, retrieval-augmented generation (RAG) has emerged as a promising approach that combines the generative power of LLMs with external retrieval or search mechanisms.
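To make the RAG idea concrete, here is a minimal sketch of the retrieve-then-generate loop. The toy corpus, the bag-of-words retriever, and the `call_llm` stub are all illustrative assumptions, not part of this thesis; a real system would use a scientific corpus, learned embeddings, and an actual LLM API.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Corpus, retriever, and call_llm are hypothetical placeholders.
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Bag-of-words term counts (a stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    q = bow(query)
    return sorted(corpus, key=lambda p: cosine(q, bow(p)), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g., GPT-4); not implemented here."""
    return f"<LLM answer conditioned on>\n{prompt}"

def rag_answer(query: str, corpus: list[str]) -> str:
    """Retrieve supporting passages, then ask the LLM to answer grounded in them."""
    passages = retrieve(query, corpus)
    context = "\n".join(f"- {p}" for p in passages)
    prompt = (
        "Answer the question using ONLY the context below; "
        "say 'unknown' if the context is insufficient.\n"
        f"Context:\n{context}\nQuestion: {query}"
    )
    return call_llm(prompt)

corpus = [
    "Retrieval-augmented generation grounds LLM outputs in retrieved documents.",
    "Hallucination is the generation of text unsupported by sources.",
    "PyTorch is a deep learning framework.",
]
print(rag_answer("What is hallucination in LLMs?", corpus))
```

Constraining the prompt to the retrieved context (and allowing an explicit "unknown") is what lets retrieval reduce hallucination: the model is steered toward text it can ground rather than toward fluent but unsupported completions.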
The goals of this thesis are as follows:
- Literature Review: A comprehensive exploration of existing methods for scientific QA and of techniques to reduce hallucination in LLMs.
- Framework Development: Design and implementation of a scientific QA system using LLMs, both with and without retrieval augmentation (a comparison of the two variants is sketched below this list). This involves selecting an appropriate retrieval mechanism, understanding the data needs of scientific literature, and tailoring the model's training and fine-tuning processes.
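As a usage illustration, the two framework variants could be compared side by side. This reuses the hypothetical `call_llm`, `rag_answer`, and `corpus` from the sketch above; the harness is an assumption for illustration, not the thesis's prescribed evaluation method.

```python
# Hypothetical comparison of the two variants: LLM alone vs. LLM + retrieval.
# Reuses call_llm, rag_answer, and corpus from the RAG sketch above.
query = "What is hallucination in large language models?"

baseline = call_llm(f"Question: {query}")  # no retrieval: answer from model parameters alone
augmented = rag_answer(query, corpus)      # retrieval-augmented: answer grounded in passages

print("Without retrieval:\n", baseline)
print("\nWith retrieval:\n", augmented)
```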
If you are interested in this thesis, do not hesitate to contact us via mou@dbis.rwth-aachen.de.
- Knowledge of Deep Learning, Natural Language Processing, and Large Language Models
- Programming language: Python
- Deep Learning framework: PyTorch