The aim of this thesis is to evaluate and extend a developing ontology of explainable data principles, an ongoing effort to establish a structured framework for Data Explainability in AI systems. The current version of this ontology is in an early stage, focused primarily on defining key principles of Data Explainability and exploring their role in enhancing trust and transparency in AI. This thesis will build on that preliminary work by refining the ontology's structure, proposing new principles or dimensions where needed, and assessing its applicability across various AI domains, with the goal of making it a robust foundation for responsible AI deployment.
Thesis Type |
Status | Open
Supervisor(s) | Stefan Decker
Advisor(s) | Michal Slupczynski
Contact | slupczynski@dbis.rwth-aachen.de
Background and Related Work:
Explainability is increasingly recognized as essential in AI, particularly in contexts where decisions affect individuals and society. Existing research highlights the link between explainability and trust, noting that transparent data practices can significantly impact user confidence in AI systems. Foundational work in explainable AI (XAI) by researchers like Doshi-Velez and Kim has outlined concepts such as interpretability, traceability, and fairness, underscoring the importance of user-centered explainability frameworks. The ongoing work on the ontology for Data Explainability seeks to capture these and related principles in a structured format. However, this ontology remains a work in progress, currently limited in scope and coverage.
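To make the idea of capturing such principles "in a structured format" concrete, the following is a minimal, illustrative sketch of how a fragment of a Data Explainability ontology might be modeled as a concept hierarchy with subsumption links. The concept names and the parent-child relations shown here are assumptions for illustration only; they are not taken from the ontology under development.

```python
# Hypothetical fragment of a Data Explainability concept hierarchy.
# Each entry maps a concept to its broader (parent) concept; the root
# concept has no parent. All names are illustrative assumptions.
PRINCIPLES = {
    "DataExplainability": None,        # root concept
    "Interpretability": "DataExplainability",
    "Traceability": "DataExplainability",
    "Fairness": "DataExplainability",
    "Provenance": "Traceability",      # a more specific sub-principle
}

def ancestors(concept):
    """Return the chain of broader concepts from `concept` up to the root."""
    chain = []
    parent = PRINCIPLES.get(concept)
    while parent is not None:
        chain.append(parent)
        parent = PRINCIPLES.get(parent)
    return chain

print(ancestors("Provenance"))  # → ['Traceability', 'DataExplainability']
```

In practice an ontology like this would be expressed in a standard representation such as OWL or SKOS rather than a plain dictionary; the sketch only shows the kind of subsumption structure the thesis would refine and evaluate.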
This thesis aims to build on this initial ontology, filling in gaps and expanding it to make it applicable across a range of AI applications. Through an iterative evaluation and extension process, this research will improve the ontology’s relevance, accessibility, and practical utility.
Expected Contribution:
This thesis will contribute a refined and validated ontology of explainable data principles, transforming it from a preliminary framework into a robust tool that promotes transparency and trust in AI systems. By offering an improved structure for Data Explainability, this work will lay a stronger foundation for future research, policy-making, and development of explainable and responsible AI tools.
If you are interested in this thesis or a related topic, or if you have additional questions, please do not hesitate to send a message to slupczynski@dbis.rwth-aachen.de.
Please apply with a meaningful CV and a recent transcript of your academic performance.
- Skills in qualitative and quantitative research methods
  Ability to conduct expert interviews, user studies, and surveys for evaluating ontology effectiveness.
- Experience in machine learning and data science
  Understanding of data processing and machine learning workflows that rely on data explainability.
- (nice to have) Background in ontology development and evaluation
  Basic familiarity with ontology structuring, evaluation methods, and alignment techniques.