Explainable Data – Trust, Transparency and Bias Mitigation in ML

February 8th, 2024

This bachelor thesis examines the critical intersection of trust, transparency, and bias mitigation in machine learning (ML) systems through the lens of explainable data. The proliferation of ML algorithms across many domains has underscored the importance of understanding how these systems reach their decisions, especially when those decisions affect individuals or shape societal outcomes.

Thesis Type: Bachelor
Student: Tala Alloush
Status: Running
Presentation room: Seminar room I5 6202
Supervisor(s): Stefan Decker
Advisor(s): Michal Slupczynski
Contact: slupczynski@dbis.rwth-aachen.de

Drawing on foundational concepts of explainability, this thesis will explore the development of an RDF-based ontology as a means to describe and enhance the interpretability of ML models. Translating the intricate reasoning traces of these models into simple, understandable language could empower stakeholders to comprehend the processes driving ML decisions and foster a more robust human-data interaction.
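As a rough illustration of the kind of machine-readable explanation such an ontology could capture (in the spirit of the Explanation Ontology listed under the relevant materials below), the following Python sketch uses rdflib to attach a plain-language summary to a single model decision. The namespace, class names, and properties are illustrative placeholders, not the ontology the thesis will develop.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Illustrative namespace only -- not the ontology developed in the thesis.
EX = Namespace("http://example.org/explainable-data#")

g = Graph()
g.bind("ex", EX)

# A hypothetical ML model, one decision it produced, and an explanation of that decision.
model = EX.LoanApprovalModel
decision = EX.Decision_42
explanation = EX.Explanation_42

g.add((model, RDF.type, EX.MLModel))
g.add((decision, RDF.type, EX.Decision))
g.add((decision, EX.producedBy, model))
g.add((explanation, RDF.type, EX.Explanation))
g.add((explanation, EX.explains, decision))
g.add((explanation, EX.plainLanguageSummary, Literal(
    "The application was declined because the debt-to-income ratio "
    "exceeded the threshold the model has learned.", lang="en")))

# Serialize the explanation graph as Turtle so humans and tools can inspect it.
print(g.serialize(format="turtle"))
```

Modelling explanations as resources in their own right (rather than free-text log entries) is what would allow them to be linked to decisions, models, and datasets and queried later.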

Building on existing research in the field, this thesis will investigate the role of stakeholder involvement in shaping data-driven ML systems, emphasizing human-data interaction as a prerequisite for ethical and fair outcomes. It will also examine how feature engineering, data science, and data mining techniques affect model transparency and bias mitigation.
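To make the notion of measuring bias concrete, one widely used group-fairness check is the demographic parity difference: the gap in positive-prediction rates between two groups defined by a protected attribute. The sketch below is a minimal, illustrative implementation, not a method prescribed by the thesis; the function name and toy data are assumptions for demonstration.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: binary protected attribute (0/1).
    A value close to 0 means both groups receive positive predictions at
    similar rates; larger values point to a potential disparity.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy data: eight predictions, four per group -> disparity of 0.5.
print(demographic_parity_difference(
    y_pred=[1, 0, 1, 1, 0, 0, 1, 0],
    group=[0, 0, 0, 0, 1, 1, 1, 1]))
```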

Furthermore, the thesis will address key principles of FAIR data and data provenance, emphasizing the need for data transparency, accessibility, and reproducibility. By integrating these principles into the development of the RDF-based ontology, the thesis aims to enhance the traceability and reproducibility of ML processes, thereby bolstering trust in these systems.
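One established way to record such provenance is the W3C PROV-O vocabulary, which an RDF-based ontology could reuse or align with. The sketch below, again using rdflib, records that a hypothetical training run used a particular dataset version and generated a model; the specific resource names are illustrative only.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# PROV-O is the W3C provenance vocabulary; EX is an illustrative namespace.
PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/explainable-data#")

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

# A training run (activity) that used a dataset version and generated a model.
g.add((EX.TrainingRun_1, RDF.type, PROV.Activity))
g.add((EX.CreditDataset_v3, RDF.type, PROV.Entity))
g.add((EX.LoanApprovalModel, RDF.type, PROV.Entity))
g.add((EX.TrainingRun_1, PROV.used, EX.CreditDataset_v3))
g.add((EX.LoanApprovalModel, PROV.wasGeneratedBy, EX.TrainingRun_1))
g.add((EX.TrainingRun_1, PROV.endedAtTime,
       Literal("2024-02-08T12:00:00", datatype=XSD.dateTime)))

# The resulting graph can later be queried to trace which data produced which model.
print(g.serialize(format="turtle"))
```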

Overall, this thesis seeks to contribute to the ongoing discourse surrounding the responsible deployment of ML technologies by providing insights into how explainable data can promote trust, transparency, and fairness in decision-making processes.

Potentially relevant materials:

  • Chari, S., Seneviratne, O., Gruen, D.M., Foreman, M.A., Das, A.K., McGuinness, D.L. (2020). Explanation Ontology: A Model of Explanations for User-Centered AI. In: Pan, J.Z., et al. (eds.) The Semantic Web – ISWC 2020. Lecture Notes in Computer Science, vol. 12507. Springer, Cham. https://doi.org/10.1007/978-3-030-62466-8_15

Prerequisites:
  • RDF
  • Literature research