Thesis Type:
Status: Finished
Submitted in: 2022
Presentation on: 08/02/2022 10:30 am
Supervisor(s): Stefan Decker
Advisor(s): Yongli Mou
Contact: mou@dbis.rwth-aachen.de
Data silos and data privacy are two major challenges for standard machine learning approaches. Federated learning is a distributed, privacy-preserving machine learning paradigm whose main idea is to collaboratively train machine learning models on data distributed across multiple devices while preventing data leakage.
However, current federated learning systems are mostly orchestrated in a centralized architecture, in which a central server coordinates all participating nodes during the learning process. Such systems are vulnerable to malicious attacks, e.g., poisoning attacks, backdoor attacks, and inference attacks. The objective of this thesis is to design and implement a trust mechanism in a federated learning system that prevents Sybil-based poisoning attacks.
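To make the setting concrete, the sketch below shows a minimal, hypothetical server-side aggregation step: instead of plain federated averaging, each client's update is down-weighted when it is suspiciously similar to another client's update, since Sybil clones tend to submit near-identical contributions (an idea along the lines of similarity-based defenses such as FoolsGold). This is an illustrative assumption, not the mechanism developed in the thesis; all function names are invented for this example.

```python
import numpy as np

def pairwise_cosine(updates):
    # Cosine similarity between each pair of flattened client updates.
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    unit = updates / np.clip(norms, 1e-12, None)
    return unit @ unit.T

def trust_weights(updates):
    # Assign each client a trust weight that shrinks as its update
    # becomes more similar to its closest other client (Sybil clones
    # submit near-identical updates and are thus down-weighted).
    sim = pairwise_cosine(updates)
    np.fill_diagonal(sim, 0.0)
    max_sim = sim.max(axis=1)              # similarity to closest neighbour
    w = 1.0 - np.clip(max_sim, 0.0, 1.0)
    total = w.sum()
    return w / total if total > 0 else np.full(len(updates), 1.0 / len(updates))

def aggregate(client_updates):
    # Trust-weighted aggregation of client model updates.
    updates = np.asarray(client_updates, dtype=float)
    return trust_weights(updates) @ updates
```

For example, with two identical (Sybil-like) updates `[1, 0]` and one distinct honest update `[0, 1]`, the clones receive weight 0 and the aggregate equals the honest update; with fully dissimilar updates the scheme reduces to a plain average.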
If you are interested in this thesis or a related topic, or if you have additional questions, please do not hesitate to send a message to mou@dbis.rwth-aachen.de.