Trust Mechanism against Poisoning Attacks in Federated Learning

Thesis type: Master
Status: Running
Supervisor(s):
Advisor(s):

Federated learning is a privacy-preserving machine learning paradigm in which many clients collaboratively train a model under the orchestration of a central server while keeping the training data decentralized. Due to its distributed nature, however, federated learning is highly vulnerable to adversarial manipulation by malicious clients. For example, malicious clients can corrupt the global model by poisoning their local training data (data poisoning attacks) or the model updates they send to the server (model poisoning attacks). Existing federated learning methods, such as FedAvg, offer little protection against such sophisticated poisoning attacks.


The goal of this thesis is to design a new federated learning method that is robust against both model poisoning and data poisoning attacks. Ideally, the method should also support online and offline training. Its performance against various types of attacks will be evaluated and systematically compared with state-of-the-art trust mechanisms.


If you are interested in this thesis or a related topic, or if you have additional questions, please do not hesitate to send a message to mou@dbis.rwth-aachen.de.

Prerequisites

Basic knowledge of machine learning and general statistics is essential.
Strong programming skills in languages such as C++, Java, or Python are required.
Experience with deep learning frameworks is preferred.
