| Issue | Security and Safety, Volume 2, 2023: Security and Safety in Unmanned Systems |
|---|---|
| Article Number | 2023006 |
| Number of page(s) | 13 |
| Section | Information Network |
| DOI | https://doi.org/10.1051/sands/2023006 |
| Published online | 30 June 2023 |
Research Article
MPHM: Model poisoning attacks on federal learning using historical information momentum
1 School of Cyber Science and Engineering, Zhengzhou University; SongShan Laboratory, Zhengzhou 450000, China
2 College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
3 School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou 450000, China
4 Zhengzhou Zhengda Information Technology Co., Ltd, Zhengzhou 450001, China
* Corresponding author (email: yfgao@zzu.edu.cn)
Received: 22 December 2022
Revised: 9 March 2023
Accepted: 20 April 2023
Federated learning (FL) has developed rapidly as individuals and industry place growing emphasis on data privacy. Federated learning allows participants to jointly train a global model without sharing their local data, which significantly enhances data privacy. However, federated learning is vulnerable to poisoning attacks by malicious participants: because the server has no access to the participants' training process, attackers can compromise the global model by uploading carefully crafted malicious local updates under the guise of normal participants. Existing model poisoning attacks usually add small perturbations to the trained local model to craft harmful local updates, with the attacker searching for a perturbation size large enough to corrupt the global model while still bypassing robust detection methods. In contrast, we propose a novel model poisoning attack based on the momentum of historical information (MPHM): the attacker crafts new malicious updates by dynamically constructing perturbations from the historical information of local training, which makes the malicious updates more effective and stealthy. Our attack aims to indiscriminately reduce the testing accuracy of the global model using minimal information. Experiments show that, under classical defenses, our attack degrades the accuracy of the global model significantly more than other advanced poisoning attacks.
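To make the idea of a momentum-based poisoning step concrete, the following is a minimal Python sketch of an attacker perturbing its local update along a running momentum of historical information. It is illustrative only: the abstract does not give the exact MPHM formulation, so the function name, the hyperparameters `beta` and `gamma`, and the choice of perturbation direction are assumptions.

```python
# Illustrative sketch of a momentum-style model poisoning step (not the
# authors' exact MPHM algorithm; names and hyperparameters are assumed).
import numpy as np

def craft_malicious_update(benign_update, history_momentum, beta=0.9, gamma=1.0):
    """Craft a malicious local update from the attacker's benign one.

    benign_update    : np.ndarray, the honestly computed local update
    history_momentum : np.ndarray, running momentum of past local/global updates
    beta             : momentum decay factor (assumed hyperparameter)
    gamma            : perturbation scale, chosen to evade robust aggregation
                       while maximizing damage (assumed hyperparameter)
    """
    # Fold the latest benign direction into the historical momentum
    # (exponential moving average over rounds).
    momentum = beta * history_momentum + (1.0 - beta) * benign_update
    # Perturb against the momentum direction to push the global model away
    # from the benign optimization trajectory, normalized to a fixed scale.
    perturbation = -gamma * momentum / (np.linalg.norm(momentum) + 1e-12)
    return benign_update + perturbation, momentum
```

In this sketch the attacker keeps the perturbation norm bounded by `gamma`, reflecting the trade-off described above between corrupting the global model and staying below a defense's detection threshold.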
Key words: Federated learning / Poisoning attacks / Security / Privacy
Citation: Chen Z, Shi YC, et al. MPHM: Model poisoning attacks on federal learning using historical information momentum. Security and Safety 2023; 2: 2023006. https://doi.org/10.1051/sands/2023006
© The Author(s) 2023. Published by EDP Sciences and China Science Publishing & Media Ltd.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.