Open Access

Security and Safety, Volume 2, 2023
Issue: Security and Safety in Unmanned Systems
Article Number: 2023006
Number of pages: 13
Section: Information Network
DOI: https://doi.org/10.1051/sands/2023006
Published online: 30 June 2023