Security and Safety, Volume 3, 2024
Security and Safety in Artificial Intelligence
Article Number: 2024005
Number of page(s): 17
Section: Digital Finance
DOI: https://doi.org/10.1051/sands/2024005
Published online: 21 June 2024
Research Article
VAEFL: Integrating variational autoencoders for privacy preservation and performance retention in federated learning
1 School of Computer Science, Fudan University, Shanghai, 200438, China
2 Institute of FinTech, Fudan University, Shanghai, 200438, China
* Corresponding author (email: yegn@fudan.edu.cn)
Received: 30 December 2023
Revised: 14 March 2024
Accepted: 24 April 2024
Federated Learning (FL) heralds a paradigm shift in the training of artificial intelligence (AI) models by fostering collaborative model training while safeguarding client data privacy. In sectors where data sensitivity and AI model security are paramount, such as fintech and biomedicine, maintaining model utility without compromising privacy is crucial as AI technologies see growing application; the adoption of FL is therefore attracting significant attention. However, traditional FL methods are susceptible to Deep Leakage from Gradients (DLG) attacks, and typical defensive strategies in current research, such as secure multi-party computation and differential privacy, often incur excessive computational costs or significant losses in model accuracy. To address DLG attacks in FL, this study introduces VAEFL, an FL framework that incorporates Variational Autoencoders (VAEs) to enhance privacy protection without undermining the predictive power of the models. VAEFL strategically partitions the model into a private encoder and a public decoder. The private encoder, which remains local, maps sensitive data into a privacy-preserving latent space, while the public decoder and classifier, trained collaboratively across clients, learn to derive accurate predictions from the encoded data. This bifurcation ensures that sensitive data attributes are not disclosed, thwarting gradient leakage attacks while still allowing the global model to benefit from the diverse knowledge in client datasets. Comprehensive experiments demonstrate that VAEFL not only surpasses standard FL benchmarks in privacy preservation but also maintains competitive performance on predictive tasks. VAEFL thus establishes a novel equilibrium between data privacy and model utility, offering a secure and efficient approach for sensitive applications of FL in the financial domain.
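The private-encoder / public-decoder partition described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the linear maps, the class names (`PrivateEncoder`, `PublicHead`), and the single averaging round are illustrative assumptions. The sketch shows only the key structural idea: clients share and average the public parameters, while each encoder's parameters never leave the client.

```python
import numpy as np

rng = np.random.default_rng(0)

class PrivateEncoder:
    """Client-local VAE encoder: maps raw data to a latent sample.
    Its parameters are never transmitted, so gradients over raw data
    are not exposed to the server. (Illustrative linear version.)"""
    def __init__(self, d_in, d_latent):
        self.W_mu = rng.normal(0, 0.1, (d_in, d_latent))
        self.W_logvar = rng.normal(0, 0.1, (d_in, d_latent))

    def encode(self, x):
        mu = x @ self.W_mu
        logvar = x @ self.W_logvar
        eps = rng.normal(size=mu.shape)
        return mu + np.exp(0.5 * logvar) * eps  # reparameterization trick

class PublicHead:
    """Shared decoder/classifier: only these parameters are sent
    to the server for federated averaging."""
    def __init__(self, d_latent, n_classes):
        self.W = rng.normal(0, 0.1, (d_latent, n_classes))

    def predict(self, z):
        return z @ self.W

def federated_average(heads):
    """FedAvg over the public heads only; private encoders stay local."""
    avg = np.mean([h.W for h in heads], axis=0)
    for h in heads:
        h.W = avg.copy()

# Three clients, each with its own private encoder and a copy of the public head.
d_in, d_latent, n_classes = 8, 4, 3
encoders = [PrivateEncoder(d_in, d_latent) for _ in range(3)]
heads = [PublicHead(d_latent, n_classes) for _ in range(3)]
federated_average(heads)

assert np.allclose(heads[0].W, heads[1].W)                   # public parameters synchronized
assert not np.allclose(encoders[0].W_mu, encoders[1].W_mu)   # private encoders remain distinct
```

In the actual VAEFL framework the encoder and decoder are deep networks trained with the usual VAE objective plus a classification loss; the sketch only captures which parameters cross the network boundary.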
Key words: Federated learning / variational autoencoders / deep leakage from gradients / AI model security / privacy preservation
Citation: Li Z, Liu Y, Li J, et al. VAEFL: Integrating variational autoencoders for privacy preservation and performance retention in federated learning. Security and Safety 2024; 3: 2024005. https://doi.org/10.1051/sands/2024005
© The Author(s) 2024. Published by EDP Sciences and China Science Publishing & Media Ltd.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.