Security and Safety, Volume 4, 2025
Security and Safety of Data in Cloud Computing
Article Number 2024017
Number of pages 4
Section Other Fields
DOI https://doi.org/10.1051/sands/2024017
Published online 30 January 2025
