Open Access

Security and Safety, Volume 3, 2024
Issue: Security and Safety in Artificial Intelligence

Article Number: 2024011
Number of page(s): 27
Section: Information Network
DOI: https://doi.org/10.1051/sands/2024011
Published online: 20 October 2024