Security and Safety, Volume 3, 2024
Issue: Security and Safety in Artificial Intelligence
Article Number: 2024002
Number of pages: 26
Section: Intelligent Transportation
DOI: https://doi.org/10.1051/sands/2024002
Published online: 18 March 2024
  1. LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition. Proc IEEE 1998; 86: 2278–2324.
  2. Krizhevsky A, Sutskever I and Hinton GE. ImageNet classification with deep convolutional neural networks. In: Annual Conference on Neural Information Processing Systems (NeurIPS), 2012, 1097–1105.
  3. He K, Zhang X, Ren S, et al. Deep residual learning for image recognition. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2016, 770–778.
  4. Huang G, Liu Z, Van Der Maaten L, et al. Densely connected convolutional networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2017, 4700–4708.
  5. Hu J, Shen L and Sun G. Squeeze-and-excitation networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, 7132–7141.
  6. Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks. In: Annual Conference on Neural Information Processing Systems (NeurIPS), 2015, 91–99.
  7. Liu W, Anguelov D, Erhan D, et al. SSD: Single shot multibox detector. In: European Conference on Computer Vision (ECCV), 2016, 21–37.
  8. Redmon J, Divvala S, Girshick R, et al. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2016, 779–788.
  9. He K, Gkioxari G, Dollár P, et al. Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, 2961–2969.
  10. Redmon J and Farhadi A. YOLO9000: Better, faster, stronger. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2017, 7263–7271.
  11. Redmon J and Farhadi A. YOLOv3: An incremental improvement, arXiv preprint https://arxiv.org/abs/1804.02767, 2018.
  12. Bochkovskiy A, Wang CY and Liao HYM. YOLOv4: Optimal speed and accuracy of object detection, arXiv preprint https://arxiv.org/abs/2004.10934, 2020.
  13. Long J, Shelhamer E and Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2015, 3431–3440.
  14. Zhao H, Shi J, Qi X, et al. Pyramid scene parsing network. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2017, 2881–2890.
  15. Chen LC, Papandreou G, Kokkinos I, et al. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans Pattern Anal Mach Intell 2017; 40: 834–848.
  16. He K, Zhang X, Ren S, et al. Identity mappings in deep residual networks. In: European Conference on Computer Vision (ECCV), Springer, 2016, 630–645.
  17. Simonyan K and Zisserman A. Very deep convolutional networks for large-scale image recognition. In: 3rd International Conference on Learning Representations (ICLR), 2015.
  18. Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks. In: 2nd International Conference on Learning Representations (ICLR), 2014.
  19. Goodfellow IJ, Shlens J and Szegedy C. Explaining and harnessing adversarial examples. In: 3rd International Conference on Learning Representations (ICLR), 2015.
  20. Zhai R, Cai T, He D, et al. Adversarially robust generalization just requires more unlabeled data, arXiv preprint https://arxiv.org/abs/1906.00555, 2019.
  21. Alayrac JB, Uesato J, Huang PS, et al. Are labels required for improving adversarial robustness? In: Annual Conference on Neural Information Processing Systems (NeurIPS), 2019, 12192–12202.
  22. Najafi A, Maeda SI, Koyama M, et al. Robustness to adversarial perturbations in learning from incomplete data. In: Annual Conference on Neural Information Processing Systems (NeurIPS), 2019, 5542–5552.
  23. Carmon Y, Raghunathan A, Schmidt L, et al. Unlabeled data improves adversarial robustness. In: Annual Conference on Neural Information Processing Systems (NeurIPS), 2019, 11190–11201.
  24. Chen T, Kornblith S, Norouzi M, et al. A simple framework for contrastive learning of visual representations. In: International Conference on Machine Learning (ICML), PMLR, 2020, 1597–1607.
  25. Chen T, Kornblith S, Swersky K, et al. Big self-supervised models are strong semi-supervised learners. In: Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
  26. He K, Fan H, Wu Y, et al. Momentum contrast for unsupervised visual representation learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, 9726–9735.
  27. Chen X, Fan H, Girshick RB, et al. Improved baselines with momentum contrastive learning, arXiv preprint https://arxiv.org/abs/2003.04297, 2020.
  28. ISO. Road vehicles – Safety of the intended functionality. ISO/DIS 21448, International Organization for Standardization, 2021.
  29. Krizhevsky A and Hinton G. Learning Multiple Layers of Features from Tiny Images, 2009, http://www.cs.toronto.edu/~kriz/cifar.html
  30. LeCun Y and Cortes C. MNIST Handwritten Digit Database, 2010.
  31. Caesar H, Bankiti V, Lang AH, et al. nuScenes: A multimodal dataset for autonomous driving. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, 11621–11631.
  32. Geiger A, Lenz P, Stiller C, et al. Vision meets robotics: The KITTI dataset. Int J Robot Res 2013; 32: 1231–1237.
  33. Zou Z, Shi Z, Guo Y, et al. Object detection in 20 years: A survey, arXiv preprint https://arxiv.org/abs/1905.05055, 2019.
  34. Zhang S, Wen L, Bian X, et al. Single-shot refinement neural network for object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, 4203–4212.
  35. Girshick R, Donahue J, Darrell T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2014, 580–587.
  36. Lin TY, Goyal P, Girshick R, et al. Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, 2980–2988.
  37. Tanay T and Griffin LD. A boundary tilting perspective on the phenomenon of adversarial examples, arXiv preprint https://arxiv.org/abs/1608.07690, 2016.
  38. Akhtar N and Mian A. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access 2018; 6: 14410–14430.
  39. Papernot N, McDaniel P, Jha S, et al. The limitations of deep learning in adversarial settings. In: IEEE European Symposium on Security and Privacy (EuroS&P), IEEE, 2016, 372–387.
  40. Carlini N and Wagner D. Towards evaluating the robustness of neural networks. In: IEEE Symposium on Security and Privacy (S&P), IEEE, 2017, 39–57.
  41. Moosavi-Dezfooli SM, Fawzi A and Frossard P. DeepFool: A simple and accurate method to fool deep neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2016, 2574–2582.
  42. Baluja S and Fischer I. Adversarial transformation networks: Learning to generate adversarial examples, arXiv preprint https://arxiv.org/abs/1703.09387, 2017.
  43. Liu X, Yang H, Liu Z, et al. DPATCH: An adversarial patch attack on object detectors. In: Workshop on Artificial Intelligence Safety co-located with the Thirty-Third AAAI Conference on Artificial Intelligence, Volume 2301 of CEUR Workshop Proceedings, 2019.
  44. Saha A, Subramanya A, Patil K, et al. Role of spatial context in adversarial robustness for object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, 784–785.
  45. Hendrycks D and Dietterich TG. Benchmarking neural network robustness to common corruptions and perturbations. In: 7th International Conference on Learning Representations (ICLR), 2019.
  46. Michaelis C, Mitzkus B, Geirhos R, et al. Benchmarking robustness in object detection: Autonomous driving when winter is coming, arXiv preprint https://arxiv.org/abs/1907.07484, 2019.
  47. Caron M, Misra I, Mairal J, et al. Unsupervised learning of visual features by contrasting cluster assignments. In: Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
  48. Grill JB, Strub F, Altché F, et al. Bootstrap your own latent – A new approach to self-supervised learning. In: Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
  49. Miyato T, Maeda SI, Koyama M, et al. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Trans Pattern Anal Mach Intell 2018; 41: 1979–1993.
  50. Madry A, Makelov A, Schmidt L, et al. Towards deep learning models resistant to adversarial attacks. In: 6th International Conference on Learning Representations (ICLR), 2018.
  51. Blum A and Mitchell T. Combining labeled and unlabeled data with co-training. In: Proceedings of the 11th Annual Conference on Computational Learning Theory (COLT), 1998, 92–100.
  52. Lee DH. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In: ICML Workshop on Challenges in Representation Learning, 2013, 896.
  53. van den Oord A, Li Y and Vinyals O. Representation learning with contrastive predictive coding, arXiv preprint https://arxiv.org/abs/1807.03748, 2018.
  54. Deng J, Dong W, Socher R, et al. ImageNet: A large-scale hierarchical image database. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2009, 248–255.
  55. He T, Zhang Z, Zhang H, et al. Bag of tricks for image classification with convolutional neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, 558–567.
  56. Han W, Feng R, Wang L, et al. A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification. ISPRS J Photogramm Remote Sens 2018; 145: 23–43.
  57. Zoph B, Cubuk ED, Ghiasi G, et al. Learning data augmentation strategies for object detection. In: European Conference on Computer Vision (ECCV), 2020, 566–583.
  58. Cubuk ED, Zoph B, Mané D, et al. AutoAugment: Learning augmentation policies from data, arXiv preprint https://arxiv.org/abs/1805.09501, 2018.
  59. Everingham M, Van Gool L, Williams CKI, et al. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results, http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html
  60. Tian Z, Shen C, Chen H, et al. FCOS: Fully convolutional one-stage object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, 9627–9636.
  61. Duan K, Bai S, Xie L, et al. CenterNet: Keypoint triplets for object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, 6569–6578.
  62. Terven J and Cordova-Esparza D. A comprehensive review of YOLO: From YOLOv1 to YOLOv8 and beyond, arXiv preprint https://arxiv.org/abs/2304.00501, 2023.
  63. Tarvainen A and Valpola H. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In: Annual Conference on Neural Information Processing Systems (NeurIPS), 2017, 1195–1204.
  64. Liu YC, Ma CY, He Z, et al. Unbiased teacher for semi-supervised object detection. In: 9th International Conference on Learning Representations (ICLR), 2021.
  65. Kirillov A, Mintun E, Ravi N, et al. Segment anything, arXiv preprint https://arxiv.org/abs/2304.02643, 2023.
  66. Tsipras D, Santurkar S, Engstrom L, et al. Robustness may be at odds with accuracy. In: 7th International Conference on Learning Representations (ICLR), 2019.
  67. Ilyas A, Santurkar S, Tsipras D, et al. Adversarial examples are not bugs, they are features. In: Annual Conference on Neural Information Processing Systems (NeurIPS), 2019, 125–136.
  68. Kerbl B, Kopanas G, Leimkühler T, et al. 3D Gaussian splatting for real-time radiance field rendering. ACM Trans Graph 2023; 42: 1–14.
