| Issue | Security and Safety, Volume 4, 2025: Security and Safety in Network Simulation and Evaluation |
|---|---|
| Article Number | 2025006 |
| Number of page(s) | 14 |
| Section | Information Network |
| DOI | https://doi.org/10.1051/sands/2025006 |
| Published online | 28 July 2025 |
- Chen Z. Research on internet security situation awareness prediction technology based on improved RBF neural network algorithm. J Comput Cognit Eng 2022; 1: 103–08.
- Verma R, Kumari A, Anand A et al. Revisiting shift cipher technique for amplified data security. J Comput Cognit Eng 2024; 3: 8–14.
- Zheng Y, Li Z, Xu X et al. Dynamic defenses in cyber security: Techniques, methods and challenges. Digital Commun Networks 2022; 8: 422–35.
- Chen Z, Kang F, Xiong X et al. A survey on penetration path planning in automated penetration testing. Appl Sci 2024; 14: 8355.
- Han X, Pasquier T, Seltzer M. Provenance-based intrusion detection: Opportunities and challenges. In: 10th USENIX Workshop on the Theory and Practice of Provenance (TaPP 2018), 2018.
- Li Z, Chen QA, Yang R et al. Threat detection and investigation with system-level provenance graphs: A survey. Comput Secur 2021; 106: 102282.
- Stefinko Y, Piskozub A, Banakh R. Manual and automated penetration testing. Benefits and drawbacks. Modern tendency. In: 2016 13th International Conference on Modern Problems of Radio Engineering, Telecommunications and Computer Science (TCSET), IEEE, 2016, 488–91.
- Zhou S, Liu J, Zhou X et al. Intelligent penetration testing path discovery based on deep reinforcement learning. Comput Sci 2021; 48: 40–6 (in Chinese).
- Bertoglio DD, Gil A, Acosta J et al. Towards new challenges of modern pentest. In: International Conference on WorldS4, Singapore, Springer Nature Singapore, 2023, 21–33.
- Chen K, Lu H, Fang BX et al. Survey on automated penetration testing technology research. Ruan Jian Xue Bao/J Software 2024; 35: 2268–88 (in Chinese).
- Polatidis N, Pavlidis M, Mouratidis H. Cyber-attack path discovery in a dynamic supply chain maritime risk management system. Comput Standards Interfaces 2018; 56: 74–82.
- Yu Z, Li S, Bai Y et al. REMSF: A robust ensemble model of malware detection based on semantic feature fusion. IEEE Int Things J 2023; 10: 16134–143.
- Rehman MU, Ahmadi H, Hassan WU. FLASH: A comprehensive approach to intrusion detection via provenance graph representation learning. In: 2024 IEEE Symposium on Security and Privacy (SP), IEEE Computer Society, 2024, 139–39.
- Chen T, Dong C, Lv M et al. APT-KGL: An intelligent APT detection system based on threat knowledge and heterogeneous provenance graph learning. IEEE Trans Dependable Secure Comput 2022, 1–15.
- Wang S, Wang Z, Zhou T et al. Threatrace: Detecting and tracing host-based threats in node level through provenance graph learning. IEEE Trans Inform Forensics Secur 2022; 17: 3972–987.
- Gao Y, Li X, Peng H et al. HinCTI: A cyber threat intelligence modeling and identification system based on heterogeneous information network. IEEE Trans Knowledge Data Eng 2020; 34: 708–22.
- Silver D, Huang A, Maddison CJ et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016; 529: 484–89.
- Vinyals O, Babuschkin I, Czarnecki WM et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 2019; 575: 350–54.
- Zhang HJ, Zhao J, Wang R et al. Multi-objective reinforcement learning algorithm and its application in drive system. In: Proc. 34th Annu. IEEE Conf. Ind. Electron., Orlando, FL, USA, 2008, 274–79.
- Tang C, Abbatematteo B, Hu J et al. Deep reinforcement learning for robotics: A survey of real-world successes. In: Proceedings of the AAAI Conference on Artificial Intelligence 2025; 39: 28694–698.
- Arulkumaran K, Deisenroth MP, Brundage M et al. Deep reinforcement learning: A brief survey. IEEE Signal Process Mag 2017; 34: 26–38.
- Sutton RS, Barto AG. Reinforcement Learning: An Introduction, MIT Press, 2018.
- Greenwald L, Shanley R. Automated planning for remote penetration testing. In: Proc. of the 2009 IEEE Military Communications Conf., Boston, IEEE, 2009, 1–7.
- Alhamed M, Rahman MMH. A systematic literature review on penetration testing in networks: Future research directions. Appl Sci 2023; 13: 6986.
- Watkins CJCH, Dayan P. Q-learning. Mach Learn 1992; 8: 279–92.
- Zennaro FM, Erdodi L. Modeling penetration testing with reinforcement learning using capture-the-flag challenges: Trade-offs between model-free learning and a priori knowledge. arXiv preprint arXiv:2005.12632, 2020.
- Ou X, Govindavajhala S, Appel AW. MulVAL: A logic-based network security analyzer. In: Proceedings of the 14th USENIX Security Symposium, Baltimore, MD, USA, 31 July–5 August 2005, 8.
- Yousefi M, Mtetwa N, Zhang Y et al. A reinforcement learning approach for attack graph analysis. In: 2018 17th IEEE International Conference on Trust, Security and Privacy in Computing and Communications/12th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE), IEEE, 2018, 212–17.
- Erdödi L, Sommervoll ÅÅ, Zennaro FM. Simulating SQL injection vulnerability exploitation using Q-learning reinforcement learning agents. J Inform Secur Appl 2021; 61: 102903.
- Zhou T, Zang Y, Zhu J et al. NIG-AP: A new method for automated penetration testing. Front Inform Technol Electron Eng 2019; 20: 1277–88.
- Puterman ML. Markov Decision Processes: Discrete Stochastic Dynamic Programming, John Wiley & Sons, 2014.
- Mnih V, Kavukcuoglu K, Silver D et al. Human-level control through deep reinforcement learning. Nature 2015; 518: 529–33.
- Mnih V, Kavukcuoglu K, Silver D et al. Playing Atari with deep reinforcement learning, arXiv preprint arXiv:1312.5602, 2013.
- Li Q, Hu M, Hao H et al. INNES: An intelligent network penetration testing model based on deep reinforcement learning. Appl Intell 2023; 53: 27110–127.
- Hu Z, Beuran R, Tan Y. Automated penetration testing using deep reinforcement learning. In: 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), IEEE, 2020, 2–10.
- Sarraute C, Buffet O, Hoffmann J. POMDPs make better hackers: Accounting for uncertainty in penetration testing. Proc AAAI Conf Artif Intell 2012; 26: 1816–24.
- Shmaryahu D, Shani G, Hoffmann J et al. Simulated penetration testing as contingent planning. Proc Int Conf Automated Planning Sched 2018; 28: 241–49.
- Ghanem MC, Chen TM. Reinforcement learning for intelligent penetration testing. In: 2018 Second World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4), IEEE, 2018, 185–92.
- Schwartz J, Kurniawati H, El-Mahassni E. POMDP + information-decay: Incorporating defender's behaviour in autonomous penetration testing. Proc Int Conf Automated Planning Sched 2020; 30: 235–43.
- Kaelbling LP, Littman ML, Cassandra AR. Planning and acting in partially observable stochastic domains. Artif Intell 1998; 101: 99–134.
- Coulom R. Efficient selectivity and backup operators in Monte-Carlo tree search. In: International Conference on Computers and Games, Berlin, Heidelberg, Springer Berlin Heidelberg, 2006, 72–83.
- Mikolov T, Chen K, Corrado G et al. Efficient estimation of word representations in vector space, arXiv preprint arXiv:1301.3781, 2013.
- Maaten L, Hinton G. Visualizing data using t-SNE. J Mach Learn Res 2008; 9: 2579–605.
- Hu T, Zang Y, Cao R et al. Research on attack path discovery algorithm based on multi-heuristic information fusion. J Cyber Secur 2021; 6: 202–11 (in Chinese).
- Deng G, Liu Y, Mayoral-Vilches V et al. PentestGPT: Evaluating and harnessing large language models for automated penetration testing. In: 33rd USENIX Security Symposium (USENIX Security 24), 2024, 847–64.
- Shen X, Wang L, Li Z et al. PentestAgent: Incorporating LLM agents to automated penetration testing, arXiv preprint arXiv:2411.05185, 2024.
- Hasegawa K, Hidano S, Fukushima K. AutoRed: Automating red team assessment via strategic thinking using reinforcement learning. In: Proceedings of the Fourteenth ACM Conference on Data and Application Security and Privacy, 2024, 325–36.
- Singh AV, Rathbun E, Graham E et al. Hierarchical multi-agent reinforcement learning for cyber network defense, arXiv preprint arXiv:2410.17351, 2024.
- Schmid M, Moravcik M, Burch N et al. Student of games: A unified learning algorithm for both perfect and imperfect information games. Sci Adv 2023; 9: eadg3256.