Open Access

Security and Safety, Volume 3, 2024
Issue: Security and Safety in Artificial Intelligence
Article Number: 2024020
Number of page(s): 25
Section: Other Fields
DOI: https://doi.org/10.1051/sands/2024020
Published online: 31 October 2024
- Bengio Y, Hinton G and Yao A et al. Managing extreme AI risks amid rapid progress. Science 2024; 384: 842–845. [CrossRef] [PubMed] [Google Scholar]
- Wörsdörfer M. Mitigating the adverse effects of AI with the European Union’s artificial intelligence act: Hype or hope? Glob Bus Organ Excell 2024; 43: 106–126. [CrossRef] [Google Scholar]
- Heidemann L, Herd B et al. The European Artificial Intelligence Act. Fraunhofer IKS whitepaper, 2024. [Google Scholar]
- European Union. Regulation (EU) 2023/1230 of the European Parliament and of the Council of 14 June 2023 on machinery and repealing Directive 2006/42/EC of the European Parliament and of the Council and Council Directive 73/361/EEC. 2023; 66, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L:2023:165:FULL [Google Scholar]
- de Koning M, Machado T and Ahonen A et al. A comprehensive approach to safety for highly automated off-road machinery under Regulation 2023/1230. Safety Sci 2024; 175: 106517. [CrossRef] [Google Scholar]
- Castellanos-Ardila JP, Punnekkat S, Hansson H and Backeman P. Safety argumentation for machinery assembly control software. In: International Conference on Computer Safety, Reliability, and Security, Springer, 2024, pp. 251–266. [Google Scholar]
- Cao Y, An Y, Su S and Sun Y. Is the safety index of modern safety integrity level (SIL) truly appropriate for the railway? Accid Anal Prev 2023; 192: 107267. [CrossRef] [PubMed] [Google Scholar]
- Malm T, Venho-Ahonen O and Hietikko M et al. From risks to requirements: Comparing the assignment of functional safety requirements, 2015. [Google Scholar]
- Okoh P and Myklebust T. Mapping to IEC 61508 the hardware safety integrity of elements developed to ISO 26262. In: Safety and Reliability, Taylor & Francis, 2024, pp. 1–17. [Google Scholar]
- Diemert S, Millet L, Groves J and Joyce J. Safety integrity levels for artificial intelligence. In: International Conference on Computer Safety, Reliability, and Security, Springer, 2023, pp. 397–409. [Google Scholar]
- Dalrymple D, Skalse J and Bengio Y et al. Towards guaranteed safe AI: A framework for ensuring robust and reliable AI systems. ArXiv preprint [arXiv: https://arxiv.org/abs/2405.06624], 2024. [Google Scholar]
- Future of Life Institute. AI Governance Scorecard and Safety Standards Policy. Evaluating proposals for AI governance and providing a regulatory framework for robust safety standards, measures and oversight, 2023. https://futureoflife.org/wp-content/uploads/2023/11/FLI_Governance_Scorecard_and_Framework.pdf [Google Scholar]
- Abbasinejad R, Hourfar F, Kacprzak D, Almansoori A and Elkamel A. SIL calculation in gas processing plants based on systematic faults and level of maturity. Process Saf Environ Prot 2023; 174: 778–795. [CrossRef] [Google Scholar]
- Shubinsky I, Rozenberg E and Baranov L. Safety-critical railway systems. In: Reliability Modeling in Industry 4.0, Elsevier, 2023, pp. 83–122. [Google Scholar]
- Golpayegani D, Pandit HJ and Lewis D. To be high-risk, or not to be–semantic specifications and implications of the AI Act's high-risk AI applications and harmonised standards. In: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 905–915. [Google Scholar]
- European Union Aviation Safety Agency (EASA). EASA Concept Paper: Guidance for Level 1 and 2 machine learning applications, Issue 02, 2024. https://horizoneuropencpportal.eu/sites/default/files/2024-06/easa-concept-paper-guidance-for-level-1-and-2-machine-learning-applications-2024.pdf [Google Scholar]
- DIN and DKE. German Standardization Roadmap on Artificial Intelligence, 2022. www.din.de/go/roadmap-ai [Google Scholar]
- Bacciu D, Carta A, Gallicchio C and Schmittner C. Safety and Robustness for Deep Neural Networks: An Automotive Use Case. In: International Conference on Computer Safety, Reliability, and Security, Springer, 2023, pp. 95–107. [Google Scholar]
- Perez-Cerrolaza J, Abella J and Borg M et al. Artificial intelligence for safety-critical systems in industrial and transportation domains: A survey. ACM Comput Surv 2024; 56: 1–40. [CrossRef] [Google Scholar]
- Brando A, Serra I and Mezzetti E et al. On neural networks redundancy and diversity for their use in safety-critical systems. Computer 2023; 56: 41–50. [CrossRef] [Google Scholar]
- Oveisi S, Gholamrezaie F and Qajari N et al. Review of artificial intelligence-based systems: evaluation, standards, and methods. Adv Stand Appl Sci 2024; 2: 4–29. [Google Scholar]
- Kelly J, Zafar SA and Heidemann L et al. Navigating the EU AI Act: A methodological approach to compliance for safety-critical products. In: IEEE Conference on Artificial Intelligence (CAI), 2024, pp. 979–984, doi: 10.1109/CAI59869.2024.00179. [Google Scholar]
- Wei R, Foster S and Mei H et al. ACCESS: Assurance case centric engineering of safety-critical systems. J Syst Softw 2024; 213: 112034. [CrossRef] [Google Scholar]
- Zhang X, Jiang W and Shen C et al. A survey of deep learning library testing methods. ArXiv preprint [arXiv: https://arxiv.org/abs/2404.17871], 2024. [Google Scholar]
- Mattioli J, Sohier H and Delaborde A et al. An overview of key trustworthiness attributes and KPIs for trusted ML-based systems engineering. AI and Ethics 2024; 4: 15–25. [Google Scholar]
- Iyenghar P. Exploring the impact of dataset accuracy on machinery functional safety: Insights from an AI-based predictive maintenance system. In: Proceedings of ENASE 2024, pp. 484–497, doi: 10.5220/0012683600003687. [Google Scholar]
- Habbal A, Ali MK and Abuzaraida MA. Artificial Intelligence Trust, risk and security management (AI TRiSM): Frameworks, applications, challenges and future research directions. Exp Syst Appl 2024; 240: 122442. [CrossRef] [Google Scholar]
- Giudici P, Centurelli M and Turchetta S. Artificial Intelligence risk measurement. Exp Syst Appl 2024; 235: 121220. [CrossRef] [Google Scholar]
- Bjelica MZ. Systems, Functions and Safety: A Flipped Approach to Design for Safety, Springer Nature, 2023. [Google Scholar]
- Morales-Forero A, Bassetto S and Coatanea E. Toward safe AI. AI Soc 2023; 38: 685–696. [CrossRef] [Google Scholar]
- Zeller M, Waschulzik T, Schmid R and Bahlmann C. Toward a safe MLOps process for the continuous development and safety assurance of ML-based systems in the railway domain. AI and Ethics 2024; 4: 123–130. [Google Scholar]
- Abella J, Perez J and Englund C et al. SAFEXPLAIN: Safe and explainable critical embedded systems based on AI. In: 2023 Design, Automation and Test in Europe Conference & Exhibition (DATE), 2023, pp. 1–6. [Google Scholar]
- Tambon F, Laberge G and An L et al. How to certify machine learning based safety-critical systems? A systematic literature review. Automated Software Eng 2022; 29: 38. [CrossRef] [Google Scholar]
- Malgieri G and Pasquale F. Licensing high-risk artificial intelligence: Toward ex ante justification for a disruptive technology. Computer Law & Security Review 2024; 52: 105899. [CrossRef] [Google Scholar]
- Ihirwe F, Di Ruscio D, Di Blasio K, Gianfranceschi S and Pierantonio A. Supporting model-based safety analysis for safety-critical IoT systems. J Comput Languages 2024; 78: 101243. [CrossRef] [Google Scholar]
- Al-Hawawreh M and Moustafa N. Explainable deep learning for attack intelligence and combating cyber–physical attacks. Ad Hoc Netw 2024; 153: 103329. [CrossRef] [Google Scholar]
- Stettinger G, Weissensteiner P and Khastgir S. Trustworthiness Assurance Assessment for High-Risk AI-Based Systems. IEEE Access 2024; 12: 22718–22745. [CrossRef] [Google Scholar]
- Gaur M and Sheth A. Building trustworthy NeuroSymbolic AI Systems: Consistency, reliability, explainability, and safety. AI Mag 2024; 45: 139–155. [Google Scholar]
- Wang H, Shao W and Sun C et al. A survey on an emerging safety challenge for autonomous vehicles: Safety of the intended functionality. Engineering 2024; 33: 17–34. [CrossRef] [Google Scholar]
- Ahamad S and Gupta R. Uncertainty modelling in performability prediction for safety-critical systems. Arab J Sci Eng 2024: 1–15, https://doi.org/10.1007/s13369-024-09019-0. [Google Scholar]
- Schneeberger D, Röttger R and Cabitza F et al. The tower of babel in explainable artificial intelligence (XAI). In: International Cross-Domain Conference for Machine Learning and Knowledge Extraction. Springer Nature, 2023, pp. 65–81. [Google Scholar]
- Saeed W and Omlin C. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowl Based Syst 2023; 263: 110273. [CrossRef] [Google Scholar]
- Hassija V, Chamola V and Mahapatra A et al. Interpreting black-box models: a review on explainable artificial intelligence. Cog Comput 2024; 16: 45–74. [CrossRef] [Google Scholar]
- Tursunalieva A, Alexander DL and Dunne R. Making sense of machine learning: A review of interpretation techniques and their applications. Appl Sci 2024; 14: 496. [CrossRef] [Google Scholar]
- Ali S, Abuhmed T and El-Sappagh S et al. Explainable artificial intelligence (XAI): What we know and what is left to attain trustworthy artificial intelligence. Inf Fus 2023; 99: 101805. [CrossRef] [Google Scholar]
- Saranya A and Subhashini R. A systematic review of Explainable Artificial Intelligence models and applications: Recent developments and future trends. Decis Anal J 2023; 7: 100230. [CrossRef] [Google Scholar]
- Das A and Rad P. Opportunities and challenges in explainable artificial intelligence (XAI): A survey. ArXiv preprint [arXiv: https://arxiv.org/abs/2006.11371], 2020. [Google Scholar]
- Schwalbe G and Finzel B. A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts. Data Mining Knowl Discov 2023; 38: 3043–3101. [Google Scholar]
- Guidotti R, Monreale A and Ruggieri S et al. A survey of methods for explaining black box models. ACM Comput Surv (CSUR) 2018; 51: 1–42. [Google Scholar]
- Islam MR, Ahmed MU, Barua S and Begum S. A systematic review of explainable artificial intelligence in terms of different application domains and tasks. Appl Sci 2022; 12: 1353. [CrossRef] [Google Scholar]
- Arrieta AB, Díaz-Rodríguez N and Del Ser J et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf Fus 2020; 58: 82–115. [CrossRef] [Google Scholar]
- Mittelstadt B, Russell C and Wachter S. Explaining explanations in AI. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019, pp. 279–288. [Google Scholar]
- Cao S, Sun X and Widyasari R et al. A systematic literature review on explainability for machine/deep learning-based software engineering research. ArXiv preprint [arXiv: https://arxiv.org/abs/2401.14617], 2024. [Google Scholar]
- Greisbach A and Klüver C. Determining feature importance in self-enforcing networks to achieve explainable AI (xAI). In: Proceedings of the 32nd Workshop Computational Intelligence, Karlsruhe, KIT Scientific Publishing, 2022, pp. 237–256. [Google Scholar]
- Li M, Sun H, Huang Y and Chen H. Shapley value: from cooperative game to explainable artificial intelligence. Auton Intell Syst 2024; 4: 1–12. [CrossRef] [Google Scholar]
- Atakishiyev S, Salameh M, Yao H and Goebel R. Explainable artificial intelligence for autonomous driving: A comprehensive overview and field guide for future research directions, IEEE Access, 2024. [Google Scholar]
- Minh D, Wang HX, Li YF and Nguyen TN. Explainable artificial intelligence: a comprehensive review. Artif Intell Rev 2022; 55: 3503–3568. [CrossRef] [Google Scholar]
- Sharma NA, Chand RR and Buksh Z et al. Explainable AI frameworks: Navigating the present challenges and unveiling innovative applications. Algorithms 2024; 17: 227. [CrossRef] [Google Scholar]
- Dwivedi R, Dave D and Naik H et al. Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Comput Surv 2023; 55: 1–33. [CrossRef] [Google Scholar]
- Sanneman L and Shah JA. The situation awareness framework for explainable AI (SAFE-AI) and human factors considerations for XAI systems. Int J Human–Comput Inter 2022; 38: 1772–1788. [Google Scholar]
- Nannini L, Balayn A and Smith AL. Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the EU, US, and UK. In: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 1198–1212. [Google Scholar]
- Rech P. Artificial neural networks for space and safety-critical applications: Reliability issues and potential solutions. IEEE Trans Nucl Sci 2024. [Google Scholar]
- Petkovic D. It is Not “Accuracy vs. Explainability”–We need both for trustworthy AI systems. IEEE Trans Technol Soc 2023; 4: 46–53. [CrossRef] [Google Scholar]
- Wang Y and Chung SH. Artificial intelligence in safety-critical systems: a systematic review. Indus Manag Data Syst 2022; 122: 442–470. [CrossRef] [Google Scholar]
- Cabitza F, Campagner A and Malgieri G et al. Quod erat demonstrandum? – Towards a typology of the concept of explanation for the design of explainable AI. Exp Syst Appl 2023; 213: 118888. [CrossRef] [Google Scholar]
- Baron C and Louis V. Framework and tooling proposals for Agile certification of safety-critical embedded software in avionic systems. Comput Indus 2023; 148: 103887. [CrossRef] [Google Scholar]
- Guiochet J, Machin M and Waeselynck H. Safety-critical advanced robots: A survey. Robot Auton Syst 2017; 94: 43–52. [CrossRef] [Google Scholar]
- Gaurav K, Singh BK and Kumar V. Intelligent fault monitoring and reliability analysis in safety–critical systems of nuclear power plants using SIAO-CNN-ORNN. Multimedia Tools Appl 2024; 83: 61287–61311. [CrossRef] [Google Scholar]
- Rodvold DM. A software development process model for artificial neural networks in critical applications. In: IJCNN’99, International Joint Conference on Neural Networks. Proceedings (Cat. No. 99CH36339) 1999, Vol. 5, pp. 3317–3322. [Google Scholar]
- Eilers D, Burton S, Schmoeller da Roza F and Roscher K. Safety assurance with ensemble-based uncertainty estimation and overlapping alternative predictions in reinforcement learning, 2023. [Google Scholar]
- Weaver R, McDermid J and Kelly T. Software safety arguments: Towards a systematic categorisation of evidence. In: International System Safety Conference, Denver, CO 2002. [Google Scholar]
- Schwalbe G and Schels M. Concept enforcement and modularization as methods for the ISO 26262 safety argumentation of neural networks, 2020. [CrossRef] [Google Scholar]
- Chelouati M, Boussif A, Beugin J and El Koursi E-M. Graphical safety assurance case using Goal Structuring Notation (GSN)–challenges, opportunities and a framework for autonomous trains. Reliabil Eng Syst Safety 2023; 230: 108933. [CrossRef] [Google Scholar]
- Fahmy H, Pastore F, Briand L and Stifter T. Simulator-based explanation and debugging of hazard-triggering events in DNN-based safety-critical systems. ACM Trans Softw Eng Methodol 2023; 32: 1–47. [CrossRef] [Google Scholar]
- Ahmad K, Abdelrazek M and Arora C et al. Requirements engineering for artificial intelligence systems: A systematic mapping study. Inf Softw Technol 2023; 158: 107176. [CrossRef] [Google Scholar]
- Klüver C and Klüver J. Self-organized learning by self-enforcing networks. In: Advances in Computational Intelligence: 12th International Work-Conference on Artificial Neural Networks, IWANN 2013, LNCS 7902, Springer, 2013, pp. 518–529. [Google Scholar]
- Zinkhan D, Greisbach A, Zurmaar B, Klüver C and Klüver J. Intrinsic explainable self-enforcing networks using the ICON-D2 ensemble prediction system for runway configurations. Eng Proc 2023; 39: 41. [Google Scholar]
- Klüver C, Werner C, Nowara P, Castel B and Israel R. Self-enforcing networks for monitoring safety-critical systems: A prototype development. In: Klüver C and Klüver J (eds.), New Algorithms for Practical Problems, Springer Vieweg, 2025 (in German). [Google Scholar]
- Figiel A and Klačková I. Safety requirements for mining systems controlled in automatic mode. Acta Montan Slovaca 2020; 25. [Google Scholar]
- Galy B and Giraud L. Risk mitigation strategies for automated current and future mine hoists. Safety Sci 2023; 167: 106267. [CrossRef] [Google Scholar]
- Ferrucci F. Design and implementation of the safety system of a solar-driven smart micro-grid comprising hydrogen production for electricity & cooling co-generation. Int J Hydrogen Energy 2024; 51: 1096–1119. [CrossRef] [Google Scholar]
- Shapley LS. A value for n-person games. In: Contributions to the Theory of Games (AM-28), Princeton University Press, 1953, Vol. 2, pp. 307–318. [Google Scholar]