| Issue | Security and Safety, Volume 3, 2024: Security and Safety in Artificial Intelligence |
|---|---|
| Article Number | 2024020 |
| Number of page(s) | 25 |
| Section | Other Fields |
| DOI | https://doi.org/10.1051/sands/2024020 |
| Published online | 31 October 2024 |
Research Article
A requirements model for AI algorithms in functional safety-critical systems with an explainable self-enforcing network from a developer perspective
1 CoBASC Research Group, Essen, 45130, Germany
2 Pepperl+Fuchs Group, Mannheim, 68307, Germany
3 TÜV Nord Group, Essen, 45307, Germany
* Corresponding authors (email: cobasc@rebask.de)
Received: 30 August 2024
Revised: 30 October 2024
Accepted: 30 October 2024
The requirements for ensuring functional safety have always been very high. Modern safety-related systems are becoming increasingly complex, which also makes the safety integrity assessment more complex and time-consuming. This trend is further intensified by the fact that AI-based algorithms are finding their way into safety-related systems or will do so in the future. However, existing and expected standards and regulations for the use of AI methods pose significant challenges for the development of embedded AI software in functional safety-related systems. Considering the essential requirements from various perspectives necessitates an intensive examination of the subject matter, especially as different standards have to be taken into account depending on the final application. There are also different targets for the “safe behavior” of a system depending on the target application: while stopping all movements of a machine in an industrial production plant is likely to be considered a “safe state”, the same condition might not be considered safe in a flying aircraft, a driving car, or medical equipment such as a cardiac pacemaker. In our approach, this overall complexity is operationalized in such a way that conformity with the requirements is straightforward to monitor. To support safety integrity assessments and reduce the required effort, a Self-Enforcing Network (SEN) model is presented in which developers or safety experts can indicate the degree of fulfillment of certain requirements with a possible impact on the safety integrity of a safety-related system. The result evaluated by the SEN model indicates the achievable safety integrity level of the assessed system and is complemented by an explanatory component.
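To illustrate the general idea described in the abstract, the following is a minimal, hypothetical sketch of a SEN-style evaluation step: developer-rated degrees of fulfillment are compared against assumed reference profiles for integrity levels, and the best match is reported together with the requirements that contributed most to it. The requirement names, the reference profiles, the cue-validity factor, and the simple weighted-sum activation are all illustrative assumptions, not the model published in the article.

```python
# Hypothetical sketch of a SEN-style evaluation step (not the authors' implementation).
# Requirement names, reference profiles, and the cue-validity factor are illustrative.

REQUIREMENTS = ["data_quality", "robustness", "explainability", "verification_coverage"]

# Semantic matrix: assumed reference fulfillment profiles (0..1) per integrity level.
SEMANTIC_MATRIX = {
    "SIL 1": [0.4, 0.3, 0.2, 0.3],
    "SIL 2": [0.6, 0.5, 0.4, 0.5],
    "SIL 3": [0.8, 0.8, 0.7, 0.8],
}

CUE_VALIDITY = 0.5  # scaling factor applied to the semantic matrix entries


def evaluate(fulfillment):
    """Rank integrity levels by a simple weighted-sum activation and
    report the requirements contributing most to the best match."""
    weights = {
        level: [CUE_VALIDITY * v for v in profile]
        for level, profile in SEMANTIC_MATRIX.items()
    }
    activations = {
        level: sum(w * x for w, x in zip(row, fulfillment))
        for level, row in weights.items()
    }
    best = max(activations, key=activations.get)
    contributions = sorted(
        zip(REQUIREMENTS, (w * x for w, x in zip(weights[best], fulfillment))),
        key=lambda item: item[1],
        reverse=True,
    )
    return best, activations, contributions[:2]


if __name__ == "__main__":
    # Developer-supplied degrees of fulfillment for each requirement (0..1).
    rated = [0.7, 0.6, 0.5, 0.6]
    level, scores, top = evaluate(rated)
    print("Achievable level (toy estimate):", level)
    print("Activations:", {k: round(v, 2) for k, v in scores.items()})
    print("Main contributing requirements:", top)
```

The per-requirement contributions returned for the winning level stand in, very loosely, for the explanatory component mentioned in the abstract; the published model derives its requirements and weighting from the relevant safety standards rather than from fixed toy values.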
Key words: Functional safety / Safety-critical systems / Requirements for AI methods / Explainable self-enforcing networks (SEN)
Citation: Klüver C, Greisbach A, Kindermann M and Püttmann B. A requirements model for AI algorithms in functional safety-critical systems with an explainable self-enforcing network from a developer perspective. Security and Safety 2024; 3: 2024020. https://doi.org/10.1051/sands/2024020
© The Author(s) 2024. Published by EDP Sciences and China Science Publishing & Media Ltd.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.