Open Access
Issue
Security and Safety
Volume 2, 2023
Article Number 2022009
Number of page(s) 16
Section Intelligent Transportation
DOI https://doi.org/10.1051/sands/2022009
Published online 27 January 2023

© The Author(s) 2023. Published by EDP Sciences and China Science Publishing & Media Ltd.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

The integration of electrical and electronic systems with communication technologies, such as over-the-air (OTA) updates, telematics, and vehicle-to-everything (V2X) communication, has transformed the automotive industry. As a result, the emerging connected vehicles, equipped with more powerful perception and behavioral capabilities, enable many interesting and convenient driving services. However, the increased level of connectivity and automation also brings undesirable events: accidental failures (safety) and intentional attacks (security).

It is well recognized that vehicles are safety-critical systems whose failure can lead to injuries and loss of life. For a long time, tremendous efforts have been focused on vehicle safety concerns that have large impacts on the environment and may even threaten human lives. Traditionally, however, only some of the accidental component failures or software errors were addressed. Today, the functional safety of automated vehicles faces more severe challenges due to the rapid growth in the amount of code: a modern autonomous car already contains more than 100 million lines of software code and is expected to have around 300 million lines by 2030. In fact, even mature software development teams produce two to five bugs per thousand lines of code.

As automated vehicles become increasingly connected, security risks are growing because of the potential for deliberate harm by adversaries. The attack surface includes short-range and long-range automotive wireless interfaces such as Bluetooth, remote keyless entry, RFID, Wi-Fi, the global positioning system (GPS), satellite radio, etc. [1]. As an example, vulnerabilities in the application interface of an on-board diagnostics (OBD) dongle can allow an attacker to inject malicious code into it [1]. Another example is that a compromised compact disc (CD) player can offer an effective vector for attacking other automotive components, since many automotive media systems are now interconnected over the controller area network (CAN) bus [1].

As a result, researchers have become increasingly aware of the security-related risks that threaten automated vehicles. Unlike traditional cyber security on the Internet, the security of autonomous vehicles is more critical since it may threaten human lives. Considerable research effort is being invested in identifying cybersecurity vulnerabilities, recommending potential mitigation techniques, and highlighting the knowledge gaps that can guide work on the cybersecurity problems of connected vehicles [2–4].

In fact, safety and security are interrelated and both essential for automated vehicles. The typical electrical/electronic (E/E) architecture categorizes components by function, such as perception, decision-making, communication, control, and execution, so the functional safety and security attributes of in-vehicle components are ignored. As a result, components that are both safety-critical and security-critical cannot be protected by existing safety and security mechanisms. For example, the advanced driving assistance system (ADAS) is a typical safety mechanism that can significantly reduce accidents and injuries. However, it has also become a high-value target for cyberattacks, which aggravates the security problem. Another example is a security mechanism such as a firewall or intrusion detection system (IDS), which is typically employed as part of in-vehicle communication. Design flaws in these mechanisms may cause malfunctions that interrupt the in-vehicle network and lead to safety accidents.

Nowadays, many researchers co-engineering safety and security are trying to mitigate the risks caused by accidental malfunctions or malicious attacks. Since connected automated vehicles are safety-critical systems, it is necessary to satisfy safety and security simultaneously, as they can affect each other. However, safety and security techniques have developed largely independently. Consequently, researchers are increasingly interested in how techniques from safety complement or conflict with those from security.

To enhance the safety and security of connected automated vehicles (CAVs) simultaneously, we propose a novel generalized robust control technology, the dynamic heterogeneous redundancy (DHR) architecture, which can not only detect unknown failures to ensure functional safety but also detect unknown attacks to protect cyber security. The contributions of this paper can be summarized as:

  • (1)

    We investigate the current status of integrated safety and security analysis and explore the relationship between safety and security.

  • (2)

    We propose a new taxonomy of in-vehicle components based on safety and security features, which is helpful in developing joint safety and security enhancement technology.

  • (3)

    We implement a prototype of DHR on an automated bus and conduct two test cases to validate the effectiveness of the DHR architecture when facing functional failures and cyberattacks.

  • (4)

    We provide an in-depth analysis of quantifying CAV performance under the DHR architecture and point out some challenges and future research directions. This is non-trivial, as quantitative safety and security analysis is pivotal for effective safety and security management.

2. Background

To solve conflicts between safety mechanisms and security mechanisms, we first identify the inter-relationships between safety and security. Then, we review existing safety and security mechanisms and summarize their corresponding weaknesses.

2.1. Inter-relationships between safety and security in CAVs

For CAVs, safety aims at protecting the vehicle from accidental failures to avoid hazards, and security focuses on protecting the vehicle from intentional attacks [5]. Both safety and security are related to the risk of electrical and electronic systems in CAVs. To better illustrate the distinction between safety and security, we describe them as a conceptual grid representing the two aspects in Figure 1.

  • Security risks are malicious. Attackers may gain unauthorized remote access to vehicle functions through mobile apps and control the vehicle to damage the environment or threaten human life. Moreover, attackers can deceive the CAV into making wrong judgments by projecting or setting fake environmental information, such as a fake pedestrian or lane markers projected on a road by a projector-equipped drone [6].

  • Safety risks are accidental. On the one hand, accidental component failures of CAVs may cause serious damage to the environment. On the other hand, extreme weather such as heavy rain or snow may cause malfunctions in automated vehicles.

In the past, vehicles could only rely on reliability technologies such as active safety mechanisms and passive safety mechanisms to guarantee safety. However, the safety of today’s CAV not only relies on traditional safety technology but also depends on cyber security technology.

Figure 1.

Safety vs. security in CAVs

2.2. Safety mechanisms in CAVs

Traditionally, vehicles rely on reliability technologies such as active safety mechanisms and passive safety mechanisms to guarantee safety. Active schemes aim to prevent vehicles from crashing, such as driving assistance functions including automated braking [7], backup cameras [8], adaptive headlamps [9], and lane departure warning systems [10]. Passive schemes aim to protect the driver and passengers from crash injuries, such as airbags, crumple zones, headrests, seat belts, and laminated windshields.

However, the safety of CAVs not only relies on traditional safety technology but also depends on cybersecurity technology. For example, in 2015, two American hackers attacked a Jeep: important functions such as the engine and brakes were remotely taken over via the mobile phone network [11]. They controlled the accelerator to make the Jeep stop on the highway, leaving the driver in a rather dangerous situation. Therefore, the cybersecurity of CAVs is also critical for safety.

Conventional vehicles mostly focus on functional safety against mechanical failures. With the increasing automation and connectivity of CAVs, more efforts are needed to identify the safety risks raised by software failures of the communication components and the autonomous driving components, and to propose appropriate defense mechanisms.

2.3. Security mechanisms in CAVs

Security attacks on CAVs include attacks on networks and attacks on the vehicle itself. Many research works have been carried out focusing on a particular kind of security attack, such as a CAN attack [12]. Traditional security technologies such as authentication [13], detection [14], and cryptography [15] are usually employed to deal with these attacks. To our knowledge, these technologies usually need prior knowledge and cannot deal with unknown vulnerabilities such as zero-day attacks [16] in real time.

As described previously, the security and safety of CAVs are interrelated, e.g., security attacks can result in CAV functional failures and cause safety problems. It is not hard to imagine the potential destruction of the environment and property when the CAV falls into the wrong hands through cyberattacks. Thus, mechanisms that jointly consider safety and security are desirable.

2.4. Integrated safety and security mechanisms in CAVs

Due to the importance and relevance of safety and security in CAVs, several studies have recently emerged that aim to identify, assess, and manage risks related to both safety and security. These studies can be classified based on the overall goal [17]:

  • Security-informed safety approaches: Methods that incorporate security techniques into safety techniques to achieve a safe system.

  • Safety-informed security approaches: Methods that incorporate safety techniques into security techniques to achieve a secure system.

  • Combined safety and security approaches: Methods that combine safety techniques and security techniques to achieve a system that is both safe and secure.

Standard SAE J3061 [18] is a cyber security guidebook for vehicle systems that also provides a way to incorporate the process of the functional safety standard ISO 26262. SAHARA [19] and US2 [20] further investigate safety and security issues and choose appropriate countermeasures. However, such process integration cannot change the “add-on” characteristic of security defense technologies. “Add-on” security technologies not only enlarge the CAV code base, introducing additional bugs and increasing potential safety risks, but also bring computational overhead.

Recently, many researchers have proposed safety and security co-engineering approaches to harmonize the conflicts [17] between safety mechanisms [7–10] and security mechanisms [13–15]. Table 1 describes the characterization of recent studies in safety and security co-engineering for CAVs; more details can be found in [21]. The model indicates the approach on which the analysis is based: graphical, formal, or both graphical and formal [22]. The lifecycle indicates the phase of the system lifecycle in which the approach is adopted, such as requirements (RE), risk analysis (RA), or any phase (generic, GE) [23]. Conflict resolution means the approach facilitates the identification and study of potential conflicts between safety and security aspects.

Table 1.

Characterization of recent studies in safety and security co-engineering for CAVs

A survey on security and safety co-engineering for CAVs [21] draws the following conclusions:

  • Few methods comply with safety and security standards, yet automated vehicles operating on the road must follow such standards.

  • Lack of quantitative approaches. It is well known that analyzing security threats quantitatively is a challenge in most cases in the real world. Therefore, combining quantitative and qualitative methods for safety and security co-engineering is worth exploring.

  • Lack of guidance on resolving conflicts between safety and security mechanisms. This is a challenge worth studying.

In contrast to these approaches, we propose a novel mechanism that effectively guarantees functional safety and cyber security simultaneously. The proposed method can deal not only with known vulnerabilities but also with unknown ones.

3. A new taxonomy for in-vehicle components

To develop a joint safety and security mechanism, the typical electrical/electronic (E/E) architecture for CAVs is first analyzed. Then, a new taxonomy of in-vehicle components is proposed based on their safety/security attributes, i.e., the kind of risk each in-vehicle component is exposed to, such as accidental failures or intentional attacks. This is especially helpful in developing the joint mechanism.

Figure 2.

Typical E/E architecture for CAVs

3.1. Typical E/E architecture for CAV

Figure 2 shows a typical CAV system architecture, which is composed of four kinds of components.

  • Perception components: This layer includes different perception components used to obtain environmental information, such as vision sensors, LiDAR, millimeter-wave RADAR, ultrasonic sensors, and infrared sensors.

  • Decision-making components: These components act as the brain of an automated vehicle, making critical decisions based on the driving environment. Functions like sensor fusion, path planning, semantic understanding, positioning, and tracking are provided for better decision-making.

  • Communication components: This layer includes different components used for inter-vehicle and intra-vehicle communication. For example, Telematics BOX (T-BOX), CAN network, and in-vehicle gateway are typical communication components.

  • Control and execution components: These components are responsible for controlling the electronic subsystems, such as the drive-by-wire system that consists of steering, braking, accelerator, gears, and intelligent cockpit.

3.2. Safety/security attributes of in-vehicle components

Since the traditional taxonomy categorizes components by function, the functional safety and security features of in-vehicle components are ignored. As a result, components that are both safety-critical and security-critical cannot be protected by existing safety and security mechanisms. Thus, a new taxonomy of in-vehicle components based on safety/security attributes is proposed, which contributes to the development of a joint safety and security mechanism. As illustrated in Figure 3, we define three types of safety/security attributes and classify the in-vehicle components as follows.

  • Pure safety-critical components are safety-related components, including mechanical components and E/E components that are unreachable by cyberattacks. Mechanical components such as the steering wheel, brake pedal, accelerator pedal, and gearshift lever are pure safety-critical components. Some electronic control units (ECUs) can also be regarded as pure safety-critical components when they are unreachable by cyberattacks.

  • Pure security-critical components are security-related components that are reachable by cyberattacks and are isolated from safety domains such as the power domain and control domain. The highly isolated entertainment system is usually a pure security-critical component. Cyberattacks on this component usually result in damage to the confidentiality of information but will not affect the safety of CAVs.

  • Safety-critical & security-critical components are safety-related components that are reachable by cyberattacks. We argue that the ADAS and the in-vehicle communication components, such as the Telematics BOX (T-BOX), are safety-critical & security-critical components. Failures of these components, as well as cyberattacks on them, can cause accidents and casualties. With the development of CAVs, the scope of safety-critical & security-critical components will continue to expand.

Pure safety-critical components and pure security-critical components can be protected by existing safety and security mechanisms. However, for the safety-critical & security-critical components, novel mechanisms should be developed to effectively guarantee safety and security at the same time. In the following section, a dynamic heterogeneous redundancy (DHR) architecture is proposed for CAVs to achieve both safety and security.

Figure 3.

Safety/security attributes of in-vehicle components
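The classification above boils down to two binary attributes per component: whether it is safety-related and whether it is reachable by cyberattacks. A minimal sketch of the mapping (the function name, the "non-critical" fallback label, and the example calls are our illustration, not part of the paper's taxonomy):

```python
def classify(safety_related: bool, cyber_reachable: bool) -> str:
    """Map a component's two attributes to its taxonomy class (cf. Figure 3)."""
    if safety_related and cyber_reachable:
        return "safety-critical & security-critical"
    if safety_related:
        return "pure safety-critical"
    if cyber_reachable:
        return "pure security-critical"
    return "non-critical"  # assumed label for components outside the three classes

# Illustrative examples consistent with Section 3.2:
print(classify(True, False))   # brake pedal -> pure safety-critical
print(classify(False, True))   # isolated entertainment system -> pure security-critical
print(classify(True, True))    # T-BOX / ADAS -> safety-critical & security-critical
```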

4. DHR architecture: a new exploration of generalized robust control technology

The aviation industry has maintained an outstanding safety record for over a quarter of a century, relying on architectures built on hardware and process redundancy. For example, safety-critical aircraft fly-by-wire (FBW) systems use masking, redundancy, and reconfiguration to maintain normal operation after a failure [42]. Automated vehicle systems, in contrast, can be made cheaper and less sophisticated by designing a fail-operational architecture. Inspired by this, we propose a DHR architecture for CAVs.

4.1. DHR architecture for CAV

The basic concept of DHR is the “relative correctness” axiom: any system contains a variety of software and hardware flaws, but when multiple completely heterogeneous systems perform the same task at the same time and in the same place, the possibility of failure caused by the same flaw is extremely low. Therefore, the majority-consistent result of the multiple heterogeneous systems can be regarded as relatively correct.

Based on “relative correctness”, multiple heterogeneous executors can be employed to accomplish the same CAV component function. According to [43], for two executors with sufficient heterogeneity, the probability that they fail due to the same flaw is generally 1 × 10−4 or less. For example, with three heterogeneous executors, when executor A fails at a certain moment, the probability that the other two executors also fail with the same flaw is extremely low. When an inconsistency among the executors’ outputs is detected, the consistent output of the majority of executors is taken as the correct output and the abnormal executor is identified. This judgment process is called the “consensus mechanism”. Moreover, abnormal executors are replaced by normal executors with the same function.
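The consensus mechanism described above amounts to majority voting over functionally equivalent outputs. A minimal sketch, assuming executor outputs can be compared for equality (the output encoding is our illustration, not the prototype's):

```python
from collections import Counter

def majority_vote(outputs):
    """Return (consensus_output, abnormal_indices) for heterogeneous executors.

    outputs: list of hashable executor outputs for the same task.
    consensus_output is None when no strict majority exists.
    """
    counts = Counter(outputs)
    value, n = counts.most_common(1)[0]
    if n <= len(outputs) // 2:          # no strict majority -> undecidable
        return None, list(range(len(outputs)))
    # Executors disagreeing with the majority are flagged as abnormal.
    abnormal = [i for i, o in enumerate(outputs) if o != value]
    return value, abnormal

# Executor 3 fails (empty perception) while executors 1 and 2 agree:
consensus, abnormal = majority_vote(["obstacle_ahead", "obstacle_ahead", "no_obstacle"])
print(consensus, abnormal)  # obstacle_ahead [2]
```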

Figure 4.

Defense system based on DRS architecture vs. defense system based on DHR architecture

The defense system based on the DHR architecture is shown in Figure 4, which contains an input agent, executor set, arbiter, feedback controller, and component pool.

The input agent distributes tasks to the executor set which consists of multiple heterogeneous executors with the same function. For CAVs, the perception and decision unit (autopilot module) can be chosen as an executor, which performs key and fundamental functions in safety-critical & security-critical components.

The arbiter judges the content consistency of executors’ outputs according to the consensus mechanism. The feedback controller determines whether to send an instruction to the input agent and chooses a normal executor from the component pool to replace the abnormal executor. The component pool consists of multiple functionally equivalent heterogeneous executors.
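The interplay of input agent, arbiter, feedback controller, and component pool can be sketched as a single control loop (class and method names are hypothetical; the actual prototype described in Section 4.2 is a C implementation on an industrial control computer):

```python
class DHRSystem:
    """Toy DHR loop: arbitrate by majority, replace abnormal executors."""

    def __init__(self, executors, pool):
        self.executors = list(executors)   # active heterogeneous executors
        self.pool = list(pool)             # functionally equivalent spares

    def step(self, task):
        # Input agent: distribute the same task to every executor.
        outputs = [e(task) for e in self.executors]
        # Arbiter: the majority output is taken as relatively correct.
        consensus = max(set(outputs), key=outputs.count)
        # Feedback controller: swap in a spare for each abnormal executor.
        for i, out in enumerate(outputs):
            if out != consensus and self.pool:
                self.executors[i] = self.pool.pop(0)
        return consensus

# Two healthy executors and one that always fails:
healthy = lambda task: f"brake:{task}"
faulty = lambda task: "empty"
dhr = DHRSystem([healthy, healthy, faulty], pool=[healthy])
print(dhr.step("obstacle"))  # brake:obstacle  (faulty executor swapped out)
```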

With the DHR architecture, integrated safety and security can be obtained. In CAVs, safety risks usually arise from design defects and security threats usually arise from vulnerabilities. Both design defects and vulnerabilities can cause abnormal behavior in executors. With the DHR architecture, abnormal executors can be detected by the consensus mechanism, and the system can be restored to a normal state by dynamically reconstructing the abnormal executors.

Moreover, using the DHR architecture based on a consensus mechanism, unknown design defects and vulnerabilities can also be detected as long as they cause abnormality.

Figure 5.

DHR prototype for CAVs

4.2. A practical DHR prototype for CAVs

A prototype has been developed based on the DHR architecture, in which three L2 [44] advanced driver-assistance systems (ADASs) are employed as executors and one L2 ADAS serves as the component pool. As shown in Figure 5, the first L2 executor consists of lidars, cameras, and computing hardware and software, and is implemented on Infineon platforms. The second L2 executor is implemented on an FPGA platform and includes cameras and radars. The third L2 executor is implemented on a Freescale platform and includes cameras and radars. The DHR prototype has been deployed on the “All Star” autonomous electric minibus.

The above heterogeneous executors are employed for the following reasons:

  • (1)

    The L2 ADAS is the most common automated driving system in the market, with many heterogeneous suites available.

  • (2)

    The price of L2 ADAS is an order of magnitude lower than that of L3 and L4 automated driving systems.

As for the arbiter, it was developed on a customized industrial control computer running Ubuntu 18.04, equipped with an Intel Core i5-6500 2.5 GHz × 4 CPU. The arbiter’s functions are programmed in C, and the three L2 ADASs are connected to the arbiter via the CAN bus.

In the practical development process, various environmental factors need to be considered when designing the consensus mechanism. The consensus mechanism is presented in Algorithm 1.

Algorithm 1: Overall algorithm of the consensus mechanism

Input: executors’ outputs X = {(p1, d1), (p2, d2), (p3, d3)}

Vehicle speed v, exception cache queue

Output: arbitration result (normal or abnormal)

Initialization: similarity threshold T, public perception area A

01: preprocess perception results {p1, p2, p3} according to A

02: for all i ∈ {1, 2, 3} do # calculate the perception similarity in pairs

03:  compute perception_similarity(pi, pj), j = (i+1) mod 3

04:  if perception_similarity(pi, pj) < T then

05:   perceptions of executors i and j are inconsistent

06:  else

07:   perceptions of executors i and j are consistent

08:  end if

09: end for

10: if the three executors' perceptions are not consistent with each other then

11:  insert one perception exception into the cache queue

12: end if

13: calculate fused perception P according to {p1, p2, p3}

14: judge whether braking is required according to P and v

15: if braking is required then

16:  for all i ∈ {1, 2, 3} do # calculate the decision similarity in pairs

17:   compute decision_similarity(di, dj), j = (i+1) mod 3

18:   if decision_similarity(di, dj) != 100% then

19:    decisions of executors i and j are inconsistent

20:   else

21:    decisions of executors i and j are consistent

22:   end if

23:  end for

24:  if the three executors' decisions are not consistent with each other then

25:   insert one decision exception into the cache queue

26:  end if

27: end if

28: obtain the final arbitration result according to the cache queue analysis

In order to avoid false alarms caused by inconsistencies in the perception ranges of the executors, the public perception area A is initialized based on experience and vehicle speed. The consensus mechanism is then divided into perception arbitration and decision arbitration. Perception arbitration judges whether each executor perceives an obstacle. Decision arbitration decides which instruction the vehicle should follow, such as accelerating, braking, or turning left or right.

In perception arbitration, a calculation method for perceptual similarity is empirically proposed based on the obstacle detection results of the executors. The perceptual similarity between executors is calculated in pairs. If the similarity is lower than the threshold, the perception is considered inconsistent; otherwise, it is considered consistent. Finally, if the perception results of the three executors are consistent, the arbitration result is normal; otherwise, the inconsistency is added to the cache queue.

In the subsequent decision arbitration, the decision similarity between the executors is calculated pairwise. If the similarity is lower than the threshold, the decision is considered inconsistent; otherwise, it is considered consistent. If the decision results of the three executors are consistent, the arbitration result is normal; otherwise, the inconsistency is added to the cache queue. Finally, to distinguish whether the current inconsistency is a normal fluctuation or an attack/fault, the cumulative number of inconsistencies in the recent history (1 s) is analyzed based on the cache queue. If this count exceeds a threshold, an abnormality is considered to be detected; otherwise, no exception is assumed to have occurred.
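The exception-accumulation step of Algorithm 1, which decides whether inconsistencies within the last second indicate a real anomaly rather than a transient fluctuation, can be sketched as follows. The 100 ms arbitration period and the count threshold of three are assumptions inferred from Sections 4.2–4.3, not values stated with the algorithm:

```python
from collections import deque

class ExceptionQueue:
    """Cache queue that confirms an anomaly only when enough
    inconsistencies accumulate within a recent time window."""

    def __init__(self, window_s=1.0, count_threshold=3):
        self.window_s = window_s
        self.count_threshold = count_threshold
        self.events = deque()  # timestamps of reported inconsistencies

    def report(self, t, inconsistent):
        """Record one arbitration cycle at time t; return True if anomalous."""
        if inconsistent:
            self.events.append(t)
        # Drop inconsistencies that fall outside the sliding window.
        while self.events and t - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) >= self.count_threshold

q = ExceptionQueue()
# A single 100 ms glitch is tolerated as a normal fluctuation...
print(q.report(0.0, True))   # False
print(q.report(0.1, True))   # False
# ...but a third consecutive inconsistent cycle confirms an anomaly.
print(q.report(0.2, True))   # True
```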

4.3. Evaluation

Two tests have been conducted to validate the DHR. Each test ran for 5 min. The arbiter detected abnormality and made a decision every 100 ms, so 3000 decisions were made in 5 min. The baseline for comparison was a system with a single executor.

Test 1: This test validates the effectiveness of the DHR architecture when facing functional failures. One obstacle was placed on the road, and E3’s instruction message was blocked to simulate an executor’s functional failure.

Figure 6.

The arbitration process of DHR under functional failure (one obstacle)

Figure 6 shows the arbitration process of the DHR architecture under functional failure. The red border marks the number and location of obstacles perceived by the executors. The perception results of E1 and E2 reported one obstacle ahead and suggested braking, whereas E3’s perception was empty due to the functional failure. The arbiter compared the executors’ outputs every 100 ms and chose the consistent output of the majority as the final output. As shown in the box with a yellow border in Figure 6, the arbiter detected the abnormality, and E3 was replaced with a normal executor from the component pool. Therefore, with the DHR architecture, the obstacle was detected and the bus got around it, as shown in Figure 7a. For the comparison system with only E3, however, no obstacle was detected due to the functional failure; the system made a wrong decision and the bus hit the obstacle, as shown in Figure 7b.

Figure 7.

The field test under functional failure (one obstacle): (a) DHR architecture; (b) One executor

Test 2: This test validates the effectiveness of the DHR architecture when facing cyberattacks. An adversarial sensor attack on LiDAR-based perception was carried out against executor E3. The attack goal was to spoof obstacles close to the front of the “All Star” bus. OSRAM SFH 213FA, OSRAM SPL PL90, PCO-7114, and AFG3251 devices were used to inject spoofed points into the LiDAR sensor [45].

Figure 8 shows the process of adversarial sensor attacks on LiDAR-based perception. The adversaries spoof a fake obstacle in front of a victim’s automated car by strategically transmitting laser signals to the victim’s LiDAR sensor.

Figure 8.

Adversarial sensor attack on LiDAR-based perception

Figure 9.

The arbitration process of DHR under cyberattack (one fake obstacle)

Figure 9 shows the arbitration process of the DHR architecture under cyberattack. The contents within the red border include the number and location of obstacles perceived by the executors. The perceptions of E1 and E2 were not spoofed, since they are not LiDAR-based, and their decisions were consistent. The perception of E3, however, was inconsistent with the other two: it reported one obstacle ahead and suggested braking. As shown in the box with a yellow border, the arbiter detected the abnormality of E3 under the DHR architecture, so E3 was replaced with a normal executor from the component pool and the attack no longer took effect. As a result, with the DHR architecture, the perception results collected from the executor set reported no obstacle ahead and the decision was to keep moving forward, as shown in Figure 10a. For the comparison system with only E3, however, the abnormality could not be detected; the system made a wrong decision and the bus braked and stopped, as shown in Figure 10b.

Figure 10.

The field test under cyberattack: (a) Working under DHR architecture; (b) Working under one executor

Table 2 shows the experimental results for the detection success rate when the minimum confirmation time is 300 ms. The correct-decision success rate means the DHR architecture makes the correct decision that keeps the system in normal operation. The abnormal-executor detection success rate indicates whether the failed executor can be detected. As shown, both success rates reach 100% for functional failures and cyberattacks. Since the duration of these two failures can be considered infinite, the abnormality can be detected through the consensus mechanism, and the abnormal executor is replaced after three consecutive confirmations. Natural factors refer to limits in the robustness of the heterogeneous executors’ obstacle detection algorithms or temporary occlusion of the vehicle camera by the wiper (raindrops, mosquitoes); under natural factors, the duration of an executor’s failure is finite. Note that the abnormal-executor detection success rate is zero when the failure duration under natural factors is less than 300 ms. Nevertheless, the DHR architecture can always make the correct decision by choosing the consistent output of the majority of executors as the final output.

Table 2.

Study results on the detection success rate

The above cases validate the effectiveness of the DHR architecture. They consider the circumstance in which only one executor fails due to a functional failure or a cyberattack. Note that the probability that multiple executors fail at the same time is extremely low due to the heterogeneity and dynamics of DHR. In fact, the probability that multiple executors fail due to the same defect approaches zero when the degree of heterogeneity is large enough.
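The claim that simultaneous common-mode failures are extremely unlikely follows from the pairwise figure cited in Section 4.1. A back-of-the-envelope union bound, treating the three executor pairs as independent (an assumption):

```python
# Per [43], two sufficiently heterogeneous executors fail from the
# same flaw with probability at most about 1e-4.
p_pair = 1e-4

# With three executors there are three pairs; a union bound gives an
# upper estimate for "some pair exhibits the same flaw simultaneously",
# i.e., the event that can defeat 2-out-of-3 majority voting.
p_any_pair = 3 * p_pair
print(p_any_pair)  # roughly 3e-4 per arbitration
```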

Through this DHR architecture, the “All Star” bus can not only detect unknown failures and ensure functional safety, but also detect unknown attacks to protect cyber security.

5. Potential research directions for quantifiable validation of generalized robust control technology

In functional safety, hardware failures generally refer to random failures, which can be quantified by traditional reliability theory. Software failures in functional safety are systematic failures that cannot be directly quantified. Risk in functional safety can often be viewed as a function of failure, consequence, and probability of occurrence. Risk in cybersecurity may be viewed as a function of the severity of cyber threats, the vulnerability of the target system, the consequences of an attack, and the probability of an attack occurring. Risk quantification in cybersecurity is much more difficult than in functional safety.

Both the safety and security domains try to make risk assessments quantitative through probability theory. Cyber security threats are often expressed as estimated property damage, while safety hazards are expressed as the probability of leading to accidents [4]. However, the safety and security of automated vehicles are rarely co-analyzed quantitatively.

As discussed in the safety and security co-engineering literature, it is challenging to quantify safety and security for CAVs simultaneously. It should be noted that standards in the auto industry, such as ISO 26262, ISO/PAS 21448, and SAE J3061, cannot quantify safety requirements, especially when it comes to the death toll [42].

Table 3 presents the estimated fatality rates for automated vehicles [46]. The benchmark refers to the human driver. At the benchmark fatality rate of 5 × 10−7 per vehicle-hour, about one hundred people die every day. If the reliability of CAVs can be quantified, adopting effective countermeasures could save more than three hundred thousand lives, provided the failure rate is one order of magnitude smaller than that of human drivers.
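A back-of-the-envelope check makes the scale of these figures concrete. The calculation below assumes US-scale numbers and a ten-year horizon; the horizon is our reading and is not stated explicitly in the source.

```python
# Back-of-the-envelope check of the fatality figures above.
# Assumptions: ~100 road deaths per day (US-scale benchmark) and a
# ten-year horizon; neither is fixed by the source text.
daily_deaths = 100
annual_deaths = daily_deaths * 365            # roughly 36 500 per year

# A failure rate one order of magnitude below the human benchmark
# would avoid about 90% of these fatalities.
saved_per_year = annual_deaths * 0.9
saved_per_decade = saved_per_year * 10

print(saved_per_decade)                        # exceeds 300 000
```

Under these assumptions the decade-scale savings exceed three hundred thousand lives, consistent with the claim above.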

Table 3.

Estimated fatality rates for automated vehicles

Based on an in-depth analysis of the DHR architecture workflow, some pioneering quantification research directions are discussed as follows; we will explore them in future work.

  • Markov reliability models. The DHR scheme can be modeled as a continuous-time Markov chain (CTMC), which captures the transitions between the system’s states. The stationary distribution of the CTMC can then be derived, reflecting the reliability of the DHR architecture over time.

  • Combined reliability models. Here, a new perspective based on the arbitration cycle is introduced into the quantification of DHR. In the DHR scheme, the arbiter detects inconsistent executors and makes a correct judgment within a fixed time threshold. However, attack outputs across disjoint arbitration cycles can be interrelated. Mathematically modeling security and describing the interactions between security and safety remains challenging.
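The CTMC direction above can be sketched numerically. The following toy model of a three-executor DHR system tracks the number of simultaneously failed executors; the failure and recovery rates are illustrative assumptions, not measured values, and the real model would need many more states.

```python
import numpy as np

# Hypothetical CTMC sketch of a 3-executor DHR system.
# States: number of currently failed executors (0, 1, 2).
# lam: per-executor failure rate (per hour); mu: cleaning/recovery rate.
# Both rates are illustrative assumptions.
lam, mu = 1e-3, 1.0

# Generator matrix Q: off-diagonal entries are transition rates between
# states; each row sums to zero.
Q = np.array([
    [-3 * lam,  3 * lam,          0.0],
    [      mu, -(mu + 2 * lam), 2 * lam],
    [     0.0,       mu,          -mu],
])

# The stationary distribution pi solves pi @ Q = 0 with sum(pi) = 1.
# Stack the normalization constraint onto Q^T and solve by least squares.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# The arbiter masks a single failed executor, so the system is unsafe
# only in state 2, where two executors have failed simultaneously.
print("stationary distribution:", pi)
print("long-run unavailability:", pi[2])
```

The long-run unavailability `pi[2]` is the quantity a DHR reliability model would compare against the fatality-rate targets of Table 3.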

Conflict of Interest

The authors declare that they have no conflict of interest.

Data Availability

No data are associated with this article.

Authors’ Contributions

Yufeng Li and Qi Liu designed the model and wrote this paper. Xuehong Chen and Chenhong Cao discussed the recent developments and corrected typos in the paper.

Acknowledgments

Thanks to Yuanyuan Liu and the anonymous reviewers for their helpful comments and suggestions.

Funding

This work was supported by the Shanghai Sailing Program (21YF1413800 and 20YF1413700), the National Natural Science Foundation of China (No. 62002213), the Program of Industrial Internet Visualized Asset Management and Operation Technology and Products, the Shanghai Science and Technology Innovation Action Plan (No. 21511102502, No. 21511102500), and the Henan Science and Technology Major Project (No. 221100240100).


References

  1. Checkoway S, McCoy D and Kantor B et al. Comprehensive experimental analyses of automotive attack surfaces. In: Proc. 20th USENIX Security, San Francisco, CA, USA, 2011, 6. [Google Scholar]
  2. Koscher K, Czeskis A and Roesner F et al. Experimental security analysis of a modern automobile. In: Proc. IEEE Symposium on Security and Privacy, Oakland, CA, USA, May 2010, 447–62. [Google Scholar]
  3. Yadav A, Bose G and Bhange R et al. Security, vulnerability and protection of vehicular onboard diagnostics. Int J Secur Appl 2016; 10: 405–22. [Google Scholar]
  4. Petit J and Shladover SE. Potential cyberattacks on automated vehicles. IEEE Trans Intell Transp Syst 2015; 16: 546–56. [Google Scholar]
  5. Cui J, Sabaliauskaite G and Lin SL et al. Collaborative analysis framework of safety and security for autonomous vehicles. IEEE Access 2019; 7: 148672–83. [CrossRef] [Google Scholar]
  6. Nassi B, Mirsky Y and Nassi D et al. Phantom of the ADAS: Securing advanced driver-assistance systems from split-second phantom attacks. In: Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, USA, 2020, 293–308. [CrossRef] [Google Scholar]
  7. Keller CG, Dang T and Fritz H et al. Active pedestrian safety by automatic braking and evasive steering. IEEE Trans Intell Transp Syst 2011; 12: 1292–304. [CrossRef] [Google Scholar]
  8. Kidd DG and Brethwaite A. Visibility of children behind 2010-2013 model year passenger vehicles using glances, mirrors, and backup cameras and parking sensors. Accid Anal Prev 2014; 66: 158–67. [CrossRef] [PubMed] [Google Scholar]
  9. Prasetyo WT, Santoso P and Lim R. Adaptive cars headlamps system with image processing and lighting angle control. In: Proceedings of Second International Conference on Electrical Systems, Technology and Information 2015 (ICESTI 2015), Springer, Singapore, 2016, 415–22. [CrossRef] [Google Scholar]
  10. Mahajan RN and Patil A. Lane departure warning system. Int J Eng Tech Res 2015; 3: 120–3. [Google Scholar]
  11. Schellekens M. Car hacking: Navigating the regulatory landscape. Comput Law Secur Rev 2016; 32: 307–15. [CrossRef] [Google Scholar]
  12. Hoppe T, Kiltz S and Dittmann J. Security threats to automotive CAN networks - practical examples and selected short-term countermeasures. In: Harrison MD and Sujan MA (eds.). SAFECOMP 2008, volume 5219 of LNCS. Springer-Verlag, Sept. 2008, 235–48. [Google Scholar]
  13. Azam F, Yadav SK and Priyadarshi N et al. A comprehensive review of authentication schemes in vehicular ad-hoc network. IEEE Access 2021; 9: 31309–21. [CrossRef] [Google Scholar]
  14. Bouguettaya A, Zarzour H and Kechida A et al. Vehicle detection from UAV imagery with deep learning: A review. IEEE Trans Neural Netw Learn Syst 2021, 33: 6047–6067. [Google Scholar]
  15. Mehrabi MA and Jolfaei A. Efficient cryptographic hardware for safety message verification in internet of connected vehicles. ACM Trans Internet Technol (TOIT) 2022; 22: 1–16. [Google Scholar]
  16. Armin J, Foti P and Cremonini M. 0-day vulnerabilities and cybercrime. In: Proceedings of 10th International Conference on Availability, Reliability and Security, IEEE, Toulouse France, 2015, 711–8. [Google Scholar]
  17. Lisova E, Sljivo I and Causevic A. Safety and security co-analyses: A systematic literature review. IEEE Syst J 2019; 13: 2189–200. [CrossRef] [Google Scholar]
  18. SAE. SAE J3061: Surface Vehicle Recommended Practice - Cybersecurity Guidebook for Cyber-Physical Vehicle Systems. SAE International, Tech. Rep., 2016. [Google Scholar]
  19. Macher G, Sporer H and Berlach R et al. SAHARA: A security-aware hazard and risk analysis method. In: Proc. IEEE DATE, Grenoble, France, Mar. 2015, 621–4. [Google Scholar]
  20. Cui J and Sabaliauskaite G. US2: An unified safety and security analysis method for autonomous vehicles. In: Proceedings of the Future of Information and Communication Conference, Singapore, 5-6 Apr. 2018, 600–11. [Google Scholar]
  21. Kavallieratos G, Katsikas S and Gkioulos V. Cybersecurity and safety co-engineering of cyberphysical systems-a comprehensive survey. Future Internet 2020; 12: 65. [CrossRef] [Google Scholar]
  22. Lyu X, Ding Y and Yang SH. Safety and security risk assessment in cyber-physical systems. IET Cyber-Phys Syst Theor Appl 2019; 4: 221–32. [CrossRef] [Google Scholar]
  23. Kriaa S, Pietre-Cambacedes L and Bouissou M et al. A survey of approaches combining safety and security for industrial control systems. Reliab Eng Syst Saf 2015; 139: 156–78. [CrossRef] [Google Scholar]
  24. Sabaliauskaite G, Liew LS and Cui J. Integrating autonomous vehicle safety and security analysis using STPA method and the six-step model. Int J Adv Secur 2018; 11: 160–9. [Google Scholar]
  25. Guzman NHC, Kufoalor DKM and Kozin I et al. Combined safety and security risk analysis using the UFoI-E method: A case study of an autonomous surface vessel. In: Proceedings of the 29th European Safety and Reliability Conference, Hannover, Germany, 22-26 Sept. 2019, 4099–106. [Google Scholar]
  26. Ito M. Finding threats with hazards in the concept phase of product development. In: European Conference on Software Process Improvement, Luxembourg, 25-27 Jun. 2014, 277–84. [Google Scholar]
  27. Apvrille L and Roudier Y. Designing safe and secure embedded and cyber-physical systems with SysML-Sec. In: Proceedings of the International Conference on Model-Driven Engineering and Software Development, Angers, France, 9-11 Feb. 2015, 293–308. [CrossRef] [Google Scholar]
  28. Macher G, Höller A and Sporer H et al. A combined safety-hazards and security-threat analysis method for automotive systems. In: Proceedings of the International Conference on Computer Safety, Reliability, and Security, Delft, The Netherlands, 22-25 Sept. 2014, 237–50. [Google Scholar]
  29. Popov PT. Stochastic modeling of safety and security of the e-Motor, an ASIL-D device. In: Computer Safety, Reliability, and Security, Cham, Switzerland: Springer International Publishing, 2015, 385–99. [CrossRef] [Google Scholar]
  30. Wei J, Matsubara Y and Takada H. HAZOP-based security analysis for embedded systems: Case study of open source immobilizer protocol stack. In: Recent Advances in Systems Safety and Security, Cham, Switzerland: Springer International Publishing, 2016, 79–96. [CrossRef] [Google Scholar]
  31. Islam MM, Lautenbach A and Sandberg C et al. A risk assessment framework for automotive embedded systems. In: Proceedings of the 2nd ACM International Workshop on Cyber-Physical System Security, Xi'an, China, 30 May 2016, 3–14. [CrossRef] [Google Scholar]
  32. Ponsard C, Dallons G and Massonet P. Goal-oriented co-engineering of security and safety requirements in cyber- physical systems. In: Proceedings of the International Conference on Computer Safety, Reliability, and Security, Trondheim, Norway, 20-23 Sept. 2016, 334–45. [CrossRef] [Google Scholar]
  33. Schmittner C, Ma Z and Puschner P. Limitation and improvement of STPA-Sec for safety and security co-analysis. In: Proceedings of the International Conference on Computer Safety, Reliability, and Security, Trondheim, Norway, 20-23 Sept. 2016, 195–209. [CrossRef] [Google Scholar]
  34. Dürrwang J, Beckers K and Kriesten R. A lightweight threat analysis approach intertwining safety and security for the automotive domain. In: Proceedings of the International Conference on Computer Safety, Reliability, and Security, Trento, Italy, 12-15 Sept. 2017, 305–19. [Google Scholar]
  35. Stoneburner G. Toward a unified security-safety model. Computer 2006; 39: 96–7. [CrossRef] [Google Scholar]
  36. Aven T. A unified framework for risk and vulnerability analysis covering both safety and security. Reliab Eng Syst Saf 2007; 92: 745–54. [CrossRef] [Google Scholar]
  37. Piètre-Cambacédès L, Deflesselle Y and Bouissou M. Security modeling with BDMP: From theory to implementation. In: Proceedings of the 2011 Conference on Network and Information Systems Security, La Rochelle, France, May 2011, 18–21. [Google Scholar]
  38. Apvrille L and Roudier Y. Towards the model-driven engineering of secure yet safe embedded systems. ArXiv preprint [arXiv:1404.1985], 2014. (accessed on 8 November 2019). [Google Scholar]
  39. Simpson A, Woodcock J and Davies J. Safety through Security. IEEE Press, Ise-Shima Japan, 1998, 18–24. [Google Scholar]
  40. Schmittner C, Ma Z and Schoitsch E et al. A case study of FMVEA and CHASSIS as safety and security co-analysis method for automotive cyber-physical systems. In: Proceedings of the 1st ACM Workshop on Cyber-Physical System Security, Singapore, 14 Apr. 2015, 69–80. [CrossRef] [Google Scholar]
  41. Schmittner C, Ma Z and Smith P. FMVEA for safety and security analysis of intelligent and cooperative vehicles. In: Proceedings of the International Conference on Computer Safety, Reliability, and Security, Florence, Italy, 10-12 Sept. 2014, 282–8. [Google Scholar]
  42. Lala JH, Landwehr CE and Meyer JF. Autonomous vehicle safety: Lessons from aviation. Commun ACM 2020; 63: 28. [Google Scholar]
  43. Wu JX. Cyberspace Mimic Defense. Cham: Springer International Publishing, 2020. [Google Scholar]
  44. Society of Automotive Engineers (SAE). SAE J3016: Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles, 2016. [Google Scholar]
  45. Cao Y, Xiao C and Cyr B et al. Adversarial sensor attack on lidar-based perception in autonomous driving. In: Proceedings of the 2019 ACM SIGSAC conference on Computer and Communications Security, London United Kingdom, 2019, 2267–81. [Google Scholar]
  46. National Highway Traffic Safety Administration (NHTSA) Traffic Safety Facts Research Note, Oct. 2019. https://www.nhtsa.gov/traffic-deaths-2018. [Google Scholar]
Yufeng Li

Yufeng Li is a professor at the School of Computer Engineering and Science, Shanghai University, China. His research interests include cybersecurity, broadband information networks, and high-speed router core technologies.

Qi Liu

Qi Liu is a Ph.D. candidate at the School of Computer Engineering and Science, Shanghai University, China. His research interests include the safety and security of connected automated vehicles and privacy protection.

Xuehong Chen

Xuehong Chen is a Senior Professional Engineer at the China Industry Control Systems Cyber Emergency Response Team, a member of the China Electrical Engineering Society Energy Internet Committee, and a member of the China Energy Research Association Smart Power Generation Committee. She has engaged in the research and practice of cybersecurity policies, standards, management, and technologies.

Chenhong Cao

Chenhong Cao is a lecturer at the School of Computer Engineering and Science, Shanghai University, China. Her research interests mainly focus on Internet of Things networks and systems, network measurement and security, and wireless sensing.

