In the era of CCAM[1] evolution we are experiencing, cybersecurity is considered a crucial yet “intangible” characteristic.
All related operations have to be sufficiently secured to reach acceptable safety levels. In our jargon we say that they are cybersecure, yet at the same time it is hard to express, measure, rate or clearly quantify cybersecurity. The HEADSTART project has experienced this difficulty first-hand while performing its tasks, with its diverse CAD[2]-oriented partners converging on this very perspective.
Specifically, in HEADSTART, cybersecurity is regarded as a Key Enabling Technology (KET) along with Communication (V2X) and Positioning. The latter two are evidently easier to define and measure. But what about cybersecurity?
Everyone agrees that all applied CAD communications should be secured, since human lives are at stake when discussing SAE L3 and above. The envisioned safety consequently depends on every fragment of the communication cycle, however small, turning cybersecurity from a KET into a way of approaching, developing, applying, testing and evaluating (mostly communication) technology. From very simple security rules, like frequently changing passwords, to more complicated tasks, like standardizing the securing of the over-the-air update process, cybersecurity is so wide in its essence that one could spend tons of hours studying – with coffee or not – to get a fair grasp of it!
To avoid re-inventing the wheel or getting lost in the (cyber)security maze, we chose to build on concrete and well-established knowledge, exploiting the esteemed Common Criteria (CC) principles. Moving a step further, since CC are generic to IT products, we exploited the advances of the SAFERtec project[3], which specializes CC for the CAD sector, and ended up using conformity lists. To cut a long story short, in HEADSTART we propose a methodology to quantify security evaluation, assigning a value to the usage of the HEADSTART conformity list.
We have named this value the HEADSTART Assurance Level (HAL). It is determined as the sum of three basic parameters involved in the evaluation process of the (entries of the) conformity list, including, among others, testing/experimental evaluation of security functional requirements. The value lies in the range [0,3], since each parameter has a maximum score of one[4].
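To make the idea more concrete, here is a minimal Python sketch of how such a value could be computed. It only captures the structure described above (three parameters, each capped at one, summed into a [0,3] score); the parameter names used below (documentation, analysis, testing) and the per-entry averaging are our own illustrative assumptions, not the project's definitions – the actual formula is given in Chapter 8 of deliverable D3.2[4].

```python
from dataclasses import dataclass

@dataclass
class EntryScores:
    """Hypothetical per-entry scores, each normalized to [0, 1]."""
    documentation: float  # e.g. quality of the CC-style documentation
    analysis: float       # e.g. depth of independent analysis
    testing: float        # e.g. testing/experimental evaluation of
                          #      security functional requirements

def hal(entries: list[EntryScores]) -> float:
    """Illustrative HAL: sum of three parameters, each averaged
    over the conformity-list entries.

    Since each parameter stays within [0, 1], the result lies
    in [0, 3]. The exact parameter definitions are specified in
    HEADSTART D3.2, Chapter 8; this is only a placeholder.
    """
    if not entries:
        return 0.0
    n = len(entries)
    p_doc = sum(e.documentation for e in entries) / n
    p_ana = sum(e.analysis for e in entries) / n
    p_tst = sum(e.testing for e in entries) / n
    return p_doc + p_ana + p_tst
```

Under this reading, comparing two conformity lists reduces to comparing their HAL values, regardless of how many entries each list contains.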
Although the HAL value relies on a number of conventions, it provides a quantitative indicator of how (cyber)secure a “system”, “methodology” or given conformity list is, based on CC principles. To our knowledge, this may become a rating system for CAD cybersecurity evaluation. Among its main advantages, it is a fast, relatively cheap and effective way to compare two lists, whether of equal size or not, and draw conclusions on cybersecurity, at least in theory. The interpretation is truly simple, yet scientific: the higher the value, the more cybersecure the system under test. That’s it.
Our next target in the project is to exploit this value in practice, injecting it appropriately into the overall HEADSTART methodology when evaluating CAD use cases. We have already identified truck platooning, highway pilot and traffic jam chauffeur as the preferred use cases to be tested with our overall methodology, with cybersecurity tested as a module. Following HEADSTART’s rationale, we cooperate with concurrent CAD projects already running such use cases.
Cybersecurity testing is still a demanding open task. The challenge here is to effectively combine the testing facilities operated by external stakeholders in their running use cases, apply our methodology within tight time schedules, and obtain proper evaluation results. This is work in progress (as of March 2021) and will be concluded in the forthcoming deliverables.
[1] Cooperative, Connected and Automated Mobility
[2] Connected and Automated Driving
[3] https://www.safertec-project.eu/
[4] The detailed definition of the formula can be found in Chapter 8 of the publicly available HEADSTART deliverable D3.2: Toolchain for mixed validation – integration of simulation and physical testing (https://www.headstart-project.eu/results-to-date/deliverables/)
This blog post was written by Athanasios Ballis, a Computer Engineer holding an MSc. He works in the ISENSE Group of ICCS as both a researcher and a project manager in national and international ITS projects.