EU Science Hub

Cybersecurity challenges in the uptake of artificial intelligence in autonomous driving

Autonomous vehicles can only reach their full potential if they are able to deal with cybersecurity risks.
Feb 11 2021

A report by the JRC and the European Union Agency for Cybersecurity (ENISA) looks at cybersecurity risks connected to artificial intelligence (AI) in autonomous vehicles and provides recommendations for mitigating them.

By removing the most common cause of traffic accidents – the human driver – autonomous vehicles are expected to reduce traffic accidents and fatalities.

However, they may pose a completely different type of risk to drivers, passengers and pedestrians.

Autonomous vehicles use artificial intelligence systems, which employ machine-learning techniques to collect, analyse and transfer data, in order to make decisions that in conventional cars are taken by humans.

These systems, like all IT systems, are vulnerable to attacks that could compromise the proper functioning of the vehicle.

The report by the JRC and ENISA sheds light on the cybersecurity risks linked to the uptake of AI in autonomous cars, and provides recommendations to mitigate them.

“It is important that European regulations ensure that the benefits of autonomous driving will not be counterbalanced by safety risks. To support decision-making at EU level, our report aims to increase the understanding of the AI techniques used for autonomous driving as well as the cybersecurity risks connected to them, so that measures can be taken to ensure AI security in autonomous driving,” said JRC Director-General Stephen Quest.  

“When an insecure autonomous vehicle crosses the border of an EU Member State, so do its vulnerabilities. Security should not come as an afterthought, but should instead be a prerequisite for the trustworthy and reliable deployment of vehicles on Europe’s roads,” said EU Agency for Cybersecurity Executive Director Juhan Lepassaar.

Vulnerabilities of AI in autonomous cars

The AI systems of an autonomous vehicle work non-stop to recognise traffic signs and road markings, detect other vehicles, estimate their speed and plan the path ahead.

Apart from unintentional threats such as sudden malfunctions, these systems are vulnerable to intentional attacks that specifically aim to interfere with the AI system and disrupt safety-critical functions.

Painting marks on the road to misguide navigation, or placing stickers on a stop sign to prevent its recognition, are examples of such attacks.

These alterations can lead to the AI system wrongly classifying objects, and subsequently to the autonomous vehicle behaving in a way that could be dangerous.
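The stop-sign example can be illustrated with a minimal adversarial-perturbation sketch in the style of the fast gradient sign method (FGSM). The classifier, weights and numbers below are purely hypothetical stand-ins; a real attack targets a deep vision model, but the mechanism, a small input change crafted against the model's gradient, is the same:

```python
import numpy as np

# Toy linear "stop sign" detector: a positive score means the sign is recognised.
# The weights and input are hypothetical stand-ins for a real perception model.
w = np.array([1.0, 2.0, -1.5, 0.5])   # model weights
x = w / np.linalg.norm(w)             # an input the model confidently recognises

def predict(x):
    return "stop sign" if w @ x > 0 else "not recognised"

# FGSM-style perturbation: step against the gradient of the score.
# For a linear model, that gradient with respect to x is simply w.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)      # small, structured change, like stickers on the sign

print(predict(x))       # stop sign
print(predict(x_adv))   # not recognised
```

Even though each pixel-level change is small, the perturbation is aligned with the model's decision boundary, which is why such alterations can flip a classification that looks unambiguous to a human observer.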

Recommendations for more secure AI in autonomous vehicles

In order to improve AI security in autonomous vehicles, the report contains several recommendations, one of which is that security assessments of AI components be performed regularly throughout their lifecycle.

This systematic validation of AI models and data is essential to ensure that the vehicle always behaves correctly when faced with unexpected situations or malicious attacks.
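One way to make such validation systematic is to include automated robustness checks in the model's release test suite, so every version is probed against perturbed inputs before deployment. A minimal sketch, assuming a hypothetical `classify` function that stands in for the deployed perception model:

```python
import numpy as np

# Hypothetical model stub: in practice this would wrap the real perception model.
W = np.array([[1.0, 2.0],
              [-1.5, 0.5]])

def classify(x):
    return int(np.argmax(W @ x))

def robust_to_noise(x, radius, trials=200, seed=1):
    """Check that the prediction stays stable under random perturbations within `radius`."""
    rng = np.random.default_rng(seed)
    baseline = classify(x)
    for _ in range(trials):
        delta = rng.uniform(-radius, radius, size=x.shape)
        if classify(x + delta) != baseline:
            return False
    return True

x = np.array([1.0, 0.0])
print(robust_to_noise(x, radius=0.05))   # True
```

Random-noise checks like this only sample the input space; a fuller assessment would add targeted adversarial tests, but even a simple stability gate catches regressions between model versions.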

Another recommendation is to put in place continuous risk assessment processes, supported by threat intelligence, to identify potential AI risks and emerging threats related to the uptake of AI in autonomous driving.

Proper AI security policies and an AI security culture should govern the entire automotive supply chain.

The automotive industry should embrace a security by design approach in the development and deployment of AI functionalities, where cybersecurity becomes a central element of the digital design from the beginning.

Finally, it is important that the automotive sector increases its level of preparedness and reinforces its incident response capabilities to handle emerging cybersecurity issues connected to AI.