Title
ML-Driven Malware that Targets AV Safety
Abstract
Ensuring the safety of autonomous vehicles (AVs) is critical for their mass deployment and public adoption. However, security attacks that violate safety constraints and cause accidents are a significant deterrent to achieving public trust in AVs, and that hinders a vendor's ability to deploy AVs. Creating a security hazard that results in a severe safety compromise (for example, an accident) is compelling from an attacker's perspective. In this paper, we introduce an attack model, a method to deploy the attack in the form of smart malware, and an experimental evaluation of its impact on production-grade autonomous driving software. We find that determining the time interval during which to launch the attack is critically important for causing safety hazards (such as collisions) with a high degree of success. For example, the smart malware caused 33X more forced emergency braking than random attacks did, and accidents in 52.6% of the driving simulations.
Year
2020
DOI
10.1109/DSN48063.2020.00030
Venue
2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)
Keywords
Autonomous Vehicles, Security, Safety
DocType
Conference
ISSN
1530-0889
ISBN
978-1-7281-5810-5
Citations
1
PageRank
0.36
References
9
Authors
7
Name                 Order  Citations  PageRank
Saurabh Jha          1      13         2.61
Shengkun Cui         2      1          0.69
Subho S. Banerjee    3      26         6.88
James Cyriac         4      1          0.36
Timothy Tsai         5      9          3.56
Özgüner              6      33         18.65
Ravishankar K. Iyer  7      3489       504.32