Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward

Adnan Qayyum 1, Muhammad Usama 1, Junaid Qadir 1, and Ala Al-Fuqaha 2

1 Information Technology University (ITU), Punjab, Lahore, Pakistan
2 Hamad Bin Khalifa University (HBKU), Doha, Qatar

Abstract—Connected and autonomous vehicles (CAVs) will form the backbone of future next-generation intelligent transportation systems (ITS), providing travel comfort and road safety along with a number of value-added services. Such a transformation—which will be fuelled by concomitant advances in technologies for machine learning (ML) and wireless communications—will enable a future vehicular ecosystem that is better featured and more efficient. However, there are lurking security problems related to the use of ML in such a critical setting, where an incorrect ML decision may not only be a nuisance but can lead to loss of precious lives. In this paper, we present an in-depth overview of the various challenges associated with the application of ML in vehicular networks. In addition, we formulate the ML pipeline of CAVs and present various potential security issues associated with the adoption of ML methods. In particular, we focus on the perspective of adversarial ML attacks on CAVs and outline a solution to defend against adversarial attacks in multiple settings.

Index Terms—Connected and autonomous vehicles, machine/deep learning, adversarial machine learning, adversarial perturbation, perturbation detection, and robust machine learning.

I. INTRODUCTION

In recent years, connected and autonomous vehicles (CAVs) have emerged as a promising area of research. Connected vehicles are an important component of intelligent transportation systems (ITS) in which vehicles communicate with each other and with communications infrastructure to exchange safety messages and other critical information (e.g., traffic and road conditions). One of the main driving forces for CAVs is the advancement of machine learning (ML) methods, particularly deep learning (DL), which are used for decision making at different levels. Unlike conventional connected vehicles, autonomous (self-driving) vehicles have two important characteristics, namely automation capability and cooperation (connectivity) [5]. In future smart cities, CAVs are expected to have a profound impact on the vehicular ecosystem and society.

The phenomenon of connected vehicles is realized through a technology known as vehicular networks or vehicular ad-hoc networks (VANETs) [6]. Over the years, various configurations of connected vehicles have been developed, including the use of dedicated short-range communications (DSRC) in the United States and ITS-G5 in Europe, both based on the IEEE 802.11p standard. However, a recent study [7] has shown several limitations of such systems, including (1) short-lived infrastructure-to-vehicle (I2V) connections, (2) non-guaranteed quality of service (QoS), and (3) unbounded channel access delay. To address these limitations, the 3rd generation partnership project (3GPP) has initiated work focused on leveraging the high penetration rate of long term evolution (LTE) and 5G cellular networks to support vehicle-to-everything (V2X) services [8]. The purpose of developing V2X technology is to enable communication between all entities encountered in the road environment, including vehicles, communications infrastructure, pedestrians, cyclists, etc.

The impressive ability of ML/DL to leverage increasingly accessible data, along with advances in other concomitant technologies (such as wireless communications), seems set to enable autonomous and self-organizing connected vehicles in the future. In addition, future vehicular networks will evolve from conventional to autonomous vehicles and will enable ubiquitous Internet access on vehicles. ML will have a predominant role in building the perception system of autonomous and semi-autonomous connected vehicles.

Despite the development of different configurations of connected vehicles, they are still vulnerable to various security issues, and there are various automotive attack surfaces that can be exploited [9]. The threat is getting worse with the development of fully autonomous vehicles. Autonomous vehicles are equipped with many sensors such as cameras, RADAR, LIDAR, and mechanical control units. These sensors share critical sensory information with onboard devices through the CAN bus and with other nearby vehicles as well. The backbone of self-driving vehicles is the onboard intelligent processing capability applied to the data collected through the sensory system. This data can be used for many other purposes, e.g., obtaining information about vehicle kinetics, traffic flow, road, and network conditions. Such data can potentially be used for improving the performance of the vehicular ecosystem through adaptive data-driven decision making, but it can also be used to accomplish various destructive objectives. Therefore, ensuring data integrity and security is essential to avoid various risks and attacks on CAVs.

It is common for the perception and control systems of CAVs to be built using ML/DL methods.

Fig. 1: Outline of the paper

However, ML/DL techniques have recently been found vulnerable to carefully crafted adversarial perturbations [10], and different physical-world attacks have been successfully performed on the vision system of autonomous cars [11], [12]. This has raised many privacy and security concerns about the use of such methods, particularly for security-critical applications like CAVs. In this paper, we aim to highlight various security issues associated with the use of ML, and we present a review of the adversarial ML literature with a focus on CAVs. In addition, we also present a taxonomy of possible solutions to restrict adversarial ML attacks and open research issues on autonomous vehicles, connected vehicles, and ML.

ML in general, and DL schemes in particular, perform exceptionally well in learning hidden patterns from data. DL schemes such as deep neural networks (DNNs) have surpassed human-level performance in many perception and detection tasks by accurately learning from a large corpus of training data and classifying/predicting with high accuracy on unseen real-world test examples. As DL schemes produce outstanding results, they have been used in many real-world security-sensitive tasks such as the perception system of self-driving cars and anomaly and intrusion detection in vehicular networks. ML/DL schemes are designed for benign and stationary environments, where it is assumed that the training and test data belong to the same statistical distribution. This assumption is often violated in real-world applications, as training and test data can have different statistical distributions, which gives rise to an opening for adversaries to compromise ML/DL-based systems. Furthermore, the lack of interpretability of the learning process, imperfections in the training process, and discontinuity in the input-output relationship of DL schemes provide further incentives for adversaries to fool the deployed ML/DL system [13].

Contributions of this Paper: In this paper, we build upon the existing literature on CAVs and present a comprehensive review of that literature. The following are the major contributions of this study:

1) We formulate the ML pipeline of CAVs and describe in detail the various security challenges that arise with the increasing adoption of ML techniques in CAVs, specifically emphasizing the challenges posed by adversarial ML;

2) We present a taxonomy of various threat models and highlight the generalization of attack surfaces across general ML, autonomous vehicle, and connected vehicle applications;

3) We review existing adversarial ML attacks with a special emphasis on their relevance for CAVs;

4) We review robust ML approaches and provide a taxonomy of these approaches with a special emphasis on their relevance for CAVs; and

5) Finally, we highlight various open research problems that require further investigation.

Organization of the Paper: The organization of this paper is depicted in Figure 1. The history, introduction, and various challenges associated with connected and automated vehicles (CAVs) are presented in Section II. Section III presents an overview of the ML pipeline in CAVs.

TABLE I: Comparison of this paper with existing survey and review papers on the security of machine learning (ML) and connected and autonomous vehicles (CAVs). (Legend: √ means covered; × means not covered; ≈ means partially covered.)

| Year | Authors | Publisher | Papers Cited | Focused Area | Conventional Challenges | Threat Models | Adversarial ML | Robust ML Solutions | Autonomous Vehicles | Connected Vehicles | Open Research Issues |
|------|---------|-----------|--------------|--------------|--------------------------|---------------|----------------|---------------------|---------------------|--------------------|----------------------|
| 2014 | Mejri et al. [1] | Elsevier Vehicular Communications | 69 | Security of vehicular networks | √ | × | × | × | × | √ | × |
| 2016 | Gardiner et al. [2] | ACM Computing Surveys (CSUR) | 40 | Security of ML for malware classification | × | √ | √ | ≈ | × | × | × |
| 2018 | Chakraborty et al. [3] | arXiv | 79 | Adversarial attacks and defenses | × | √ | √ | √ | × | × | × |
| 2018 | Akhter et al. [4] | IEEE Access | 195 | Adversarial attacks and defenses in computer vision | × | × | √ | √ | ≈ | × | × |
| 2018 | Siegel et al. [14] | IEEE Transactions on ITS | 198 | Survey on connected vehicles' landscape | √ | × | × | × | × | √ | × |
| 2018 | Hussain et al. [6] | IEEE Communications Surveys and Tutorials (COMST) | 230 | Autonomous cars: research results, issues and future challenges | √ | × | × | × | √ | √ | × |
| 2019 | Yuan et al. [15] | IEEE Transactions on NN & LS (TNNLS) | 146 | Adversarial attacks and defenses for deep learning systems | × | √ | √ | √ | × | × | × |
| 2019 | Wang et al. [16] | arXiv | 128 | Adversarial ML attacks and defenses in text domain | × | √ | √ | √ | × | × | × |
| 2019 | Our Paper | — | 239 | Security of CAVs and ML | √ | √ | √ | √ | √ | √ | √ |

The detailed overview of adversarial ML and its threats to CAVs is given in Section IV. An outline of various solutions for robustifying applications of ML, along with common methods and recommendations for evaluating robustness, is presented in Section V. Section VI presents open research problems on the use of ML in the context of CAVs. Finally, we conclude the paper in Section VII. A summary of the salient acronyms used in this paper is presented in Table II for convenience.

TABLE II: List of Acronyms

| Acronym | Definition |
|---------|------------|
| BSM | Basic Safety Message |
| BFGS | Broyden–Fletcher–Goldfarb–Shanno Algorithm |
| CAN | Controller Area Network |
| CAVs | Connected and Automated (Autonomous) Vehicles |
| CIFAR | Canadian Institute for Advanced Research |
| CNN | Convolutional Neural Network |
| C&W | Carlini and Wagner Algorithm |
| DARPA | Defense Advanced Research Projects Agency |
| DL | Deep Learning |
| DNN | Deep Neural Network |
| ECUs | Electronic Control Units |
| FGSM | Fast Gradient Sign Method |
| GAN | Generative Adversarial Networks |
| GPS | Global Positioning System |
| GTSDB | German Traffic Sign Detection Benchmark |
| GTSRB | German Traffic Sign Recognition Benchmark |
| IoV | Internet of Vehicles |
| JSMA | Jacobian-based Saliency Map Attack |
| L-BFGS | Limited-memory BFGS |
| LIDAR | LIght Detection and Ranging |
| LISA | Laboratory for Intelligent & Safe Automobiles |
| LSTM | Long Short-Term Memory |
| MC/DC | Modified Condition/Decision Coverage |
| ML | Machine Learning |
| MNIST | Modified National Institute of Standards and Technology |
| ODD | Operational Design Domain |
| RADAR | RAdio Detection And Ranging |
| RL | Reinforcement Learning |
| RSU | Road-Side Unit |
| SAE | Society of Automotive Engineers |
| SVM | Support Vector Machine |
| VANETs | Vehicular Ad-hoc Networks |
| V2I | Vehicle to Infrastructure |
| V2V | Vehicle to Vehicle |
| V2X | Vehicle to Everything |
| VGG | (Oxford University's) Visual Geometry Group |
| YOLO | You Only Look Once (Classifier) |

II. CONNECTED AND AUTONOMOUS VEHICLES (CAVs): HISTORY, INTRODUCTION, AND CHALLENGES

In this section, we provide the history, introduction, and background of CAVs, along with the different conventional and security challenges associated with them.

A. Autonomous Vehicles and Levels of Automation

The Society of Automotive Engineers (SAE) has defined a taxonomy of driving automation that is organized into six levels. The SAE defines the potential of driving automation at each level, as described next and depicted in Figure 2. Moreover, according to a recent scientometric and bibliometric review article on autonomous vehicles [17], different naming conventions have been used over the years to refer to autonomous vehicles. These names are illustrated in Figure 3; note that the year denotes the publication year of the first paper mentioning the corresponding name.

• Level 0 – No automation: all driving tasks and major systems are controlled by a human driver;

• Level 1 – Function-specific automation: provides limited driver assistance, e.g., lateral or longitudinal motion control;

• Level 2 – Partial driving automation: at least two primary control functions are combined to perform an action, e.g., lane keeping assistance and adaptive cruise control;

• Level 3 – Conditional driving automation: enables limited self-driving automation, i.e., allows the driver to temporarily divert attention from driving to perform another activity, but the presence of the driver is always required to retake control within a few seconds;

• Level 4 – High driving automation: an automated driving system performs all dynamic driving tasks, e.g., monitoring of the environment and motion control; however, the driver is capable of taking full control of the vehicle's safety-critical functions under certain scenarios;

• Level 5 – Self-driving automation: an automated driving system performs all dynamic driving functions and monitors the nearby environment for the entire trip, without any human intervention at any time.

Fig. 2: The taxonomy of the levels of automation in driving.

Fig. 3: An illustration of the different naming conventions used to refer to autonomous vehicles over the years; the year denotes the publication year of the first paper mentioning the corresponding name. We see that the self-driving car is not an entirely new concept and has been referred to by a number of terms. (Source: [17])

The SAE defines the operational design domain (ODD) for the safe operation of autonomous vehicles as “the specific conditions under which a given driving automation system or feature thereof is designed to function, including, but not limited to, driving modes” [18]. The ODD refers to the domain of operation that an autonomous vehicle has to deal with: an ODD representing the ability to drive in good weather conditions is quite different from an ODD that embraces all kinds of weather and lighting conditions. The SAE recommends that the ODD be monitored at run-time to gauge whether the autonomous vehicle is in a situation that it was designed to safely handle.

B. Development of Autonomous Vehicles: Historical Overview

Self-driving vehicles, especially ones at the lower levels of automation (referring to the taxonomy of automation presented in Figure 2), have existed for a long time. In 1925, Francis Houdina presented a remote-controlled car famously known as the American Wonder. At the 1939–1940 New York World's Fair, General Motors' Futurama exhibit showcased aspects of what we today call the self-driving car. The first work on the design and development of self-driving vehicles was initiated by General Motors and RCA in the early 1950s [19], followed by the work of Prof. Robert Fenton at The Ohio State University from 1964–80.

In 1986, Ernst Dickmanns at the Bundeswehr University Munich designed a robotic van that could drive autonomously without traffic, and by 1987 the robotic van drove at up to 60 km/h. This group also started the development of video image processing to recognize driving scenes [20], which was followed by a demonstration performed under the Eureka PROMETHEUS project. The super-smart vehicle systems (SSVS) programs in Europe [21] and Japan [22] were also based on this earlier work of Dickmanns. In 1992, four vehicles drove in a convoy using magnetic markers on the road for relative positioning; a similar test was repeated in 1997 with eight vehicles using radar systems and V2V communications. This work paved the way for modern adaptive cruise control and automated emergency braking systems. This R&D work was followed by initiatives such as the PATH Program by Caltrans and the University of California, launched in 1986.

In particular, the work on self-driving gained huge popularity with the demonstration of research done by the National Automated Highway Systems Consortium (NAHSC) during 1994–98 [23], and this momentum lasted until 2003.

In 2002, the Defense Advanced Research Projects Agency (DARPA) announced its grand autonomous vehicle challenge. The first episode was held in 2004, where very few cars were able to navigate more than a few miles through the Mojave desert; no team completed the course, and Carnegie Mellon University's (CMU) vehicle traveled the farthest, driving nearly seven miles of a roughly 140-mile route. In 2005, the second episode of the DARPA grand challenge was held; in this episode, five out of twenty-three teams made it to the finish line, and Stanford University's vehicle “Stanley” won the challenge. In the third episode of the DARPA grand challenge, in 2007, universities were invited to field autonomous vehicles on busy roads to shift the perception of the public and of the tech and automobile industries about the design and feasibility of autonomous vehicles.

In 2007, Google hired the team leads of the Stanford and CMU autonomous vehicle projects and started pushing towards its own self-driving car design on public roads. By the year 2010, Google's self-driving cars had navigated approximately 140 thousand miles on the roads of California in the quest of achieving the target of 10 million miles by 2020. In 2013, VisLab (a spin-off company of the University of Parma) successfully completed the international autonomous driving challenge by driving two orange vans 15,000 km, with minimal driver intervention, from the University of Parma in Italy to Shanghai in China. A year later, in 2014, Volvo demonstrated the road train concept, in which one vehicle controls several other vehicles behind it in order to avoid road congestion. In 2016, Tesla started commercial sales of cars with highway-speed intelligent cruise control requiring minimal human intervention.

In October 2018, Google's self-driving car program successfully achieved the 10 million miles target. The main aim of Google's self-driving car program is to halve the number of deaths caused by traffic accidents, and to date they are working towards achieving this ambitious goal. It is expected that by 2020 the state departments of motor vehicles (DMV) may permit self-driving cars on highways with special lanes and control settings. By 2025, it is expected that public transportation will also become driverless, and by 2030 it is foreseen that we will have level-5 autonomous vehicles1. A timeline for the development of autonomous vehicles over the past decades is depicted in Figure 4.

C. Introduction to Connected and Autonomous Vehicles (CAVs)

The term connected vehicles refers to the technologies, services, and applications that together enable inter-vehicle connectivity. In connected vehicle settings, the vehicles are equipped with a wide variety of onboard sensors that communicate with each other via the CAN bus and with nearby communication infrastructure and vehicles (as illustrated in Figure 5).

1 https://bit.ly/2Kei9ci

Fig. 4: The timeline for the development of autonomous vehicles.

Fig. 5: The basic system architecture of connected vehicles having three types of communications: vehicle-to-vehicle (V2V), infrastructure-to-infrastructure (I2I), and infrastructure-to-vehicle (I2V).

The applications of connected vehicles range from traffic safety, roadside assistance, infotainment, efficiency, telematics, and remote diagnostics to autonomous vehicles and GPS.

Fig. 6: Autonomous vehicle’s major sensor types, their range, and position (figure adapted from [24]).

In general, connected vehicles can be regarded as a cooperative intelligent transport system [25] and a fundamental component of the Internet of Vehicles (IoV) [26]. A review of truck platooning automation projects formulating the settings of connected vehicles (described earlier) together with various sensors (i.e., RADAR, LIDAR, localization, laser scanners, etc.) and computer vision techniques is presented in [27]. The key purpose of initiating and investigating such projects is to reduce energy consumption and personnel costs through the automated operation of following vehicles. Furthermore, it has been suggested in the literature that throughput on urban roads can be doubled using vehicle platooning [28].

CAVs are an emerging area of research that is drawing substantial attention from both academia and industry. The idea of connected vehicles has been conceptualized to enable inter-vehicle communication to provide better traffic flow, road safety, and a greener vehicular environment while reducing fuel consumption and travel cost. There are two types of nodes in a network of connected vehicles: (1) vehicles having onboard units (OBUs), and (2) roadside wireless communication infrastructure or roadside units (RSUs). The basic configuration of a vehicular network is shown in Figure 5. There are three modes of communication in such networks: vehicle-to-vehicle (V2V), infrastructure-to-infrastructure (I2I), and vehicle-to-infrastructure (V2I). Besides these, there are two more types of communication—vehicle-to-pedestrian (V2P) and vehicle-to-everything (V2X)—that are expected to become part of the future connected vehicular ecosystem.

In modern vehicles, self-contained embedded systems called electronic control units (ECUs) are used to digitally control a heterogeneous combination of components (such as brakes, lighting, entertainment, and drivetrain/powertrain) [29]. There are more than 100 such embedded ECUs in a car, executing about 100 million lines of code, and they are interconnected to control and provide different functionalities such as acceleration, steering, and braking [30]. The security of ECUs can be compromised, and remote attacks can be realized to gain control of the vehicle, as illustrated in [29].

Modern CAVs utilize a number of onboard sensors, including proximity, short-range, medium-range, and long-range sensors. While each of these sensors works in its dedicated range, they act together to detect objects and obstacles over a wide range. The major types of sensors deployed in autonomous vehicles and their sensing ranges are shown in Figure 6 and are briefly discussed next.

• Proximity Sensors (5 m): Ultrasonic sensors are proximity sensors designed to detect nearby obstacles when the car is moving at low speed; in particular, they provide parking assistance.

• Short Range Sensors (30 m): There are two types of short-range sensors: (1) forward and backward cameras and (2) short range radars (SRR). Forward cameras assist in traffic sign recognition and lane departure warning, while backward cameras provide parking assistance and SRRs help in blind spot detection and cross-traffic alerts.

• Medium Range Sensors (80–160 m): LIDAR and medium-range radars (MRR) are designed with a medium range and are used for pedestrian detection and collision avoidance.

• Long Range Sensors (250 m): Long range radars (LRR) enable adaptive cruise control (ACC) at high speeds in conjunction with the information collected from internal sensors, from other vehicles, and from nearby RSUs [31].

The software design of autonomous vehicles utilizing ML/DL schemes is divided into five inter-connected modules, namely environmental perception, a mapping module, a planning module, a controller module, and a system supervisor. The software modules take input from the sensor block of the autonomous vehicle and output intelligent actuator control commands. Figure 7 highlights the software design of autonomous vehicles and also identifies the sensory input required for each software module to perform its designated task.

Fig. 7: The systematic software workflow of autonomous vehicles. The nuts and bolts of all important operational blocks of the software workflow are depicted to provide the reader with a better understanding of the system design involved in developing a state-of-the-art autonomous vehicle.

D. Security-Related Challenges in Developing Robust CAVs

Modern vehicles are controlled by complex distributed computer systems comprising millions of lines of code executing on tens of heterogeneous processors with rich connectivity provided by internal networks (e.g., CAN) [9]. While this structure has offered significant efficiency, safety, and cost benefits, it has also created the opportunity for new attacks. Ensuring the integrity and security of vehicular systems is crucial, as they are intended to provide road safety and are essentially life critical. Vehicular networks are designed using a combination of different technologies, and there are various attack surfaces, which can be broadly classified into internal and external attacks. Different types of attacks on vehicular networks are described below.

1) Application Layer Attacks: Application layer attacks affect the functionality of a specific vehicular application, such as beaconing and message spoofing. Application layer attacks can be broadly classified as integrity or authenticity attacks and are briefly described below.

(a) Integrity Attacks: In the message fabrication attack, the adversary continuously listens to the wireless medium and, upon receiving each message, fabricates its content accordingly and rebroadcasts it to the network. Modification of each message may have a different effect on the system state and depends solely on the design of the longitudinal control system. A comprehensive survey on attacks on the fundamental security goals, i.e., confidentiality, integrity, and availability, can be found in [32].
In the spoofing attack, the adversary imitates another vehicle in the network to inject falsified messages into the target vehicle or a specific vehicle preceding the target. Therefore, the physical presence of the attacker close to the target vehicle is not necessarily required. In a recent study [33], the use of onboard ADAS sensors is proposed for the detection of location spoofing attacks in vehicular networks. A similar type of attack in a vehicular network is the GPS spoofing/jamming attack [34], in which an attacker transmits false location information by generating strong GPS signals from a satellite simulator.

In addition, a thief can use an integrated GPS/GSM jammer to restrain a vehicle's anti-theft system from reporting the vehicle's actual location [35].
In the replay attack, the adversary stores a message received from one of the network's nodes and replays it later to attain malicious goals [36]. The replayed message contains old information that can cause different hazards to both the vehicular network and its nodes; for example, consider a message replay attack by a malicious vehicle that is attempting to jam traffic [37].

(b) Authenticity Attacks: Authenticity is another major challenge in vehicular networks, which refers to protecting the vehicular network from inside and outside malicious vehicles (possessing falsified identities) by denying them access to the system [38]. There are two types of authenticity attacks, namely the Sybil attack and impersonation attacks [39]. In a Sybil attack, a malicious vehicle pretends to have many fake identities [40], and in an impersonation attack, the adversary exploits a legitimate vehicle to obtain network access and perform malicious activities. For example, a malicious vehicle can impersonate a few non-malicious vehicles to broadcast falsified messages [41]. This type of attack is also known as the masquerading attack.

To avoid application layer attacks, various cryptographic approaches can be effectively leveraged, especially when the attacker is a malicious outsider [1]. For instance, digital signatures can be used to ensure message integrity and to protect messages against unauthorized use [42]. In addition, digital signatures can potentially provide both data-level and entity-level authentication. Moreover, to prevent replay attacks, a timestamp-based random number (nonce) can be embedded within messages. While the aforementioned methods are general, there are other unprecedented challenges related to vehicular network implementation, deployment, and standardization. For example, protection against security threats becomes more challenging in the presence of a trusted but compromised vehicle with a valid certificate. In such cases, data-driven anomaly detection methods can be used [43], [44]. A survey on anomaly detection for enhancing the security of connected vehicles is presented in [45].
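
The timestamp-and-nonce idea above can be made concrete with a small sketch. The snippet below is illustrative only and is not a protocol from the cited literature: the message fields, the 2-second freshness window, and the pre-shared symmetric HMAC key are assumptions chosen to keep the example self-contained (deployed vehicular networks typically rely on PKI-based digital signatures instead).

```python
import hmac, hashlib, json, os, time

SHARED_KEY = b"pre-shared-demo-key"      # assumption: key provisioning is out of scope here
FRESHNESS_WINDOW = 2.0                   # seconds a safety message is considered fresh
seen_nonces = set()                      # receiver-side cache of recently seen nonces

def make_bsm(vehicle_id, position, speed):
    """Build a basic safety message (BSM) carrying a timestamp and a random nonce."""
    body = {"id": vehicle_id, "pos": position, "speed": speed,
            "ts": time.time(), "nonce": os.urandom(8).hex()}
    payload = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_bsm(msg):
    """Reject tampered, stale, or replayed messages."""
    expected = hmac.new(SHARED_KEY, msg["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["tag"]):
        return False                      # integrity / authenticity failure
    body = json.loads(msg["payload"])
    if time.time() - body["ts"] > FRESHNESS_WINDOW:
        return False                      # stale message (possible replay)
    if body["nonce"] in seen_nonces:
        return False                      # nonce reuse (replay)
    seen_nonces.add(body["nonce"])
    return True

bsm = make_bsm("veh-42", (31.52, 74.35), 13.9)
print(verify_bsm(bsm))   # True
print(verify_bsm(bsm))   # False: same nonce seen again, treated as a replay
```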

2) Network Layer Attacks: Network layer attacks differ from application layer attacks in that they can be launched in a distributed manner. One prominent example of such attacks on vehicular systems is the use of vehicular botnets to attempt a denial of service (DoS) or distributed denial of service (DDoS) attack. The potential of a vehicular network-based botnet attack on autonomous vehicles is presented in [46]. The study demonstrates that such an attack can cause severe physical congestion on hot-spot road segments, resulting in an increased trip duration for vehicles in the target area. Another way to realize a DoS attack is network jamming, which causes disruption in the communications network over a small or large geographic area. As discussed earlier, current configurations of vehicular networks are based on the IEEE 802.11p standard, which uses a single control channel (CCH) with multiple service channels (SCH) and can be attacked by attempting single-channel or multi-channel jamming by sweeping between all channels. Various conventional techniques can be adopted to mitigate network layer attacks, such as frequency hopping and channel and technology switching. The coalition or platooning attack is a similar type of attack, in which a group of compromised vehicles cooperate to perform malicious activities such as blocking or interrupting communications between legitimate vehicles.

3) System Level Attacks: Attacks on the vehicle's hardware and software are known as system level attacks and can be performed either by malicious insiders at development time or by outsiders using unattended vehicular access. Such attacks are more serious in nature, as they can cause damage even in the presence of deployed state-of-the-art security measures and secure end-to-end communications [47]. For instance, if the onboard hardware or software system of a vehicle is maliciously modified, then the information exchange between the vehicle and communication systems will be inaccurate, and the overall performance and security of the vehicular network will be compromised. In [48], the authors investigated a non-invasive sensor spoofing attack on a car's anti-lock braking system such that the braking system mistakenly reports a specific velocity.

4) Privacy Breaches: In vehicular networks, vehicles periodically broadcast safety messages that contain critical information such as vehicle identity, current location, velocity, and acceleration. The adversary can exploit such information by attempting an eavesdropping attack, which is a type of passive attack and is more difficult to detect. Therefore, preserving the privacy of vehicles and drivers is of utmost importance. This allows the vehicles to communicate with each other without disclosing their identities, which is accomplished by masking their identities, e.g., using pseudonyms. In vehicular networks, knowing the origin of a message is crucial for authentication purposes; therefore, vehicles should be equipped with a privacy-preserving authentication mechanism ensuring that the communication among vehicles (V2V) and with infrastructure (V2I) is confidential. However, inter-vehicle communication can be eavesdropped by anyone within radio range, e.g., a malicious vehicle can collect and misuse confidential information. Similarly, an attacker can construct location profiles of vehicles by establishing a connection with the RSU. Therefore, even pseudonymous or completely anonymous schemes in vehicular networks remain vulnerable to privacy breaches [49].

5) Sensor Attacks: Although the sensors of autonomous vehicles are by design resilient to environmental noise, such as acoustic interference from nearby objects and vehicles, current sensors cannot resist intentional noise, which can be injected to realize various attacks such as jamming and spoofing.

6) Attacks on Perception System: The perception system of self-driving vehicles is developed using various computer vision techniques, including modern ML/DL-based methods for identifying objects, e.g., pedestrians, traffic signs, and symbols. The perception system of a self-driving vehicle is highly vulnerable to physical-world and adversarial attacks. For example, suppose we are learning a controller f(x) to predict the steering angle in an autonomous car as a function of the vision-based input (captured in a feature vector x). The adversary may introduce small manipulations (i.e., x is modified into x′) such that the predicted steering angle f(x′) is maximally distant from the optimal angle y.
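
The following sketch illustrates this kind of attack on a regression-based steering controller. The controller architecture, the L∞ budget, and the number of attack steps are arbitrary choices made for the example; it is a toy illustration of gradient-based input perturbation, not a reproduction of any specific published attack.

```python
import torch
import torch.nn as nn

# Toy steering controller f(x): maps a flattened camera frame to a steering angle.
controller = nn.Sequential(nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 1))

def steering_attack(x, y_opt, eps=0.03, steps=10):
    """Craft x' close to x (L-infinity ball of radius eps) that drives f(x') away from y_opt."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        pred = controller(x_adv)
        loss = ((pred - y_opt) ** 2).mean()      # the adversary wants this to be LARGE
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + (eps / steps) * x_adv.grad.sign()   # ascend the loss
            x_adv = torch.clamp(x_adv, x - eps, x + eps)        # stay imperceptible
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                # stay a valid image
    return x_adv.detach()

x = torch.rand(1, 64 * 64)          # stand-in for a camera frame
y_opt = torch.tensor([[0.0]])       # optimal steering angle (drive straight)
x_adv = steering_attack(x, y_opt)
print(controller(x).item(), controller(x_adv).item())
```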

7) Intrusion Detection: The detection of malicious activities is one of the major challenges in VANETs. Intrusion detection systems enable the identification of various types of attacks being performed on the system, e.g., sinkhole and blackhole attacks. Without such a system, communication in vehicular networks is highly vulnerable to numerous attacks such as selective forwarding, rushing, and Sybil attacks. To detect the selective forwarding attack, a trust-system-based method utilizing local and global detection of attacks through inter-node mutual monitoring and detection of abnormal driving patterns is presented in [50]. Ali et al. proposed a system for intelligent intrusion detection of gray hole and rushing attacks [51].
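
As a concrete illustration of data-driven anomaly detection in this setting (not the specific trust-based systems cited above), the sketch below flags abnormal vehicle behavior reports with an off-the-shelf unsupervised detector; the chosen features and contamination rate are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Each row: [speed (m/s), inter-vehicle gap (m), message rate (Hz)] reported by a vehicle.
normal = np.column_stack([rng.normal(14, 2, 500),
                          rng.normal(30, 5, 500),
                          rng.normal(10, 1, 500)])
suspicious = np.array([[45.0, 2.0, 80.0],    # implausible speed/gap plus message flooding
                       [14.0, 30.0, 0.1]])   # near-silent node (possible selective forwarding)

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspicious))   # -1 marks an anomaly, +1 marks normal behavior
```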

8) Certificate Revocation: The security mechanism of vehicular networks is based on a trusted certification authority (CA) that manages the identities and credentials of vehicles by issuing valid certificates to them. Vehicles are essentially unable to operate in the system without a valid certificate, and the validity of a certificate must be revoked after a certain amount of time. The revocation process is administratively challenging due to issues such as the identification of nodes with illegitimate behavior and the need to change the registered domain. Moreover, it is necessary to restrain malicious nodes by revoking their certificates to prevent them from attacking the system. To tackle this problem, three different certificate revocation protocols have been proposed in [52].

E. Non-Security-Related Challenges in Deploying CAVs

The phenomenon of connected vehicles is realized using a technology known as vehicular networks, which face various challenges that need to be addressed for their efficient deployment in the longer term. The various challenges associated with vehicular networks are described below.

Fig. 8: The machine learning (ML) pipeline of CAVs comprising four major modules: (1) perception; (2) prediction; (3) planning; and (4) control.

1) High Mobility of Nodes: The large-scale mobility of vehicles in vehicular networks results in a highly dynamic topology, raising several challenges for the communication network [53]. In addition, the dynamic nature of traffic can lead to a partitioned network having isolated clusters of nodes [54]. As the connections between vehicles and nearby RSUs are short-lived, the wireless channel coherence time is short. This makes accurate real-time channel estimation more challenging at the receiver end and necessitates the design of dynamic and robust resource management protocols that can efficiently utilize available resources while adapting to variations in vehicular density [55].

2) Heterogeneous and Stringent QoS Requirements: In vehicular networks, there are different modes of communication that can be broadly categorized into V2V and V2I communications. In V2V communications, vehicles exchange safety-critical information (e.g., information beacons, road and traffic conditions) among each other, known as basic safety messages (BSM). This communication, which can be performed periodically or when triggered by some event, requires high reliability and is sensitive to delay [56]. In V2I communications, on the other hand, vehicles communicate with nearby communications infrastructure to get support for route planning, traffic information, and operational data, and to access entertainment services that require more bandwidth and frequent access to the Internet, e.g., for downloading high-quality maps and accessing infotainment services. Therefore, the heterogeneous and stringent QoS requirements of VANETs cannot be simultaneously met with traditional wireless design approaches.

3) Learning Dynamics of Vehicular Networks: As discussed above, vehicular networks exhibit high dynamicity; thus, to meet the real-time and stringent requirements of vehicular networks, historical data-driven predictive strategies can be adopted, e.g., traditional methods like hidden Markov models (HMM) and Bayesian methods [56]. In addition to traditional ML methods, more sophisticated DL models can be used; for example, recurrent neural networks (RNN) and long short-term memory (LSTM) networks have been shown to be beneficial for time series data and can potentially be used for modeling the temporal dynamics of vehicular networks.

4) Network Congestion Control: Vehicular networks are geographically unbounded and can be developed for a city, several cities, or entire countries. This unbounded nature leads to the challenge of network congestion [57]: traffic density is higher in urban areas than in rural areas, particularly during rush hours, which can lead to network congestion issues.

5) Time Constraints: The efficient operation of vehicular networks requires hard real-time guarantees, because it lays the foundation for many other applications and services that require strict deadlines [58], for example, traffic flow prediction [59], traffic congestion control [60], and path planning [61]. Therefore, safety messages should be broadcast within an acceptable time, either by vehicles or by RSUs.

III. THE ML PIPELINE IN CAVS

The driving task elements of self-driving vehicles that can benefit from ML can be broadly categorized into the following four major components (as shown in Figure 8).

TABLE III: Overview of machine learning (ML)-based research on different vehicular network applications

| Authors | Application | Methodology |
|---------|-------------|-------------|
| Yao et al. [62] | Location prediction based scheduling and routing | Hidden Markov models |
| Xue et al. [63] | Location prediction based scheduling and routing | Variable-order Markov models |
| Zeng et al. [64] | Location prediction based scheduling and routing | Recursive least squares |
| Karami et al. [65] | Network congestion control | Feed forward neural network |
| Taherkhani et al. [66] | Network congestion control | k-means clustering |
| Li et al. [67] | Load balancing | Reinforcement learning |
| Taylor et al. [68] | Network security | LSTM |
| Zheng et al. [69] | Virtual resource allocation | Reinforcement learning |
| Atallah et al. [70], [71] | Resource management | Reinforcement learning |
| Ye et al. [57] | Distributed resource management | Reinforcement learning |
| Kim et al. [72] | Vehicle trajectory prediction | Reinforcement learning |

1) Perception: assists in perceiving the nearby environment and recognizing objects;

2) Prediction: predicting the actions of perceived objects, i.e., how environmental actors such as vehicles and pedestrians will move;

3) Planning: route planning of the vehicle, i.e., how to reach from point A to point B;

4) Decision Making & Control: making decisions relating to vehicle movement, i.e., how to make the longitudinal and lateral decisions to control and steer the vehicle.

These components are combined to develop a feedback system enabling self-driving without any human intervention. This ML pipeline can then facilitate autonomous real-time decisions by leveraging insights from the diverse types of data (e.g., vehicles' behavioral patterns, network topology, vehicles' locations, and kinetics information) that can be easily gathered by CAVs.

In the remainder of this section, we discuss some of the most prominent applications of ML-based methods for performing these tasks (a summary is presented in Table III).

A. Applications of ML for the Perception Task in CAVs

Different ML techniques, particularly DL models, have been widely used for developing the perception system of autonomous vehicles [73]. In addition to using video cameras as the major vision sensors, these vehicles also use other sensors, e.g., RADAR and LIDAR, for the detection of different events in the car's surroundings. The surrounding environment of an autonomous vehicle is perceived in two stages [74]. In the first stage, the whole road is scanned for the detection of changes in driving conditions such as traffic signs and lights, pedestrian crossings, and other obstacles. In the second stage, knowledge about the other vehicles is acquired. In [75], a CNN model is trained for developing a direct perception representation for autonomous vehicles.
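
To make the perception stage concrete, the sketch below defines a small CNN classifier of the kind used for traffic-sign recognition. The layer sizes, the 32x32 input resolution, and the 43-class output (as in the GTSRB benchmark) are illustrative assumptions, not the architecture of any model cited above.

```python
import torch
import torch.nn as nn

class TrafficSignNet(nn.Module):
    """A small CNN for 32x32 RGB traffic-sign patches (43 classes, as in GTSRB)."""
    def __init__(self, num_classes=43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 32 x 16 x 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 64 x 8 x 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TrafficSignNet()
batch = torch.rand(4, 3, 32, 32)     # stand-in for camera crops around detected signs
logits = model(batch)
print(logits.argmax(dim=1))          # predicted sign classes (untrained, so arbitrary)
```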

B. Applications of ML for the Prediction Task in CAVs

In CAVs, accurate and timely prediction of different events encountered in driving scenes is another important task, which is mainly accomplished using different ML and DL algorithms. For instance, autonomous vehicles use DL models for the detection and localization of obstacles [76], of different objects (e.g., vehicles, pedestrians, and bikes) [77] and their behavior (e.g., tracking pedestrians along the way [78]), and for traffic sign [79] and traffic light recognition [80]. Other prediction tasks in CAVs that involve the application of ML/DL methods are vehicle trajectory and location prediction [81], efficient and intelligent wireless communication [82], and traffic flow prediction and modeling [83]. Moreover, ML schemes have also been used for the prediction of uncertainties in autonomous driving conditions [84].
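
A minimal sketch of sequence-based trajectory prediction is given below; the use of an LSTM over past (x, y) positions, the window length, and the single-step prediction head are assumptions for illustration rather than a reproduction of the cited approaches.

```python
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """Predict a vehicle's next (x, y) position from its last few observed positions."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, past):                 # past: (batch, time, 2)
        out, _ = self.lstm(past)
        return self.head(out[:, -1])         # regress the next position from the last hidden state

model = TrajectoryPredictor()
past = torch.cumsum(torch.rand(8, 10, 2), dim=1)   # 8 toy tracks of 10 positions each
next_pos = model(past)
print(next_pos.shape)                               # torch.Size([8, 2])
```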

C. Applications of ML for the Planning Task in CAVs

CAVs are equipped with onboard data processing capabilities, and they intelligently process the data collected from heterogeneous sensors for efficient route planning and other optimized operations using different ML and DL techniques. The key goal of route planning is to reach the destination in a short amount of time while avoiding traffic congestion, potholes, and other vehicles, navigating through GPS, and consuming as little fuel as possible. In the literature, motion planning of autonomous vehicles is studied in three dimensions: (1) finding a path for reaching the destination point; (2) searching for the fastest manoeuvre; and (3) determining the most feasible trajectory [85]. Moreover, to avoid collisions between vehicles in CAVs, predicting the trajectories of other vehicles is a crucial task for planning the trajectory of an autonomous vehicle [86]. For instance, Li et al. presented a hybrid approach to model uncertainty in vehicle trajectory prediction for CAV applications using deep learning and kernel density estimation [87].
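
The first of the three dimensions above, finding a path to the destination, can be illustrated with a classical grid search. The occupancy grid, the 4-connected moves, and the Manhattan-distance heuristic below are assumptions made for the example; planners in real CAVs operate on far richer map and trajectory representations.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 0/1 occupancy grid (1 = blocked), 4-connected moves."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan-distance heuristic
    frontier, best, parent = [(h(start), 0, start)], {start: 0}, {start: None}
    while frontier:
        _, cost, cur = heapq.heappop(frontier)
        if cur == goal:                                        # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0]) and not grid[nxt[0]][nxt[1]]:
                if cost + 1 < best.get(nxt, float("inf")):
                    best[nxt], parent[nxt] = cost + 1, cur
                    heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt))
    return None                                                # no route found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # a shortest obstacle-free route through the open cell
```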

D. Applications of ML for the Decision Making and Control

In recent years, DL-based algorithms have been extensively used for the control of autonomous vehicles and are refined through millions of kilometers of test drives. For instance, Bojarski et al. presented a CNN-based end-to-end learning framework for self-driving cars [88]. The model was able to drive the car on local roads with or without lane markings and on highways using a small amount of training data. In a similar study, a CNN is trained for end-to-end learning of lane keeping for autonomous cars [89]. Recently, researchers have also started utilizing deep reinforcement learning (RL) for performing actions and decision making in driving conditions [90]. Bouton et al. proposed a generic approach to enforce probabilistic guarantees on RL, for which they derived an exploration strategy that restricts the RL agent to choose only among actions that satisfy a desired probabilistic specification criterion prior to training [91].

Fig. 9: The illustration of the generalization of attack surfaces in ML systems: generic model (top), autonomous vehicles model (middle), and connected vehicles model (bottom).

Moreover, human-like speed control of autonomous vehicles using deep RL with double Q-learning is presented in [92], using scenes generated from naturalistic driving data for learning. In [93], the authors presented an integrated framework that uses a deep RL-based approach for the dynamic orchestration of networking, caching, and computing resources for connected vehicles.
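
The flavor of RL-based speed control can be conveyed with a deliberately simplified sketch: a tabular Q-learning agent that tries to keep a safe gap to a lead vehicle. The state/action discretization, the reward shaping, and the constant-speed leader are toy assumptions and bear no direct relation to the double Q-learning system cited above.

```python
import numpy as np

rng = np.random.default_rng(0)
GAPS = 12                      # discretized gap-to-lead-vehicle states (bins of 5 m)
ACTIONS = (-1.0, 0.0, 1.0)     # brake, hold, accelerate (m/s^2)
Q = np.zeros((GAPS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(gap, speed, a):
    """Toy longitudinal dynamics: the lead vehicle cruises at 14 m/s, 1 s time step."""
    speed = max(0.0, speed + ACTIONS[a])
    gap = gap + (14.0 - speed)                       # gap grows if we are slower than the lead
    reward = -abs(gap - 25.0) / 25.0                 # keep roughly a 25 m gap
    if gap <= 0:                                     # collision
        reward, gap = -10.0, 0.0
    return min(GAPS - 1, max(0, int(gap // 5))), gap, speed, reward

for _ in range(2000):                                # training episodes
    gap, speed = 40.0, 10.0
    s = min(GAPS - 1, int(gap // 5))
    for _ in range(60):
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s2, gap, speed, r = step(gap, speed, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # Q-learning update
        s = s2

print("greedy action per gap state:", [ACTIONS[int(a)] for a in Q.argmax(axis=1)])
```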

In addition, ML-based methods have been used for many other applications in CAVs, for example, adaptive traffic flow, in which smart infrastructure integrates V2V signals from moving cars to optimize speed limits, traffic-light timing, and the number of lanes in each direction on the basis of the actual traffic load. The traffic flow can be further improved in CAVs by using cooperative adaptive cruise control technology [94]. Also, vehicles can take advantage of cruise control and save fuel by following one another in the form of vehicle platoons. Moreover, DL-based methods have been proposed for intrusion detection for the in-vehicle security of the CAN bus [95]. An overview of intelligent and connected vehicles and their current and future perspectives is presented in [96].

Autonomous vehicles are evolving through four stages of development. The first stage includes passive warning and convenience systems such as front- and backward-facing cameras, cross-traffic warning mechanisms, and radar for blind spot detection. These warning systems use different computer vision and machine learning techniques to perceive the surroundings of the vehicle on the road and to recognize traffic signs and static and moving objects. In the second stage, these systems are used to assist the active control system of the vehicle while parking and braking and to prevent backing over unseen objects. In the third stage, the vehicle is equipped with some semi-autonomous operations—as the vehicle may behave unexpectedly, the on-seat driver should be able to resume control. In the final stage, the vehicle is designed to perform fully autonomous operations.

CAVs together constitute the setting of a self-driving vehicular network, and there is a strong synergy between them [5]. In addition, autonomous vehicles are an important component of future vehicular networks and are equipped with complex sensory equipment. Autonomous vehicular networks are predictive and adaptive to their environments and are designed with two fundamental goals, i.e., autonomy and interactivity. The first goal enables the network to monitor, plan, and control itself, and the latter ensures that the infrastructure is transparent and friendly to interact with.

The deployment of ML in CAVs entails the following stages (a minimal end-to-end sketch of these stages is given after the list):

(a) Data Collection: Input data is collected using sensors or from other digital repositories. In autonomous vehicles, input data is collected using a complex sensory network, e.g., cameras, RADAR, GPS, etc. (see Figure 6); in a connected vehicular ecosystem, there is also inter-vehicle information communication.

(b) (Pre-)Processing: The heterogeneous data (video imagery, network and traffic information, etc.) collected by the sensors is then digitally processed, and appropriate features (e.g., traffic sign information and traffic flow information) are extracted.

(c) Model Training: Using the features extracted from the input data, an ML model is trained to recognize and distinguish between different objects and events encountered in the driving environment, e.g., recognizing moving objects like pedestrians, vehicles, and cyclists, and distinguishing between traffic signs, i.e., stop or speed limit signs.

(d) Decision or Action: A decision or an action (e.g., stopping the car at a stop sign or predicting traffic flow based on the knowledge acquired by the vehicular network) is performed according to the learned knowledge and the underlying system.
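
A minimal end-to-end sketch of these four stages is shown below. The sensor stub, the two-dimensional features, and the nearest-centroid "model" are placeholders chosen only to keep the example self-contained; a real CAV pipeline would use the sensors, features, and DL models discussed elsewhere in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def collect_data(n=200):
    """(a) Data collection: stand-in for sensor readings plus ground-truth labels."""
    stop = rng.normal([0.9, 0.1], 0.05, (n // 2, 2))      # 'stop sign' feature cluster
    speed = rng.normal([0.2, 0.8], 0.05, (n // 2, 2))     # 'speed limit sign' feature cluster
    X = np.vstack([stop, speed])
    y = np.array([0] * (n // 2) + [1] * (n // 2))         # 0 = stop, 1 = speed limit
    return X, y

def preprocess(X):
    """(b) Pre-processing: normalize raw features."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

def train(X, y):
    """(c) Model training: a nearest-centroid classifier as a stand-in for a DL model."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def act(model, x):
    """(d) Decision/action: map the predicted class to a control action."""
    pred = min(model, key=lambda c: np.linalg.norm(x - model[c]))
    return "apply brakes" if pred == 0 else "hold speed"

X, y = collect_data()
Xn = preprocess(X)
model = train(Xn, y)
print(act(model, Xn[0]), act(model, Xn[-1]))   # expected: 'apply brakes' 'hold speed'
```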

We present an illustration of the generalization of attack surfaces in ML systems, from generic models to the more specific cases of autonomous and connected vehicles, in Figure 9. As we shall discuss later in the paper, each of these stages is vulnerable to adversarial intrusion, since an adversary can try to manipulate the data collection and processing system, or tamper with the model or its outputs.

Fig. 10: The taxonomy of adversarial examples, perturbation methods, and benchmarks (datasets and models).

IV. ADVERSARIAL ML ATTACKS AND THE ADVERSARIAL ML THREAT FOR CAVs

A comprehensive overview of adversarial ML in the context of CAVs is presented in this section.

A. Adversarial Examples

Formally, adversarial examples are defined as inputs to a deployed ML/DL model that are created by an attacker by adding an imperceptible perturbation to the actual input in order to compromise the integrity of the ML/DL model. An adversarial sample x∗ is created by adding a small, carefully crafted perturbation δ to the correctly classified sample x. The perturbation δ is calculated by iteratively approximating the optimization problem given in Eq. 1 until the crafted adversarial example gets classified by the ML classifier f(·) into the targeted class t. A taxonomy of adversarial examples, perturbation methods, and benchmarks is presented in Figure 10.

x∗ = x + arg min_δ {‖δ‖ : f(x + δ) = t}    (1)

1) Adversarial Attacks: An adversarial attack affecting the training phase of the learning process is termed a poisoning attack, where an attacker compromises the learning process of the ML/DL scheme by manipulating the training data [97], whereas an adversarial attack on the inference phase of the learning process is termed an evasion attack, where an attacker manipulates the test data or real-time inputs to the deployed model to produce a false result [98]. Usually, examples used for fooling ML/DL schemes at inference time are called adversarial examples.

2) Adversarial Perturbations: Adversarial perturbation crafting is divided into three major categories; namely, local search, combinatorial optimization, and convex relaxation. This division is based on how the objective function given in Eq. 1 is solved. Local search is the most common method of generating adversarial perturbations, where adversarial examples are generated by solving the objective function in Eq. 1 with gradient-based methods to obtain a lower bound on the adversarial perturbation. A prime example of local search adversarial example crafting is the fast gradient sign method (FGSM), where an adversarial example is created by taking a step in the direction of the gradient [99]. In another study, the authors demonstrated that adversarial images are very easy to construct using evolutionary algorithms or gradient ascent [100]. Combinatorial optimization is another method for creating adversarial examples, in which the exact solution of the optimization problem in Eq. 1 is found; a major shortcoming of this method is that its computational complexity increases with the number of examples in the dataset. Recently, Khalil et al. [101] launched a successful adversarial attack based on combinatorial and integer programming on binarized neural networks, but the performance of the proposed attack degrades as the size and dimensionality of the data increase. Convex relaxation has also recently been used to generate [102] and defend against [103] adversarial examples, where an upper bound on the objective function in Eq. 1 is calculated.
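As an illustration of gradient-based (local search) crafting, the following sketch implements an FGSM-style step in PyTorch; it assumes a differentiable classifier `model` and inputs scaled to [0, 1], and is a simplified rendition of the idea rather than the exact formulation of [99].

```python
# A minimal FGSM sketch: one step in the direction of the sign of the input gradient.
import torch
import torch.nn.functional as F

def fgsm(model, x, y_true, epsilon=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y_true)   # untargeted: increase loss of true label
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()    # epsilon bounds the L-infinity perturbation
    return torch.clamp(x_adv, 0.0, 1.0).detach()   # stay in the valid pixel range
```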

3) Different Aspects of Perturbations: Adversarial examples are designed to look like the original ones and to be imperceptible to humans. In this regard, keeping the perturbations small is of utmost importance; indeed, the literature suggests that even a one-pixel perturbation is often sufficient to fool a deep model trained for a classification task [104]. Here we analyze different aspects of adversarial perturbations.

(a) Perturbation Scope: Adversarial perturbations are generated in two scopes: (1) perturbations crafted for each legitimate input and (2) universal perturbations crafted for the complete dataset, i.e., for every original clean sample. To date, most studies have considered the first scope.

(b) Perturbation Limitation: Similarly, there are two types of limitations: optimizing the system at a low perturbation scale, and optimizing the system at a low perturbation scale with constrained optimization.

(c) The magnitude of the perturbation is mainly measured using three norms: the L2, L∞, and L0 norms. In L2-norm-based attacks, the attacker aims to minimize the squared error between the original and the adversarial example; the L2 norm measures the Euclidean distance between the adversarial example and the original sample and results in a very small amount of noise being added to the adversarial sample. L∞ attacks are perhaps the simplest type of attack; they aim to limit or minimize the maximum change over all pixels in the adversarial example, and this constraint forces only very small changes to each pixel. L0-norm-based attacks work by minimizing the number of perturbed pixels in an image, forcing the modifications onto only very few pixels.
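For concreteness, the small NumPy snippet below shows how the L0, L2, and L∞ norms quantify the same perturbation in different ways; the numbers are synthetic and purely illustrative.

```python
# Measuring a perturbation delta = x_adv - x with the three norms discussed above.
import numpy as np

x = np.random.rand(32 * 32 * 3)        # a flattened "image"
x_adv = x.copy()
x_adv[:5] += 0.02                      # perturb a handful of pixels
delta = x_adv - x

l0 = np.count_nonzero(delta)           # L0: number of changed pixels
l2 = np.linalg.norm(delta, ord=2)      # L2: Euclidean distance
linf = np.abs(delta).max()             # L-infinity: largest single-pixel change
print(l0, l2, linf)
```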

The imperceptibility of perturbations is important to developing an attack, as it ensures that the action space available to an adversary is tightly constrained. Two questions are important here: (1) what constraints are placed on the attacker's "starting point"? and (2) where did this initial example come from? Gilmer et al. identified the following salient features of adversarial perturbations [105].

(a) Indistinguishable Perturbation: The attacker does not have to select a starting point; it is given a draw from the data distribution and introduces a perturbation into the input sample that is indistinguishable to a human.

(b) Content-Preserving Perturbation: The attacker does not have to select a starting point; it is given a draw from the data distribution and may create any perturbation as long as the original content of the sample is preserved.

(c) Non-suspicious Input: The attacker can generate any type of desired perturbed input sample as long as it remains undetectable to a human.

(d) Content-Constrained Input: The attacker can generate any type of desired perturbed input sample as long as it maintains some content payload, i.e., it must be a picture of a dog but not necessarily a particular dog. This includes payload-constrained input, where human perception might not be important; rather, the intended function of the input example must remain intact.

(e) Unconstrained Input: There is no constraint on the input, and an attacker can produce any type of input example to get the desired output or behavior from the system.

4) Adversarial ML Benchmarks: In this section, we describe the benchmark datasets and victim ML models used for evaluating adversarial examples. Researchers mostly adopt an inconsistent approach and report the performance of attacks on diverse datasets and victim models. The widely used benchmark datasets and victim models are described below.

• Datasets: MNIST [106], CIFAR-10 [107], and ImageNet [108] are the most widely used datasets in adversarial ML research and are also regarded as standard deep learning datasets.

• Victim Models: The widely used victim ML/DL models for evaluating adversarial examples are LeNet [106], AlexNet [109], VGG [110], GoogLeNet [111], CaffeNet [112], and ResNet [113].

B. Threat Models for Adversarial ML Attacks on CAVs

Threat modeling is the procedure of answering a few common and straightforward questions related to the system being developed or deployed from a hypothetical attacker's point of view. Threat modeling is a fundamental component of security analysis. It requires that some fundamental questions related to the threat are addressed [114]. In particular, a threat model should identify:

• the system principals: what is the system and who are the stakeholders?
• the system goals: what does the system intend to do?
• the system adversities: what potential bad things can happen due to adverse situations or motivated adversaries?
• the system invariants: what must always remain true about the system even if bad things happen?

The key goal of threat modeling is to optimize the security of the system by determining security goals, identifying potential threats and vulnerabilities, and developing countermeasures for preventing or mitigating their effects on the system. Answering these questions requires careful logical thought as well as significant expertise and time.

As the focus of this paper is on highlighting the potential vulnerabilities of using ML techniques in CAVs, the scope of our study is restricted to the adversarial ML threat in CAVs. In the remainder of this section, we discuss the various facets of the adversarial ML threat in CAVs (a taxonomy aggregating these issues is illustrated in Figure 11).

1) Adversarial Attack Type: In the literature, attacks on learning systems are generally categorized along three dimensions [115]:

(a) Influence: It includes causative attacks (trying to get control over the training data) and exploratory attacks (exploiting misclassifications of the model without affecting the training process).

(b) Specificity: It distinguishes between targeted attacks on a specific instance and indiscriminate attacks.

(c) Security Violation: It is concerned with whether the attack violates the integrity of assets or the availability of the service.

The first dimension describes the capabilities of the adversary, i.e., whether the attacker has the ability to affect learning by poisoning the training data or, instead, exploits the model by sending new samples and observing their responses to obtain the intended behavior. The second axis indicates the specific intentions of the attacker, i.e., whether the attacker is interested in realizing a targeted attack on one particular sample or aims to cause the learned model to fail in an indiscriminate fashion. The third dimension details the types of security violation an attacker can cause, e.g., the attacker may aim to sneak harmful messages through the filter as false negatives, or realize a denial of service by causing benign samples to be misclassified as false positives.

2) Adversarial Knowledge: Based on the adversarial knowledge available to the adversaries, adversarial ML attacks are divided into three types; namely, white-box, gray-box, and black-box attacks. White-box attacks assume complete knowledge about the underlying ML model, including information about the optimization technique, the trained ML model, the model architecture, activation functions, hyperparameters, layer weights, and training data. Gray-box attacks assume partial knowledge about the targeted model, whereas black-box adversarial attacks assume the adversary has


Fig. 11: The taxonomy of various types of threat models used in the literature to design adversarial ML attacks. This figure also provides the information needed by a defender to ensure the robustness of an ML-based autonomous system.

zero knowledge of, and no access to, the underlying ML model and the training data. The black-box attack reflects the real-world setting, where not much information about the targeted ML/DL scheme is available. In such cases, the adversary acts as a normal user and tries to infer from the output of the ML system. Black-box adversarial attacks make use of the transferability property of adversarial examples, where it is assumed that adversarial examples created for one ML/DL model will affect other models trained on datasets with a distribution similar to that of the original model [116].
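The transferability property can be sketched as follows: adversarial examples are crafted in white-box fashion on a locally trained substitute model and then simply replayed against the black-box target. The snippet reuses the hypothetical `fgsm` helper from the earlier sketch and assumes both models are PyTorch classifiers trained on similarly distributed data.

```python
# A hedged sketch of a black-box transfer attack.
import torch

def transfer_attack(substitute, target, x, y, epsilon=0.03):
    x_adv = fgsm(substitute, x, y, epsilon)       # white-box crafting on the substitute
    with torch.no_grad():
        preds = target(x_adv).argmax(dim=1)       # single query to the black-box target
    return (preds != y).float().mean().item()     # fraction of transferred (fooling) examples
```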

3) Adversarial Capabilities: Identifying adversarial capabilities is important in security practice, as they define the strength of the adversary in compromising the security of the system. In general, an adversary can be stronger or weaker based on its knowledge of, and access to, the system. Adversarial capabilities dictate how, and what type of, attacks an adversary can realize, using what type of attack vector on which attack surface. The attacks can be launched in two main phases; namely, inference and training. Inference-time attacks are exploratory attacks that do not modify the underlying model; instead, they influence it to produce incorrect outputs, and they vary with the availability of system knowledge. Training-time attacks aim at tampering with the model itself or influencing its learning process and involve two types of attack methods [117]: in the first type, adversarial examples are injected into the training data, and in the second type, the training data is directly modified.

4) Adversarial Specificity: Another classification of adversarial attacks is based on the specificity of the adversarial examples, where attacks are classified as targeted and non-targeted. Attacks in which adversarial perturbations are added to compromise the performance of a specific class in the data are known as targeted adversarial attacks. Targeted adversarial attacks are launched by adversaries to create a targeted misclassification (i.e., a specific road sign will be misclassified by the self-driving vehicle while the rest of the road sign classification system continues to function correctly) or a source/target misclassification (i.e., a certain road traffic sign will always be classified into a pre-determined wrong class by the road sign classifier of a self-driving vehicle). In contrast, adversarial perturbations created to deteriorate the performance of the model irrespective of the class of data are known as non-targeted adversarial attacks. Non-targeted attacks are launched by adversaries to reduce classification confidence (i.e., a traffic sign that was previously detected with high accuracy will be detected with less accuracy) or to cause misclassification (i.e., a road traffic sign will be classified into any class other than its original one).

5) Adversarial Falsification: The adversary can attempt two types of falsification attacks; namely, false positive attacks and false negative attacks [15]. In the first, the adversary generates a negative sample that gets misclassified as a positive one. Suppose such an attack has been launched on the image classification system of an autonomous vehicle: a false positive will be an adversarial image that is unrecognizable to humans but is predicted with high confidence to belong to a class. On the contrary, in false negative attacks, the adversary generates a positive sample that gets misclassified as a negative one; in adversarial ML, this type of attack is referred to as an evasion attack.

6) Attack Frequency: Adversarial attacks can be single-step or consist of an iterative optimization process. Compared to single-step attacks, iterative adversarial attacks are stronger; however, they require frequent interactions to query the ML system and subsequently require a large amount of time and computational resources for their efficient generation.
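To contrast single-step and iterative attacks, the sketch below implements a BIM-style iterative procedure under the same assumptions as the earlier FGSM snippet; the step size, iteration count, and projection details are illustrative choices, not those of any specific paper.

```python
# An iterative (BIM-style) attack sketch: many small projected steps instead of one.
import torch
import torch.nn.functional as F

def bim(model, x, y_true, epsilon=0.03, alpha=0.005, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_true)
        loss.backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        # Project back into the epsilon ball around x and the valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
        x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
    return x_adv
```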

7) Adversarial Goals: The last component of threat modeling is the articulation of the adversary's goals. The classical approach to modeling adversarial goals covers the adversary's desire to impact confidentiality, integrity, and availability (known as the CIA model), with privacy as a fourth, equally important dimension [117].


TABLE IV: Summary of the state-of-the-art attacks

Year | Authors | Method | Adversarial Knowledge | Adversarial Specificity | Perturbation Scope | Perturbation Norm | Attack Learning
2014 | Szegedy et al. [13] | L-BFGS | White-box | Targeted | Image specific | L∞ | One-shot
2015 | Goodfellow et al. [99] | FGSM | White-box | Targeted | Image specific | L∞ | One-shot
2016 | Kurakin et al. [118] | BIM & ILCM | White-box | Non-targeted | Image specific | L∞ | Iterative
2016 | Papernot et al. [10] | JSMA | White-box | Targeted | Image specific | L0 | Iterative
2016 | Moosavi et al. [119] | DeepFool | White-box | Non-targeted | Image specific | L2, L∞ | Iterative
2017 | Carlini et al. [120] | C&W attacks | White-box | Targeted | Image specific | L0, L2, L∞ | Iterative
2017 | Moosavi et al. [121] | Universal perturbations | White-box | Non-targeted | Universal | L2, L∞ | Iterative
2017 | Sarkar et al. [122] | UPSET | Black-box | Targeted | Universal | L∞ | Iterative
2017 | Sarkar et al. [122] | ANGRI | Black-box | Targeted | Image specific | L∞ | Iterative
2017 | Cisse et al. [123] | Houdini | Black-box | Targeted | Image specific | L2, L∞ | Iterative
2018 | Baluja et al. [124] | ATNs | White-box | Targeted | Image specific | L∞ | Iterative
2019 | Su et al. [104] | One-pixel | Black-box | Non-targeted | Image specific | L0 | Iterative

C. Review of Existing Adversarial ML Attacks

1) Adversarial ML Attacks on Conventional ML Schemes: Pioneering work on adversarial ML was performed by Dalvi et al. [125] in 2004, where they proposed a minimum-distance evasion of a linear classifier and tested their proposed attack on a spam classification system, highlighting the threat of adversarial examples. A similar contribution was made by Lowd et al. [126] in 2005, where they proposed an adversarial classifier reverse engineering technique for constructing adversarial attacks on classification problems. In 2006, Barreno et al. [127] discussed the security of ML in adversarial environments and provided a taxonomy of attacks on ML schemes along with potential defenses against them. In 2010, Huang et al. [128] provided the first consolidated review of adversarial ML, in which they discussed the limitations on classifiers and adversaries in real-world settings. Biggio et al. [97] proposed a poisoning attack on Support Vector Machines (SVMs) to increase the test error; their attack successfully altered the test error of SVMs with linear and non-linear kernels. The same authors also proposed an evasion attack in which they used a gradient-based approach to evade PDF malware detectors [98] and tested their attack on SVMs and simple neural networks.

2) Adversarial ML Attacks on DNNs: Adversarial ML attacks on DNNs were first observed by Szegedy et al. [13], who demonstrated that DNNs can be fooled by minimally perturbing their input images at test time; the proposed attack was a gradient-based attack in which minimum-distance adversarial examples were crafted to fool image classifiers. Another gradient-based attack was proposed by Goodfellow et al. [99]. In this attack, they formulated adversarial ML as a min-max problem and produced adversarial examples by calculating the lower bound on the adversarial perturbation. This method was termed FGSM and is still considered a very effective algorithm for creating adversarial examples. Adversarial training was also introduced in the same paper as a defensive mechanism against adversarial examples. Kurakin et al. [118] highlighted the fragility of ML/DL schemes in real-world settings by using images taken with a cell phone camera for adversarial example generation. The adversarial samples were created using the basic iterative method (BIM), an extended version of FGSM, and the resultant adversarial examples were able to fool state-of-the-art image classifiers. In [129], the authors demonstrated that rotation and translation alone are sufficient to fool state-of-the-art deep-learning-based image classification models, i.e., convolutional neural networks (CNNs). In a similar study [130], ten state-of-the-art DNNs were shown to be fragile to basic geometric transformations, e.g., translation, rotation, and blurring. Liu et al. presented a trojaning attack on neural networks that works by modifying the neurons of the trained model instead of affecting the training process [131]. The authors used the trojan as a backdoor to control the trojaned ML model as desired and tested it on an autonomous vehicle: the car misbehaves whenever it encounters a specific billboard (the trojan trigger) on the roadside.

Papernot et al. [10] exploited the mapping between the input and output of DNNs to construct a white-box Jacobian-saliency-based adversarial attack (JSMA) scheme to fool DNN classifiers. The same authors also proposed a defense against adversarial perturbations using defensive distillation, a training method in which a model is trained to predict the classification probabilities of another model that was trained on the baseline standard with emphasis on accuracy. Papernot et al. [132] also proposed a black-box adversarial ML attack in which they exploited the transferability property of adversarial examples to fool ML/DL classifiers. This black-box attack is based on substitute model training and not only fools ML/DL classifiers but also breaks the distillation defense mechanism. Carlini et al. [120] proposed a suite of three adversarial attacks, termed C&W attacks, on DNNs by exploiting three distinct distance measures: L0, L2, and L∞. These attacks not only evaded DNN classifiers but also successfully evaded defensive distillation, demonstrating that defensive distillation is not an adequate method for building robustness. In another paper, Carlini et al. [133] showed that the attacks proposed in [120] successfully evade ten well-known defensive schemes against adversarial examples; these attacks are still considered state-of-the-art adversarial ML attacks. Furthermore, Carlini et al. successfully demonstrated an adversarial attack on a speech recognition system by adding small noise to the audio signal that forces the underlying ML model to generate the intended commands/text [134]. In [135], an adversarial patch affixed to an original image forces a deep model to misclassify that image. Such universal targeted patches fool classifiers without requiring knowledge of the other items in the scene, and they can be created offline and then broadly shared. More details on adversarial ML attacks can be found in [3], [4], [15], [16], [136], [137]. A summary of different state-of-the-art adversarial perturbation generation methods is provided in Table IV.


TABLE V: Accidents caused by self-driving vehicles due to unintended adversarial conditions

Year | Company | Cause of the accident | Damages | System failure
2014 | Hyundai | Weather (rainfall) | Car crashed | Camera object detection failure
2016 | Google Waymo | Speed estimation failure | Car crashed into a bus | Dynamic object movement detection failure
2016 | Tesla | Image classification and image contrast failure | Car crashed into the neighboring truck and the driver was killed | Camera detection and classification suite failure
2017 | Uber | Overreaction to an unseen event (nearby accident) | Car crashed | Lack of robustness in control system
2017 | General Motors | Stuck in a dilemma (lane change decision reversal) | Car knocked over a motorcyclist | Coordination and state estimation failure
2018 | Uber | Confusion in the software decision system and safety system failure | Killed a pedestrian on the road | Failure of planning and perception system

TABLE VI: Domains affected by adversarial machine learning (ML) and their applications

Domain | Application | Papers
Imaging | Digit Recognition | [10], [99], [120], ...
Imaging | Object Detection | [120], [132], [102], ...
Imaging | Traffic Signs Recognition | [132], [12], [138], ...
Imaging | Semantic Segmentation | [139], [140], [141], ...
Imaging | Reinforcement Learning | [142], [143], [144], ...
Imaging | Generative Modeling | [145], [146], [147], ...
Text | Text Classification | [148], [149], [150], ...
Text | Sentiment Analysis | [151], [150]
Text | Reading Comprehension | [152], [153]
Networking | Intrusion Detection | [154], [155], [156], ...
Networking | Anomaly Detection | [157], [158]
Networking | Malware Classification | [159], [160], [161], ...
Networking | Traffic Classification | [162], [163]
Audio | Speech Recognition | [134], [164], [123], ...

D. Adversarial ML Attacks on CAVs

ML and DL are core ingredients for performing many key tasks in self-driving vehicles. Beyond providing deeply embedded information for the decision-making processes within the vehicle's components, they also play an important role in V2I, V2V, and V2X communications. As described in earlier sections, ML/DL schemes are very vulnerable to small, carefully crafted adversarial perturbations. Self-driving vehicles face this security risk in addition to other traditional security risks. Adversarial ML has affected many application domains, including imaging, text, networking, and audio, as highlighted in Table VI.

1) Autonomous Vehicle Accidents Due to Unintended Adversarial Conditions: The autonomous vehicles developed so far are not robust to unintended adversarial conditions, and there have been a few reported fatalities caused by the malfunctioning of DNN-based autonomous vehicles, where adversarial examples were unintentionally encountered by the DNN operating the vehicle. In 2014, during a Hyundai competition, an autonomous vehicle crashed because of a sensor failure due to a shift in the angle of the car and the direction of the sun2. Another incident was reported in 2016, where the Tesla autopilot was not able to handle the image contrast, which resulted in the death of the driver3; it was reported that the autopilot was unable to differentiate between the bright sky and a white truck, which resulted in a fatal accident. A similar accident happened to a Google self-driving car, which was unable to estimate the relative speed of a bus and collided with it4. In 2018, an Uber self-driving car was also involved in

2 https://bit.ly/2SWLxUY
3 https://cnnmon.ie/2VOB283
4 https://bit.ly/1U0O6yx

an accident due to a malfunction in the DNN-based system, which resulted in a pedestrian fatality5. Table V provides a detailed description of accidents caused by malfunctions in different components of self-driving vehicles.

2) Physical World Attacks on Autonomous Vehicles: Aung et al. [165] used the FGSM and JSMA schemes to generate adversarial traffic signs that successfully evade DNN-based traffic sign detection schemes, highlighting the problem of adversarial examples in autonomous driving. Sitawarin et al. [138] proposed a real-world adversarial ML attack by altering traffic signs and logos with adversarial perturbations while keeping the visual appearance of the signs and logos intact. In another work, Sitawarin et al. [12] proposed a technique for generating out-of-distribution adversarial examples to perform an evasion attack on the ML-based sign recognition system of autonomous vehicles. They also proposed a lenticular printing attack that exploits the camera height in autonomous vehicles to create an illusion of false traffic signs in the physical environment and fool the sign recognition system.

Object detection is another integral part of the perception module of autonomous vehicles, where state-of-the-art DNN-based schemes such as Mask R-CNN [166] and YOLO [167] are used. Zhang et al. [168] proposed a camouflage-based physical-world adversarial attack by approximately imitating how a simulator applies camouflage to a vehicle and then minimizing the approximated detection score using local search for the optimal camouflage; the proposed attack successfully fooled image-based object detection systems. Another physical-world adversarial example generation scheme for object detection was developed by Song et al. [169], where a perturbed "STOP" sign remained hidden from state-of-the-art object detectors such as Mask R-CNN and YOLO; they produced adversarial perturbations using the robust physical perturbations (RP2) algorithm [170]. Recently, Zhou et al. [171] proposed DeepBillboard, a systematic way of generating adversarial advertisement billboards to inject a malfunction into the steering angle of an autonomous vehicle; the proposed adversarial billboard misled the average steering angle by 26.44 degrees. Table VII provides a summary of state-of-the-art adversarial attacks on self-driving vehicles. In a recent study [172], imitation learning was shown to be robust enough for autonomous vehicles to drive in a realistic environment; the authors proposed a model named ChauffeurNet that learns to drive the vehicle by imitating the best and synthesizing the worst.

5 https://bit.ly/2SWmb9N


TABLE VII: Adversarial attacks on self-driving vehicles: summary of state-of-the-art

Attack Objective | Specific Work | Problem Formulation | Data | Threat Model | Attack Results
Perception system failure | DARTS [12]: traffic sign manipulation | Generating adversarial examples for CNN-based traffic sign detection by performing out-of-distribution attacks along with lenticular printing. | GTSRB, GTSDB | 1) Virtual and physical world attack; 2) White- and black-box mode | DARTS successfully fooled the perception system of a self-driving car.
Perception system failure | Rogue Signs [138]: traffic sign and logo manipulation | End-to-end pipeline for adversarial example generation for CNN-based traffic sign and logo detection. | GTSRB | 1) Virtual and physical world attack; 2) White-box mode | Fooled the perception system of a self-driving car with a success rate of 99.7%.
Object detection failure | ShapeShifter [173]: adversarial attack on Faster R-CNN | Adversarial attack on the bounding boxes of Faster R-CNN using expectation-over-transformation techniques. | MS-COCO | 1) Virtual and physical world attack; 2) White-box mode | Caused a malfunction in the Faster R-CNN of a self-driving car with 93% success.
Motion planning and perception system failure | DeepBillboard [171]: adversarial attack through drive-by billboards | Adversarial attack on the steering angle of the self-driving car using adversarial perturbations in drive-by billboards. | 1) Udacity self-driving car challenge dataset; 2) Dave testing dataset; 3) Kitti dataset | 1) Virtual and physical world attack; 2) White- and black-box mode | Caused a 23-degree malfunction in the steering angle of the self-driving car.
Object detection failure | CAMOU [168]: adversarial camouflage to fool the object detector | Generating an adversarial camouflage to hide the self-driving car from a Mask R-CNN-based object detector. | Unreal engine simulator | 1) Virtual and physical world attack; 2) White- and black-box mode | Caused a 32.74% drop in the performance of Mask R-CNN.
Object detection failure | Song et al. [169]: object disappearance and creation attack | Adversarial-perturbation-based stickers that cause object detection schemes such as YOLO and R-CNN used in self-driving cars to fail to recognize certain signs and logos, and to start detecting things that are not present in the frame. | Video of traffic signs | 1) Virtual and physical world attack; 2) White-box mode | The object detection schemes fail to recognize traffic signs in nearly 86% of the frames in the video.
Perception system failure | Eykholt et al. [170]: robust physical perturbations against visual classification under different physical environments | Robust physical perturbations created to fool LISA-CNN- and GTSRB-CNN-based traffic sign classification schemes. | 1) LISA; 2) GTSRB | 1) Virtual and physical world attack; 2) White- and black-box mode | In a few cases caused a 100% performance drop in visual classification of traffic signs.
Perception and controller system failure | Tuncali et al. [174]: simulation-based adversarial test generation for self-driving cars | Testing and verifying a self-driving car's perception and controller systems against adversarial examples. | Simulated data from the proposed simulator | 1) Virtual environment; 2) White- and black-box mode | The designed system was able to detect critical cases in the autonomous car's perception and control.
Controller system failure | Yaghoubi et al. [175]: gray-box adversarial examples for closed-loop autonomous car control systems | Testing the controller and perception systems of self-driving cars against gradient-based gray-box adversarial examples. | Simulated data | 1) Virtual environment; 2) Gray-box mode | Gray-box adversarial examples outperformed simulated annealing optimization in a dummy control system problem.
End-to-end autonomous control failure | Boloor et al. [176]: physical adversarial examples against E2E driving models | Disrupting steering by using physical perturbations in the environment. | CARLA simulated data | 1) Virtual environment; 2) White-box mode | Physical adversarial perturbations forced the self-driving car to crash.

V. TOWARDS DEVELOPING ADVERSARIALLY ROBUST ML SOLUTIONS

As discussed above, despite the outstanding performance of ML techniques in many settings, including human-level accuracy at recognizing images, these techniques exhibit a severe vulnerability to carefully crafted adversarial examples. In this section, we present an outline of approaches for developing adversarially robust ML solutions. We define robustness as the ability of the ML model to withstand adversarial examples.

In the literature, defenses against adversarial attacks have been divided into two broad categories: (1) reactive defenses, which detect adversarial observations (inputs) after the deep models are trained; and (2) proactive defenses, which make the deep model robust against adversarial examples before the attack.

Alternatively, these techniques can also be broadly divided into three categories: (1) modifying data; (2) adding auxiliary models; and (3) modifying models. The reader is referred to Figure 12 for a visual depiction of a taxonomy of robust ML solutions in which the various techniques that fall into these categories are listed. These categories are detailed next.

A. Modifying Data

The methods falling under this category mainly deal with modification of either the training data and its features (e.g., adversarial re-training) or the test data (e.g., data pre-processing). Widely used approaches that utilize such methods are described below.

1) Adversarial (Re-)training: Training with adversarial examples was first proposed by Goodfellow et al. [99] and Huang et al. [177] as a defense strategy to make deep neural networks (DNNs) robust against adversarial attacks. They trained the model by augmenting the training set with adversarial examples. Furthermore, Goodfellow et al. showed that adversarial training can provide better regularization for DNNs. In [99], [177], the adversarial robustness of ML models was evaluated on the MNIST dataset with 10 classes, while in [178] a comprehensive evaluation of adversarial training was performed on a considerably larger dataset, i.e., ImageNet with 1000 classes. The authors used 50% of the dataset for adversarial training, and this strategy increased the robustness of DNNs against single-step adversarial attacks (e.g., FGSM [99]). However, the strategy failed against iterative adversarial example generation methods such as the basic iterative method (BIM) [118].
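A minimal adversarial-training loop in the spirit of this strategy is sketched below; `model`, `loader`, and `optimizer` are assumed to exist, the `fgsm` helper is the earlier hypothetical sketch, and the clean/adversarial mixing ratio is an arbitrary illustrative choice rather than the recipe of any cited paper.

```python
# Adversarial training sketch: augment each batch with FGSM examples crafted on the fly.
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03, mix=0.5):
    model.train()
    for x, y in loader:
        x_adv = fgsm(model, x, y, epsilon)                 # craft adversarial batch
        optimizer.zero_grad()                              # discard gradients from crafting
        loss = (1 - mix) * F.cross_entropy(model(x), y) \
               + mix * F.cross_entropy(model(x_adv), y)    # mix clean and adversarial terms
        loss.backward()
        optimizer.step()
```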

2) Input Reconstruction: The idea of input reconstruction is to clean the adversarial examples so as to transform them back into legitimate ones. Once the adversarial examples have been transformed, they no longer affect the prediction of DNN models. To robustify DNNs, a technique named deep contractive autoencoder has been proposed in [179], in which a denoising autoencoder is trained to clean adversarial perturbations.
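Schematically, input reconstruction places a learned denoiser in front of the unmodified classifier, as in the hypothetical PyTorch fragment below; the tiny architecture shown is only a stand-in for the contractive or denoising autoencoders used in [179].

```python
# Input reconstruction sketch: denoise first, then classify with the unmodified model.
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),   # outputs back in [0, 1]
)

def robust_predict(classifier, x):
    # The denoiser would be trained separately on (perturbed, clean) image pairs.
    return classifier(denoiser(x))
```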

3) Feature Squeezing: Xu et al. [180] leveraged the observation that input feature spaces are typically unnecessarily large and provide vast room for an adversary to construct adversarial perturbations, and thereby proposed feature squeezing as a defense strategy against adversarial examples. The feature space available to an adversary can be reduced using feature squeezing, which combines samples having heterogeneous feature vectors in the original space into a single space. They perform feature squeezing at two levels: (1) reducing color bit depth;


Fig. 12: Taxonomy of robust machine learning (ML) methods categorized into three classes: (1) Modifying Data, (2) Adding Auxiliary Model(s), and (3) Modifying Models.

(2) spatial-domain smoothing using both local and non-local methods. They evaluated eleven state-of-the-art adversarial perturbation generation methods on three different datasets, i.e., MNIST, CIFAR-10, and ImageNet. However, this defense strategy was found to be less effective in a later study [181].
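The two squeezing levels can be approximated with a few lines of NumPy/SciPy, as in the hedged sketch below; the bit depth, filter size, and image layout are illustrative assumptions, not the exact configuration of [180].

```python
# Feature-squeezing sketch: colour bit-depth reduction plus local (median) smoothing.
import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits=4):
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels               # x assumed in [0, 1]

def squeeze(x):
    # x assumed to be a (C, H, W) image; smooth spatially only.
    return median_filter(reduce_bit_depth(x), size=(1, 2, 2))

# A large disagreement between predictions on x and squeeze(x) can be used
# as a signal that x may be adversarial.
```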

4) Feature Masking: In [182], the authors proposed adding a masking layer before the softmax layer of the classifier, which is mainly responsible for the classification task. The purpose of the masking layer is to mask the most sensitive input features, those most prone to adversarial perturbations, by forcing the corresponding weights of this layer to zero.

5) Developing Adversarially Robust Features: This method has recently been proposed as an effective approach to making DNNs resilient against adversarial attacks [183]. The authors leveraged the connection between the natural spectral geometric property of the dataset and the metric of interest to develop adversarially robust features. They empirically demonstrated that the spectral approach can be effectively used to generate adversarially robust features that can ultimately be used to develop robust models.

6) Manifold Projection: In this method, input examples are projected onto the manifold of data learned by another ML model; generally, the manifold is provided by a generative model. For instance, Song et al. [184] leveraged generative models to clean the adversarial perturbations from malicious images, after which the cleaned images are given to the unmodified ML model. Furthermore, this paper ascertains that, regardless of the attack type and targeted model, adversarial examples lie in the low-probability regions of the training data distribution. In a similar study [185], the authors used generative adversarial networks (GANs) for cleaning adversarial perturbations. Similarly, Meng et al. proposed a framework named MagNet that includes one or more detectors and a reformer network [186]: the detector network classifies normal and adversarial examples by learning the manifold of normal examples, whereas the reformer network moves adversarial examples towards the learned manifold.

B. Modifying Model

The methods that fall into this category mainly modify the parameters/features learned by the trained model (e.g., defensive distillation); a few prominent methods of this kind are described next.

1) Network Distillation: Papernot et al. [187] adopted network distillation as a procedure to defend against adversarial attacks. The notion of distillation was originally proposed by Hinton et al. [188] as a mechanism for effectively transferring


knowledge from a larger network to a smaller one. The defense method developed by Papernot et al. uses the probability distribution vector generated by the first model as an input to the original DNN model, which increases the resilience of the DNN model to very small perturbations. However, Carlini et al. showed that the defensive distillation method does not work against their proposed attack [133].

2) Network Verification: Network verification aims to verify the properties of a DNN, i.e., whether an input satisfies or violates a certain property, because doing so may restrain new, unseen adversarial perturbations. For instance, a network verification method for robustifying DNN models using ReLU activations is presented in [178]. To verify the properties of the deep model, the authors used a satisfiability modulo theories (SMT) solver and showed that the network verification problem is NP-complete. The assumption of using ReLU with certain modifications is addressed in [189].

3) Gradient Regularization: Ross et al. [190] proposed using input gradient regularization as a defense strategy against adversarial attacks. In the proposed approach, they used differentiable DNN models and penalized the variation in the output that results from a change in the input. As a result, adversarial examples with small perturbations are unlikely to modify the output of deep models, but this roughly doubles the training complexity. The notion of penalizing the gradient of the model's loss function with respect to the inputs for robustification had already been investigated in [191].
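A possible rendering of input-gradient regularization as a training loss is sketched below in PyTorch; the penalty weight and the squared-norm form are illustrative assumptions rather than the exact objective of [190] or [191].

```python
# Input-gradient regularization sketch: penalize the gradient of the loss w.r.t. the input.
import torch
import torch.nn.functional as F

def gradient_regularized_loss(model, x, y, lam=0.1):
    x = x.clone().detach().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    # create_graph=True keeps the gradient differentiable so the penalty can be trained on;
    # this second backward pass is what roughly doubles the training cost.
    grad_x, = torch.autograd.grad(ce, x, create_graph=True)
    return ce + lam * grad_x.pow(2).sum()
```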

4) Classifier Robustifying: In this method, classification models that are robust to adversarial attacks are designed from the ground up instead of detecting or transforming adversarial examples. Bradshaw et al. [192] exploited the uncertainty around adversarial examples and developed a hybrid model using Gaussian processes (GPs) with RBF kernels on top of DNNs, showing that their approach is robust against adversarial attacks. The latent variable in GPs is expressed using a Gaussian distribution, parameterized by a mean and covariance and encoded with RBF kernels. Schott et al. [193] proposed the first adversarially robust classifier for the MNIST dataset, where robustness is achieved by using analysis by synthesis through a learned class-conditional data distribution. This work highlights the lack of research providing guaranteed robustness against adversarial attacks.

5) Explainable and Interpretable ML: In a recent study [194], an adversarial example detection approach is presented for a face recognition task that leverages the interpretability of DNN models. The key idea in this approach is the identification of the neurons critical for an individual task, which is performed by establishing a bi-directional correspondence inference between the neurons of a DNN model and its attributes. The activation values of these neurons are then amplified to augment the reasoning part, and the values of other neurons are decreased to conceal the uninterpretable part. Recently, Nicholas Carlini showed that this approach does not defend against untargeted adversarial perturbations generated using the L∞ norm with a bound of 0.01 [195].

6) Masking the ML Model: In a recent study [196], the authors formulated the problem of adversarial ML as a learning and masking problem and presented a classifier masking method for secure learning. To mask the deep model, they introduced noise into the DNN's logit output, which was able to defend against low-distortion attacks.

C. Adding Auxiliary Model(s)

These methods aim to utilize additional ML models to enhance the robustness of the main model (e.g., using generative models for adversarial detection); widely used methods of this type are described as follows.

1) Adversarial Detection: In the adversarial detection strategy, a binary classifier (detector), e.g., a DNN, is trained to identify the input as either a legitimate or an adversarial one [197], [198]. In [199], the authors used a simple DNN-based binary adversarial detector as an auxiliary network to the main model. In a similar study [200], the authors introduced an outlier class while training the DNN model; the model then detects adversarial examples by classifying them as outliers. This defense approach has been used in a number of studies in the literature.
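A minimal version of this strategy is sketched below: a small auxiliary binary detector screens inputs before they reach the main model. The architecture, input size, and rejection threshold are hypothetical, and the detector would need to be trained on clean/adversarial pairs in practice.

```python
# Adversarial-detection sketch: an auxiliary binary detector gates the main model.
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128),
                         nn.ReLU(), nn.Linear(128, 1))

def guarded_predict(model, x, threshold=0.5):
    score = torch.sigmoid(detector(x))       # estimated probability of "adversarial"
    if (score > threshold).any():
        raise ValueError("possible adversarial input rejected")
    return model(x)
```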

2) Ensembling Defenses: As adversarial examples can be developed in a multi-faceted fashion, multiple defense methods can be combined (in parallel or sequentially) to defend against them [201]. PixelDefend [184] is a prime example of an ensemble defense, in which an adversarial detector and an "input reconstructor" are integrated to restrain adversarial examples. However, He et al. showed that an ensemble of weak defense strategies does not provide a strong defense against adversarial attacks [181]. Further, they demonstrated that adaptive adversarial examples transfer across several defense and detection proposals.

3) Using Generative ML Models: Goodfellow et al. [99] first proposed the idea of using generative training to defend against adversarial attacks; however, in the same study they argued that being generative is not sufficient and presented an alternative hypothesis of ensemble training that works by ensembling multiple instances of the original DNN model. In [202], an approach named cowboy is presented to detect and defend against adversarial examples. The authors transformed adversarial samples back to the data manifold by cleaning them using a GAN trained on the same dataset. Furthermore, they empirically showed that adversarial examples lie outside the data manifold learned by the GAN, i.e., the discriminator of the GAN consistently scores adversarial perturbations lower than real samples across multiple attacks and datasets. In another similar study [203], a GAN-based framework named Defense-GAN is trained to model the distribution of legitimate images; at inference time, Defense-GAN finds a similar output without adversarial perturbations, which is then fed to the original classifier. The authors of both of these studies claim that their methods are independent of the DNN model and attack type and can be used in existing settings. A summary of various state-of-the-art adversarial defense studies is presented in Table VIII.

D. Adversarial Defense Evaluation: Methods and Recommendations

This section presents different potential methods for performing the evaluation of adversarial defenses along with an


TABLE VIII: Summary of state-of-the-art adversarial defense approaches

Reference | Type | Defense | Adv. Perturbation | Dataset | Threat Model | Original Accuracy | Adversarial Accuracy | Defense Success
Gu et al. [179] | Modifying Data | Adversarial example cleaning using denoising autoencoders (DAEs). | Local perturbations, e.g., additive Gaussian noise. | MNIST | Not clearly articulated | 99% | 100% | 99.1%
Xu et al. [180] | Modifying Data | Reduced the feature space available to an adversary. | Evaluated different state-of-the-art perturbation generation methods. | MNIST, CIFAR-10, and ImageNet | White box | MNIST (99.43%), CIFAR-10 (94.84%), and ImageNet (68.36%) | Roughly achieved 100% for each model using different attack algorithms | MNIST (62.7%), CIFAR-10 (77.27%), and ImageNet (68.11%)
Gao et al. [182] | Modifying Data | Proposed DeepCloak, which removes unnecessary features from the model. | Perturbations generated using FGSM. | CIFAR-10 | Not clearly articulated | 93.72% (1% masking) | 39.23% (1% masking) | 10% increase in adversarial settings
Garg et al. [183] | Modifying Data | Constructed adversarially robust features using a spectral property of the dataset. | L2 perturbations | MNIST | Not clearly articulated | Not clearly articulated | Not clearly articulated | Provided empirical evidence for the effectiveness of the proposed defense.
Song et al. [184] | Modifying Data | Proposed PixelDefend to clean adversarial examples by moving them back to the manifold of the original training data. | Used five state-of-the-art adversarial attacks. | Fashion MNIST and CIFAR-10 | White box | 90% | Fashion MNIST (63%) and CIFAR-10 (32%) for the strongest defense | Adversarial accuracy increased from 63% to 84% for Fashion MNIST and from 32% to 70% for CIFAR-10.
Prakash et al. [204] | Modifying Data | Used a wavelet-based denoising method to clean natural and adversarial noise. | Perturbations generated using pixel deflection, L2 (ε = 0.05). | ImageNet | White box | 98.9% | Not clearly articulated | 81% accuracy
Xie et al. [205] | Modifying Data | Proposed randomization at the inference stage using random resizing and random padding. | L∞ (ε = 10/255) | ImageNet | White box and black box | 99.2% | Not clearly articulated | 86% accuracy
Guo et al. [206] | Modifying Data | Investigated different image transformation methods for defending against adversarial attacks. | L2 (ε = 0.06) | ImageNet | Gray box and black box | 75% | Not clearly articulated | 70% accuracy
Goodfellow et al. [99] | Modifying Data | Augmented the training set with adversarial examples. | Fast gradient sign method (FGSM). | MNIST | Not clearly articulated | Not clearly articulated | 99.9% | With adversarial training, the error rate fell to 17.9%.
Schott et al. [193] | Adding Auxiliary Model | Used generative modeling with a variational autoencoder (VAE). | Applied score-based, decision-based, transfer-based, and gradient-based attacks using L2 (ε = 1.5). | MNIST | Not clearly articulated | 99% | Not clearly articulated | 80%
Wong et al. [103] | Adding Auxiliary Model | Formulated a robust optimization problem using a convex outer approximation for the detection of adversarial examples. | FGSM and gradient-descent-based methods, L∞ (ε = 0.1). | MNIST | Not clearly articulated | 98.2% accuracy | Not clearly articulated | 94.2% accuracy
Liao et al. [207] | Adding Auxiliary Model | Proposed a high-level representation guided denoiser (HGD) for defending against adversarial attacks. | L∞ (ε = 4/255) | ImageNet | White box and black box | 75% | Not clearly articulated | 75% accuracy
Ross et al. [190] | Modifying Model | Trained the model with input gradient regularization for defending against adversarial attacks. | Evaluated three well-known attacks, i.e., FGSM, TGSM, and JSMA. | MNIST, Street-View House Numbers (SVHN), and notMNIST | White box and black box | Not clearly articulated | Not clearly articulated | MNIST (100%), SVHN (∼90%), and notMNIST (100%); approximately the same for each type of attack.
Madry et al. [208] | Modifying Model | Trained the model with optimized parameters using robust optimization. | L∞ (ε = 8/255) | CIFAR-10 | White box and weaker black box | 87% | Not clearly articulated | 46% accuracy
Buckman et al. [209] | Modifying Model | Proposed thermometer encoding for inputs. | L∞ (ε = 8/255) | CIFAR-10 | White box | 90% | Not clearly articulated | 79% accuracy
Dhillon et al. [210] | Modifying Model | Proposed stochastic activation pruning of the trained model for defense. | L∞ (ε = 4/255) | CIFAR-10 | Not clearly articulated | 83% | Not clearly articulated | 51% accuracy
Croce et al. [211] | Modifying Model | Proposed a regularization scheme for ReLU networks. | Perturbations using L2 and L∞ methods. | MNIST, German Traffic Signs (GTS), Fashion MNIST, and CIFAR-10 | Not clearly articulated | 98.81% | Not clearly articulated | 96.4% accuracy (on the first 1000 test points)


Fig. 13: The taxonomy of different adversarial defense evaluation methods and recommendations.

outline of common evaluation recommendations, as depicted in Figure 13.

1) Principles for Performing Defense Evaluations: In a recent study [212], Carlini et al. provided recommendations for evaluating adversarial defenses and identified three common reasons for evaluating the performance of adversarial defenses. These are briefly described below.

a) Defending Against the Adversary: Defending against adversaries attempting adversarial attacks on the system is crucial, as it is a matter of security concern. In real-world applications, if ML-based systems are deployed without considering the security threats, then adversaries willing to harm the system will keep attacking it as long as there are incentives. The nature and severity of attacks vary with adversarial capabilities, knowledge, and so on. In this regard, proper and well-thought-out threat modeling (described in detail in an earlier section) is of paramount importance.

b) Testing Worst-Case Robustness: Testing the worst-case robustness of ML models from the perspective of an adversary is crucial, as real-world systems exhibit randomness that is hard to predict. Compared to random testing, worst-case analysis can be a powerful tool to distinguish a system that fails one time in a billion trials from a system that never fails. For instance, if a powerful adversary attempting to make a system misbehave intentionally fails to do so, this provides strong evidence that the system will not misbehave under previously unforeseen randomness.

c) Measuring Progress of ML Towards Human-Level Abilities: To advance ML techniques, it is important to understand why ML algorithms fail in a particular setting. In the literature, the performance gap between ML methods and humans is considerably small on many complex tasks, e.g., natural image classification [109], mastering the game of Go using reinforcement learning [213], and human-level accuracy in the medical domain [214], [215]. However, when adversarial robustness is evaluated, the performance gap between humans and ML systems is very large. This holds even for cases where ML models exhibit super-human accuracy, i.e., an adversarial attack can completely evade the prediction performance of the system. This leads to the belief that there exists a fundamental difference between the decision-making processes of humans and ML models. Keeping this aspect in mind, adversarial robustness is a measure of ML progress that is orthogonal to performance.

2) Common Evaluation Recommendations: In this section, we provide a brief discussion of common evaluation recommendations; we refer interested readers to the recent article by Carlini et al. [212] for a detailed and comprehensive description of evaluation recommendations and pitfalls for adversarial robustness. As the authors have promised to update that paper regularly, we also refer interested readers to the following URL6 for its latest version. To avoid unintended consequences and pitfalls of evaluation methods, the following evaluation recommendations can be adopted.

a) Use Both Targeted and Untargeted Attacks: Adversarial robustness should be evaluated on both targeted and untargeted attacks, and it is important to state explicitly which attacks were considered in the evaluation. Theoretically, an untargeted attack is strictly easier than a targeted attack, but in practice performing an untargeted attack directly can give better results than targeting each of the N − 1 other classes. Many untargeted attacks work mainly by minimizing the prediction confidence of the correct label; in contrast, targeted attacks work by maximizing the prediction confidence of some other class.

b) Perform Ablation: Perform an ablation analysis by removing combinations of defense components and verifying that the attack succeeds on a similar but undefended model. This is useful for developing a straightforward understanding of the goals of the evaluation and for assessing the effectiveness of combining multiple defense strategies to robustify the model.

c) Diverse Test Settings: Perform the evaluation in diverse settings, i.e., test the robustness to random noise, validate broader threat models, and carefully evaluate the attack hyperparameters, selecting those that provide the best performance. It is also important to verify that the attack converges under the selected hyperparameters and to investigate whether attack results are sensitive to a specific set of hyperparameters. In addition, experiment with at least one hard-label attack and one gradient-free attack.

d) Evaluate the Defense on Broader Domains: For a defense to be truly effective, consider evaluating the proposed defense method on domains broader than images; the majority of works on adversarial machine learning investigate the imaging domain only. State explicitly if the defense is only capable of defending against adversarial perturbations in a specific domain (e.g., images).

e) Ensemble Over Randomness: It is important to create adversarial examples by ensembling over the randomness of defenses that randomize aspects of DNN inference. The introduced randomness enforces stochasticity, making standard attacks hard to realize. Verify that the attack remains

6 https://github.com/evaluating-adversarial-robustness/adv-eval-paper


successful when the randomness is assigned a fixed value. Also, define the threat model and the extent to which knowledge of the randomness is available to the adversary.

f) Transferability Attack: Select a substitute model similar to the defended model and test the transferability of the attack, because adversarial examples are often transferable across different models, i.e., an adversarial sample constructed for one model often appears adversarial to another model with an identical architecture [216]. This holds even when the other model is trained on a completely different data distribution.

g) Upper Bound of Robustness: To provide an upper bound on robustness, apply adaptive attacks, i.e., attacks with access to the full defense, and apply the strongest attack for the given threat model and defense being evaluated. Also, verify that adaptive attacks perform better than others and evaluate their performance in multiple settings, e.g., combinations of transfer, random-noise, and black-box attacks. For instance, Ruan et al. evaluated the robustness of DNNs and presented an approximate approach that provides lower and upper bounds on robustness for the L0 norm with provable guarantees [217].

E. Testing of ML Models and Autonomous Vehicles

1) Behavior Testing of Models: In a recent study, Sun et al. proposed four novel testing criteria for verifying structural features of DNNs using MC/DC7 coverage criteria [218]. They validated the proposed methods by generating test cases guided by their coverage criteria using both symbolic and gradient-based approaches, and showed that their method was able to capture undesired behaviors of DNNs. Similarly, a set of multi-granularity testing criteria named DeepGauge is presented in [219], which works by rendering a multi-faceted testbed. A security analysis of neural-network-based systems using symbolic intervals is presented in [220], which uses interval arithmetic and symbolic intervals together with other optimization methods to minimize the over-estimation of output confidence bounds. A coverage-guided fuzzing method for testing neural networks towards goals such as finding numerical errors, generating disagreements, and determining undesirable model behaviors is presented in [221]. In [222], the first approach utilizing differential fuzz testing is presented for exposing incorrect behaviors of DL systems.

2) Automated Testing of ML Empowered Autonomous Vehicles: An Overview: To ensure completely secure functionality of autonomous vehicles in a real-world environment, the development of automated testing tools is required, since the backbone of autonomous vehicles leverages different ML techniques for building decision systems at different levels (e.g., perception, decision making, and control). In this section, we provide an overview of various studies on testing autonomous vehicles.

Tian et al. [223] proposed and investigated a tool named DeepTest to perform testing of DNN empowered autonomous vehicles and to automatically detect erroneous behaviors of the vehicle that can potentially cause fatal accidents. Their proposed tool automatically generates test cases using changes in real-world road conditions such as weather and lighting conditions, and then systematically explores different components of the DNN logic that maximize the number of activated neurons. Furthermore, they tested three DNNs that won top positions in the Udacity self-driving car challenge and found various erroneous behaviors under different real-world road conditions (e.g., rain, fog, blurring, etc.) that could lead to fatal accidents. In [224], the authors used a GAN-based approach to generate synthetic scenes of different driving conditions for testing autonomous cars. A metamorphic testing approach for evaluating the software part of self-driving vehicles is presented in [225].

7 MC/DC (Modified Condition/Decision Coverage) is a method of measuring the extent to which safety-critical software has been adequately tested.
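The gist of this style of testing can be sketched as follows, assuming a hypothetical steering model that maps a single [C, H, W] image tensor in [0, 1] to a scalar steering angle; the transformations are toy stand-ins for DeepTest's richer weather and lighting effects.

```python
import torch

def weather_test_cases(image):
    """Toy image transformations mimicking changed driving conditions."""
    return {
        "bright": (image * 1.5).clamp(0, 1),
        "dark": (image * 0.5).clamp(0, 1),
        "fog": (0.6 * image + 0.4).clamp(0, 1),  # blend toward white
    }

def steering_consistency(model, image, tolerance=0.1):
    """Metamorphic check: steering predicted on a transformed frame should stay
    within `tolerance` of the prediction on the original frame."""
    base = model(image.unsqueeze(0)).item()
    failures = {}
    for name, transformed in weather_test_cases(image).items():
        pred = model(transformed.unsqueeze(0)).item()
        if abs(pred - base) > tolerance:
            failures[name] = pred
    return failures
```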

A generic framework for testing the security and robustness of ML models for computer vision systems exhibiting realistic properties is presented in [130]. The authors evaluated the security of fifteen state-of-the-art computer vision systems in a black-box setting, including Nvidia's Dave self-driving system. Moreover, it has been provably demonstrated that there exists a trade-off between adversarial robustness to perturbations and the standard accuracy of the model in a fairly simple and natural setting [226]. A simulation-based framework for generating adversarial test cases to evaluate the closed-loop properties of ML enabled autonomous vehicles is presented in [174]. In [227], the authors generated adversarial driving scenes using Bayesian optimization to improve self-driving behavior utilizing vision-based imitation learning. An autoencoder-based approach for automatic identification of unusual events using dashcam video and inertial sensor data is presented in [228], which can potentially be used to develop a robust autonomous driving system. Various factors and challenges impacting the driveability of autonomous vehicles, along with an overview of available datasets for training self-driving systems, are presented in [229], and challenges in designing such datasets are described in [230]. Furthermore, Dreossi et al. suggested that while robustifying ML systems, the effect of adversarial ML should be studied by considering the semantics and context of the whole system [231]. For example, in a DL empowered autonomous vehicle, not every adversarial observation might lead to harmful action(s). Moreover, one might be interested in those adversarial examples that can significantly modify the desired semantics of the whole system.

VI. OPEN RESEARCH ISSUES

The advancement of ML research and its state-of-the-art performance in various complex domains, in particular the advent of more sophisticated DL methods, might be a panacea for the conventional challenges of vehicular networks. However, ML/DL methods cannot be naively applied to vehicular networks, which possess unique characteristics, and adapting these methods to learn such distinguishing features of vehicular networks is a challenging task [232]. In this section, we highlight a few promising areas of research that require further investigation.

A. Efficient Distributed Data Storage

In the connected vehicular ecosystem, the data is generated and stored in a distributed fashion, which raises a question about the applicability of ML/DL models at a global level. As ML models are developed with the assumption that data is easily accessible and managed by a central entity, there is a need to utilize distributed learning methods for connected vehicles so that data may be scalably acquired from multiple units in the ecosystem.
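One commonly discussed option is federated learning, where each vehicle trains on its own data and only model updates are aggregated centrally. The sketch below shows a single federated-averaging round under assumed per-vehicle PyTorch data loaders; it is a conceptual illustration rather than a production design.

```python
import copy
import torch

def federated_average_round(global_model, vehicle_loaders, local_epochs=1, lr=0.01):
    """One FedAvg-style round: each vehicle fine-tunes a local copy on its own
    data, and only the averaged weights travel back to the aggregator, so raw
    sensor data never leaves the vehicle."""
    local_states = []
    for loader in vehicle_loaders:
        local = copy.deepcopy(global_model)
        optimizer = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in loader:
                optimizer.zero_grad()
                torch.nn.functional.cross_entropy(local(x), y).backward()
                optimizer.step()
        local_states.append(local.state_dict())
    averaged = {
        key: torch.stack([s[key].float() for s in local_states]).mean(dim=0)
                  .to(local_states[0][key].dtype)
        for key in local_states[0]
    }
    global_model.load_state_dict(averaged)
    return global_model
```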

B. Interpretable ML

Another major security vulnerability in CAVs is the lack of interpretability of ML schemes. ML techniques in general, and DL techniques specifically, are based on the idea of function approximation, where the approximation of the empirical function is performed using DNN architectures. Current works in ML/DL lack interpretability, which is a major hurdle in the progress of ML/DL empowered CAVs. The lack of interpretability is exploited by adversaries to construct adversarial examples for fooling the deployed ML/DL schemes in autonomous vehicles, e.g., the physical attacks on self-driving vehicles discussed above. The development of secure, explainable, and interpretable ML techniques for security-critical applications of CAVs is another open research issue.
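A very basic tool in this direction is an input-gradient saliency map, sketched below for an assumed PyTorch classifier; it only hints at which pixels drive a prediction and falls well short of the secure, explainable ML the text calls for.

```python
import torch

def input_saliency(model, x, target_class):
    """Gradient saliency: magnitude of d(class score)/d(pixel), reduced over
    channels, for a single [1, C, H, W] input."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().max(dim=1).values  # [1, H, W] saliency map
```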

C. Defensive and Secure ML

Despite the many defense proposals against adversarial attacks presented in the literature, developing adversarially robust ML models remains an open research problem. Almost every defense has been shown to be effective only for a specific attack type and fails against stronger or unseen attacks. Moreover, most defenses address the problem of adversarial attacks for computer vision tasks, but adversarial ML is being developed for many other vertical application domains. Therefore, the development of efficient and effective novel defense strategies is essential, particularly for safety-critical applications, e.g., communication between connected vehicles.
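The most widely studied (if still attack-specific) defense is adversarial training; a minimal sketch of one training step with an FGSM inner attack is given below, with the caveat from the surrounding text that such defenses often fail against stronger or unseen attacks. The model, optimizer, and epsilon are assumptions for illustration.

```python
import torch

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """Train on FGSM-perturbed inputs; stronger inner attacks (e.g., PGD)
    generally yield more robust models at higher training cost."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x_adv + eps * grad.sign()).clamp(0, 1).detach()
    optimizer.zero_grad()
    torch.nn.functional.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```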

D. Privacy Preserving ML

Preserving privacy in any user-centric application is of high concern. Privacy means that models should not reveal any additional information about the subjects involved in the collected training data (a.k.a. differential privacy) [233]. As CAVs involve human subjects, ML model learning should be capable of preserving the privacy of drivers, passengers, and pedestrians, since privacy breaches can result in extremely harmful consequences.
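A standard mechanism in this space is differentially private SGD, which clips each per-example gradient and adds calibrated noise before the update. The sketch below is a slow, illustrative loop (practical implementations vectorize per-sample gradients), and the clipping and noise parameters are placeholders.

```python
import torch

def dp_sgd_step(model, x_batch, y_batch, lr=0.01, clip_norm=1.0, noise_mult=1.1):
    """DP-SGD sketch: clip each per-example gradient to `clip_norm`, sum,
    add Gaussian noise, and apply the averaged noisy gradient."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(x_batch, y_batch):
        loss = torch.nn.functional.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        for acc, g in zip(summed, grads):
            acc.add_(g * scale)
    with torch.no_grad():
        for p, acc in zip(params, summed):
            noisy = acc + noise_mult * clip_norm * torch.randn_like(acc)
            p.add_(-lr * noisy / len(x_batch))
```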

E. Security Centric Proxy Metrics

The development of security-centric proxy metrics to evaluate security threats against systems is fundamentally important. Currently, there is no way to formalize different types of perturbation properties, e.g., indistinguishability and content preservation. In addition, there is no function to determine that a specific transformation is content-preserving. Similarly, the process of measuring perceptual similarity between two images is very complex, and widely used perceptual metrics are shallow functions that fail to account for many subtle distinctions of human perception [234].

F. Fair and Accountable ML

The literature on ML reveals that ML-based results and predictions can lack fairness and accountability. The fairness property ensures that the ML model does not discriminate against specific cases, e.g., by favoring cyclists over pedestrians. Such bias in ML predictions is introduced by biased training data and results in social bias and a higher error rate for a particular demographic group. For example, researchers identified a risk of bias in the perception systems of autonomous vehicles when recognizing pedestrians with dark skin [235]. This is an experimental work in which the authors evaluated different models developed by other academic researchers for autonomous vehicles. Although this work uses neither an actual object detection model deployed in autonomous vehicles on the market nor the training data used by autonomous vehicle manufacturers, it highlights a major vulnerability of ML models used in autonomous vehicles and raises serious concerns about their applicability in real-world settings where a self-driving vehicle may encounter people from a variety of demographic backgrounds.
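The basic check behind such findings is disaggregated evaluation, i.e., reporting error rates per group rather than a single aggregate number; a small sketch follows, where the group labels are hypothetical annotation metadata.

```python
import numpy as np

def per_group_error_rates(y_true, y_pred, group_labels):
    """Error rate computed separately for each (e.g., demographic) group."""
    y_true, y_pred, group_labels = map(np.asarray, (y_true, y_pred, group_labels))
    return {
        group: float((y_pred[group_labels == group] != y_true[group_labels == group]).mean())
        for group in np.unique(group_labels)
    }
```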

The accountability of ML models is associated with their interpretability, as we are interested in developing models that can explain their predictions in terms of their internal parameters. The notion of accountability is fundamentally important for understanding ML model failures on adversarial examples.

G. Robustifying ML Models Against Distribution Drifts

To restrict integrity attacks, ML models should be made robust against distribution drifts, which refer to the situation where the training and test data distributions differ. This difference between the training and test distributions gives rise to adversarial examples, which can also be considered as worst-case distribution drifts [117]. It is fairly clear that the data collection process in the vehicular ecosystem is temporal and dynamic in nature, so such distribution drifts are highly likely and will affect the robustness of the underlying ML systems. Moreover, such drifts can be exploited by adversaries to create adversarial samples at inference time; for example, the authors of [236] investigated this distribution drift by introducing positively connoted words into spam emails to evade detection. Modification of the training distribution is also possible in a similar way, and distribution drift violates the widely held presumption that we can achieve a low learning error when large training data is available. Ford et al. [237] have presented empirical and theoretical evidence that adversarial examples are a consequence of test error in noise caused by a distributional shift in the data. To ensure that an adversarial defense is trustworthy, it must provide defense against data distribution shifts. As the perception system of CAVs is mainly based on data-driven modeling using historical training data, it is highly susceptible to the problem of distribution drifts. Therefore, robustifying ML models against the aforementioned distribution drifts is very important. One way to counter this problem is to leverage deep reinforcement learning (RL) algorithms for developing the perception system of autonomous vehicles, but this is not yet practically possible, as the state and action spaces in realistic settings (road and vehicular environments) are continuous and very complex; therefore, fine control is required for the efficacy of the system [156]. However, work on leveraging deep RL-based methods for autonomous vehicles is building up. For instance, Sallab et al. proposed a deep RL-based framework for autonomous vehicles that enables the vehicle to handle partially observable scenarios [238]. They investigated the effectiveness of their system using an open source 3D car racing simulator (TORCS8) and demonstrated that their model was able to learn complex road curvatures and simple inter-vehicle interactions. On the other hand, deep RL-based systems have been shown to be vulnerable to policy induction attacks [239].
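Even before robustification, deployed CAV models benefit from monitoring for drift between the training data and the data currently being collected. The sketch below runs a per-feature two-sample Kolmogorov-Smirnov test (SciPy assumed available); the feature matrices and significance level are chosen purely for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(train_features, live_features, alpha=0.01):
    """Flag features whose live distribution differs significantly from the
    training distribution; persistent flags suggest the model is operating
    outside the data distribution it was trained on."""
    drifted = []
    for j in range(train_features.shape[1]):
        statistic, p_value = ks_2samp(train_features[:, j], live_features[:, j])
        if p_value < alpha:
            drifted.append((j, float(statistic)))
    return drifted
```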

VII. CONCLUSIONS

The recent discoveries that machine learning (ML) techniques are vulnerable to adversarial perturbations have raised questions about the security of connected and autonomous vehicles (CAVs), which utilize ML techniques for various tasks ranging from environmental perception to object recognition and movement prediction. The safety-critical nature of CAVs clearly demands that the technology they use be robust to all kinds of potential security threats, be they accidental, intentional, or adversarial. In this work, we present for the first time a comprehensive analysis of the challenges posed by adversarial ML attacks on CAVs, aggregating insights from both the ML and CAV literature. Our major contributions include: a broad description of the ML pipeline used in CAVs; a description of the various adversarial attacks that can be launched on the various components of the CAV ML pipeline; a detailed taxonomy of the adversarial ML threat for CAVs; and a comprehensive survey of adversarial ML attacks and defenses proposed in the literature. Finally, open research challenges and future directions are discussed to provide readers with the opportunity to develop robust and efficient solutions for the application of ML models in CAVs.

REFERENCES

[1] Mohamed Nidhal Mejri, Jalel Ben-Othman, and Mohamed Hamdi.Survey on VANET security challenges and possible cryptographicsolutions. Vehicular Communications, 1(2):53–66, 2014.

[2] Joseph Gardiner and Shishir Nagaraja. On the security of machinelearning in malware c&c detection: A survey. ACM Computing Surveys(CSUR), 49(3):59, 2016.

[3] Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopad-hyay, and Debdeep Mukhopadhyay. Adversarial attacks and defences:A survey. arXiv preprint arXiv:1810.00069, 2018.

[4] Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deeplearning in computer vision: A survey. IEEE Access, 6:14410–14430,2018.

[5] Steven E Shladover. Connected and automated vehicle systems: Intro-duction and overview. Journal of Intelligent Transportation Systems,22(3):190–200, 2018.

[6] Rasheed Hussain and Sherali Zeadally. Autonomous cars: Researchresults, issues and future challenges. IEEE Communications Surveys& Tutorials, 2018.

[7] Giuseppe Araniti, Claudia Campolo, Massimo Condoluci, Antonio Iera,and Antonella Molinaro. LTE for vehicular networking: a survey. IEEEcommunications magazine, 51(5):148–157, 2013.

8 http://torcs.sourceforge.net/

[8] Haixia Peng, Le Liang, Xuemin Shen, and Geoffrey Ye Li. Vehicularcommunications: A network layer perspective. IEEE Transactions onVehicular Technology, 2018.

[9] Stephen Checkoway, Damon McCoy, Brian Kantor, Danny Ander-son, Hovav Shacham, Stefan Savage, Karl Koscher, Alexei Czeskis,Franziska Roesner, Tadayoshi Kohno, et al. Comprehensive experi-mental analyses of automotive attack surfaces. In USENIX SecuritySymposium, pages 77–92. San Francisco, 2011.

[10] Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson,Z Berkay Celik, and Ananthram Swami. The limitations of deeplearning in adversarial settings. In 2016 IEEE European Symposiumon Security and Privacy (EuroS&P), pages 372–387. IEEE, 2016.

[11] Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rah-mati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song.Robust Physical-World Attacks on Deep Learning Visual Classification.In Computer Vision and Pattern Recognition (CVPR), 2018.

[12] Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Mung Chi-ang, and Prateek Mittal. DARTS: Deceiving Autonomous Cars withToxic Signs. arXiv preprint arXiv:1802.06430, 2018.

[13] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna,Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing propertiesof neural networks. arXiv preprint arXiv:1312.6199, 2013.

[14] Joshua E Siegel, Dylan C Erb, and Sanjay E Sarma. A survey ofthe connected vehicle landscapearchitectures, enabling technologies,applications, and development areas. IEEE Transactions on IntelligentTransportation Systems, 19(8):2391–2406, 2018.

[15] Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. Adversarialexamples: Attacks and defenses for deep learning. IEEE transactionson neural networks and learning systems, 2019.

[16] Wenqi Wang, Benxiao Tang, Run Wang, Lina Wang, and Aoshuang Ye.A survey on adversarial attacks and defenses in text. arXiv preprintarXiv:1902.07285, 2019.

[17] Rodrigo Marcal Gandia, Fabio Antonialli, Bruna Habib Cavazza,Arthur Miranda Neto, Danilo Alves de Lima, Joel Yutaka Sugano,Isabelle Nicolai, and Andre Luiz Zambalde. Autonomous vehicles:scientometric and bibliometric review. Transport reviews, 39(1):9–28,2019.

[18] Society of Automotive Engineers. Taxonomy and definitions for termsrelated to driving automation systems for on-road motor vehicles, 2016.

[19] Steven E Shladover. Roadway automation technology–research needs.Transportation Research Record, (1283), 1990.

[20] Ernst Dieter Dickmanns. Vision for ground vehicles: History andprospects. International Journal of Vehicle Autonomous Systems,1(1):1–44, 2002.

[21] Hans-Benz Glathe. Prometheus-a cooperative effort of the europeanautomotive manufacturers. Technical report, SAE Technical Paper,1994.

[22] Sadayuki Tsugawa, Takaharu Saito, and Akio Hosaka. Super smartvehicle system: Avcs related systems for the future. In Proceedings ofthe Intelligent Vehicles92 Symposium, pages 132–137. IEEE, 1992.

[23] James H Rillings. Automated highways: Cars that drive themselvesin tight formation might alleviate the congestion now plaguing urbanfreeways. Scientific American. Vol. 277, no. 4, 1997.

[24] Roman Staszewski and Hannes Estl. Making cars safer throughtechnology innovation. White Paper by Texas Instruments Incorporated,2013.

[25] Elisabeth Uhlemann. Introducing connected vehicles [connected vehi-cles]. IEEE Vehicular Technology Magazine, 10(1):23–31, 2015.

[26] Ning Lu, Nan Cheng, Ning Zhang, Xuemin Shen, and Jon W Mark.Connected vehicles: Solutions and challenges. IEEE internet of thingsjournal, 1(4):289–299, 2014.

[27] Sadayuki Tsugawa, Sabina Jeschke, and Steven E Shladover. A reviewof truck platooning projects for energy savings. IEEE Transactions onIntelligent Vehicles, 1(1):68–77, 2016.

[28] Jennie Lioris, Ramtin Pedarsani, Fatma Yildiz Tascikaraoglu, andPravin Varaiya. Platoons of connected vehicles can double throughputin urban roads. Transportation Research Part C: Emerging Technolo-gies, 77:292–305, 2017.

[29] Karl Koscher, Alexei Czeskis, Franziska Roesner, Shwetak Patel,Tadayoshi Kohno, Stephen Checkoway, Damon McCoy, Brian Kantor,Danny Anderson, Hovav Shacham, et al. Experimental security analysisof a modern automobile. In Security and Privacy (SP), 2010 IEEESymposium on, pages 447–462. IEEE, 2010.

[30] Pierre Kleberger, Tomas Olovsson, and Erland Jonsson. Securityaspects of the in-vehicle network in the connected car. In 2011 IEEEIntelligent Vehicles Symposium (IV), pages 528–533. IEEE, 2011.


[31] Steven E Shladover, Christopher Nowakowski, Xiao-Yun Lu, andRobert Ferlis. Cooperative adaptive cruise control: Definitions andoperating concepts. Transportation Research Record, 2489(1):145–152,2015.

[32] Irshad Ahmed Sumra, Halabi Bin Hasbullah, and Jamalul-lail Bin Ab-Manan. Attacks on security goals (confidentiality, integrity, availability)in vanet: a survey. In Vehicular Ad-Hoc Networks for Smart Cities,pages 51–61. Springer, 2015.

[33] Kiho Lim, Kastuv M Tuladhar, and Hyunbum Kim. Detecting locationspoofing using adas sensors in vanets. In 2019 16th IEEE AnnualConsumer Communications & Networking Conference (CCNC), pages1–4. IEEE, 2019.

[34] Sebastian Bittl, Arturo A Gonzalez, Matthias Myrtus, Hanno Beck-mann, Stefan Sailer, and Bernd Eissfeller. Emerging attacks on vanetsecurity based on gps time spoofing. In 2015 IEEE Conference onCommunications and Network Security (CNS), pages 344–352. IEEE,2015.

[35] Jonathan Petit and Steven E Shladover. Potential cyberattacks onautomated vehicles. IEEE Transactions on Intelligent TransportationSystems, 16(2):546–556, 2015.

[36] Bryan Parno and Adrian Perrig. Challenges in securing vehicularnetworks. In Workshop on Hot Topics in Networks (HotNets-IV), pages1–6. Maryland, USA, 2005.

[37] Bassem Mokhtar and Mohamed Azab. Survey on security issues in ve-hicular ad hoc networks. Alexandria engineering journal, 54(4):1115–1126, 2015.

[38] Sherali Zeadally, Ray Hunt, Yuh-Shyan Chen, Angela Irwin, and AamirHassan. Vehicular ad hoc networks (VANETS): status, results, andchallenges. Telecommunication Systems, 50(4):217–241, 2012.

[39] Jie Li, Huang Lu, and Mohsen Guizani. Acpn: A novel authenticationframework with conditional privacy-preservation and non-repudiationfor vanets. IEEE Transactions on Parallel and Distributed Systems,26(4):938–948, 2015.

[40] John R Douceur. The sybil attack. In International workshop on peer-to-peer systems, pages 251–260. Springer, 2002.

[41] Debiao He, Sherali Zeadally, Baowen Xu, and Xinyi Huang. Anefficient identity-based conditional privacy-preserving authenticationscheme for vehicular ad hoc networks. IEEE Transactions on Infor-mation Forensics and Security, 10(12):2681–2691, 2015.

[42] Mani Amoozadeh, Arun Raghuramu, Chen-Nee Chuah, Dipak Ghosal,H Michael Zhang, Jeff Rowe, and Karl Levitt. Security vulnerabilitiesof connected vehicle streams and their impact on cooperative driving.IEEE Communications Magazine, 53(6):126–132, 2015.

[43] Nikita Lyamin, Denis Kleyko, Quentin Delooz, and Alexey Vinel. Ai-based malicious network traffic detection in vanets. IEEE Network,32(6):15–21, 2018.

[44] Seyhan Ucar, Sinem Coleri Ergen, and Oznur Ozkasap. Data-drivenabnormal behavior detection for autonomous platoon. In 2017 IEEEVehicular Networking Conference (VNC), pages 69–72. IEEE, 2017.

[45] Gopi Krishnan Rajbahadur, Andrew J Malton, Andrew Walenstein, andAhmed E Hassan. A survey of anomaly detection for connected vehiclecybersecurity and safety. In 2018 IEEE Intelligent Vehicles Symposium(IV), pages 421–426. IEEE, 2018.

[46] Mevlut Turker Garip, Mehmet Emre Gursoy, Peter Reiher, and MarioGerla. Congestion attacks to autonomous cars using vehicular botnets.In NDSS Workshop on Security of Emerging Networking Technologies(SENT), San Diego, CA, 2015.

[47] Philip Koopman and Michael Wagner. Autonomous vehicle safety: Aninterdisciplinary challenge. IEEE Intelligent Transportation SystemsMagazine, 9(1):90–96, 2017.

[48] Yasser Shoukry, Paul Martin, Paulo Tabuada, and Mani Srivastava.Non-invasive spoofing attacks for anti-lock braking systems. InInternational Workshop on Cryptographic Hardware and EmbeddedSystems, pages 55–72. Springer, 2013.

[49] Bjorn Wiedersheim, Zhendong Ma, Frank Kargl, and Panos Papadim-itratos. Privacy in inter-vehicular networks: Why simple pseudonymchange is not enough. In Wireless On-demand Network Systems andServices (WONS), 2010 Seventh International Conference on, pages176–183. IEEE, 2010.

[50] Suwan Wang and Yuan He. A trust system for detecting selectiveforwarding attacks in vanets. In International Conference on Big DataComputing and Communications, pages 377–386. Springer, 2016.

[51] Khattab Ali Alheeti, Anna Gruebler, and Klaus McDonald-Maier.Intelligent intrusion detection of grey hole and rushing attacks in self-driving vehicular networks. Computers, 5(3):16, 2016.

[52] Maxim Raya, Daniel Jungels, Panos Papadimitratos, Imad Aad, and Jean-Pierre Hubaux. Certificate revocation in vehicular networks. Laboratory for Computer Communications and Applications (LCA), School of Computer and Communication Sciences, EPFL, Switzerland, 2006.

[53] Felipe Cunha, Leandro Villas, Azzedine Boukerche, Guilherme Maia,Aline Viana, Raquel AF Mini, and Antonio AF Loureiro. Datacommunication in vanets: Protocols, applications and challenges. AdHoc Networks, 44:90–103, 2016.

[54] Jeremy J Blum, Azim Eskandarian, and Lance J Hoffman. Challengesof intervehicle ad hoc networks. IEEE transactions on intelligenttransportation systems, 5(4):347–351, 2004.

[55] Ramon Dos Reis Fontes, Claudia Campolo, Christian Esteve Rothen-berg, and Antonella Molinaro. From theory to experimental evaluation:Resource management in software-defined vehicular networks. IEEEAccess, 5:3069–3076, 2017.

[56] Le Liang, Hao Ye, and Geoffrey Ye Li. Toward intelligent vehicularnetworks: A machine learning framework. IEEE Internet of ThingsJournal, 6(1):124–135, 2019.

[57] Hao Ye and Geoffrey Ye Li. Deep reinforcement learning for resourceallocation in v2v communications. In 2018 IEEE International Con-ference on Communications (ICC), pages 1–6. IEEE, 2018.

[58] Xue Yang, Leibo Liu, Nitin H Vaidya, and Feng Zhao. A vehicle-to-vehicle communication protocol for cooperative collision warning. InThe First Annual International Conference on Mobile and UbiquitousSystems: Networking and Services, 2004. MOBIQUITOUS 2004., pages114–123. IEEE, 2004.

[59] Jiafu Wan, Jianqi Liu, Zehui Shao, Athanasios Vasilakos, MuhammadImran, and Keliang Zhou. Mobile crowd sensing for traffic predictionin internet of vehicles. Sensors, 16(1):88, 2016.

[60] Allan M de Souza, Roberto S Yokoyama, Guilherme Maia, AntonioLoureiro, and Leandro Villas. Real-time path planning to preventtraffic jam through an intelligent transportation system. In 2016 IEEESymposium on Computers and Communication (ISCC), pages 726–731.IEEE, 2016.

[61] Miao Wang, Hangguan Shan, Rongxing Lu, Ran Zhang, Xuemin Shen,and Fan Bai. Real-time path planning based on hybrid-vanet-enhancedtransportation system. IEEE Transactions on Vehicular Technology,64(5):1664–1678, 2015.

[62] Lin Yao, Jie Wang, Xin Wang, Ailun Chen, and Yuqi Wang. V2Xrouting in a VANET based on the hidden markov model. IEEETransactions on Intelligent Transportation Systems, 19(3):889–899,2018.

[63] Guangtao Xue, Yuan Luo, Jiadi Yu, and Minglu Li. A novel vehicularlocation prediction based on mobility patterns for routing in urbanVANET. EURASIP Journal on Wireless Communications and Net-working, 2012(1):222, 2012.

[64] Fanhui Zeng, Rongqing Zhang, Xiang Cheng, and Liuqing Yang. Chan-nel prediction based scheduling for data dissemination in VANETs.IEEE Communications Letters, 21(6):1409–1412, 2017.

[65] Amin Karami. ACCPNDN: Adaptive congestion control protocol innamed data networking by learning capacities using optimized time-lagged feedforward neural network. Journal of Network and ComputerApplications, 56:1–18, 2015.

[66] Nasrin Taherkhani and Samuel Pierre. Centralized and localized datacongestion control strategy for vehicular ad hoc networks using a ma-chine learning clustering algorithm. IEEE Transactions on IntelligentTransportation Systems, 17(11):3275–3285, 2016.

[67] Zhong Li, Cheng Wang, and Chang-Jun Jiang. User association forload balancing in vehicular networks: An online reinforcement learningapproach. IEEE Transactions on Intelligent Transportation Systems,18(8):2217–2228, 2017.

[68] Adrian Taylor, Sylvain Leblanc, and Nathalie Japkowicz. Anomalydetection in automobile control network data with long short-termmemory networks. In Data Science and Advanced Analytics (DSAA),2016 IEEE International Conference on, pages 130–139. IEEE, 2016.

[69] Qiang Zheng, Kan Zheng, Haijun Zhang, and Victor CM Leung.Delay-optimal virtualized radio resource scheduling in software-definedvehicular networks via stochastic learning. IEEE Transactions onVehicular Technology, 65(10):7857–7867, 2016.

[70] Ribal F Atallah, Chadi M Assi, and Jia Yuan Yu. A reinforcementlearning technique for optimizing downlink scheduling in an energy-limited vehicular network. IEEE Transactions on Vehicular Technology,66(6):4592–4601, 2017.

[71] Ribal Atallah, Chadi Assi, and Maurice Khabbaz. Deep reinforcementlearning-based scheduling for roadside communication networks. InModeling and Optimization in Mobile, Ad Hoc, and Wireless Networks(WiOpt), 2017 15th International Symposium on, pages 1–8. IEEE,2017.


[72] ByeoungDo Kim, Chang Mook Kang, Jaekyum Kim, Seung Hi Lee,Chung Choo Chung, and Jun Won Choi. Probabilistic vehicle trajectoryprediction over occupancy grid map via recurrent neural network. In2017 IEEE 20th International Conference on Intelligent TransportationSystems (ITSC), pages 399–404. IEEE, 2017.

[73] Chenggang Yan, Hongtao Xie, Dongbao Yang, Jian Yin, YongdongZhang, and Qionghai Dai. Supervised hash coding with deep neuralnetwork for environment perception of intelligent vehicles. IEEE trans-actions on intelligent transportation systems, 19(1):284–295, 2018.

[74] Francisca Rosique, Pedro J Navarro, Carlos Fernandez, and AntonioPadilla. A systematic review of perception system and simulators forautonomous vehicles research. Sensors, 19(3):648, 2019.

[75] Chenyi Chen, Ari Seff, Alain Kornhauser, and Jianxiong Xiao. Deep-driving: Learning affordance for direct perception in autonomousdriving. In The IEEE International Conference on Computer Vision(ICCV), December 2015.

[76] Sebastian Ramos, Stefan Gehrig, Peter Pinggera, Uwe Franke, andCarsten Rother. Detecting unexpected obstacles for self-driving cars:Fusing deep learning and geometric modeling. In 2017 IEEE IntelligentVehicles Symposium (IV), pages 1025–1032. IEEE, 2017.

[77] Ricardo Omar Chavez-Garcia and Olivier Aycard. Multiple sensorfusion and classification for moving object detection and tracking.IEEE Transactions on Intelligent Transportation Systems, 17(2):525–534, 2016.

[78] Heng Wang, Bin Wang, Bingbing Liu, Xiaoli Meng, and GuanghongYang. Pedestrian recognition and tracking using 3d lidar for au-tonomous vehicle. Robotics and Autonomous Systems, 88:71–78, 2017.

[79] Yujun Zeng, Xin Xu, Dayong Shen, Yuqiang Fang, and ZhipengXiao. Traffic sign recognition using kernel extreme learning machineswith deep perceptual features. IEEE Transactions on IntelligentTransportation Systems, 18(6):1647–1653, 2017.

[80] Vijay John, Keisuke Yoneda, B Qi, Zheng Liu, and Seiichi Mita.Traffic light recognition in varying illumination using deep learningand saliency map. In 17th International IEEE Conference on IntelligentTransportation Systems (ITSC), pages 2286–2291. IEEE, 2014.

[81] Lei Lin, Siyuan Gong, Tao Li, and Srinivas Peeta. Deep learning-based human-driven vehicle trajectory prediction and its applicationfor platoon control of connected and autonomous vehicles. In TheAutonomous Vehicles Symposium, volume 2018, 2018.

[82] Qian Mao, Fei Hu, and Qi Hao. Deep learning for intelligent wirelessnetworks: A comprehensive survey. IEEE Communications Surveys &Tutorials, 20(4):2595–2621, 2018.

[83] Lanhang Ye and Toshiyuki Yamamoto. Modeling connected and au-tonomous vehicles in heterogeneous traffic flow. Physica A: StatisticalMechanics and its Applications, 490:269–277, 2018.

[84] Feihu Zhang, Clara Martinez, Clark Daniel, Dongpu Cao, and Alois CKnoll. Neural network based uncertainty prediction for autonomousvehicle application. Frontiers in Neurorobotics, 13:12, 2019.

[85] Christos Katrakazas, Mohammed Quddus, Wen-Hua Chen, and LipikaDeka. Real-time motion planning methods for autonomous on-roaddriving: State-of-the-art and future research directions. TransportationResearch Part C: Emerging Technologies, 60:416–442, 2015.

[86] Adam Houenou, Philippe Bonnifait, Veronique Cherfaoui, and WenYao. Vehicle trajectory prediction based on motion model and ma-neuver recognition. In 2013 IEEE/RSJ International Conference onIntelligent Robots and Systems, pages 4363–4369. IEEE, 2013.

[87] Tao Li. Modeling uncertainty in vehicle trajectory prediction in a mixedconnected and autonomous vehicle environment using deep learningand kernel density estimation. In The Fourth Annual Symposium onTransportation Informatics, 2018.

[88] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, BernhardFirner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Mon-fort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-drivingcars. arXiv preprint arXiv:1604.07316, 2016.

[89] Zhilu Chen and Xinming Huang. End-to-end learning for lane keepingof self-driving cars. In 2017 IEEE Intelligent Vehicles Symposium (IV),pages 1856–1860. IEEE, 2017.

[90] Jingchu Liu, Pengfei Hou, Lisen Mu, Yinan Yu, and Chang Huang.Elements of effective deep reinforcement learning towards tacticaldriving decision making. arXiv preprint arXiv:1802.00332, 2018.

[91] Maxime Bouton, Jesper Karlsson, Alireza Nakhaei, Kikuo Fujimura,Mykel J Kochenderfer, and Jana Tumova. Reinforcement learningwith probabilistic guarantees for autonomous driving. arXiv preprintarXiv:1904.07189, 2019.

[92] Yi Zhang, Ping Sun, Yuhan Yin, Lin Lin, and Xuesong Wang. Human-like autonomous vehicle speed control by deep reinforcement learning with double Q-learning. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 1251–1256. IEEE, 2018.

[93] Ying He, Nan Zhao, and Hongxi Yin. Integrated networking, caching,and computing for connected vehicles: A deep reinforcement learningapproach. IEEE Transactions on Vehicular Technology, 67(1):44–55,2018.

[94] Vicente Milanes, Steven E Shladover, John Spring, ChristopherNowakowski, Hiroshi Kawazoe, and Masahide Nakamura. Cooperativeadaptive cruise control in real traffic situations. IEEE Transactions onIntelligent Transportation Systems, 15(1):296–305, 2014.

[95] Min-Joo Kang and Je-Won Kang. Intrusion detection system usingdeep neural network for in-vehicle network security. PloS one,11(6):e0155781, 2016.

[96] DianGe Yang, Kun Jiang, Ding Zhao, ChunLei Yu, Zhong Cao,ShiChao Xie, ZhongYang Xiao, XinYu Jiao, SiJia Wang, and KaiZhang. Intelligent and connected vehicles: Current status and futureperspectives. Science China Technological Sciences, 61(10):1446–1471, 2018.

[97] Battista Biggio, B Nelson, and P Laskov. Poisoning attacks againstsupport vector machines. In 29th International Conference on MachineLearning, pages 1807–1814. ArXiv e-prints, 2012.

[98] Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, NedimSrndic, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacksagainst machine learning at test time. In Joint European conference onmachine learning and knowledge discovery in databases, pages 387–402. Springer, 2013.

[99] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explainingand harnessing adversarial examples. arXiv preprint arXiv:1412.6572,2014.

[100] Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks areeasily fooled: High confidence predictions for unrecognizable images.In Proceedings of the IEEE conference on computer vision and patternrecognition, pages 427–436, 2015.

[101] Elias B Khalil, Amrita Gupta, and Bistra Dilkina. Combinatorial attackson binarized neural networks. arXiv preprint arXiv:1810.03538, 2018.

[102] Emilio Rafael Balda, Arash Behboodi, and Rudolf Mathar. Ongeneration of adversarial examples using convex programming. In 201852nd Asilomar Conference on Signals, Systems, and Computers, pages60–65. IEEE, 2018.

[103] Eric Wong and Zico Kolter. Provable defenses against adversarialexamples via the convex outer adversarial polytope. In InternationalConference on Machine Learning (ICML), pages 5283–5292, 2018.

[104] Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. Onepixel attack for fooling deep neural networks. IEEE Transactions onEvolutionary Computation, 2019.

[105] Justin Gilmer, Ryan P Adams, Ian Goodfellow, David Andersen, andGeorge E Dahl. Motivating the rules of the game for adversarialexample research. arXiv preprint arXiv:1807.06732, 2018.

[106] Yann LeCun, Leon Bottou, Yoshua Bengio, Patrick Haffner, et al.Gradient-based learning applied to document recognition. Proceedingsof the IEEE, 86(11):2278–2324, 1998.

[107] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers offeatures from tiny images. Technical report, University of Toronto,2009.

[108] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009IEEE conference on computer vision and pattern recognition, pages248–255. IEEE, 2009.

[109] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenetclassification with deep convolutional neural networks. In Advances inneural information processing systems, pages 1097–1105, 2012.

[110] Karen Simonyan and Andrew Zisserman. Very deep convolutionalnetworks for large-scale image recognition. International Conferenceon Learning Representations (ICLR), 2015.

[111] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed,Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and AndrewRabinovich. Going deeper with convolutions. In Proceedings of theIEEE conference on computer vision and pattern recognition, pages1–9, 2015.

[112] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev,Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell.Caffe: Convolutional architecture for fast feature embedding. InProceedings of the 22nd ACM international conference on Multimedia,pages 675–678. ACM, 2014.

[113] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.

[114] Kevin Riggle. An introduction to approachablethreat modeling. Accessed: 23 April 2019, URL:https://increment.com/security/approachable-threat-modeling/.

[115] Marco Barreno, Blaine Nelson, Anthony D Joseph, and J Doug Tygar.The security of machine learning. Machine Learning, 81(2):121–148,2010.

[116] Florian Tramer, Nicolas Papernot, Ian Goodfellow, Dan Boneh, andPatrick McDaniel. The space of transferable adversarial examples.arXiv preprint arXiv:1704.03453, 2017.

[117] Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman. Towards the science of security and privacy in machine learning. arXiv preprint arXiv:1611.03814, 2016.

[118] Alexey Kurakin, Ian J Goodfellow, and Samy Bengio. Adversarialexamples in the physical world. In Artificial Intelligence Safety andSecurity, pages 99–112. Chapman and Hall/CRC, 2018.

[119] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and PascalFrossard. Deepfool: a simple and accurate method to fool deep neuralnetworks. In Proceedings of the IEEE conference on computer visionand pattern recognition, pages 2574–2582, 2016.

[120] Nicholas Carlini and David Wagner. Towards evaluating the robustnessof neural networks. In 2017 IEEE Symposium on Security and Privacy(SP), pages 39–57. IEEE, 2017.

[121] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, andPascal Frossard. Universal adversarial perturbations. In Proceedingsof the IEEE conference on computer vision and pattern recognition,pages 1765–1773, 2017.

[122] Sayantan Sarkar, Ankan Bansal, Upal Mahbub, and Rama Chellappa.Upset and angri: Breaking high performance image classifiers. arXivpreprint arXiv:1707.01159, 2017.

[123] Moustapha M Cisse, Yossi Adi, Natalia Neverova, and Joseph Keshet.Houdini: Fooling deep structured visual and speech recognition modelswith adversarial examples. In Advances in neural information process-ing systems, pages 6977–6987, 2017.

[124] Shumeet Baluja and Ian Fischer. Learning to attack: Adversarialtransformation networks. In AAAI, 2018.

[125] Nilesh Dalvi, Pedro Domingos, Sumit Sanghai, Deepak Verma, et al.Adversarial classification. In Proceedings of the tenth ACM SIGKDDinternational conference on Knowledge discovery and data mining,pages 99–108. ACM, 2004.

[126] Daniel Lowd and Christopher Meek. Adversarial learning. In Pro-ceedings of the eleventh ACM SIGKDD international conference onKnowledge discovery in data mining, pages 641–647. ACM, 2005.

[127] Marco Barreno, Blaine Nelson, Russell Sears, Anthony D Joseph, andJ Doug Tygar. Can machine learning be secure? In Proceedings of the2006 ACM Symposium on Information, computer and communicationssecurity, pages 16–25. ACM, 2006.

[128] Ling Huang, Anthony D Joseph, Blaine Nelson, Benjamin IP Rubin-stein, and JD Tygar. Adversarial machine learning. In Proceedings ofthe 4th ACM workshop on Security and artificial intelligence, pages43–58. ACM, 2011.

[129] Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt,and Aleksander Madry. A rotation and a translation suffice: Foolingcnns with simple transformations. arXiv preprint arXiv:1712.02779,2017.

[130] Kexin Pei, Yinzhi Cao, Junfeng Yang, and Suman Jana. Towards practical verification of machine learning: The case of computer vision systems. arXiv preprint arXiv:1712.01785, 2017.

[131] Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai,Weihang Wang, and Xiangyu Zhang. Trojaning attack on neuralnetworks. In 25th Annual Network and Distributed System SecuritySymposium, NDSS 2018, San Diego, California, USA, February 18-21,2018, 2018.

[132] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha,Z Berkay Celik, and Ananthram Swami. Practical black-box attacksagainst machine learning. In Proceedings of the 2017 ACM on AsiaConference on Computer and Communications Security, pages 506–519. ACM, 2017.

[133] Nicholas Carlini and David Wagner. Adversarial examples are noteasily detected: Bypassing ten detection methods. In Proceedings ofthe 10th ACM Workshop on Artificial Intelligence and Security, pages3–14. ACM, 2017.

[134] Nicholas Carlini and David Wagner. Audio adversarial examples:Targeted attacks on speech-to-text. In 2018 IEEE Security and PrivacyWorkshops (SPW), pages 1–7. IEEE, 2018.

[135] Tom B Brown, Dandelion Mane, Aurko Roy, Martın Abadi, and JustinGilmer. Adversarial patch. 31st Conference on Neural InformationProcessing Systems (NIPS 2017), 2017.

[136] Yevgeniy Vorobeychik and Murat Kantarcioglu. Adversarial machinelearning. Synthesis Lectures on Artificial Intelligence and MachineLearning, 12(3):1–169, 2018.

[137] Battista Biggio and Fabio Roli. Wild patterns: Ten years after therise of adversarial machine learning. Pattern Recognition, 84:317–331,2018.

[138] Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Prateek Mit-tal, and Mung Chiang. Rogue signs: Deceiving traffic sign recognitionwith malicious ads and logos. arXiv preprint arXiv:1801.02780, 2018.

[139] Anurag Arnab, Ondrej Miksik, and Philip HS Torr. On the robustnessof semantic segmentation models to adversarial attacks. In Proceedingsof the IEEE Conference on Computer Vision and Pattern Recognition,pages 888–897, 2018.

[140] Jan Hendrik Metzen, Mummadi Chaithanya Kumar, Thomas Brox, andVolker Fischer. Universal adversarial perturbations against semanticimage segmentation. In 2017 IEEE International Conference onComputer Vision (ICCV), pages 2774–2783. IEEE, 2017.

[141] Volker Fischer, Mummadi Chaithanya Kumar, Jan Hendrik Metzen, andThomas Brox. Adversarial examples for semantic image segmentation.arXiv preprint arXiv:1703.01101, 2017.

[142] Edgar Tretschk, Seong Joon Oh, and Mario Fritz. Sequential at-tacks on agents for long-term adversarial goals. arXiv preprintarXiv:1805.12487, 2018.

[143] Anay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan,and Girish Chowdhary. Robust deep reinforcement learning withadversarial attacks. In Proceedings of the 17th International Con-ference on Autonomous Agents and MultiAgent Systems, pages 2040–2042. International Foundation for Autonomous Agents and MultiagentSystems, 2018.

[144] Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih,Ming-Yu Liu, and Min Sun. Tactics of adversarial attack on deepreinforcement learning agents. arXiv preprint arXiv:1703.06748, 2017.

[145] Jernej Kos, Ian Fischer, and Dawn Song. Adversarial examples forgenerative models. In 2018 IEEE Security and Privacy Workshops(SPW), pages 36–42. IEEE, 2018.

[146] George Gondim-Ribeiro, Pedro Tabacof, and Eduardo Valle. Ad-versarial attacks on variational autoencoders. arXiv preprintarXiv:1806.04646, 2018.

[147] Dario Pasquini, Marco Mingione, and Massimo Bernaschi. Out-domainexamples for generative models. arXiv preprint arXiv:1903.02926,2019.

[148] Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto.Interpretable adversarial perturbation in input embedding space for text.International Joint Conference on Artificial Intelligence (IJCAI), 2018.

[149] Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Adversarialtraining methods for semi-supervised text classification. InternationalConference on Learning Representations (ICLR), 2017.

[150] Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. Textbug-ger: Generating adversarial text against real-world applications. TheNetwork and Distributed System Security Symposium, 2019.

[151] Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and RichardHarang. Crafting adversarial input sequences for recurrent neuralnetworks. In MILCOM 2016-2016 IEEE Military CommunicationsConference, pages 49–54. IEEE, 2016.

[152] Robin Jia and Percy Liang. Adversarial examples for evaluating readingcomprehension systems. In Proceedings of the 2017 conference onempirical methods in natural language processing (EMNLP), 2017.

[153] Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, andKedar Dhamdhere. Did the model understand the question? The56th Annual Meeting of the Association for Computational Linguistics,2018.

[154] Igino Corona, Giorgio Giacinto, and Fabio Roli. Adversarial attacksagainst intrusion detection systems: Taxonomy, solutions and openissues. Information Sciences, 239:201–225, 2013.

[155] Zilong Lin, Yong Shi, and Zhi Xue. Idsgan: Generative adversarialnetworks for attack generation against intrusion detection. arXivpreprint arXiv:1809.02077, 2018.

[156] Zheng Wang. Deep learning-based intrusion detection with adversaries. IEEE Access, 6:38367–38384, 2018.

[157] Milad Salem, Shayan Taheri, and Jiann Shiun Yuan. Anomaly gen-eration using generative adversarial networks in host based intrusiondetection. arXiv preprint arXiv:1812.04697, 2018.


[158] Gang Wang, Tianyi Wang, Haitao Zheng, and Ben Y Zhao. Man vs.machine: Practical adversarial detection of malicious crowdsourcingworkers. In 23rd USENIX Security Symposium USENIX Security 14),pages 239–254, 2014.

[159] Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, MichaelBackes, and Patrick McDaniel. Adversarial perturbations againstdeep neural networks for malware classification. arXiv preprintarXiv:1606.04435, 2016.

[160] Bojan Kolosnjaji, Ambra Demontis, Battista Biggio, Davide Maiorca,Giorgio Giacinto, Claudia Eckert, and Fabio Roli. Adversarial malwarebinaries: Evading deep learning for malware detection in executables.In 2018 26th European Signal Processing Conference (EUSIPCO),pages 533–537. IEEE, 2018.

[161] Muhammad Usama, Junaid Qadir, and Ala Al-Fuqaha. Adversarialattacks on cognitive self-organizing networks: The challenge and theway forward. In 2018 IEEE 43rd Conference on Local ComputerNetworks Workshops (LCN Workshops), pages 90–97. IEEE, 2018.

[162] Muhammad Ejaz Ahmed and Hyoungshick Kim. Poster: Adversarialexamples for classifiers in high-dimensional network data. In Pro-ceedings of the 2017 ACM SIGSAC Conference on Computer andCommunications Security, pages 2467–2469. ACM, 2017.

[163] Eduardo Viegas, Altair Santin, Vilmar Abreu, and Luiz S Oliveira.Stream learning and anomaly-based intrusion detection in the ad-versarial settings. In 2017 IEEE Symposium on Computers andCommunications (ISCC), pages 773–778. IEEE, 2017.

[164] Lea Schonherr, Katharina Kohls, Steffen Zeiler, Thorsten Holz, andDorothea Kolossa. Adversarial attacks against automatic speechrecognition systems via psychoacoustic hiding. arXiv preprintarXiv:1808.05665, 2018.

[165] Arkar Min Aung, Yousef Fadila, Radian Gondokaryono, and LuisGonzalez. Building robust deep neural networks for road sign detection.arXiv preprint arXiv:1712.09327, 2017.

[166] Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. MaskR-CNN. In Proceedings of the IEEE international conference oncomputer vision, pages 2961–2969, 2017.

[167] Joseph Redmon and Ali Farhadi. Yolov3: An incremental improvement.arXiv preprint arXiv:1804.02767, 2018.

[168] Yang Zhang, Hassan Foroosh, Philip David, and Boqing Gong. Camou:Learning physical vehicle camouflages to adversarially attack detectorsin the wild. 2018.

[169] Dawn Song, Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li,Amir Rahmati, Florian Tramer, Atul Prakash, and Tadayoshi Kohno.Physical adversarial examples for object detectors. In 12th {USENIX}Workshop on Offensive Technologies ({WOOT} 18), 2018.

[170] Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rah-mati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song.Robust physical-world attacks on deep learning visual classification. InProceedings of the IEEE Conference on Computer Vision and PatternRecognition, pages 1625–1634, 2018.

[171] Husheng Zhou, Wei Li, Yuankun Zhu, Yuqun Zhang, Bei Yu, LingmingZhang, and Cong Liu. Deepbillboard: Systematic physical-world test-ing of autonomous driving systems. arXiv preprint arXiv:1812.10812,2018.

[172] Mayank Bansal, Alex Krizhevsky, and Abhijit Ogale. Chauffeurnet:Learning to drive by imitating the best and synthesizing the worst.arXiv preprint arXiv:1812.03079, 2018.

[173] Shang-Tse Chen, Cory Cornelius, Jason Martin, and Duen Horng PoloChau. Shapeshifter: Robust physical adversarial attack on faster R-CNN object detector. In Joint European Conference on Machine Learn-ing and Knowledge Discovery in Databases, pages 52–68. Springer,2018.

[174] Cumhur Erkan Tuncali, Georgios Fainekos, Hisahiro Ito, and James Kapinski. Simulation-based adversarial test generation for autonomous vehicles with machine learning components. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 1555–1562. IEEE, 2018.

[175] Shakiba Yaghoubi and Georgios Fainekos. Gray-box adversarialtesting for control systems with machine learning components. InProceedings of the 22nd ACM International Conference on HybridSystems: Computation and Control, HSCC ’19, pages 179–184, NewYork, NY, USA, 2019. ACM.

[176] Adith Boloor, Xin He, Christopher Gill, Yevgeniy Vorobeychik, andXuan Zhang. Simple physical adversarial examples against end-to-endautonomous driving models. arXiv preprint arXiv:1903.05157, 2019.

[177] Ruitong Huang, Bing Xu, Dale Schuurmans, and Csaba Szepesvari.Learning with a strong adversary. arXiv preprint arXiv:1511.03034,2015.

[178] Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel JKochenderfer. Reluplex: An efficient SMT solver for verifying deepneural networks. In International Conference on Computer AidedVerification, pages 97–117. Springer, 2017.

[179] Shixiang Gu and Luca Rigazio. Towards deep neural network architec-tures robust to adversarial examples. arXiv preprint arXiv:1412.5068,2014.

[180] Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detectingadversarial examples in deep neural networks. In 25th Annual Networkand Distributed System Security Symposium, NDSS 2018, San Diego,California, USA, February 18-21, 2018, 2018.

[181] Warren He, James Wei, Xinyun Chen, Nicholas Carlini, and DawnSong. Adversarial example defense: Ensembles of weak defenses arenot strong. In 11th USENIX Workshop on Offensive Technologies(WOOT)’17), 2017.

[182] Ji Gao, Beilun Wang, Zeming Lin, Weilin Xu, and Yanjun Qi. Deep-cloak: Masking deep neural network models for robustness againstadversarial samples. arXiv preprint arXiv:1702.06763, 2017.

[183] Shivam Garg, Vatsal Sharan, Brian Zhang, and Gregory Valiant. Aspectral view of adversarially robust features. In Advances in NeuralInformation Processing Systems (NeurlIPS), pages 10159–10169, 2018.

[184] Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and NateKushman. Pixeldefend: Leveraging generative models to understandand defend against adversarial examples. In International Conferenceon Learning Representations (ICLR), 2018.

[185] Guoqing Jin, Shiwei Shen, Dongming Zhang, Feng Dai, and YongdongZhang. APE-GAN: adversarial perturbation elimination with GAN.In ICASSP 2019-2019 IEEE International Conference on Acoustics,Speech and Signal Processing (ICASSP), pages 3842–3846. IEEE,2019.

[186] Dongyu Meng and Hao Chen. Magnet: a two-pronged defense againstadversarial examples. In Proceedings of the 2017 ACM SIGSACConference on Computer and Communications Security, pages 135–147. ACM, 2017.

[187] Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Anan-thram Swami. Distillation as a defense to adversarial perturbationsagainst deep neural networks. In 2016 IEEE Symposium on Securityand Privacy (SP), pages 582–597. IEEE, 2016.

[188] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowl-edge in a neural network. Published in Deep Learning Workshop,Advances in Neural Information Processing Systems (NIPS), 2014.

[189] Nicholas Carlini, Guy Katz, Clark Barrett, and David L Dill. Ground-truth adversarial examples. arXiv preprint arXiv:1709.10207, 2017.

[190] Andrew Slavin Ross and Finale Doshi-Velez. Improving the adversarialrobustness and interpretability of deep neural networks by regularizingtheir input gradients. In Thirty-Second AAAI Conference on ArtificialIntelligence, 2018.

[191] Chunchuan Lyu, Kaizhu Huang, and Hai-Ning Liang. A unifiedgradient regularization family for adversarial examples. In 2015 IEEEInternational Conference on Data Mining, pages 301–309. IEEE, 2015.

[192] John Bradshaw, Alexander G de G Matthews, and Zoubin Ghahra-mani. Adversarial examples, uncertainty, and transfer testing ro-bustness in Gaussian process hybrid deep networks. arXiv preprintarXiv:1707.02476, 2017.

[193] L Schott, J Rauber, M Bethge, and W Brendel. Towards the firstadversarially robust neural network model on mnist. In SeventhInternational Conference on Learning Representations (ICLR 2019),pages 1–17, 2019.

[194] Guanhong Tao, Shiqing Ma, Yingqi Liu, and Xiangyu Zhang. Attacksmeet interpretability: Attribute-steered detection of adversarial samples.In Advances in Neural Information Processing Systems, pages 7728–7739, 2018.

[195] Nicholas Carlini. Is AmI (attacks meet interpretability) robust toadversarial examples? arXiv preprint arXiv:1902.02322, 2019.

[196] Linh Nguyen, Sky Wang, and Arunesh Sinha. A learning and maskingapproach to secure learning. In International Conference on Decisionand Game Theory for Security, pages 453–464. Springer, 2018.

[197] Jiajun Lu, Theerasit Issaranon, and David Forsyth. Safetynet: Detectingand rejecting adversarial examples robustly. In Proceedings of the IEEEInternational Conference on Computer Vision, pages 446–454, 2017.

[198] Divya Gopinath, Guy Katz, Corina S Pasareanu, and Clark Barrett.Deepsafe: A data-driven approach for checking adversarial robustnessin neural networks. arXiv preprint arXiv:1710.00486, 2017.

[199] Jan Hendrik Metzen, Tim Genewein, Volker Fischer, and BastianBischoff. On detecting adversarial perturbations. International Con-ference on Learning Representations (ICLR), 2017.


[200] Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. Towards proving the adversarial robustness of deep neural networks. Formal Verification of Autonomous Vehicles (FVAV) Workshop, 2017.

[201] Florian Tramer, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In International Conference on Learning Representations (ICLR), 2018.

[202] Gokula Krishnan Santhanam and Paulina Grnarova. Defending against adversarial attacks by leveraging an entire GAN. arXiv preprint arXiv:1805.10652, 2018.

[203] Pouya Samangouei, Maya Kabkab, and Rama Chellappa. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. In International Conference on Learning Representations (ICLR), 2018.

[204] Aaditya Prakash, Nick Moran, Solomon Garber, Antonella DiLillo, and James Storer. Deflecting adversarial attacks with pixel deflection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8571–8580, 2018.

[205] Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. Mitigating adversarial effects through randomization. In International Conference on Learning Representations (ICLR), 2018.

[206] Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens van der Maaten. Countering adversarial images using input transformations. In International Conference on Learning Representations (ICLR), 2018.

[207] Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, and Jun Zhu. Defense against adversarial attacks using high-level representation guided denoiser. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1778–1787, 2018.

[208] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018.

[209] Jacob Buckman, Aurko Roy, Colin Raffel, and Ian Goodfellow. Thermometer encoding: One hot way to resist adversarial examples. In International Conference on Learning Representations (ICLR), 2018.

[210] Guneet S. Dhillon, Kamyar Azizzadenesheli, Jeremy D. Bernstein, Jean Kossaifi, Aran Khanna, Zachary C. Lipton, and Animashree Anandkumar. Stochastic activation pruning for robust adversarial defense. In International Conference on Learning Representations (ICLR), 2018.

[211] Francesco Croce, Maksym Andriushchenko, and Matthias Hein. Provable robustness of ReLU networks via maximization of linear regions. In Proceedings of Machine Learning Research, volume 89, pages 2057–2066. PMLR, 16–18 Apr 2019.

[212] Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, and Aleksander Madry. On evaluating adversarial robustness. arXiv preprint arXiv:1902.06705, 2019.

[213] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484, 2016.

[214] Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, et al. CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225, 2017.

[215] Monika Grewal, Muktabh Mayank Srivastava, Pulkit Kumar, and Srikrishna Varadarajan. RADnet: Radiologist level accuracy using deep learning for hemorrhage detection in CT scans. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pages 281–284. IEEE, 2018.

[216] Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: From phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016.

[217] Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, and Marta Kwiatkowska. Global robustness evaluation of deep neural networks with provable guarantees for l0 norm. arXiv preprint arXiv:1804.05805, 2018.

[218] Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, and Daniel Kroening. Concolic testing for deep neural networks. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, ASE 2018, pages 109–119, New York, NY, USA, 2018. ACM.

[219] Lei Ma, Felix Juefei-Xu, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li, Chunyang Chen, Ting Su, Li Li, Yang Liu, et al. DeepGauge: Multi-granularity testing criteria for deep learning systems. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 120–131. ACM, 2018.

[220] Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Formal security analysis of neural networks using symbolic intervals. In 27th USENIX Security Symposium (USENIX Security 18), pages 1599–1614, 2018.

[221] Augustus Odena, Catherine Olsson, David Andersen, and Ian Goodfellow. TensorFuzz: Debugging neural networks with coverage-guided fuzzing. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 4901–4911, Long Beach, California, USA, 09–15 Jun 2019. PMLR.

[222] Jianmin Guo, Yu Jiang, Yue Zhao, Quan Chen, and Jiaguang Sun. DLFuzz: Differential fuzzing testing of deep learning systems. In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 739–743. ACM, 2018.

[223] Yuchi Tian, Kexin Pei, Suman Jana, and Baishakhi Ray. DeepTest: Automated testing of deep-neural-network-driven autonomous cars. In Proceedings of the 40th International Conference on Software Engineering, ICSE '18, pages 303–314, New York, NY, USA, 2018. ACM.

[224] Mengshi Zhang, Yuqun Zhang, Lingming Zhang, Cong Liu, and Sarfraz Khurshid. DeepRoad: GAN-based metamorphic testing and input validation framework for autonomous driving systems. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 132–142. ACM, 2018.

[225] Zhi Quan Zhou and Liqun Sun. Metamorphic testing of driverless cars. Commun. ACM, 62(3):61–67, February 2019.

[226] Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. stat, 1050:11, 2018.

[227] Yasasa Abeysirigoonawardena, Florian Shkurti, and Gregory Dudek. Generating adversarial driving scenarios in high-fidelity simulators. International Conference on Robotics and Automation (ICRA), 2019.

[228] Hongyu Li, Hairong Wang, Luyang Liu, and Marco Gruteser. Automatic unusual driving event identification for dependable self-driving. In Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems, pages 15–27. ACM, 2018.

[229] Junyao Guo, Unmesh Kurup, and Mohak Shah. Is it safe to drive? An overview of factors, challenges, and datasets for driveability assessment in autonomous driving. arXiv preprint arXiv:1811.11277, 2018.

[230] Michal Uricar, David Hurych, Pavel Krizek, and Senthil Yogamani. Challenges in designing datasets and validation for autonomous driving. Accepted in 14th International Conference on Computer Vision Theory and Applications (VISAPP), 2019.

[231] Tommaso Dreossi, Somesh Jha, and Sanjit A Seshia. Semantic adversarial deep learning. In International Conference on Computer Aided Verification, pages 3–26. Springer, 2018.

[232] Hao Ye, Le Liang, Geoffrey Ye Li, JoonBeom Kim, Lu Lu, and May Wu. Machine learning for vehicular networks: Recent advances and application examples. IEEE Vehicular Technology Magazine, 13(2):94–101, 2018.

[233] Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4):211–407, 2014.

[234] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586–595, 2018.

[235] Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern. Predictive inequity in object detection. arXiv preprint arXiv:1902.11097, 2019.

[236] Daniel Lowd and Christopher Meek. Good word attacks on statistical spam filters. In CEAS, volume 2005, 2005.

[237] Nic Ford, Justin Gilmer, Nicolas Carlini, and Dogus Cubuk. Adversarial examples are a natural consequence of test error in noise. arXiv preprint arXiv:1901.10513, 2019.

[238] Ahmad EL Sallab, Mohammed Abdou, Etienne Perot, and Senthil Yogamani. Deep reinforcement learning framework for autonomous driving. Electronic Imaging, 2017(19):70–76, 2017.

[239] Vahid Behzadan and Arslan Munir. Vulnerability of deep reinforcement learning to policy induction attacks. In International Conference on Machine Learning and Data Mining in Pattern Recognition, pages 262–275. Springer, 2017.