
Behaviour-based anomaly detection of cyber-physical attacks on a …gala.gre.ac.uk/15819/7/15819 BEZEMSKIJ_Behaviour-based... · 2016. 9. 30. · STK300 Kanda board powered by Atmel



Behaviour-based anomaly detection of cyber-physical attacks on a robotic vehicle

Anatolij Bezemskij, George Loukas, Richard J. Anthony, Diane Gan
Department of Computing & Information Systems

University of Greenwich
London, United Kingdom

Email: {a.bezemskij, g.loukas, r.j.anthony, d.gan}@gre.ac.uk

Abstract—Security is one of the key challenges in cyber-physical systems, because by their nature, any cyber attack against them can have physical repercussions. This is a critical issue for autonomous vehicles; if compromised in terms of their communications or computation, they can cause considerable physical damage due to their mobility. Our aim here is to facilitate the automatic detection of cyber attacks on a robotic vehicle. For this purpose, we have developed a detection mechanism, which monitors real-time data from a large number of sources onboard the vehicle, including its sensors, networks and processing. Following a learning phase, where the vehicle is trained in a non-attack state on what values are considered normal, it is then subjected to a series of different cyber-physical and physical-cyber attacks. We approach the problem as a binary classification problem of whether the robot is able to self-detect when and whether it is under attack. Our experimental results show that the approach is promising for most attacks that the vehicle is subjected to. We further improve its performance by using weights that accentuate the anomalies that are less common, thus improving the overall performance of the detection mechanism for unknown attacks.

1. Introduction

Vehicular cyber security has traditionally focused on passive attacks and especially on protecting the confidentiality of communications between vehicles, or between vehicles and smart infrastructures. However, over the last few years, autonomous vehicles have become a routine target for experimental cyber attacks, as demonstrated as early as 2009 by the University of Washington [1], [2] and in numerous Black Hat conferences since then. As a result, there is a need for protection systems appropriate for active attacks against an autonomous vehicle's integrity or availability, and the corresponding impact on its actuation. Assuming that some attacks do get through regardless of the preventive measures, one needs to equip a vehicle with a mechanism to detect when this happens and potentially alert an operator or trigger some automated countermeasure. The focus of this work is on the real-time detection of the existence of an attack against a robot. We address both cyber-physical attacks, which are security breaches in cyber space that have an adverse effect in physical space, and physical-cyber attacks, which are the reverse [3]. For this, we have developed an autonomous robotic vehicle with a variety of sensor and communication technologies typically found in the industry.

To ensure that any solutions developed are highly practical, we have set the following requirements:

• Detection should be real-time, so as to be able to support rapid and effective countermeasures.

• Detection should be carried out by the vehicle itself, so as to be applicable to autonomous vehicles with limited or no communication with their human operators.

• Detection should not rely on the availability of knowledge of previous attacks, so as to be applicable to unknown attacks too.

We do not rely on attacks on cyber-physical systems being frequent enough to allow for the gathering of a realistic body of knowledge on their impact. To meet these requirements, detection should be behaviour-based rather than knowledge-based.

To address the above requirements, we have produced an onboard mechanism that monitors data related to cyber (communication and computation) and physical (actuation and sensing) features of the robot in real-time. During the training phase, the robot learns the normal range for the values of each feature monitored. In actual operation, it tracks the cyber and physical features that are in an abnormal state (beyond their learnt range) and accordingly reasons on whether the vehicle is in an attack state or not. The overall emphasis of the mechanism towards more tolerance for false positives or more tolerance for false negatives is configured by a sensitivity index, which determines the length of the normal range considered by the robot. We further improve on the detection accuracy achieved with this approach by also utilising individual weights for each feature, which are fine-tuned in a dedicated configuration phase.

1.1. Related Work

While very mature for conventional computer systems, the field of intrusion detection is relatively new in the area of cyber-physical systems, such as vehicles and mobile robots. A relatively common approach is to use a human expert to first specify the safe and unsafe states of the vehicle and determine a large number of rules that cover all potential states, in what is known as behaviour-specification intrusion detection [4], [5]. Rules can also be determined through a more automated learning phase without the involvement of a human expert: the vehicle is subjected to a series of different attacks, observing their impact and training a machine learning system to recognise these. Examples of such supervised learning approaches for the detection of attacks against robotic vehicles can be found in [6], [7], [8], where the rules are formed by a decision tree, which takes into account both cyber and physical features. Real-time capture of an attack's physical impact, such as vibration of the chassis due to repetitively entering and exiting safe mode during a denial of service attack, has been shown to improve detection accuracy and latency.

When a vehicle does not operate in isolation, but belongs to a team of vehicles, which can make similar observations about their environment and each other, intrusion detection can be based on the identification of misbehaviour of one of the members of the team. There, reputation-based approaches [9] and voting algorithms [10] can prove very useful. For instance, if one vehicle veers off the pre-defined route or reports very different sensor data, this can be considered as an indication that it may have been compromised.

Most of the research presented above makes assumptions that are largely unrealistic in the operational environment of a cyber-physical system such as a robotic vehicle (whether autonomous or not). Assuming that a new attack will look like one that has been seen before is reasonable for conventional computer networks, where millions of variations of the same attacks can be seen in the same year. For cyber-physical systems, this is less so, because attacks are less common and have a very different impact depending on the type of system targeted. As a result, knowledge-based approaches, where the vehicle is trained to detect specific attacks, perform poorly when they encounter new types of attack. At the same time, assuming that a robotic vehicle will belong to a team, where group observation can help spot signs of cyber compromise, can be unrealistic in many operational environments.

Researchers have experimented with methods for detecting anomalies, but usually only for a particular aspect of a vehicle's operation. An example for aircraft is the detection of false automatic dependent surveillance-broadcast (ADS-B) messages, used by aircraft to broadcast their position to other aircraft and to air traffic control. Strohmeier et al. [11] achieve detection by monitoring statistics regarding the received signal strength (RSS), as it is assumed that false signals would be coming from the ground and thus would have different RSS than signals coming from aircraft. A similar logic can be followed to protect autonomous vehicles that rely on GPS signals, as, coming from satellites, legitimate GPS signals are naturally much weaker than spoofed signals that would come from a terrestrial source [12].

A first attempt to provide completely sensor-agnostic and onboard intrusion detection that is applicable to unknown threats and takes into account both cyber and physical sources has been made in [13]. Here, we extend this work considerably by providing a method to quantify the degree to which a vehicle is likely to be under attack without relying on a learning phase, and further improve it with a mechanism that assigns weights to the different data sources. We validate this approach with real-world experiments involving a variety of normal and attack conditions.

2. Robotic Testbed System Design

Our testbed is a highly modular robotic vehicle developed from the ground up for the purposes of this research (Figure 3). It contains a large variety of sensors, actuators and communication channels widely used in the industry. The latter include CAN, RS-485, WiFi and ZigBee.

Figure 1. High-level communication diagram

The various sub-systems are integrated such that system components produce signals and feedback used by other system components to change overall system behaviour. Several components that are mentioned in Table 1 produce instrumentation data which is used as cyber or physical domain indicators. The combination of such indicators can produce additional meta-data that can be used to identify a particular behaviour of the system. All indicators that are used are generalised and treated as a data source; for example, the compass bearing signal output represents the orientation of the vehicle in degrees, but in an autonomous system it is treated as a stream of numerical values without context or units. Processing is distributed across the various embedded processors on the testbed platform (Table 1). Overall, the system contains six processing nodes, five of which are AVR-CAN development boards clocked at 16 MHz, and one STK300 Kanda board powered by an Atmel ATMega1281 chip clocked at 8 MHz.
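For illustration, this generic, context-free treatment of indicators can be sketched as follows. This is a minimal sketch in Python, not the testbed's actual embedded software; the class and field names are our own:

```python
# Hypothetical sketch: every onboard indicator (compass bearing, packet rate,
# heat-sink temperature, ...) is wrapped as a generic, unitless stream of
# numbers, so the reasoning layers can treat all data sources identically.
class DataSource:
    def __init__(self, name):
        self.name = name      # label only; carries no units or physical context
        self.samples = []

    def push(self, value):
        """Record one raw numeric reading."""
        self.samples.append(float(value))

    def latest(self):
        return self.samples[-1]

# The compass outputs degrees, but the system sees only plain numbers:
bearing = DataSource("compass_bearing")
for reading in (359.0, 1.5, 2.0):
    bearing.push(reading)
```

Because no units or semantics are attached, the same anomaly-detection code can later consume the compass stream and, say, a network packet-rate stream interchangeably.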

The vehicle is able to undertake a variety of autonomic tasks, such as navigation based on the logical mission layer that represents a sequence of steps given to the testbed. Sensors allow the vehicle to navigate autonomously in an environment, using the compass bearing to keep track of the direction, ultrasonic rangers for collision detection and avoidance, and pitch and roll sensors to make direction corrections and inform the system of environment volatility. There is also a sensor that measures the temperature of the heat sink connected to the on-board voltage regulators which supply power for the camera and robotic arm. In this way, the system is able to determine if these heavy-current-drawing system components are in use. These sensors and additional meta-data extraction allow automatic characterisation of the real-time behavioural profile of the vehicle whilst in operation.

TABLE 1. EQUIPMENT INSTALLED IN THE ROBOTIC VEHICLE TESTBED

Feature              Purpose
CAN bus              Internal communication
ZigBee               External communication
WiFi                 Media streaming
Compass Bearing      Navigation correction
DC Motors            Movement
Ultrasonic Rangers   Collision avoidance

To gather the data for off-line analysis, we use an external workstation. Sensor data from the vehicle is collected and stored in a knowledge base. Communication between the workstation and the vehicle is achieved using a dedicated ZigBee network. The ZigBee connection also enables us to transmit commands to the testbed (e.g. to initiate missions). The camera is a self-contained unit; its audio and video feeds are streamed using a standard WiFi protocol. An overview of the high-level communication architecture between the workstation and the robotic testbed vehicle can be seen in Figure 1.

The robotic vehicle is capable of accepting both simple remote commands, in terms of navigation, camera streaming and the operation of the robotic arm that is attached to it, and complex missions uploaded to it. For security purposes, commands received are executed only if the sender is within a list of authorised ZigBee nodes and the command is in the correct format. The robotic vehicle testbed does not send any commands to any external nodes within the ZigBee network. The testbed will only periodically report its instrumentation data to a verified connected workstation. The instrumentation report periodicity is 1 s, due to the low-bandwidth ZigBee protocol and unique ZigBee ZE10 module behaviour. Therefore, higher-rate sample aggregation is performed on-platform on the sensor hosting nodes.
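As a rough illustration of this acceptance rule, the check can be sketched as below. The node address and the command format are invented for the example and are not taken from the testbed:

```python
# Hypothetical sketch of the testbed's command acceptance policy: execute a
# command only if the sender is an authorised ZigBee node AND the payload
# parses in the expected format. Address and format are assumptions.
AUTHORISED_NODES = {"0x0013A200AB"}   # example whitelist of ZigBee addresses

def accept_command(sender, payload):
    if sender not in AUTHORISED_NODES:
        return False                   # unknown sender: silently reject
    # Assumed payload format: "CMD:<verb>:<arg>", e.g. "CMD:MOVE:FWD"
    parts = payload.split(":")
    return len(parts) == 3 and parts[0] == "CMD"

ok = accept_command("0x0013A200AB", "CMD:MOVE:FWD")
```

Note that, as the rogue node experiment in Section 5.2 shows, such a whitelist alone does not stop replayed traffic from an already-integrated node, which is why behavioural detection is still needed.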

For communication between system components, the testbed uses a CAN bus. This bus is used to share overall sensor data from data sources, including additional meta-data extracted during data analysis by the processing nodes. The internal communication architecture is shown in Figure 2. This data is retransmitted to other nodes through gateways and is collected at the reporting node, which transmits data to the workstation when appropriate.

The software structure of the robotic vehicle testbed uses a layered architecture, which separates the different levels of reasoning from the lowest physical sensor level, represented by individual embedded nodes performing analog-to-digital conversions that interpret signals into an understandable software representation.

Figure 2. Internal Communication: gateways connect different subsystems

The next-higher level is the classification layer, where data is analysed using statistical analysis approaches, such as exponential smoothing to determine the trends in the data. A level higher, we have an autonomic module controller layer, which controls actuating capabilities based on the data received from the lower layers of the model. The autonomic module controller layer is a set of autonomic controllers that carry out their defined tasks, such as robotic arm movement or navigational control. A mission layer then collects knowledge from the autonomic controllers and evaluates if the expected mission goal has been achieved. The layered software approach improves flexibility and maintainability in terms of software development for the robotic vehicle testbed, as all these layers are implemented as a set of libraries that can be extended further.
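The trend extraction performed by the classification layer can be sketched as simple exponential smoothing. The smoothing factor below is arbitrary; the parameters actually used on the testbed are not specified here:

```python
# Illustrative sketch of the classification layer's trend extraction:
# simple exponential smoothing, s_t = alpha * x_t + (1 - alpha) * s_{t-1}.
def exponential_smoothing(samples, alpha=0.3):
    smoothed = [samples[0]]            # seed with the first raw reading
    for x in samples[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# A single spike at 30.0 is damped in the smoothed trend:
trend = exponential_smoothing([10.0, 10.0, 30.0, 10.0])
```

The smoothed series reacts to, but does not fully follow, transient spikes, which is what makes it useful for separating genuine trends from background noise.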

Figure 3. Robotic vehicle testbed

3. Experimental Environment

The training and testing phases of the experiments have been conducted in the Queen Mary Building at the University of Greenwich. The irregular surface of the stone flooring, with its dents and lumps (Figure 4), provides the desired stochasticity for the different data sources, as well as a challenging environment for the mobility of the vehicle. At the same time, it is a controlled environment where we can ensure repeatability of the experiments, without any foreign objects or weather modifying the parameters of each iteration. This also allows us to identify the behavioural profile of an environment based on the data source information. The corridor has a set of inset door openings on either side, which facilitate physical observation of the effects of different attacks and of the periodic behaviour of the sensors (especially the ultrasonic ones), as the vehicle passes by.

Figure 4. Experimental environment: an old corridor at the University of Greenwich, with an irregular surface of uneven stone flooring, dents and lumps.

The corridor is 28 m long and the distance from wall to wall is a constant 2 m. The experiments were repeated eight times to ensure that the collected data set is representative and can be used for the creation and evaluation of the behavioural profile. The behavioural profile is built using patterns of the variation and background noise in data sources; mainly, we are looking at the spikiness of the data variations and the variety of deviations. The experimental environment facilitates repeatability and contains static elements that can be used as guideline features during analysis of the gained data, but it also introduces significant stochastic elements, which are essential for understanding the normal levels of noise and variability in sensor signals.

The experimental scenario evaluated in this paper is a mission in which the robotic vehicle testbed has to reach the end of the corridor using its own sensing capabilities. The complexity of such a mission is not immediately obvious: the uneven flooring surface disrupts the direction of the vehicle, forcing it to continuously adapt the speed of its motors and its direction, and to ensure that it maintains a safe distance from the walls during operation. The scenario was chosen due to the structural uniqueness of the vehicle, and as such it exercises all sensor capabilities. The experiment is organised in two phases. The first is a training phase, where over several runs a learning data set is collected that allows us to create a "normal" behavioural profile. The second phase is to evaluate the recognition of this profile.

4. Methodology

4.1. Signature of normal behavioural profile

In [13], we have described how signatures are formed and can be used for anomaly or threat identification. This learning phase is shown as "L1" in Figure 5. It is based on an initial signature generation to establish the normal behaviour profile of the sensors on the system. The data from the sensors is transformed into a generic data source format that allows the system to reason about them identically. The learning phase forms a normal behaviour profile based on signature characteristics of each data source, and forms the normal behaviour variation that is used during the validation phase.

After the learning phase, dynamically detected values are compared with the learnt normal behavioural profile signatures. The term "anomaly" is used to denote that a signal characteristic has been measured to be outside its expected normal range. The signature is formed of 11 characteristics, which facilitate learning the normal value range limits, as shown in Table 2. Differences and deviations from the standard deviation are called spikes; these characteristics can also be seen in Table 2. The validation phase, shown as "V" in Figure 5, classifies behaviour based on an overall anomaly index represented by the number of anomalies in the system.

TABLE 2. SIGNATURE CHARACTERISTICS MONITORED IN REAL-TIME ONBOARD THE VEHICLE

Value Type              Characteristic
Raw                     Minimum
                        Maximum
Exponential Smoothing   Minimum
                        Maximum
                        Lowest Difference
                        Highest Difference
Deviation               Standard Deviation
Spike Areas             50% - 100%
                        100% - 150%
                        150% - 200%
                        Over 200%
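The 11 characteristics above can be sketched as follows. This is our reading of Table 2, not the testbed's actual code; in particular, the definitions of the difference characteristics and the spike-area bucketing (multiples of the standard deviation) are assumptions:

```python
import statistics

def build_signature(samples, alpha=0.3):
    """Sketch of an 11-characteristic signature for one data source."""
    # Exponentially smoothed series (see classification layer, Section 2).
    smoothed = [samples[0]]
    for x in samples[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    # Assumed: differences are smoothing residuals (smoothed minus raw).
    diffs = [s - r for r, s in zip(samples, smoothed)]
    sd = statistics.pstdev(samples)
    mean = statistics.fmean(samples)
    # Assumed: spike areas bucket |deviation from mean| as a fraction of sd.
    buckets = {"50-100%": 0, "100-150%": 0, "150-200%": 0, ">200%": 0}
    for x in samples:
        r = abs(x - mean) / sd if sd else 0.0
        if 0.5 <= r < 1.0:
            buckets["50-100%"] += 1
        elif 1.0 <= r < 1.5:
            buckets["100-150%"] += 1
        elif 1.5 <= r < 2.0:
            buckets["150-200%"] += 1
        elif r >= 2.0:
            buckets[">200%"] += 1
    return {
        "raw_min": min(samples), "raw_max": max(samples),
        "smooth_min": min(smoothed), "smooth_max": max(smoothed),
        "lowest_diff": min(diffs), "highest_diff": max(diffs),
        "std_dev": sd, "spikes": buckets,
    }

sig = build_signature([10.0, 11.0, 9.0, 10.0, 30.0, 10.0])
```

During the learning phase, the ranges of these characteristics over non-attack runs define "normal"; during validation, a characteristic falling outside its learnt range counts as one anomaly.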

Figure 5. Methodology work flow

Each deviation of a signature characteristic is counted to represent an outgoing level of threat from the data source. These deviations are summed to produce an anomaly index for the data source; this index represents the deviation level.

4.2. Anomaly Weighting and Indicator Confidence

To strengthen the detection performance, we have introduced an additional learning phase, shown as "L2" in Figure 5, which tunes the system by assigning weights to sources according to their likelihood of appearing anomalous in some normal scenarios too. The focus of this phase is to learn the number of individual signature characteristic anomalies that may be encountered in a non-attack condition, arising due to environmental noise.

Figure 6. Matrix of the anomalies identified. Each row corresponds to a data source and each column to a signal characteristic measured for each source. The colour coding indicates anomalies.

In Figure 6, we demonstrate how we summarise anomalies by taking the system data source signatures from five non-attack situations (only two are shown here as a demonstration). To reduce the importance of anomalies that tend to occur in a non-attack environment, we calculate the weight of each signature characteristic anomaly sample w(c_ij) in the following way: for a number of n scenarios S_ℓ, 1 ≤ ℓ ≤ n, we take the complement of the mean of each signature characteristic anomaly sample c_ij(ℓ), where i represents a data source and j represents a signature characteristic:

\[ w(c_{ij}) = 1 - \bar{c}_{ij} = 1 - \frac{1}{n}\sum_{\ell=1}^{n} c_{ij}(\ell) \]

which produces the weight of an anomaly sample for the signature characteristic. This allows the system to derive a more precise score, taking into account the anomalies that tend to be less indicative of an attack, as they persist in non-attack conditions.
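The formula above amounts to the following one-liner, shown here as a minimal sketch (the input is the anomaly occurrence of one (source, characteristic) pair across the n non-attack scenarios):

```python
# Weight of a signature characteristic anomaly: one minus its mean
# occurrence across n non-attack scenarios (w(c_ij) = 1 - mean of c_ij(l)).
def anomaly_weight(samples):
    """samples: c_ij(l) for l = 1..n, each in [0, 1]."""
    n = len(samples)
    return 1.0 - sum(samples) / n

# A characteristic anomalous in 3 of 4 normal runs gets a low weight:
w = anomaly_weight([1, 1, 1, 0])   # 1 - 3/4 = 0.25
```

A characteristic that is never anomalous in normal runs keeps the full weight of 1.0, so its anomalies count most strongly towards the attack score.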

The calculated value represents the weight of a signature characteristic. If the system learns that a particular signature characteristic has a high probability of anomaly occurrence in a non-attack mission scenario, then the importance of such an anomaly is reduced. This generates a lower anomaly index for the data source's signature. The sum of all weighted anomalies generates an overall anomaly index that is used as a reference in the intrusion detection mechanism. To improve the methodology further, we introduce a dynamic variable that acts as a controller of the "normality" threshold. The "normality" variation is formed during the "L2" phase. The overall anomaly index generated from the non-attack experiments is used as a mean reference and the dynamic variable controls the variation. This allows the detection mechanism to identify anomalous behaviour in two cases: when multiple anomalies are detected, generating a high overall anomaly index, as well as when anomalies are not detected, therefore generating a low overall anomaly index. An overview of the work flow of the intrusion detection mechanism can be seen in Figure 7.
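The two-sided check described above can be sketched as follows. The variable names, and the use of a symmetric band of half-width k around the non-attack mean, are our own reading of the mechanism:

```python
# Weighted overall anomaly index: sum of per-characteristic anomaly counts
# scaled by their learnt weights.
def overall_index(anomaly_counts, weights):
    return sum(c * w for c, w in zip(anomaly_counts, weights))

# Two-sided "normality" check: both unusually many and unusually few
# weighted anomalies (relative to the non-attack mean) are flagged.
# 'k' plays the role of the dynamic variable controlling the variation.
def is_anomalous(index, normal_mean, k):
    return abs(index - normal_mean) > k

idx = overall_index([2, 0, 5], [0.25, 1.0, 0.1])   # 0.5 + 0.0 + 0.5 = 1.0
```

The low-index case matters because a data source that suddenly stops exhibiting its usual background anomalies is itself a sign of abnormal behaviour.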

Figure 7. Intrusion detection mechanism

When the learnt weights scheme is applied to the detected characteristic anomalies in several attack and mechanical failure experiments, the anomalies that have higher weight are accentuated and the importance of anomalies that tend to occur during a non-attack mission scenario is reduced. This reduces the anomaly score of the system in a non-attack scenario and ensures the score increases when abnormal circumstances occur on characteristics that should not otherwise change during non-attack experiments.

Figure 8. Matrices of anomalies spotted for each of the incidents in the experiments (compass manipulation, rogue node, replay packet injection, wheel failure)


In Figure 8, we observe four different scenarios when the robotic vehicle testbed is in operation and is under cyber-physical attacks (replay packet injection and rogue node), a physical-cyber attack (compass manipulation), or experiencing an unexpected mechanical failure. The result in each cell is rounded to the nearest whole number for presentation purposes. The overall anomaly index is derived from the sum of all signature characteristic anomaly results when the weighting scheme is applied. This index is used by the intrusion detection system to reason about the behaviour of the robotic testbed vehicle at the level of an overall anomaly index observation, reducing the need to analyse each signature characteristic as they are updated, thereby reducing computational requirements.

5. Experimental scenarios

We have evaluated the performance of the proposed system using a prototype implementation on our autonomous vehicle, against different incidents, to which the robot had not previously been subjected.

5.1. Cyber-physical Attack C1: Replay Packet Injection

In this scenario, data previously collected are replayedwith different intensities (1, 2, or 3 packets/s).

Figure 9. Bearing Sensor Data (Packet Injection)

The attack lasts for 40 s and starts 40 s after the start of the robotic vehicle's mission. Its impact on the bearing sensor data is shown in Figure 9.

5.2. Cyber-physical Attack C2: Rogue Node

Here, a rogue node is integrated in the system and starts replaying packets within the internal communication network. The attack has variable intensities and uses amplification of the packet rate, whereby it replays each captured packet multiple times. Each attack is repeated twice using various amplifications. The experiment includes two iterations of 25 s of normal operation and 25 s of attack, as shown in Figure 10.

The particular attack achieves an impact similar in nature to that of a denial of service attack on a conventional computer system [14], causing a rapid increase in network utilisation.

Figure 10. Internal Network Packet Rate (Rogue Node)

5.3. Physical-cyber Attack P1: Compass Manipulation

In this experiment, a magnet is placed close to the compass sensor while it is in operation. After 40 s of normal operation, the magnet is applied. It is removed after a further 40 s.

Figure 11. Bearing Sensor Data (Compass Manipulation)

The impact on the reported bearing can be seen in Figure 11. Such a change dramatically affects the vehicle's orientation ability.

5.4. Normal Failure F1: Broken wheel

This experiment occurred unexpectedly: based on the learnt knowledge, there was a high number of false-positive alarms during an experiment. Further investigation showed that a mechanical fault had occurred, as the mecanum wheel holder screws had worked loose due to the vibrations caused by the previous experiments. These runs have been included in our analysis to explain the methodology and also to demonstrate the ability of this model not only to identify an attack, but also to identify mechanical faults. The fact that our signature method revealed this unexpected issue validates the sensor-agnostic approach.

6. Performance evaluation

Figures 8, 9, 10 and 11 show example experimental runs for different cyber and physical incidents and their impact on the cyber and physical data sources used by the detection mechanism. Green corresponds to a normal state with no noticeable impact from an attack. The colour shift to red indicates a higher anomaly occurrence detection in the set of attack scenarios, as well as during the accidental mechanical failure. For the evaluation of the detection performance, we have used receiver operating characteristic (ROC) curves and measured their area under the curve (AUC) (Figure 14). AUC values near 0.5 indicate random detection, while values near 1 indicate completely correct detection. Table 3 shows the AUC values measured in our experiments for the compass manipulation (CM), packet injection (PI) and rogue node (RN) attacks, for both the simple approach with the signature and the extended approach with the application of the learned weights. The latter appears to be particularly useful for the attack that is more challenging to detect, which is the rogue node. In that case, the AUC increased from 0.406 to 0.875.
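For reference, the AUC metric used here can be sketched via its rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen attack sample receives a higher anomaly score than a randomly chosen normal sample. This is a generic illustration, not the evaluation code used in the experiments:

```python
# AUC as the probability that an attack score outranks a normal score;
# ties contribute one half. Equivalent to the area under the ROC curve.
def roc_auc(attack_scores, normal_scores):
    wins = 0.0
    for a in attack_scores:
        for b in normal_scores:
            if a > b:
                wins += 1.0
            elif a == b:
                wins += 0.5
    return wins / (len(attack_scores) * len(normal_scores))

auc = roc_auc([5, 6, 7], [1, 2, 6])   # one tie, one miss among 9 pairs
```

An AUC of 0.5 corresponds to a detector no better than chance, and values below 0.5 (such as the unweighted 0.406 for the rogue node) mean the score actually ranks normal runs above attack runs.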

TABLE 3. OVERALL PERFORMANCE BASED ON ROC AREA UNDER CURVE FOR COMPASS MANIPULATION, PACKET INJECTION AND ROGUE NODE ATTACKS

                      CM     PI     RN
Signature + Weights   1.0    1.0    0.875
Signature             0.938  1.0    0.406

Detection accuracy for the compass manipulation attack is shown in Figure 12, demonstrating that the impact of this attack was high and affected multiple data sources, producing a high score that drastically exceeded the allowed threshold.

Figure 12. Compass Attack ROC Performance

The performance of the system for the packet injection attack yielded an AUC score of 1 (Figure 13). In this experiment the physical impact on the vehicle was low, as it was the external communication that was under attack, and the vehicle relied only on internal network communication data to detect it. The high score was achieved due to large deviations on multiple data sources, to such an extent that false-positive detections did not occur.

Figure 13. Packet Injection Attack ROC Performance

Using the sensor-agnostic approach, an issue can arise when deviations are detected in a data source without any corresponding impact on the robotic vehicle testbed, increasing the false-negative detection rate. Introducing weights into the detection technique improved the detection rate for the rogue node attack, as shown in Figure 14. This technique emphasises the anomalies that are not frequently seen in non-attack scenarios.
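The exact weighting formula is not restated here, but one plausible reading of "emphasising anomalies rarely seen in non-attack scenarios" is to weight each data source inversely to how often it produced false alarms during the non-attack training runs. A hypothetical sketch (the function names, data-source labels and counts are illustrative, not taken from the paper):

```python
def learn_weights(false_alarm_counts, training_runs):
    """Weight each data source by how rarely it looked anomalous during
    non-attack training runs: a source that never misfired gets weight 1.0,
    a source that misfired in every run gets weight 0.0."""
    return {src: 1.0 - count / training_runs
            for src, count in false_alarm_counts.items()}

def anomaly_index(anomaly_flags, weights):
    """Weighted sum of per-source anomaly flags (1 = outside learned range)."""
    return sum(weights[src] * flag for src, flag in anomaly_flags.items())

# A source that is noisy in normal operation ('wifi_rssi') contributes less
# to the index than a normally quiet one ('compass'), so rare anomalies
# dominate and infrequent false alarms stop masking real attacks.
w = learn_weights({"compass": 0, "wifi_rssi": 5}, training_runs=10)
print(anomaly_index({"compass": 1, "wifi_rssi": 1}, w))  # 1.5
```

Under this reading, the rogue node attack becomes detectable because its anomalies appear on sources that were quiet during training, even though noisier sources previously drowned them out.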

Figure 14. Rogue Node Attack ROC Performance

7. Conclusion

We have presented a sensor-agnostic methodology that demonstrably runs onboard a resource-constrained autonomous vehicle and detects, in real time, attacks to which it has not been exposed before. We achieve this by learning the normal ranges of a wide variety of cyber and physical data sources and applying weights to fine-tune their importance for detecting anomalous behavioural profiles. The approach has proven promising across the variety of different (unknown to the vehicle) attacks that we experimented with, including an unplanned physical failure.


Having shown the feasibility of the approach, our future work will include a wider variety of attack scenarios during missions with different types of autonomous actuation and navigation. We will focus especially on cases where the proposed methodology's detection rate is expected to drop. An example would be the packet injection attack: the current high detection rate is achieved because, in the experiments performed, deviations were high across multiple data sources. However, if an attacker were to replay an attack using data collected during the same ongoing mission, the deviations would be lower, which would reduce the overall anomaly index score, as only certain cyber features would be affected. Also, if one data source starts acting abnormally and affects other data sources, a situation could arise where they cancel each other out, producing fewer anomalies. Such scenarios will help us acquire a clearer picture of the limits of the approach in challenging conditions.

Acknowledgments

This research has been funded and supported by the Defence Science and Technology Laboratory. We thank Robert Sayers for his invaluable assistance and feedback.

References

[1] K. Koscher et al., “Experimental security analysis of a modern automobile,” in Security and Privacy (SP), 2010 IEEE Symposium on. IEEE, 2010, pp. 447–462.

[2] S. Checkoway, D. McCoy, B. Kantor, D. Anderson, H. Shacham, S. Savage, K. Koscher, A. Czeskis, F. Roesner, and T. Kohno, “Comprehensive experimental analyses of automotive attack surfaces,” in Usenix Security Symposium, 2011.

[3] G. Loukas, Cyber-Physical Attacks: A Growing Invisible Threat. Butterworth-Heinemann, 2015.

[4] R. Mitchell and R. Chen, “Adaptive intrusion detection of malicious unmanned air vehicles using behavior rule specifications,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 44, no. 5, pp. 593–604, 2014.

[5] R. Mitchell and I.-R. Chen, “Behavior rule specification-based intrusion detection for safety critical medical cyber physical systems,” IEEE Transactions on Dependable and Secure Computing, vol. 12, no. 1, pp. 16–30, 2015.

[6] T. Vuong, A. Filippoupolitis, G. Loukas, and D. Gan, “Physical indicators of cyber attacks against a rescue robot,” in IEEE International Conference on Pervasive Computing and Communications. IEEE, 2014, pp. 338–343.

[7] T. P. Vuong, G. Loukas, D. Gan, and A. Bezemskij, “Decision tree-based detection of denial of service and command injection attacks on robotic vehicles,” in Information Forensics and Security (WIFS), 2015 IEEE International Workshop on. IEEE, 2015, pp. 1–6.

[8] T. P. Vuong, G. Loukas, and D. Gan, “Performance evaluation of cyber-physical intrusion detection on a robotic vehicle,” in Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on. IEEE, 2015, pp. 2106–2113.

[9] A. Fagiolini, M. Pellinacci, G. Valenti, G. Dini, and A. Bicchi, “Consensus-based distributed intrusion detection for multi-robot systems,” in Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on. IEEE, 2008, pp. 120–127.

[10] R. Mitchell and I.-R. Chen, “Adaptive Intrusion Detection of Malicious Unmanned Air Vehicles Using Behavior Rule Specifications,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. PP, no. 99, p. 1, 2013.

[11] M. Strohmeier, V. Lenders, and I. Martinovic, “Intrusion detection for airborne communication using phy-layer information,” in 12th Conference on Detection of Intrusions and Malware & Vulnerability Assessment (DIMVA). Springer, 2015.

[12] J. Warner and R. Johnston, “GPS spoofing countermeasures,” Homeland Security Journal, vol. 25, no. 2, pp. 19–27, 2003.

[13] A. Bezemskij, R. J. Anthony, D. Gan, and G. Loukas, “Threat evaluation based on automatic sensor signal characterisation and anomaly detection,” in Proceedings of The Twelfth International Conference on Autonomic and Autonomous Systems. IARIA, 2016, pp. 1–7.

[14] G. Loukas and G. Oke, “Likelihood ratios and recurrent random neural networks in detection of denial of service attacks,” in Proceedings of International Symposium of Computer and Telecommunication Systems (SPECTS). SCS, 2007.