
Cyber Counterintelligence: Assets, Audiences, and the Rise

of Disinformation

Courteney J. O’Connor

July 2021

A thesis submitted for the degree of Doctor of Philosophy of

The Australian National University

© Copyright by Courteney J. O’Connor 2021

All Rights Reserved


Declaration

This thesis contains no material which has been accepted for the award of any other degree or

diploma in any university. To the best of my knowledge, it contains no material previously

published or written by another person, except where due reference is made in the text. My research

was partially supported by an Australian Government Research Training Program (RTP)

Scholarship.

Courteney J. O’Connor

July 2021


Acknowledgements

I extend my deepest thanks and gratitude to Associate Professor Adam Henschke, Associate

Professor Matthew Sussex, and Dr. Jennifer Hunt. Thank you for the advice, the humour, the

lessons, the patience, the understanding, the compassion, and the time you have all given me over

the last five years. I appreciate it more than I could ever truly articulate, and I am a better scholar

for having been the beneficiary of your guidance. Thanks also to Professor Paul Cornish and

Professor Roger Bradbury – your advice and expertise have been invaluable tools and I truly

appreciate your guidance.

My family have been a source of constant strength and support throughout this project as they

have been for every other part of my life. To my mothers – thank you. You are my inspiration and

I hope I have done you proud. Micayla, Tyler, Cambell – thank you for being truly excellent and

frequently hilarious siblings.

Dr. Megan Poore and Mrs. Tracy McCrae – you are the pillars upon which the Crawford PhD

program rests. Thank you for everything.

Jessie, John, Ana, Mel, the ladies of BC – thank you for all keeping me laughing and always being

there to listen, to cheerlead, and even occasionally to proofread. I appreciate you all beyond words.

I undertook this project because I had questions I wanted answered. After five years of research

and analysis, I can safely say I have more questions than I started with and more answers than I

ever thought I would find. Thank you to everyone who listened, who offered advice, who pointed

me toward helpful material, who spotted a junior academic and took the time to help out. Thank

you to everyone who taught me how to find the answers and accept that the good ones always

open up new questions.


Abstract

In April 2021, Facebook suffered yet another data breach that affected hundreds of millions of

accounts. The private information of over 500 million people had been stolen by hackers: names,

phone numbers, email addresses, locations and more. The cache is potentially valuable to a host of

malicious actors, from criminals motivated by financial gain to hostile foreign actors microtargeting

voters through information operations. It follows an evolution of threats in cyberspace targeting

government agencies, utilities, businesses, and electoral systems. With a focus on state-based actors,

this thesis considers how state threat perception of cyberspace has developed, and whether that

perception is influencing the evolution of cyber counterintelligence (CCI) as a response to cyber-

enabled threats such as disinformation. Specifically, this thesis traces the threat elevation of

cyberspace through the evolution of the published national security documentation of the United

Kingdom, asking how threat elevation corresponds to the development of CCI, if at all, and what

sort of responses these processes generate to combat the rising threat of disinformation campaigns

conducted against liberal democracies.

Democratic audiences are a target of influence and information operations, and, as such, are an

intelligence and security vulnerability for the state. More so than in previous decades, to increase

national resilience and security in cyberspace, the individual, as part of the democratic audience, is

required to contribute to personal counterintelligence and security practices. This research shows

that while assets and infrastructure have undergone successful threat elevation processes,

democratic audiences have been insufficiently recognised as security vulnerabilities and are

susceptible to cyber-enabled disinformation.


“It ain’t what you don’t know that gets you into trouble. It’s

what you know for sure that just ain’t so.”

Mark Twain.


Table of Contents

Acknowledgements
Abstract
Figures and Tables
Acronyms
1 - Introduction
    Approach and Research Design
    Chapter Outline
2 - Contextualising Cyber Counterintelligence: the Need for Threat Elevation Analysis
    Securitisation
    Counterintelligence
    Disinformation
    Conclusion
3 - Threat Elevation Analysis
    Securitisation as Threat Elevation Theory
    The Threat Elevation of Cyberspace
    Threat Elevation of Cyberspace
    Conclusion: Riskified but Not Securitised
4 - The United Kingdom: Tracing the Threat Perception of Cyberspace
    Document Selection and Justification
    Analysis: National Security Strategies and Strategic Defence and Security Reviews
    Analysis: Cyber Security Strategies
    Case Summary
    Conclusion
5 - Counterintelligence and Cyberspace
    Cyber Counterintelligence
    A Typology of Cyber Counterintelligence
    Conclusion
6 - CCI in an Age of Disinformation: Assets vs. Audiences
    Disinformation
    Elevating the Audience
    Elevation and Audiences – COVID-19 Disinformation
    Recognition and Resilience: Successful Threat Elevation in Sweden
    Conclusion: Elevating the Threat of Disinformation
7 - Conclusion
Bibliography


Figures and Tables

Figure 1 – The Elements of Cyber Security
Figure 2 – The Elements of Securitisation Theory
Figure 3 – The Sectors of Security
Figure 4 – Photo: Robert Clark, Associated Press, 2001
Figure 5 – Securitisation and the orders of concern
Table 1 – Passive, Active, Proactive Intelligence Collection
Table 2 – Passive, Active, Proactive Cyber Counterintelligence
Figure 6 – Twitter flags potential disinformation


Acronyms

5G Fifth-Generation

9/11 September 11, 2001 World Trade Center terrorist attacks

ABC American Broadcasting Company

ACD Active Cyber Defence

ACSC Australian Cyber Security Centre

APT Advanced Persistent Threat

ASPI Australian Strategic Policy Institute

ATM Automatic Teller Machine

CBRN Chemical, Biological, Radiological, Nuclear

CCDCOE Cooperative Cyber Defence Centre of Excellence

CCI Cyber Counterintelligence

CCTV Closed-Circuit Television

CEO Chief Executive Officer

CERT Computer Emergency Response Team

CIA Central Intelligence Agency (United States)

CIR Cyber Incident Response

CNN Cable News Network

COPRI Copenhagen Peace Research Institute

COVID-19 Coronavirus Disease of 2019 (virus SARS-CoV2)

CRG Corporate Response Group

CSOC Cyber Security Operations Centre (United Kingdom)

CSS Cyber Security Strategy

CST Copenhagen Securitisation Theory

DCOG Defence Cyber Operations Group

DDoS Distributed Denial of Service

DNC Democratic National Committee

EU European Union

FBI Federal Bureau of Investigation (United States)

FOI Totalförsvarets Forskningsinstitut (Defence Research Agency, Sweden)

GCHQ Government Communications Headquarters (United Kingdom)

GDP Gross Domestic Product

GFC Global Financial Crisis


GGE Group of Governmental Experts (United Nations)

GPS Global Positioning System

HIV/AIDS Human Immunodeficiency Virus/Acquired Immunodeficiency Syndrome

HUMINT Human Intelligence

IDP Internally Displaced Persons

IoT Internet of Things

IP Intellectual Property

IRA Internet Research Agency

ISIL Islamic State of Iraq and the Levant (Alternate name: Daesh. See also: ISIS)

ISIS Islamic State of Iraq and Syria (Alternate name: Daesh. See also: ISIL)

ISP Internet Service Provider

IT Information Technology

KGB Komitet Gosudarstvennoy Bezopasnosti (Committee for State Security, Soviet Union)

LAN Local Area Network

MAD Mutually Assured Destruction

MI5 Military Intelligence Section 5/Security Service (United Kingdom)

MOD Ministry of Defence (United Kingdom)

MSB Myndigheten för Samhällsskydd och Beredskap (Civil Contingencies Agency, Sweden)

MSF Médecins Sans Frontières (Doctors Without Borders)

NATO North Atlantic Treaty Organisation

NCG Non-affiliated Citizen Group

NGO Non-Governmental Organisation

NSA National Security Agency (United States)

NSCR National Security Capability Review

NSS National Security Strategy

NZ New Zealand

OCEAN Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism

OCS Office of Cyber Security (United Kingdom)

OECD Organisation for Economic Cooperation and Development


P5 Permanent Five (Members of the UNSC)

PII Personally Identifiable Information

PIN Personal Identification Number

PLA People’s Liberation Army (China)

PRC People’s Republic of China

RSCT Regional Security Complex Theory

SCADA Supervisory Control and Data Acquisition (System)

SDSR Strategic Defence and Security Review

SIGINT Signals Intelligence

SIS Secret Intelligence Service (United Kingdom)

SOC Serious Organised Crime

SSAG State-Sponsored or -Affiliated Group

SSCI Senate Select Committee on Intelligence (United States)

UAV Unmanned Aerial Vehicle

UDHR Universal Declaration of Human Rights

UK United Kingdom

UN United Nations

UNIDIR United Nations Institute for Disarmament Research

UNSC United Nations Security Council

US United States of America

USB Universal Serial Bus

USSR Union of Soviet Socialist Republics (Alternate name: Soviet Union)

VPN Virtual Private Network

WHO World Health Organisation

WMD Weapons of Mass Destruction


1 - Introduction

In April 2021, the news broke that Facebook had suffered a data breach that affected hundreds of

millions of accounts. The private, personally identifiable information of over 500 million people

had been stolen by hackers: names, phone numbers, email addresses, and more.1 Over the last five

years, multiple elections in Western democratic states have been affected by organised

disinformation campaigns targeted at the voting populations of those states.2 The two problems of

cyber security and disinformation are conceptually related. Without adequate cyber security

measures in place, our information is vulnerable to malicious actors. That information can be used

for the targeting of information operations. Disinformation, if communicated efficiently and

effectively, has the potential to undermine the integrity of not only democratic elections but the

concept of democracy itself. If the voting public of a state is misled and makes their decisions based

on false information, the integrity of that election and the resulting government has been degraded.

Though the Facebook hack is not the first data breach of its kind,3 the attackers were able to retrieve

an immense volume of data that could easily be used for targeting in information operations.

These two challenges are emblematic of both the need for, and the relative insufficiency of, cyber

counterintelligence (CCI) measures and processes. They are also evidence of the fact that while

1 Paul Haskell-Dowland, “Facebook Data Breach: What Happened and Why It’s Hard to Know If Your Data Was Leaked,” The Conversation, April 6, 2021, http://theconversation.com/facebook-data-breach-what-happened-and-why-its-hard-to-know-if-your-data-was-leaked-158417; Aaron Holmes, “533 Million Facebook Users’ Phone Numbers and Personal Data Have Been Leaked Online,” Business Insider Australia, April 3, 2021, https://www.businessinsider.com.au/stolen-data-of-533-million-facebook-users-leaked-online-2021-4; Lily Hay Newman, “What Really Caused Facebook’s 500M-User Data Leak?,” Wired, April 6, 2021, https://www.wired.com/story/facebook-data-leak-500-million-users-phone-numbers/.
2 Saskia Brechenmacher, “Comparing Democratic Distress in the United States and Europe” (Washington, DC: Carnegie Endowment for International Peace, 2018); Office of the Director of National Intelligence, “Assessing Russian Activities and Intentions in Recent US Elections” (National Intelligence Council, January 6, 2017), https://www.dni.gov/files/documents/ICA_2017_01.pdf.
3 Facebook alone has suffered at least 9 massive data breaches in the period spanning 2005-2021, with victim counts ranging from 70,000 to more than 600 million. See Riddhi Jain, “A Complete Facebook Data Breach & Privacy Leak Timeline (2005 to 2021),” iTMunch, April 14, 2021, https://itmunch.com/facebook-data-breach-timeline-2005-2021/; SelfKey, “Facebook’s Data Breaches - A Timeline,” SelfKey (blog), March 11, 2020, https://selfkey.org/facebooks-data-breaches-a-timeline/. Similar breaches with massive amounts of data lost include those of eBay, Equifax, JP Morgan, and Sony. See BBC News, “Sony Pays up to $8m over Employees’ Hacked Data,” BBC News, October 21, 2015, sec. Business, https://www.bbc.com/news/business-34589710; Josh Fruhlinger, “Equifax Data Breach FAQ: What Happened, Who Was Affected, What Was the Impact?,” CSO Online, February 12, 2020, https://www.csoonline.com/article/3444488/equifax-data-breach-faq-what-happened-who-was-affected-what-was-the-impact.html; Steve Ragan, “Raising Awareness Quickly: The EBay Data Breach,” CSO Online, May 21, 2014, https://www.csoonline.com/article/2157782/security-awareness-raising-awareness-quickly-the-ebay-database-compromise.html; Dominic Rushe, “JP Morgan Chase Reveals Massive Data Breach Affecting 76m Households,” the Guardian, October 3, 2014, http://www.theguardian.com/business/2014/oct/02/jp-morgan-76m-households-affected-data-breach.


traditional intelligence operations might be the sole remit of the sovereign state, in the cyber era it

also falls to corporations and individuals to contribute to both their own cyber security and to the

aggregate security of the state and the international networks of which they are a part.

To contextualise this thesis going forward, it is important to delineate CCI from cyber security.

Where cyber security refers to the state of, and methods of assuring, the overall security of

cyberspace or an entity operating therein, cyber counterintelligence concerns the specific practices

intended to obviate attempts at intelligence collection (espionage) or exploitation that then enable

disruption events.4

According to this understanding, CCI feeds into overall cyber security (see Figure 1, The Elements of Cyber Security). For modern states to develop cyber resilience and make positive gains toward a secure

cyberspace, it is crucial that the understanding and practice of CCI evolve at both the state and the

individual levels. Given this requirement, this research offers the framework of ‘threat elevation

analysis’ and uses that framework to build on previous research in the fields of CCI, and

disinformation as a threat to democratic integrity. Disinformation is understood in this thesis as

false information that is introduced into a target political information ecology in order to instigate

behavioural or political change and will be considered at length in chapter six. This thesis analyses

the threat elevation of cyberspace using a state actor case study, drawing links between cyber

counterintelligence as it contributes to cyber security and the necessity of cyber counterintelligence

practices by states and substate actors to counter disinformation operations.

4 Disruption events may range from the irritating but largely defendable (Distributed Denial of Service (DDoS) attacks against websites), to the destructive use of cyber operations to degrade or destroy services such as telecommunications.


Cyberspace has become one of the fundamental pillars of modern life. We use it in every sector of

our lives: for communication, to conduct our finances and commercial transactions, and to execute

our civic responsibilities. With time, more and more people are benefiting from the efficiency of

cyberspace and the multiplicity of platforms that it supports. However, with this profusion of

access to cyberspace comes an increase in the dangers posed to ordinary life from cyber-enabled

malicious actors and their level of access to both foreign populations and foreign states. Accepting

that cyberspace is a threat vector, it is the responsibility of the state to secure cyberspace insofar as

is possible for the ongoing safety and wellbeing of its citizens and the security of the state itself.

However, the number of actors in cyberspace and the potential for threat that these actors

represent increase the surface area that states must protect. To contribute to the resilience of

cyberspace technologies as well as the individuals that use those technologies, states need to design

risk management and threat mitigation measures that also promote greater cyber security.

However, this is not the sole remit of the state. To increase national resilience and security in

cyberspace, the individual, as part of the democratic audience, is required to contribute by engaging

in personal CCI and security practices that increase the aggregate resilience and security of the state.

But how can states, and individuals, best develop their understanding of the risks and threats of

cyberspace in order to best engage with CCI? How can we trace the elevation of that threat

perception, and what have the effects of that elevation been on the development and utilisation of

CCI?

Figure 1 – The Elements of Cyber Security (diagram showing Cyber Security and its elements: Cyber Intelligence, Cyber Counterintelligence, Cyber Offence, and Cyber Defence)


Threat elevation analysis as offered in chapter three identifies the process according to which risks

and threats are perceived and then elevated to a higher stage of concern. The status of an identified

issue – concern, risk, threat – indicates the level of management or mitigation that the issue in

question should receive. This thesis critically evaluates the development of the threat elevation and

perception of cyberspace in the United Kingdom (UK) and whether there has been a concurrent

(unclassified, observable) development of CCI. More specifically, this thesis will engage in process-

tracing to analyse the evolution of cyberspace as a perceived threat in the UK. It is expected that

this examination will highlight any published (unclassified) admission or definition of CCI as

understood or practiced by the UK. “In process-tracing, the researcher examines histories, archival

documents, interview transcripts, and other sources to see whether the causal process a theory

hypothesises or implies in a case is in fact evident in the sequence and values of the intervening

variables in that case...”5 In other words, when we engage in process-tracing, we analyse the

trajectory of change.6 While the generalisability of conclusions drawn from a single case study must be highly qualified and thoroughly examined for applicability within context, in this instance tracing how the perceived threat level of cyberspace developed (or did not develop) in the UK can show policymakers considering the same problems how this has been done

previously. The observed results of that process can help inform the policymakers’ decisions about

how or whether a similar process can be undertaken in their state, and may provide suggestions or

warnings for that future development and implementation.

Approach and Research Design

The purpose of this thesis is threefold: first, to develop an understanding of how identified

concerns are elevated by legitimate actors through riskification and securitisation processes, and

the constituent elements of these elevating processes. The second is to build upon previous

research on the definition and typology of CCI. Has the development of CCI been affected by the

evolving perceptions of the vulnerabilities of cyberspace? Who uses (and who should use) CCI,

and are there differences in the context or uses of CCI? In considering the operational

understanding of a relatively novel field like CCI, this thesis will discuss the tactical and strategic

understandings of, and uses for, CCI in line with the assumptions and consequences of the threat

elevation analysis. The third purpose of this thesis is to apply threat elevation analysis and CCI to

5 Alexander L. George and Andrew Bennett, Case Studies and Theory Development in the Social Sciences (Cambridge: The MIT Press, 2005), 6–7.
6 David Collier, “Understanding Process Tracing,” PS: Political Science & Politics 44, no. 4 (October 2011): 823, https://doi.org/10.1017/S1049096511001429.


a case study of a specific state, and against a specific national security concern. In chapter three, I

will apply threat elevation analysis to a case study on the UK and examine whether threat elevation

of cyberspace and the development of counterintelligence responses can be ascertained through

official (published) documentation. I will then draw the threat elevation framework, UK case study, and CCI discussion together in a contextualising example in my chapter on disinformation.

Purpose

The primary research question of this thesis is: how has state threat perception of cyberspace developed, and how, if at all, is that perception influencing the evolution of CCI as a response to cyber-enabled threats such as disinformation? This generates two lines of research that will contribute to

answering the primary question. The first concerns the framework through which any development in state threat perception can be identified and analysed. The framework chosen and the conclusions drawn from the analysis will shape the second line of research: the identification and development of CCI as a response to cyber-enabled threats.

than attempt a generalised analysis of all cyber-enabled threats, I have chosen to examine

disinformation conducted via cyberspace. With the rise in use and increasing ubiquity of cyberspace,

the question must be asked whether states’ conceptions of counterintelligence (and thus, national

security requirements) are evolving apace.

The framework I utilise in my research and investigation is threat elevation analysis, a variant of the

Copenhagen securitisation theory (CST) developed with the intent of being generalisable across the broadest spectrum of national security analysis. Threat elevation analysis

describes a framework according to which the evolution of threats can be traced and analysed. It

is a multistep process of elevation that involves first riskification of an issue (per Olaf Corry’s

framework)7 and, if management fails, securitisation of that issue followed by mitigation methods.

This framework is used in this thesis to analyse the elevation of cyberspace as a threat in the UK

through the examination of national security documentation published between 2008 and 2018.

Threat elevation analysis is one of the primary contributions to existing knowledge made by this

research. Ideally, further research will be undertaken to evaluate this framework beyond the bounds

of the questions that define this research project to further analyse the assumptions of the theory.

7 Olaf Corry, “Securitisation and ‘Riskification’: Second-Order Security and the Politics of Climate Change,” Millennium: Journal of International Studies 40, no. 2 (January 2012): 235–58, https://doi.org/10.1177/0305829811419444.


Several smaller research questions must be considered in order to address the overarching research

question informing the research design of this thesis, and these are used to structure the narrative

line of this manuscript. Beginning with the theoretical framework itself: what is threat elevation

theory? How does it work, and why has it been developed? Following from this question, the

subsequent section deals with the intersection of threat elevation analysis and cyberspace: has the

latter been elevated to either a security risk or a security threat? And how can this then be assessed

against the development of CCI as an operational field, and as it is conceived in the national security

arena? The subsequent question concerns CCI itself: what is it? How does it fit into the intelligence

field, and into the conception of national security in the contemporary era? Traditional

counterintelligence has contributed to, and informed, national security policies and practices for as

long as intelligence has existed as an operational concern;8 what changes, if any, should there be, or have there been, to this working definition and operational field given the growing dependence on the cyber domain in all aspects of modern life? Finally, what is the connection between CCI

and the rising threat of disinformation, and how have state responses to disinformation developed?

Method

This research is fundamentally qualitative in nature, and based on the research questions identified

above I determined documentary analysis alongside theory and concept development to be the

methods best suited to addressing the research goals. This section discusses the three general

methods of research utilised in this thesis: framework development, for threat elevation analysis.

Concept development, for the definition and typologies of CCI. Case study analysis, to include

documentary analysis, for the examination of the UK and the subsequent analysis of approaches

to disinformation.

Framework Development

The initial stages of this research were undertaken according to the assumption that the theoretical

framework being used was Copenhagen (traditional) securitisation theory. According to the basic

tenets of this theory (discussed in chapter two), a legitimate actor securitises a security concern by

speaking it as such. That is to say, through discursive legitimation before a legitimate audience – by declaring something an existential threat to the state – it comes to be accepted, and then mitigated, as one.

However, when traditional securitisation theory is applied to contemporary events like cyber-attacks, its assumptions tend to weaken, such that the decision was made to

8 See section on the history and development in chapter two.

Courteney O’Connor – PhD Thesis 2021


develop a framework that would be based on the foundations of securitisation but is more aligned

to the requirements and realities of modern risks and threats.

To develop a theory that is distinct enough from existing ones to constitute a contribution to knowledge, while still building upon the foundations of the existing body of securitisation literature,

several steps were taken. First, identification of the elements and processes of traditional

securitisation theory that remained relevant and applicable. Second, identification of the specific

problems of securitisation theory that made application of the framework to contemporary issues

problematic. Third, based on the problems identified in step two, came the identification and

development of the elements and processes required in the innovated framework that would both

work well with the remaining elements of securitisation theory, and would also develop a deeper

understanding of the ways in which security concerns are elevated to risks (for management) and

threats (for mitigation). Finally, the results of steps one through three were combined and two

separate elevation processes emerged to represent the journey of a security concern through to risk

and then threat: riskification (per Olaf Corry’s theory) and then securitisation (per traditional securitisation theory and concepts identified for this research).9

The primary source of material on the concepts and processes of securitisation is Security: A New

Framework for Analysis by Barry Buzan, Ole Waever, and Jaap de Wilde.10 The text identifies and

explains the constituent elements of what is now termed Copenhagen securitisation theory and

then identifies and explains the process by which a security concern is ‘securitised’. While other

sources have been considered, such as those dealing with the Paris School and Welsh School

evolutions of securitisation theory, most roads in the securitisation literature lead back to Buzan,

9 Corry, “Securitisation and ‘Riskification.’” 10 Barry Buzan, Ole Waever, and Jaap de Wilde, Security: A New Framework for Analysis (Boulder: Lynne Rienner Publishers, Inc, 1998). See also Thierry Balzacq, “A Theory of Securitization: Origins, Core Assumptions, and Variants,” in Securitization Theory: How Security Problems Emerge and Dissolve, PRIO, New Security Studies (New York: Routledge, 2011), 1–30; Thierry Balzacq, “Securitization Theory: Past, Present, and Future,” Polity 51, no. 2 (April 2019): 331–48, https://doi.org/10.1086/701884; Corry, “Securitisation and ‘Riskification’”; Matt McDonald, “Securitization and the Construction of Security,” European Journal of International Relations 14, no. 4 (December 2008): 563–87, https://doi.org/10.1177/1354066108097553; Joanna Nyman, “Securitization Theory,” in Critical Approaches to Security: An Introduction to Theories and Methods, ed. Laura J. Shepherd (Oxon: Routledge, 2013), 51–62; Holger Stritzel, Security in Translation: Securitization Theory and the Localization of Threat, New Security Challenges (New York: Palgrave Macmillan, 2014); Ole Waever, “Securitisation: Taking Stock of a Research Programme in Security Studies” (2003), https://docplayer.net/62037981-Securitisation-taking-stock-of-a-research-programme-in-security-studies.html; Michael C. Williams, “Words, Images, Enemies: Securitization and International Politics,” International Studies Quarterly 47, no. 4 (December 2003): 511–31, https://doi.org/10.1046/j.0020-8833.2003.00277.x; Michael C. Williams, “Securitization as Political Theory: The Politics of the Extraordinary,” International Relations 29, no. 1 (2015): 114–20.

Chapter 1 - Introduction


Waever and de Wilde.11 It is also important to note that, as outlined in the paragraph above, many

of the original elements of Copenhagen securitisation have been included in either original or

manipulated form in the elevation theory framework that guides the research of this thesis. The

second text which forms the basis of the theoretical development of threat elevation theory is Olaf

Corry’s 2011 article concerning the transformation of risks and threats.12 While the article itself

focuses mainly on the risks of, and associated with, climate change, the tenets of the riskification

process that he develops in the article form the basis of the first of the elevating processes of threat

elevation analysis.

Concept Development

A significant security threat to the modern state is the use of cyberspace and cyber-enabled systems

and technologies. As with many fields of academia in relation to national security concerns,

however, there has been a significant lag between the identification of immediate national security requirements and the publication of research and academic analysis building a foundation of both empirical and theoretical knowledge in those areas of concern. There is an evolving body

of literature on cyber intelligence and counterintelligence that defines and offers cases and

examples of CCI as a branch of counterintelligence discipline and practice, some of which I will

11 See chapter two for a discussion of post-Copenhagen securitisation theories. For further reading, see Rita Floyd, “Towards a Consequentialist Evaluation of Security: Bringing Together the Copenhagen and the Welsh Schools of Security Studies,” Review of International Studies 33, no. 2 (2007): 327–50; Olav F. Knudsen, “Post-Copenhagen Security Studies: Desecuritizing Securitization,” Security Dialogue 32, no. 3 (2001): 355–68. 12 Corry, “Securitisation and ‘Riskification.’”


examine in chapter two.13 I have previously offered a basic definition and typology of CCI.14

However, there is still a significant gap in the literature on this field. In chapter five, this thesis will

further develop that understanding and typology, in relation to the understanding that cyberspace

has been elevated and is being perceived as either a discrete threat or a modern vector for more

traditional varieties of threat.15 In order to further the understanding of CCI offered in previous

research, this thesis will consider the operational varieties and purpose of CCI with reference to

both the pre-existing basic typology, as well as a more complex consideration of what, precisely,

CCI is and does or can do. Within operational CCI, this thesis explores two key areas: strategic

CCI, and tactical CCI.

The primary bodies of literature evaluated in chapter two are those concerning traditional

intelligence and counterintelligence, and the burgeoning literature on the value and danger of cyberspace and cyber-enabled processes and technologies to intelligence and national security.16 In

the initial stages of this research, literature on CCI was rare and often tied to, or a subsection of,

other (related) fields; this body of literature has increased during the years subsequent to the

commencement of my research, but remains fairly sparse relative to other fields of national security

13 See P. Duvenage and S. von Solms, “The Case for Cyber Counterintelligence,” in 2013 International Conference on Adaptive Science and Technology, 2013, 1–8, https://doi.org/10.1109/ICASTech.2013.6707493; Petrus Duvenage and Sebastian von Solms, “Putting Counterintelligence in Cyber Counterintelligence: Back to the Future,” in Proceedings of the 13th European Conference on Cyber Warfare and Security (13th European Conference on Cyber Warfare and Security, Piraeus, Greece: Academic Conferences International Limited, 2014), 70–79; PC Duvenage and Sebastian von Solms, “Cyber Counterintelligence: Back to the Future,” Journal of Information Warfare 13, no. 4 (2015): 42–56; Petrus Duvenage, Sebastian von Solms, and Manuel Corregedor, “The Cyber Counterintelligence Process - a Conceptual Overview and Theoretical Proposition,” in Published Proceedings of the 14th European Conference on Cyber Warfare and Security (14th European Conference on Cyber Warfare & Security, Hatfield, UK, 2015), 42–51; Petrus Duvenage, Thenjiwe Sithole, and Sebastian von Solms, “A Conceptual Framework for Cyber Counterintelligence: Theory That Really Matters,” in European Conference on Cyber Warfare and Security (Reading, United Kingdom: Academic Conferences International Limited, 2017), 109–19, https://search.proquest.com/docview/1966799163/abstract/9550E79F8887474EPQ/1; Petrus Duvenage, Thenjiwe Sithole, and Basie von Solms, “Cyber Counterintelligence: An Exploratory Proposition on a Conceptual Framework,” International Journal of Cyber Warfare and Terrorism 9, no. 4 (October 2019): 44–62, https://doi.org/10.4018/IJCWT.2019100103; Geoffrey S. French and Jin Kim, “Acknowledging the Revolution: The Urgent Need for Cyber Counterintelligence,” National Intelligence Journal 1, no. 1 (2009): 71–90; Courteney J O’Connor, “Cyber Counterintelligence: Concept, Actors, and Implications for Security,” in Cyber Security and Policy: A Substantive Dialogue, ed. 
Andrew Colarik, Julian Jang-Jaccard, and Anuradha Mathrani (Palmerston North: Massey University Press, 2017), 109–28; J. Sigholm and M. Bang, “Towards Offensive Cyber Counterintelligence: Adopting a Target-Centric View on Advanced Persistent Threats,” in 2013 European Intelligence and Security Informatics Conference (2013 European Intelligence and Security Informatics Conference, IEEE Computer Society, 2013), 166–71, https://doi.org/10.1109/EISIC.2013.37; George Vardangalos, “Cyber-Intelligence and Cyber Counterintelligence (CCI): General Definitions and Principles” (Center for International Strategic Analyses, 2016), https://kedisa.gr/wp-content/uploads/2016/07/Cyber-intelligence-and-Cyber-Counterintelligence-CCI-General-definitions-and-principles-2.pdf. 14 O’Connor, “Cyber Counterintelligence.” 15 Is cyberspace itself a threat? Or is it merely a new vehicle for old threats, like fraud or theft? 16 See chapter three for an examination of CCI; see chapter six for an examination of disinformation as a CCI problem.


and strategic studies research.17 One of the contributions of this thesis to the existing body of

knowledge in national security, strategic studies, and intelligence and counterintelligence fields will

be the development of understanding of CCI as a delineated subfield of the overarching

counterintelligence discipline.

This research utilises the definition of CCI offered in Cyber Counterintelligence: Concept, Actors, and Implications for Security.18 That definition was built upon existing definitions of cyberspace, networks,

and networked technologies. Finding that definition of CCI to be relevant and accurate, no changes

have been made to it, and as such CCI is understood to consist of

the methods, processes and technologies employed to prevent, identify, trace, penetrate, infiltrate, degrade, exploit and/or counteract the attempts of any entity to utilise cyberspace as the primary means and/or location of intelligence collection, transfer or exploitation; or to affect immediate, eventual, temporary or long-term negative change otherwise unlikely to occur.19

Following from this definition, the second half of chapter five further develops the basic typology

of CCI previously offered by the author in order to broaden both the understanding and the

application of this typology to the ways in which states seek to secure cyberspace and/or their

conceptions of cyber security in the modern age. Further to this development, the typology also

considers the operationalisation of CCI across two broad areas: strategic CCI, and tactical CCI. It

is anticipated that the overall discussion of CCI as a concept and a practice will support further

research on the place that CCI has in modern conceptions of security, particularly in an increasingly

digitally influenced world where disinformation and propaganda campaigns are having an outsize

effect on leadership and policy decisions among (Western) democracies. A greater understanding

of CCI will ultimately serve policymakers in the conception and drafting of national security

policies and strategies based on an informed understanding of the risks inherent in the use of

cyberspace and the ways those risks must be managed.

This particular definition of CCI accepts that any entity may conduct intelligence collection,

transfer or exploitation in cyberspace, and engages with practices intended to mitigate attempts at

17 See Duvenage and Solms, “The Case for Cyber Counterintelligence”; Duvenage and von Solms, “Putting Counterintelligence in Cyber Counterintelligence: Back to the Future”; Duvenage and von Solms, “Cyber Counterintelligence: Back to the Future”; Duvenage, von Solms, and Corregedor, “The Cyber Counterintelligence Process - a Conceptual Overview and Theoretical Proposition”; Duvenage, Sithole, and von Solms, “A Conceptual Framework for Cyber Counterintelligence”; Duvenage, Sithole, and von Solms, “Cyber Counterintelligence”; French and Kim, “Acknowledging the Revolution: The Urgent Need for Cyber Counterintelligence”; Sigholm and Bang, “Towards Offensive Cyber Counterintelligence”; Vardangalos, “Cyber-Intelligence and Cyber Counterintelligence (CCI): General Definitions and Principles.” 18 O’Connor, “Cyber Counterintelligence.” 19 O’Connor, 112.


effecting negative change. This is particularly relevant in the chapter six analysis of disinformation

as a national security and democratic integrity threat, as it is the combination of cyber-enabled

communications platforms used to conduct information operations and the targeting of individuals

and populations that makes disinformation so dangerous to democratic societies. Effective CCI can

only occur when both technology and psychology are adequately protected and made resilient to

external influence and exploitation.

Case Study

The initial contribution of this thesis is the development of the threat elevation analysis framework,

articulated and developed in chapter three. The development of CCI has also been outlined above. This section discusses how the assumptions of both models will be examined

in the course of this research. As stated above, this thesis is a qualitative project. Threat elevation

analysis is a descriptive-analytical framework, and it is not intended to be either prescriptive or

proscriptive.

The conceptual development and examination of CCI is also an inherently qualitative undertaking.

A case study analysis was selected as the most appropriate form of theory and concept testing for

the subjects under investigation in this thesis. According to George and Bennett, “the case study

approach [is] the detailed examination of an aspect of a historical episode to develop or test

historical explanations that may be generalizable to other events.”20 With intelligence and security

issues, case studies can be particularly beneficial for examining the flow-on effects for problem

solving. As explained by Harry Eckstein, “[c]ase studies may be used not merely for the interpretive

application of general ideas to particular cases (that is, after theory has been established) or,

heuristically, for helping the inquirer to arrive at notions of problems to solve or solutions worth

pursuing, but may also be used as powerful means of determining whether solutions are valid.”21

This study follows the heuristic case study model outlined by Eckstein: the application of inductive

logic to a unique case to draw conclusions about a particular aspect or problem. The case study is

used to aid in the identification of “important general problems and possible theoretical

solutions.”22 Heuristic case studies focus the attention specifically on the discovery and problem-

solving aspects of case study analysis. This is particularly relevant to the analysis of threat

20 George and Bennett, Case Studies and Theory Development in the Social Sciences, 5. 21 Harry Eckstein, “Case Study and Theory in Political Science,” in Case Study Method, ed. Roger Gomm, Martyn Hammersley, and Peter Foster (London: SAGE Publications Ltd, 2009), 12, https://dx.doi.org/10.4135/9780857024367.d11. 22 Eckstein, 17.


perception of cyberspace in the UK – discovery – but also to the development of CCI – problem-solving.23

This thesis explores how one specific state has developed its understanding of these concepts,

and whether there are potential takeaways for other states facing these challenges. The use of a

single case allows for a deeper qualitative content analysis, tracing the development of a particular

strategic concept in a meaningful way. It is important to note that while there is a single overarching

case under examination, within-case analysis of national security documentation is comparative and

developmental – each document will be considered in light of the others. Together with the use of

process-tracing, this “increases the number of theoretically relevant observations.”24 Generalisation

across states is not the intended purpose of this thesis, though additional case studies could further

augment the findings presented here.

Case Selection

Several elements needed to be taken into account in identifying an appropriate case for analysis

relevant to both the utility of threat elevation theory and the concept of CCI. First, these are

frameworks of understanding that are particularly relevant at the state level of analysis. While they

can be applied to lower levels of analysis like corporations or individuals (which will be discussed

in chapters three and five), given the fields of research, it was decided to use a sovereign state as

the unit of analysis in the selected case. Further research on alternate levels of analysis will be

necessary to determine the generalisability of the threat elevation framework, but that is beyond

the bounds of the current research and cannot be adequately conducted given resource and time

constraints. Having identified the state as the appropriate unit of analysis, it was then necessary to

decide which state would be the subject of the case study in order to be able to generate knowledge

concerning state elevation of CCI.

The UK is an influential state in economic, political, and security matters. It is well positioned to

be the subject of analysis, as a post-industrial and highly technologised state with an increasing

dependence on cyberspace. Moreover, the UK is a member of multiple influential security

organisations in which it wields significant influence. It is a member of the North Atlantic Treaty

Organisation (NATO); it is the leader of the British Commonwealth; it is one of the five permanent

23 Rob Van Wynsberghe and Samia Khan, “Redefining Case Study,” International Journal of Qualitative Methods 6, no. 2 (2007): 81. 24 Gary King, Robert O. Keohane, and Sidney Verba, Designing Social Inquiry: Scientific Inference in Qualitative Research (Princeton: Princeton University Press, 1994), 227.


members (P5) of the United Nations Security Council (UNSC); and it is one of the five states that

form the Five Eyes intelligence and security cooperative, along with the United States of America

(US); Canada; Australia; and New Zealand (NZ). The attitudes, perceptions, and policies of the UK

have the potential to influence multiple states across a vast geopolitical spectrum. The uptake of

high technologies is a matter of policy in the UK, and successive governments have invested

significantly in pursuit of the articulated intent of becoming a leading technological power.25 As such,

there has been considerable development of cyberspace policies and frameworks across the

government, and in military and security agencies. National security documents outlining the

strategic intent and imperatives of the UK are publicly available, demonstrating a relatively high

degree of openness and engagement with the democratic audience, and serve to signal the UK’s

interests and concerns. This allows a more systematic analysis of what changes over time, relative to other states that do not have a comparable history of developing such policies or publishing them at an unclassified level.

The period 2008-2018 was selected as the time frame for analysis. This period

represents four changes of government across the two major political parties of the UK, and the

publication of nine strategic documents. The study ends before the full consequences of Brexit and

the UK’s withdrawal from the European Union (EU) are finalised or completely known,26 the

potential policy implications of which are beyond the scope of this thesis. Although the time frame

for the case study in this thesis ends at 2018, I recognise that developments such as the publication

of the Integrated Review in 2021 are important for understanding the development trajectory of

CCI in the UK. Accordingly, I address this in the conclusion to the thesis.

In choosing to analyse the understanding of a particular concept (cyberspace as a threat) through

the analysis of a class of information (national security documentation), this thesis engages in the

study of a specific aspect of a particular episode in history; in this case, a specific period of time

within which significant development of and events in cyberspace occurred on a global scale. The

within-case analysis of a single state allows for a close reading and examination while also producing

25 Chapter four articulates the United Kingdom elevation of the threat perception of cyberspace through a selection of national security documentation. For the most recent articulation of the intent to transform the United Kingdom into a leading digital power, see Cabinet Office of the United Kingdom, “Global Britain in a Competitive Age: The Integrated Review of Security, Defence, Development and Foreign Policy” (The APS Group, 2021), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/975077/Global_Britain_in_a_Competitive_Age-_the_Integrated_Review_of_Security__Defence__Development_and_Foreign_Policy.pdf. 26 ‘Brexit’ refers to the exit of the United Kingdom from the European Union following a 2016 referendum and subsequent negotiations.


knowledge and potential lessons more broadly generalisable than the case study focus (the UK),

due to the global use and importance of the cyber domain. More specifically, the conclusions drawn

from the analysis of the UK’s threat perception of cyberspace and development of CCI in chapter

five, in the context of disinformation as examined in chapter six, will also produce lessons for states

of similar political structure and security concerns; namely, foreign interference and democratic

integrity.

The framework employed in this thesis is threat elevation analysis, a descriptive analytic tool that

describes the way in which a particular political issue or concern is elevated to either a risk or a

threat. Threat elevation analysis will be described thoroughly in chapter three, but from a

methodological point of view and as mentioned above, this thesis will employ process-tracing on

the selected national security documentation of the UK to discover whether the development of

the strategic conception of cyberspace in the UK has advanced according to the expected

assumptions and framework of threat elevation analysis. David Collier states that “process tracing

focuses on the unfolding of events or situations over time…[and] the descriptive component of process

tracing begins not with observing change or sequence, but rather with taking good snapshots at a

series of specific moments.”27 The national security documents selected for analysis serve as the

snapshots through which to examine the trajectory of change in the threat perception of cyberspace

in the UK.

Additionally, whether these assumptions prove true or false will indicate the integrity of

the framework as well as contribute to existing literature concerning the national security

perceptions, policies, and understandings of the UK in a continuously developing domain, namely

cyberspace. Whether or not there has been a successful elevation of a particular security concern

within cyberspace will then be analysed in chapter six, which examines the threat of disinformation

within the framework of threat elevation analysis, and according to the conclusions drawn in the

preceding analysis of the selected national security documentation of the UK.

Limitations and Challenges

The choice of case study analysis in this research design introduces several inherent limitations and

challenges to both the nature of the thesis and the generalisability of its conclusions. First, though

threat elevation has been designed for broad applicability (i.e., to a range of actors and situations), and despite criticisms that traditional securitisation theories retain a focus on the state despite claims

27 Collier, “Understanding Process Tracing,” 824.


to the contrary, this thesis examines how a particular state actor has elevated a particular domain,

concept, and problem — cyberspace, cyber counterintelligence, and disinformation. The choice to

examine state actor elevation was made in order to ascertain how, at the highest levels, these

concepts and problems are being understood. The way that states understand and elevate risks and

threats will, through legislation and policy, affect the ways in which substate actors like corporate

groups and individuals are able to engage and act themselves. Threat elevation analysis can be

applied to substate actors and security concepts and problems, but this is largely outside the scope

of this thesis and must be reserved for future research. I have provided a typology of cyber

counterintelligence in chapter five that includes substate actors as a foundation for future research,

and threat elevation analysis requires no manipulation for the concepts and processes to be

applicable at the substate level.

In terms of the choice to study a national security problem (which can easily be perceived as

remaining firmly within the politico-military bounds of other strategic studies analytical traditions

and thus subject to similar criticisms as traditional securitisation theories), cyber counterintelligence

– as explained in chapter five – is a concern at all levels of analysis but a particular problem for the

state specifically because cyber-enabled concerns are not bounded to politico-military spaces. Given

the ubiquitous nature of cyberspace and cyber-enabled platforms and devices, cyber

counterintelligence problems affect, and are affected by, multiple sectors beyond the political and

the military. How the state perceives cyberspace and cyber counterintelligence will in turn affect all

societal sectors. Further research will need to be done in these sectors – the environmental,

economic, and societal – and the intent of this thesis is to form part of the foundation of that

research.

By choosing to engage with a single state and conduct within-case analysis of the national security

documentation published by that state, it is possible that my conclusions may be affected by the

worldview of that particular country. The UK does not have the same perception of and response

to cyberspace and cyber-enabled security concerns as do other states; each state’s perceptions will depend on context, capabilities, goals, and domestic and external relationships, among other factors.

Moreover, the United Kingdom is a mature democracy; the method through which the UK elevates

risks and threats will likely be different to the ways in which new democracies or authoritarian

regimes will engage with threat elevation. There is also a need for future research to address the

relationship between the state and citizens vs. the state and non-citizens in terms of security

priorities and how cyber counterintelligence requirements and practices must be negotiated; how


do threat elevation and then management or mitigation occur? Jurisdictional issues are as much

a problem between people in matters of cyber security and cyber counterintelligence as they are in

the technological infrastructure space.

In order to engage deeply with the national security documentation and ensure robust analysis,

however, I believe that limiting the analysis to a single case across a significant period of time

enabled me to more effectively draw out the evolution of threat perception and development of

CCI. By assessing the evolution of perspective in one state, I am able to test the integrity of the

threat elevation analysis framework as well as to contribute to existing knowledge on British policy in regard to cyberspace, and to build a set of expectations as to how that evolution may proceed, which can be tested against other states or non-state actors in future research. Alternatively,

further research may find that state perception of cyberspace and development of cyber intelligence

and counterintelligence frameworks may evolve along similar lines independently of a particular

system of governance. Also beyond the scope of this thesis but worthy of identifying for future

research is the recognition that substate actors such as corporations, organised groups and

individuals may elevate the threat of state actions, and so the perception of cyberspace and cyber

counterintelligence of these actors will be informed by different priorities and capabilities. While

this case study in chapter four was selected based on the availability of national security

documentation, classification restrictions on related material may have limited the available

documentation for analysis. I appreciate that it is not possible to generalise on the basis of one case

– what this thesis seeks to establish is a conceptual and methodological baseline from which to do

so in subsequent research.

The subject matter of this thesis also presented research difficulties:28 CCI is a nascent field and

very much a contemporary security concern. Cyberspace enables, or is a critical feature of, a

majority of the critical infrastructure and communications platforms upon which daily interaction

at all levels is dependent. Where intelligence and counterintelligence were traditionally the purview

of the state as the only actor with the intent and capability to affect international affairs or with the

need to understand and possibly manipulate state behaviours, this is no longer the case. The state

is no longer the only actor engaging in intelligence and counterintelligence; at all levels, actors that

operate in cyberspace are capable of engaging in these practices. Increasingly, states are having to

interact with substate actors in both cooperative and offensive/defensive settings. The number of

28 Preliminary research on British approaches to traditional counterintelligence was conducted at the Imperial War Museum archives in London in 2019. Plans for follow-up fieldwork and archival research had to be abandoned with the onset of COVID-19 and associated restrictions on international travel.


nonstate and substate actors that are capable of affecting the state or engaging in intelligence

collection and exploitation through cyberspace has increased, and this complicates the

counterintelligence environment. Interaction could be state vs. state; state vs. non-state; state vs.

individual; non-state vs. non-state; or non-state vs. individual. In elaborating a typology of CCI and

engaging in an examination of the actors that could employ these practices, I have attempted to

provide examples of the ways in which a variety of actors might engage in cyber counterintelligence

and for which reasons.

Chapter Outline

Chapter two of this thesis contains an overview of the literature concerning the three primary

bodies of research identified as relevant to this thesis: securitisation, counterintelligence, and

disinformation. These three fields contribute to the development of CCI offered in chapter five

and the discussion of disinformation as a threat to the state and to democratic integrity in chapter

six.

Chapter three develops the threat elevation analysis framework which underpins the analysis put

forward in this thesis. Threat elevation analysis describes the process according to which the

evolution of threats can be traced and analysed. Threat elevation is a multistep process that involves

first riskification of an issue and, if management of that risk fails, securitisation of that issue

followed by mitigation measures. The second half of this chapter contains an analysis of the threat

elevation of cyberspace, to contextualise the subsequent chapter.

Chapter four applies the threat elevation analysis framework of chapter three to the examination

of the threat perception of cyberspace in the UK, asking whether or not the development and

understanding of CCI can be ascertained based on that analysis. Using a selection of national

security documents from the period 2008-2018 inclusive, chapter four offers a specific case

application of the theoretical and conceptual contributions made by this thesis.

Chapter five offers a comprehensive development of the concept, typology, and utility of CCI,

building on the nascent body of literature in this field and further developing prior research. From

the starting point of a definition of CCI, subsequent sections of chapter five address the

development and practice of CCI before moving into the conceptual development of a typology.

In addition to the passive/active/proactive framework, this chapter also develops the


strategic/tactical framework of CCI as a practice before moving into the final sections concerning

the actors in cyberspace that may or should utilise CCI.

The sixth chapter takes the analysis and conclusions drawn throughout the case study of the UK

and, through the lens of threat elevation analysis and in relation to the development and utilisation

of CCI, considers a specific national security problem: the rise of disinformation and the dangers

that it can pose to democratic integrity in the cyber era. This chapter will examine two cases of

disinformation countermeasures: one successful, and one unsuccessful. This will serve to illuminate

potential policy and CCI directions into the future.

The seventh and final chapter of this thesis will summarise the conclusions of the research

undertaken throughout. Chapter seven highlights the key takeaways of the research and identifies

avenues of future investigation before offering some closing thoughts.


2 - Contextualising Cyber Counterintelligence: the Need for Threat Elevation Analysis

There are three broad areas of research to which different sections of this thesis are relevant. These

fields are securitisation studies; counterintelligence studies; and disinformation studies. The extant

literature in each field is substantial, more so in the former two than in the latter, though this is

changing as further research is conducted into the use of disinformation for the purposes of

foreign interference. The following literature review has been split into three sections, correlating

directly to the identified bodies of relevant literature and the overall narrative thread of the thesis.

The initial section is devoted to the body of work surrounding securitisation studies, as this forms

the foundation of the framework of analysis that will be developed and utilised throughout the

main body and analysis of this thesis. I cover the elements of securitisation and the sectors in

which the framework is intended to be used, before discussing several of the criticisms of

securitisation and identifying a way forward under a variant framework. That variant - threat

elevation analysis - will be used to evaluate the development of cyberspace threat perception in the

United Kingdom (UK) and whether there has been a contiguous development of

counterintelligence practice in cyberspace in the case study in chapter four. The second section of

this chapter will examine traditional counterintelligence, briefly covering its history and purpose

before moving into the nascent literature surrounding cyber counterintelligence (CCI). This

section defines and contextualises CCI and its importance in a highly digitalised international

system, which leads into the final section on disinformation. I briefly compare disinformation to

misinformation and propaganda, discussing the potential consequences of false information as a

form of foreign interference in cyber-enabled operations. This chapter is intended to provide an

overview of current perspectives and understandings in these three fields, which will be built upon

in later chapters.

Securitisation

Copenhagen securitisation theory (CST) is a method for understanding the construction of security

threats. Because the primary research question of this thesis engages with national security

documentation in order to identify and trace the evolution of a specific security concept, a

theoretical framework designed to analyse that process provides an ideal analytical lens. The

subjectivity inherent in process-tracing a concept in a qualitative analysis makes the use of more

traditional positivist frameworks less suited to this research, given its fundamentally exploratory

nature. This thesis does not seek to identify or articulate particular logical rules or laws that have


the potential to be quantitatively proven, though this could be an avenue of future research. While

a realist analysis of state actions is possible, the thesis is not particularly concerned with the

systemic interactions of states beyond the potential consequences of disinformation campaigns

imposing external strategic narratives; additionally, non-state actors are herein recognised as

capable actors in cyberspace. Nor is this thesis substantively concerned with the exercise of power

as such: it is an examination of threat perception and the development of responses and policies

according to that perception. A framework that explores the process by which a particular entity

problematises security, then, is most appropriate to this research.

This section comprises an examination and analysis of several key elements of CST. In

chapter three, I build on these existing concepts to further an alternative understanding of

securitisation. By utilising the existing framework of securitisation theory and approaching analysis

from a more post-positivist point of view, I develop my own analytical framework by which to

consider security phenomena. More specifically, I argue that securitisation would be better

understood as a tiered, threat elevation process, rather than a linear process of threat construction.

First conceived by scholars of the Copenhagen Peace Research Institute (COPRI) in Denmark,29

securitisation theory was intended to widen the view of security studies beyond the military sector,

to include the societal; political; economic; and environmental sectors in security calculations.30 It

is also closely related to the body of work surrounding regional security complex theory (RSCT),

wherein securitising dynamics are considered in regional contexts.31 These regional dynamics relate

directly to the security sectors articulated in CST and can explain how regional security dynamics

are created, exacerbated, or mitigated. While an important body of work, this thesis does not

engage with regional dynamics of, or approaches to, the threat elevation of cyberspace or the

development of counterintelligence in relation to cyber-enabled disinformation. The focus is on

the concept and application of securitisation principles more generally, placing an examination of

RSCT in relation to cyber beyond the scope of this thesis.

29 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis; Stritzel, Security in Translation: Securitization Theory and the Localization of Threat, 11. 30 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 49–162. 31 Buzan, Waever, and de Wilde, 9–20. See also Barry Buzan, “Regional Security Complex Theory in the Post-Cold War World,” in Theories of New Regionalism: A Palgrave Reader, ed. Fredrik Söderbaum and Timothy M. Shaw, International Political Economy Series (London: Palgrave Macmillan UK, 2003), 140–59, https://doi.org/10.1057/9781403938794_8; Barry Buzan, People, States, and Fear: An Agenda for International Security Studies in the Post-Cold War Era (New York: Prentice Hall, 1991).


Given its focus on the construction of threat rather than the military concerns of the state,

Copenhagen securitisation stands apart from more traditional security and/or strategic studies,

and rather closer to critical security studies.32 Generally, securitisation fits within the bounds of the

constructivist field of research,33 for which the body of research has grown considerably in the

post-Cold War era.34 More specifically, Copenhagen securitisation scholars are often perceived to

be scholars of (the new) critical security studies, so named for the shift away from the state and

the military as the focal point of security concern.35 Copenhagen securitisation theory as perceived

herein is a (social) constructivist theory, regardless of theoretical roots.36

One of the central tenets of securitisation theory is the removal of a (political) concern from within

the bounds of ‘normal’ politics, and the relocation of that concern to the security arena. The

32 Military concerns do play a role in the security concerns of the state according to CST however as stated above, the military sector is/should be only one concern among several. That is to say, while military security concerns should be factored in to security calculations, according to Copenhagen securitization theory they should not be the primary or only focus. 33 Buzan et al suggest that security issues should be analysed “by taking an explicitly social constructivist approach to understanding the process by which issues become securitised.” Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 19. 34 See Thierry Balzacq, “Legitimacy and the ‘Logic’ of Security,” in Contesting Security: Strategies and Logics, PRIO New Security Studies (New York: Routledge, 2015), 1–10; Balzacq, “A Theory of Securitization: Origins, Core Assumptions, and Variants”; Thierry Balzacq, “The ‘Essence’ of Securitization: Theory, Ideal Type, and a Sociological Science of Security,” International Relations 29, no. 1 (March 1, 2015): 103–13, https://doi.org/10.1177/0047117814526606b; Balzacq, “Securitization Theory”; Thierry Balzacq, “The Three Faces of Securitization: Political Agency, Audience and Context,” European Journal of International Relations 11, no. 2 (June 1, 2005): 171–201, https://doi.org/10.1177/1354066105052960; Thierry Balzacq, Sarah Léonard, and Jan Ruzicka, “‘Securitization’ Revisited: Theory and Cases,” International Relations 30, no. 4 (December 1, 2016): 494–531, https://doi.org/10.1177/0047117815596590; Renata H. Dalaqua, “‘Securing Our Survival (SOS)’: Non-State Actors and the Campaign for a Nuclear Weapons Convention through the Prism of Securitisation Theory,” Brazilian Political Science Review 7, no. 3 (2013): 90-117,167-168; Myriam Dunn Cavelty, “From Cyber-Bombs to Political Fallout: Threat Representations with an Impact in the Cyber-Security Discourse,” International Studies Review 15, no. 
1 (March 2013): 105–22, https://doi.org/10.1111/misr.12023; Rabea Hass, “The Role of Media in Conflict and Their Influence on Securitisation,” The International Spectator 44, no. 4 (December 2009): 77–91, https://doi.org/10.1080/03932720903351187; Knudsen, “Post-Copenhagen Security Studies: Desecuritizing Securitization”; McDonald, “Securitization and the Construction of Security”; Heikki Patomaki, “Absenting the Absence of Future Dangers and Structural Transformations in Securitization Theory,” International Relations 29, no. 1 (2015): 128–36; Columba Peoples and Nick Vaughan-Williams, “Introduction,” in Critical Security Studies: An Introduction, 2nd ed. (New York: Routledge, 2015), 1–12; Kees Ven Der Pijl, “La Disciplina Del Miedo. La Securitización de Las Relaciones Internacionales Tras El 11-S Desde Una Perspectiva Histórica,” Relaciones Internacionales 31 (2016): 153–87; Scott D. Watson, “‘Framing’ the Copenhagen School: Integrating the Literature on Threat Construction,” Millennium: Journal of International Studies 40, no. 2 (January 2012): 279–301, https://doi.org/10.1177/0305829811425889; Williams, “Words, Images, Enemies”; Williams, “Securitization as Political Theory: The Politics of the Extraordinary.” 35 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 4–5; Stritzel, Security in Translation: Securitization Theory and the Localization of Threat, 51. 36 Constructivism is a relatively new field of critical international relations, emerging as a unique School of thought approximately thirty years ago that focuses on the importance and effect of social action in (international) politics, and the action value of the individual. It focuses on how the meaning and value of institutions, security, and threats are socially constructed; that is to say, given meaning by individuals or groups. 
This focus on the social aspect of (political) action necessitates a deeper focus on the idea of identity that is not present (as much) in more traditional international relations theories like Liberalism and Realism, with their respective focuses on economy and institutions, and military security. Christian Reus-Smit, “Constructivism,” in Theories of International Relations, 5th ed. (New York: Palgrave Macmillan, 2013), 217–40.


process through which this takes place – going from political concern to security threat – is the

securitisation process. Broadly speaking, the state of politics in which a securitisation process takes

place has been termed the ‘politics of the extraordinary’.37

The idea of politics of the extraordinary has been linked to German political philosopher Carl

Schmitt.38 As conceived by Schmitt, the politics of the extraordinary describes the political context

at the time of and immediately preceding the securitising process. As I argue in later sections, for

a political concern to be securitised there must be a justification for elevating an issue out of the

arena of ‘normal’ politics that necessitates emergency measures. The situation must be, therefore,

out of the ordinary.

The concept of ‘out of the ordinary’ introduces an ongoing criticism of Copenhagen securitisation

theory: the failure to identify what ‘normal’ politics looks like, and the location of the boundary

lines that must be crossed in a securitisation.39 Defining ‘normal’ politics is more complex an issue

than may be immediately apparent. It must be noted that ‘normal’ to any given actor is highly

subjective, and thus what presents itself as normal politics for one actor (usually, the state) may be

entirely extraordinary for another. In addition to the subjectivity of perception, ‘normality’ is also

context-dependent. For any actor in the international political system, there will be degrees of

‘normal’ for any given situation. Contextual normality also rests on multiple variables. Form of

government (democracy vs. autocracy vs. theocracy); parties to a situation or conflict and the

historical relationship between those parties; possible outcomes; and (international) reputation are

among the variables that will factor into a calculation of the ‘normality’ of any given international-scale

37 Williams, “Securitization as Political Theory: The Politics of the Extraordinary,” 114–15; Andreas Kalyvas, Democracy and the Politics of the Extraordinary: Max Weber, Carl Schmitt, and Hannah Arendt (Cambridge: Cambridge University Press, 2008), https://doi.org/10.1017/CBO9780511755842. 38 Carl Schmitt is a controversial figure, as his work as both political philosopher and jurist is linked to Nazism and the Third Reich. Schmitt theorised extensively on the wielding of power in the political arena. Schmitt characterised the concept of ‘the political’ as “an exceptional decision defined within the friend-enemy distinction.” A declaration of enmity bears distinct resemblance to the concept of the existential threat in Copenhagen securitization. See also Kalyvas, Democracy and the Politics of the Extraordinary; Williams, “Words, Images, Enemies.” 39 Buzan, Waever and de Wilde acknowledge the difficulty of bounding the political sector, stating that “all threats and defences are constituted and defined politically.” Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 141. Buzan, Jones and Little define politics as “the shaping of human behaviour for the purpose of governing large groups of people.” Barry Buzan, Charles Jones, and Richard Little, The Logic of Anarchy: Neorealism to Structural Realism (New York: Columbia University Press, 1993), 35. Even here, characterizing any form of politics as normal will depend on the contextual and subjective biases of the analyst; a Western analyst would describe democratic politics as normal. An analyst who has lived all their life under authoritarian rule might describe those conditions as normal. The boundary lines for securitizations are thus not fixed or immutable in any case. 
Waever contends that normal politics is defined as “politics that should be able to unfold according to normal procedures without this extraordinary elevation of specific ‘threats’ to a pre-political immediacy,” which aligns with the threat elevation analysis framework offered in this thesis and is accepted as the functional definition of normal politics hereon. Waever, “Securitisation,” 15.

situation or conflict. Issues of subjectivity and perception are ongoing challenges to political

theory as a field of research. While it is beyond the scope of this research to fully explore perception

and subjectivity in relation to securitisations, they are recognised contributing factors throughout

this thesis.

Accepting the difficulty of assigning a precise definition of normal politics, for the purposes of

this thesis, normal politics may be understood as the stable continuation of the formal processes

of institutionalised politics; that is, “characterised by widespread pluralism and political

fragmentation.”40 Put simply, normal politics obtains when there is no issue beyond the norm that may unite disparate

political groups to securitise an overriding issue. This does not mean that normal politics should

be perceived as a state of peace; simply that beyond the generally accepted or known threats to

security,41 there are no issues that may prompt emergency measures (without unforeseen

catastrophe).

The next several sections present a discussion of the key elements of Copenhagen securitisation

theory herein understood. This serves a dual purpose: in addition to presenting a view of the

current conception of Copenhagen securitisation theory and exploring key concepts relating to

threat analysis, I will use these concepts to establish my own theoretical contribution in later

sections of this thesis. I will examine the elements that securitisation scholars have identified as

crucial to a securitising move: the speech act; the referent object; the existential threat; extreme

necessity; extraordinary measures; the securitising actor; and the relevant audience.

Elements of Securitisation Theory

In this section, I explore seven elements of CST that are crucial to the securitisation framework.

The definition and interplay of these elements dictate how securitisation in any form is interpreted

and applied. As seen in Figure 2 - The Elements of Securitisation Theory, these elements are:

the speech act; the referent object; the existential threat; extreme necessity; extraordinary measures;

the securitising actor; and the relevant audience.

40 Williams (2015), pp. 115-116 41 or risks: for a discussion of riskification see Corry, “Securitisation and ‘Riskification.’”


The Speech Act

The defining element of CST is the focus on the speech act, or the way in which security is spoken.

The speech act is the initial securitising move of the/a securitising process, in which an existential

threat to a certain referent object is declared; it is a declarative, illocutionary act.42 According to

John L. Austin,43 the illocutionary act is that which is performed in saying something, so by

declaring something a security threat it becomes a security threat through illocutionary force. The

speech act also serves to provide a justification: it is a discursive legitimation that at once

identifies an imminent threat to the referent object in question and offers solutions for threat

mitigation. The performative element of the speech act is important to both its legitimating force

and the securitising process as a whole: securitisation is only successful if it is accepted by the

relevant audience to which the speech act is directed.

The Referent Object

The referent object is the focus or subject of the securitising move. It is the thing, idea or person

that the securitising actor indicates is

42 Buzan, Waever & de Wilde. John Searle, in “A Taxonomy of Illocutionary Acts,” Minnesota Studies in the Philosophy of Science 7 (1975): 26–28; 32–33, considers a commissive illocutionary statement to be one “whose point is to commit the speaker….to some future course of action.” See also Balzacq, “A Theory of Securitization: Origins, Core Assumptions, and Variants,” 4–5. Politicians make commissive illocutionary statements in policies, speeches, and campaign platforms, for example. They are promises to some future action, usually utilised to gain support during a campaign or to score points against a political rival. Declarative illocutionary statements, on the other hand, are a type of performance. The successful performance of a declarative illocutionary act brings about an actual change. “Declarations bring about some alteration in the status or condition of the referred to object or objects solely in virtue of the fact that the declaration has been successfully performed” (“A Taxonomy of Illocutionary Acts,” 358, emphasis added.). Making a successful declaration then is sufficient to bring about a particular change. Assuming this to be the case, there is both enormous risk and enormous responsibility in endowing anybody with the legitimate authority to speak security. 43 J. L. Austin, How to Do Things with Words, 2nd ed (London: Oxford University Press, 1980), 8–15.

Figure 2 - The Elements of Securitisation Theory (a diagram showing the seven elements: the speech act; the referent object; the existential threat; extreme necessity; extraordinary measures; the securitising actor; the relevant audience)


a) threatened on an existential level, and

b) must therefore be protected (at all/significant cost if necessary).

For the state there exist many possible referent objects: forward military bases; its own territorial

integrity and/or that of its allies or neighbours; its currency; the members of the executive branch

(in case of crisis), and so on. The list of possible referent objects is extensive. Perhaps the most

important referent object to the state is its own sovereignty, to which the state owes its existence

and legitimacy. In this particular interpretation, the state itself is transformed into the ultimate

referent object, and within securitisation literature, this is often the case.44 Certain territories/lands,

or access to such, could also be securitised, as was the case with Diego Garcia.45 An existing

(political) regime such as communism (as a governing political ideology), or capitalism (as a

governing economic policy) could be securitised at the systemic level if the failure of such

threatened the existence of the state. Corporations can securitise certain intellectual property,

particularly if the intellectual property (IP) forms the foundation of the company or part thereof.

Corporations can securitise access to territories and markets, either for sales or manufacturing.

Individuals can also securitise many referent objects: houses, jobs, freedoms and liberties.46 As

previously stated, the size and scale of a securitisation, and in part the referent object in question,

help to determine the success of the securitising process. Given the subjective aspects of how we

assign value to a given referent object, the success of a securitisation (as will be covered in later

sections) is complex.

The Existential Threat

If the referent object is what must be protected in a securitisation, the existential threat is the other

half of that equation: it is what must be protected against. An existential threat must, by definition,

be real and pressing,47 and pose the sort of danger that would without a doubt affect the continued

existence (in its current form) of the referent object. It is not enough to expect a high level of

damage to term a threat existential unless the infliction of that damage would then threaten the

very existence of the referent object. The existence of a nuclear bomb is

not in itself an existential threat to the state. The explosion of that bomb on state soil, such that

the government is destroyed and the state ceases to exist in its current form, is the existential threat.

44 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 36; Stritzel, Security in Translation: Securitization Theory and the Localization of Threat, 15. 45 Mark B. Salter and Can E. Mutlu, “Securitisation and Diego Garcia,” Review of International Studies 39, no. 4 (October 2013): 815–34, http://dx.doi.org/10.1017/S0260210512000587. 46 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 40. 47 Rita Floyd, “Can Securitization Theory Be Used in Normative Analysis? Towards a Just Securitization Theory,” Security Dialogue 42, no. 4–5 (2011): 430, https://doi.org/10.1177/0967010611418712.


In addition, it is possible for events to be existentially threatening, rather than an actor or an

object. Such events should be accorded the same gravity and mitigation efforts as active security threats

posed by identified malicious actors, as in the case of armed conflict.

Beyond existential threats of a physical nature, it is also possible to consider a threat existential if

it endangers the current form or conception of the referent object, especially in the case of states. A

secession or coup would represent an existential threat to the current regime, but not necessarily

to the state itself unless significant territory exchange occurred as a direct result, such as if Scotland

successfully seceded from the United Kingdom of Great Britain and Northern Ireland, or Quebec from

Canada. In terms of ideologies, rival ideologies can be existentially threatening; throughout the

Cold War, communism was considered an existential threat to democracies and formed an integral

part of the competition between the United States (US) and the Soviet Union (USSR). The

existential threat is the identified problem in a given situation which must be mitigated or guarded

against in order for the referent object to survive. It is the severity of the identified threat that is

used to justify the extraordinary measures for which a securitising actor is advocating or

campaigning.

Extreme Necessity

Another important element of securitisation theory is the existence or state of extreme necessity

required for a successful securitising move. Extreme necessity indicates that the threat is immediate

and pressing, introducing the element of time into the securitising equation. That combination of

extreme necessity and immediacy of requirement for mitigation contributes to the classification of

existential threat. In order to successfully securitise a threat, that threat must be (perceived by the

relevant audience as) immediate, pressing, and requiring certain extraordinary measures to mitigate

due to the extreme necessity of combatting said threat.48 By presenting an existential threat to the

actor in question, extreme necessity can be traded for extraordinary powers to mitigate that threat.

To return to the nuclear bomb example, there must be a real and immediate danger of that bomb

being used in order for this to be considered an extreme necessity. This is the element of

securitisation that troubles issues such as environmental decay and global warming. While they are

pressing issues and may (will) require extraordinary measures to mitigate, they are long term

problems and will rarely attain the levels of extreme necessity by that very nature.49 In the event of

48 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 24–26; Stritzel, Security in Translation: Securitization Theory and the Localization of Threat, 15; Williams, “Words, Images, Enemies,” 516. 49 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 71–95; Balzacq, Léonard, and Ruzicka, “‘Securitization’ Revisited,” 511–12.


an unforeseen natural disaster or other triggering mechanism, that may change, but as a general rule

environmental dangers do not dictate extraordinary measures due to extreme necessity.50

Extraordinary Measures

Extraordinary measures are arguably the most obvious and contentious element of any securitising

process. The elements of a securitisation move, discussed above, must be fulfilled before

extraordinary measures can be approved; without any one of these elements, it is highly unlikely

that the securitisation will be successful. Extraordinary measures are those actions, policies, or

other methods of threat mitigation that would ordinarily, in the course of ‘normal’ politics, not be

permitted or considered by the relevant audience. Extraordinary measures by definition would not

be within the ordinary range of means or powers, and once the immediate threat has been mitigated

these extraordinary powers would no longer be within the remit of use. A pre-emptive strike to

forestall the delivery of the nuclear bomb would be an extraordinary measure.

Once an existential threat to a given referent object has been identified, and the resort to

extraordinary measures justified by the extreme necessity of combatting said threat, those extraordinary

measures may be undertaken. For example, whether or not one agrees with the 2003 invasion of

Iraq, preventive warfare is an example of extraordinary measures of threat mitigation following

the identification (correct or otherwise) of an existential threat that has been successfully securitised

before a relevant audience.51 It should be noted that these powers of extraordinary measures ought

to cease once the existential threat has been mitigated, and the state (or other actor) should resume

politics as normal.

50 However, this may change given the droughts being experienced in certain parts of the world, particularly South Africa. Decades earlier than predicted, the South African city of Cape Town was expected to run out of water in approximately April of 2018, despite the water rationing implemented by the governing authorities. Water supplies were to be cut off on what authorities referred to as Day Zero, when water reservoirs reached perilously low levels and suburbanites would need to queue for water. Intense management measures like water quotas have managed to stave off Day Zero in an example of anticipated-event elevation. Christian Alexander, “A Year After ‘Day Zero,’ Cape Town’s Drought Is Over, But Water Challenges Remain,” Bloomberg.com, April 12, 2019, https://www.bloomberg.com/news/articles/2019-04-12/looking-back-on-cape-town-s-drought-and-day-zero; Krista Mahr, “How Cape Town Was Saved from Running out of Water,” The Guardian, May 4, 2018, http://www.theguardian.com/world/2018/may/04/back-from-the-brink-how-cape-town-cracked-its-water-crisis; Craig Welch, “Why Cape Town Is Running Out of Water, and the Cities That Are Next,” Science, March 5, 2018, https://www.nationalgeographic.com/science/article/cape-town-running-out-of-water-drought-taps-shutoff-other-cities. 51 Preventive warfare should not be confused with pre-emptive warfare. Pre-emptive warfare is undertaken in response to an identified threat or adversary, which can be credibly believed to be planning an attack in the immediate or near future. Preventive warfare is somewhat less legitimate and more morally contentious. It is undertaken in order to prevent a security concern or adversary from attacking in the future, hypothetically removing their ability or desire to do so. Colin S. Gray, The Implications of Preemptive and Preventive War Doctrines: A Reconsideration (Carlisle Barracks, PA: Strategic Studies Institute, U.S. Army War College, 2007), v–x.

Chapter 2 – Contextualising CCI


The Securitising Actor

The securitising actor initiates the securitisation process. The securitising actor is someone who is

either an authorised agent, who can legitimately speak to security concerns, or someone who a

relevant audience perceives as having that authority.52 In other words, the agent must be powerful

enough or possessed of the right to speak (about security) for a certain body. Representatives of

the state such as government leaders and other political elites can ‘speak security’ (perform

discursive legitimation) for the state, depending on their (hierarchical) position and sphere of

influence. The state itself cannot be a securitising actor; it can only be the referent object or, indeed, the

existential threat. Corporate actors, security officers or those in governing positions are usually

considered authoritative actors who may speak security for that collective. For political parties,

indigenous groups and other organised collectives, there is usually an elected or appointed leader

who is able to speak security for that collective. Regardless of the collective in question, the

securitising actor starts the securitisation process by identifying, usually verbally (according to the

Copenhagen School), an existential threat, claiming extraordinary measures will be necessary to

mitigate that threat, and justifying that process in front of a relevant audience.53 It is assumed that

the actor will have sufficient influence and enough authority over the relevant audience to make

successful securitisation possible.

Legitimate Authority

Legitimate authority is a term strongly tethered to the theory (or tradition) of Just War. According

to the Just War tradition, which is used primarily following a conflict to judge whether or not the

conflict was entered into and pursued in a just manner, one of the primary requirements for any

conflict to be just is that it was recognised by a legitimate authority.54 In the international

community, there are few entities that have the authority to judge whether or not a state has been

pursuing a just war (or conflict), or to decide whether a conflict has been just all things considered.

It could be argued that the United Nations is the only such authority, particularly considering the

Chapter VII requirements of notice to the Security Council, accession to Security Council directives,

and the Charter of the United Nations giving the Security Council the authority to preside over

52 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 40–45. Note that legitimate authority is a contentious space, as is audience perception of where and with whom legitimate authority lies. I discuss actors with legitimate authority further in the next section as well as in chapter three, and the importance of audiences to CCI is considered throughout chapters four-six. 53 Buzan, Waever, and de Wilde, 40–45. 54 Howard M. Hensel, “Christian Belief and Western Just War Thought,” in The Prism of Just War: Asian and Western Perspectives on the Legitimate Use of Military Force, ed. Howard M. Hensel, Justice, International Law and Global Security (Farnham: Ashgate Publishing Limited, 2010), 42–44; James Turner Johnson, “Thinking Morally about War in the Middle Ages and Today,” in Ethics, Nationalism, and Just War: Medieval and Contemporary Perspectives, ed. Henrik Syse and Gregory M. Reichberg (Washington, D.C.: The Catholic University of America Press, 2007), 4–5.

Courteney O’Connor – PhD Thesis 2021


the relative ‘justness’ of international conflict.55 Nothing in the Charter prevents the use of force

in self-defence, but the Charter and in fact the Just War tradition have in place certain elements,

or variables, that must be considered and/or employed when entering into conflict. While it is not

within the scope of this thesis to examine the tenets of the Just War tradition, the concept of

legitimate authority has bled into many contemporary theories of warfare and of security, and as

such has a special place in the analytic toolbox of any political scientist. Cecile Fabre considers

legitimate authority a crucial element of the Just War tradition, and defines legitimate authority as

applying “to the right to wage war”; as in, which actors have the right to declare and make war

upon another party.56

In terms of the securitisation framework, both traditional and as offered in this thesis, a type of

legitimate authority is required in order to properly securitise a political concern into a security

threat. I do not intend to offer a definition of legitimate authority; it is not within the scope of this

research. However, for an actor to undertake the securitising process, it must be acknowledged

that they have the authority to do so; that is to say, if an actor is to speak security to a relevant

audience then that audience must in the first instance believe that they have a credible right to do

so. The Prime Minister of Canada, for example, is easily accepted as a securitising actor because

the security of the state is well within his jurisdiction. The second assistant to the junior

undersecretary of education, while fulfilling an important function within the Ministry of

Education, is not likely to be accepted as a legitimate authority in a securitising process because

the security of the state is not within his jurisdiction. That is not to say that the second assistant

cannot securitise an issue within his jurisdiction; simply that legitimate authority in any situation is

contextually relevant, and particularly at the state level must be commensurate with the nature and

danger level of the perceived security concern. In this respect, whether or not the securitising actor

is considered to have legitimate authority is also dependent on how that actor is perceived by the

relevant audience before which he performs his securitising move(s). In short, the securitising

actor must have the authority to legitimately securitise an issue before the relevant audience.

55 Alex Conte, Security in the 21st Century : The United Nations, Afghanistan and Iraq (New York: Routledge, 2006), 13–15; 127–28; 187–89, https://doi.org/10.4324/9781351149563. 56 Cecile Fabre, “Cosmopolitanism, Just War Theory and Legitimate Authority,” International Affairs 84, no. 5 (September 1, 2008): 963, https://doi.org/10.1111/j.1468-2346.2008.00749.x.


Relevant Audience

The relevant audience comprises the parties before whom the securitising actor must engage in

the performative act of securitising a given threat.57 In democratic states, the relevant audience

more often than not is the general public, the support of which is necessary for any extraordinary

measure to be undertaken successfully. This is, essentially, a performance: the securitising actor

needs to convince the audience that a threat exists; that the aforementioned threat is existential in

nature; that the threat requires extraordinary measures to mitigate; and that the immediate and

pressing nature of the threat, i.e., the extreme necessity, justifies whatever additional powers are

being requested by the securitising actor. In a non-democratic regime, the relevant audience need

only amount to whatever political or military leaders there may be. In an authoritarian regime, the

relevant audience may only amount to the securitising actor alone. The nature of the relevant

audience is subject to the nature of the political system, and also to the nature of the threat and

the referent object.

The relative success an actor will enjoy when convincing the relevant audience of a threat will also

depend heavily on the “psycho-cultural orientation of the audience.”58 The historical relationship

between the attitude of the audience and a certain referent object, in addition to the historical

record of agreement with extraordinary measures or powers, will also play a part in the (expected)

success of a securitising move. An interesting point made by Myriam Dunn Cavelty is the existence

of what she calls ‘linguistic reservoirs’.59 Essentially, these reservoirs are stock phrases, specific

vocabulary and threat images that invoke a particular sentiment in the target audience. When

considering securitisation, or just security in general, there are a number of identifiable words and

phrases that are likely to add weight to your statements and also draw audience support or consent

for your position. Some are fairly explicit; invoking “national security”, for example, will quickly

draw attention to the issue you are attempting to securitise. When referring to the dangers of

cyberspace, hinting at or outright mentioning the possibility of a “cyber Pearl Harbour” or “digital

Pearl Harbour” (in the US particularly) immediately implies a threat to peace and security.60 Linking

a painful cultural and/or historical event with a contemporary threat allows (encourages) the

audience to draw parallels between the two, and if done correctly will attach similar sentiments to

57 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 40–42; Stritzel, Security in Translation: Securitization Theory and the Localization of Threat, 13, 22. 58 Balzacq, “The Three Faces of Securitization,” 172. 59 Dunn Cavelty, “From Cyber-Bombs to Political Fallout,” 108–10. 60 Elisabeth Bumiller and Thom Shanker, “Panetta Warns of Dire Threat of Cyberattack on U.S.,” The New York Times, October 12, 2012, sec. World, https://www.nytimes.com/2012/10/12/world/panetta-warns-of-dire-threat-of-cyberattack.html; Robert Windrem, “Pentagon and Hackers in ‘Cyberwar,’” ZDNet, March 5, 1999, https://www.zdnet.com/article/pentagon-and-hackers-in-cyberwar/.


the current security threat as are attached to the historic event. Using a culturally sensitive event

to perpetuate threat perceptions using existing words and phrases from a linguistic reservoir lends

a gravity to the securitisation that, all things considered, it may not otherwise obtain short of

immediate and overwhelming disaster.61

The relationship between the audience and the securitisation is not complete upon the approval

(or rejection) of the claim to extraordinary measures. As Thierry Balzacq notes,62 audience consent

is inextricably linked to the legitimacy of any security policy or practice, and just as that consent is

provided, it may also be revoked at any time. This would render said policy or practice less

legitimate, or illegitimate altogether. According to Balzacq, the sum legitimacy of an actor or policy

is comprised of three elements: its legality, its justification, and the aforementioned (ongoing)

consent of the audience.63 Concerning the link between legitimacy and legality, Balzacq notes that

security practices are in part legitimised by conforming, or appearing to conform, to the legal rules

of a given political system.64 In line with claims of legality of a practice, the success of any actor’s

justificatory claim depends, to a large extent, on that actor’s ability to persuade the audience that

security practices contribute in a credible way to the achievement of society’s values.65

Depending on the persuasiveness of the justificatory claim and the perceived legality of a suggested

security practice or policy, the relevant audience will then either give or deny consent to

extraordinary measures. Consent is a necessary condition for legitimacy; it introduces duties (upon

the audience) to comply with the policies developed by the securitising actor, who by virtue of that

consent has the right to pursue that policy or practice under the general expectation that their (the

audience’s) security will be increased as a direct result of said policy or practice.66

Following the implementation of extraordinary measures, audience consent is continuously

required for those measures, or they could be subject to three possible ends: (a) a deficit of

legitimacy; (b) the de-legitimation of the practice or policy; or (c) the ultimate illegitimacy of the

61 Linguistic reservoirs are by no means limited to those who speak security, though for the purposes of this thesis are relatively simple to identify. Beyond words and phrases explicitly associated with the securitizing process, it is also important to take into account the growing societal dependence on, relationship with, and interactions through images. Images with a particular cultural resonance, such as those taken of the burning Twin Towers, can also aid a securitization. See Figure 4 - Photo: Robert Clark, Associated Press, 2001 62 Balzacq, “Legitimacy and the ‘Logic’ of Security,” 6–8; Balzacq, “Securitization Theory,” 346. 63 Balzacq, “Legitimacy and the ‘Logic’ of Security,” 6–8; Balzacq, “Securitization Theory,” 344. 64 Balzacq, “Securitization Theory,” 344–45. 65 Balzacq, “Legitimacy and the ‘Logic’ of Security,” 6; Balzacq, “Securitization Theory,” 345–46. 66 Balzacq, “Legitimacy and the ‘Logic’ of Security,” 7; Balzacq, “Securitization Theory,” 346.


practice or policy.67 A deficit of legitimacy occurs upon the weakening, among the audience, of the

shared beliefs that initially justified the extraordinary measures. De-legitimation occurs when there

has been a demonstrable erosion of the consent that sustains extraordinary measures, and a

practice or policy will be found illegitimate if audience consent is totally and demonstrably

withdrawn, or if it is ultimately decided that its pursuit abrogates the legal order of a given political

system. It should be noted that while a policy or practice may be properly legal in a domestic sense,

it may be found to abrogate international law or vice versa, and thus be considered illegitimate.68

There are several questions that must be considered when examining a securitising process, and

CST as a whole. When considering the legitimacy of a securitising process in particular, it is

important to question whether the legitimacy of a security policy or practice depends on the

individual legitimacy of the constitutive elements (i.e., the securitising actor; the referent object;

the justification, etc.). More specifically, does the legitimacy of the overall action plan override the

potentially illegitimate nature of one or more of any of these elements? Further, is there a measure

for the level of consent that is required for a policy or practice (extraordinary measures), and thus

a securitisation process, to be legitimate (and successful)? While the notion of legitimacy may seem

a uniquely academic concern, the perceived legitimacy of any party trying to induce a particular

action or effect will always affect the audience’s response. In terms of a securitisation, it must be

acknowledged that for a policy-maker or government representative to successfully securitise an

issue, they must first convince the audience they have the legitimate right to do so.69 Whilst it is beyond

the scope of this thesis to fully explore each of these questions, they nonetheless represent an

influence on the design and application of securitisation theory in any form.

As will be discussed in later sections of this thesis, the audience is crucial to the understanding and

implementation of CCI. Cyberspace multiplies the attack surface that is vulnerable to hostile

actors, and therefore the surface that the state needs to secure. Individuals participate en masse in

cyberspace, and so represent a threat vector (if exploited for adversarial gain) and a vulnerability.

If a significant portion of the population fails to engage in CCI and security practices, the aggregate

security of the state is likely to be negatively impacted. The audience must therefore be convinced

67 Balzacq, “Legitimacy and the ‘Logic’ of Security,” 8; Holger Stritzel and Sean C Chang, “Securitization and Counter-Securitization in Afghanistan,” Security Dialogue 46, no. 6 (December 1, 2015): 552–53, https://doi.org/10.1177/0967010615588725. 68 Balzacq, “Legitimacy and the ‘Logic’ of Security,” 8. 69 For example, a president or prime minister would presumably have a greater legitimate authority to speak security than a deputy undersecretary for employment or transport. While these portfolios are certainly a matter of national interest, it is unlikely that they will present an existential threat to the state such that, if left unmitigated, the state may cease to exist in its current form.


of the necessity of engaging in these practices. In order to achieve this, the state must recognise

the audience as a vulnerability and design risk management measures; this will be discussed in depth in chapter six.

The Sectors of Security

One of the major intentions of CST is to widen the field of security studies beyond the traditional

focus on the military sector, and the traditional referent object of the state.70 Securitisation theory

recognises five key sectors on which security studies should focus, illustrated in Figure 3 - The

Sectors of Security: the military sector; the environmental sector; the economic sector; the

societal sector; and the political sector.71 While these sectors can be analytically separate fields, CST

recognises that security concerns can have an impact on or be influenced by multiple sectors

simultaneously;72 consequently, security issues are not and should not be viewed as sector-

exclusive. The following sections will address each of these security sectors separately, and then

consider the effects of cyberspace as a potential sixth sector, or as a functional pathway between

traditional sectors.73

70 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 1–5. 71 Buzan, Waever, and de Wilde, 7–8; 49–162. 72 Buzan, Waever, and de Wilde, 8. 73 Sectoral analysis is not unique to Copenhagen securitisation theory – it also features heavily in regional security complex theory (RSCT). An in-depth analytic comparison is beyond the scope of this thesis, but for scholarship around RSCT see Barry Buzan, “The Logic of Regional Security in the Post-Cold War World,” in The New Regionalism and the Future of Security and Development: Volume 4, ed. Björn Hettne, András Inotai, and Osvaldo Sunkel, The New Regionalism (London: Palgrave Macmillan UK, 2000), 1–25, https://doi.org/10.1007/978-1-137-11498-3_1; Buzan, “Regional Security Complex Theory in the Post-Cold War World”; David A. Lake and Patrick M. Morgan, Regional Orders: Building Security in a New World (University Park: Penn State Press, 2010); Lubna Sunawar, “Regional Security Complex Theory: A Case Study of Afghanistan-Testing the Alliance,” Journal of Security and Strategic Analyses 4, no. 2 (Winter 2018): 53–78.

Figure 3 - The Sectors of Security: military; environmental; economic; societal; political; cyber


Military

The military sector, as the traditional focus of the security studies discipline, is the most widely

researched of the identified sectors in relation to security issues. It is also the most likely sector in

which the securitisation process can be considered highly institutionalised.74 Given the prevailing

conditions of anarchy in modern international history,75 the focus on military security and the

likelihood of military response to perceived threats has been high. With the increasing level of

globalisation and diversification of interdependent networks, it has been argued that it is time to

move beyond singularly militaristic views of security, both in terms of threat conceptions and

threat response.76 Despite critics’ claims that securitisation scholars’ focus remains, by and large,

on the military sector, securitisation has contributed to the broadening of the academic field.77

The sovereign state remains the most important referent object in the military sector, though not

the sole one.78 Generally speaking, the primary securitising actors are the political or military elites

of a state,79 though changing social structures may see the transformation of this presumption.

Military security agendas can usually be linked, in one form or another, to issues of state

sovereignty and the associated use of force to which the state has sole legitimate claim.80 According

to Buzan, Waever and de Wilde,81 “(m)ilitary security matters arise primarily out of the internal and

external processes by which human communities establish and maintain (or fail to maintain)

machineries of government.” While traditionally military security matters concerned the use of

force internally and/or external to the state, non-state actors are also a concern, as are rival

ideologies and issues such as (the control of) mass migration. This has consequences for our

understanding of threats to sovereignty, but also for the function of the military apparatus

in the modern state. If the military is increasingly required to perform policing duties, or police

74 Buzan, Jones, and Little, The Logic of Anarchy: Neorealism to Structural Realism, 49. 75 Anarchy in the political sense refers to an international system of States for which there is no governing authority. As such, each State must constantly act for its own self-interest and security. According to Kenneth Waltz, “with many sovereign states, with no system of law enforceable among them, with each state judging its grievances and ambitions according to the dictates of its own reason or desire – conflict, sometimes leading to war, is bound to occur.” Kenneth N. Waltz, Man, the State, and War: A Theoretical Analysis, 2nd ed. (New York: Columbia University Press, 2001), 159. Consequently, “in anarchy there is no automatic harmony…A state will use force to attain its goals if, after assessing the prospects for success, it values those goals more than it values the pleasures of peace. Because each state is the final judge of its own course, any state may at any time use force to implement its policies. Because any state may at any time use force, all states must constantly be ready either to counter force with force or to pay the cost of weakness.” Waltz, 160. 76 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 3–5. 77 Balzacq, Léonard, and Ruzicka, “‘Securitization’ Revisited”; Nyman, “Securitization Theory,” 59–61; Williams, “Words, Images, Enemies.” 78 The referent object of a particular sector or securitizing move is whatever object has been designated in need of protection. Referent objects will be further examined in a later section of this thesis. 79 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 40. 80 Buzan, Waever, and de Wilde, 49–70. 81 Buzan, Waever, and de Wilde, 50.


forces are required to act in an increasingly militarised fashion, then both bodies have undergone

and/or will undergo a paradigm shift in terms of the relationship between civil society and armed

forces.82

Environmental

Concern surrounding the environmental sector is comparatively new in both academic and

political discourse. Those who would securitise the environment correlate a decline in environmental health and stability with a corresponding decline for the human race.83 More frequently in

recent years, pro-environment actors have succeeded in bringing issues concerning environmental

degradation to the global stage.84 What these actors have not succeeded in doing, however, is

convincing their audiences (not just the public, but policymakers in this instance) of the gravity of

the threat of environmental degradation: they have been unable to fully legitimise the claim of

existential threat that is required of a securitisation. Given the history of attempted securitisation

though, and the lack of immediacy that might define an existential threat, there are grounds for a

claim of threat elevation in environmental issues. Threat elevation analysis will be articulated and

developed in chapter three.

Actors who would securitise the environment vary widely: state leaders, other politicians, activist

groups and influential celebrities have all contributed to the growing mediatisation of

82 See articles on the transfer of military technology and weapons to police forces in the US; see articles on the use of military forces in domestic counterterrorism in Europe. Charles J. Dunlap, Jr., “The Police-Ization of the Military,” Journal of Political and Military Sociology 27 (1999): 217–32; Timothy Edmunds, “What Are Armed Forces For? The Changing Nature of Military Roles in Europe,” International Affairs (Royal Institute of International Affairs 1944-) 82, no. 6 (2006): 1059–75; Human Rights Watch, “Transfer of Military Hardware to Police Could Lead to Abuses,” Human Rights Watch, August 30, 2017, https://www.hrw.org/news/2017/08/30/transfer-military-hardware-police-could-lead-abuses; Nathaniel Lee, “How Police Militarization Became an over $5 Billion Business Coveted by the Defense Industry,” CNBC, July 9, 2020, https://www.cnbc.com/2020/07/09/why-police-pay-nothing-for-military-equipment.html; Craig E. Merutka, “Use of the Armed Forces for Domestic Law Enforcement:” (Fort Belvoir, VA: Defense Technical Information Center, March 1, 2013), https://doi.org/10.21236/ADA589451; Tom Nolan, “Militarization Has Fostered a Policing Culture That Sets up Protesters as ‘the Enemy,’” The Conversation, June 2, 2020, http://theconversation.com/militarization-has-fostered-a-policing-culture-that-sets-up-protesters-as-the-enemy-139727; K. Jack Riley and Aaron C. Davenport, “How to Reform Military Gear Transfers to Police,” RAND Corporation, July 13, 2020, https://www.rand.org/blog/2020/07/how-to-reform-military-gear-transfers-to-police.html; Hugh Smith, “The Use of Armed Forces in Law Enforcement: Legal, Constitutional and Political Issues in Australia,” Australian Journal of Political Science 33, no. 2 (July 1998): 219–33, https://doi.org/10.1080/10361149850624. 83 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 79–80. 
84 Jon Barnett, The Meaning of Environmental Security: Ecological Politics and Policy in the New Security Era (London: Zed Books, 2001); Niloy Biswas, “Is the Environment a Security Threat? Environmental Security beyond Securitization,” International Affairs Review 20, no. 1 (2011): 1–22; David Goldberg, “Lessons from Standing Rock - Of Water, Racism, and Solidarity,” The New England Journal of Medicine 384, no. 14 (2017): 1403–5; Temryss MacLean Lane, “The Frontline of Refusal: Indigenous Women Warriors of Standing Rock,” International Journal of Qualitative Studies in Education 31, no. 3 (March 16, 2018): 197–214, https://doi.org/10.1080/09518398.2017.1401151; Norman Myers, “Environment and Security,” Foreign Policy, no. 74 (1989): 23–41, https://doi.org/10.2307/1148850.


environmental issues.85 It could also be argued that this securitisation has been met with some

success; accords and environmental protection treaties have been struck between states, a recent

example being the activation of the Paris Accords on November 4, 2016 following ratification by

the European Union (EU), Canada, Nepal, and India.86 These ratifications brought the Paris

Accords over the required threshold for activation, and aim to make certain requirements of states

in order to mitigate environmental damage.87 Certain arguments have been made that, in the

absence of imminent threat (meteor, super-quake, etc.) to the human race that the environment

has been ‘riskified’ rather than securitised,88 and this involves risk management rather than

reversion to extraordinary measures. While this is a coherent argument, a number of scientific

experts are (rather stridently) confirming that we as a race are tipping the global environmental

balance past what the Earth can conceivably support or recover from,89 and in this instance a

securitisation may be a better-supported process than a riskification. It could be argued that the

actions taken by individuals, organised groups and states over the last twenty years in response to

environmental concerns were the requisite period of riskification of a perceived threat, and that

we have now (necessarily) moved on to the subsequent securitising process.

When an issue is riskified rather than securitised, it is framed as less than existentially threatening;

the issue is positioned as a risk that should/can be managed within a certain framework. By nature,

85 Molly Beauchemin, “17 Celebrities Who Actively Work to Protect the Environment,” Garden Collage Magazine, November 8, 2017, https://gardencollage.com/change/climate-change/celebrities-care-environment-want-know/; Wane Buente and Chamil Rathnayake, “#WeAreMaunaKea: Celebrity Involvement in a Protest Movement,” in IConference 2016 Proceedings (iConference 2016: Partnership with Society, Philadelphia, USA: iSchools, 2016), https://doi.org/10.9776/16311; Geoffrey Craig, “Celebrities and Environmental Activism,” in Media, Sustainability and Everyday Life, by Geoffrey Craig, Palgrave Studies in Media and Environmental Communication (London: Palgrave Macmillan UK, 2019), 135–63, https://doi.org/10.1057/978-1-137-53469-9_6; Julie Doyle, Nathan Farrell, and Michael K. Goodman, “Celebrities and Climate Change,” Oxford Research Encyclopedia of Climate Science, September 26, 2017, https://doi.org/10.1093/acrefore/9780190228620.013.596; Ranker, “22 Celebs Who Are Saving the Environment,” Ranker, May 31, 2019, https://www.ranker.com/list/celebrity-environmentalists/celebrity-lists. 86 United Nations, “Paris Agreement - Status of Ratification,” United Nations Climate Change, 2016, https://unfccc.int/process/the-paris-agreement/status-of-ratification. 87 Oliver Milman, “Paris Climate Deal a ‘turning Point’ in Global Warming Fight, Obama Says,” the Guardian, October 5, 2016, http://www.theguardian.com/environment/2016/oct/05/obama-paris-climate-deal-ratification. 88 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 79–80; Corry, “Securitisation and ‘Riskification.’” 89 Stephen Leahy, “Climate Change Driving Entire Planet to Dangerous ’global Tipping Point‘,” Science, November 27, 2019, https://www.nationalgeographic.com/science/article/earth-tipping-point; Timothy M. Lenton et al., “Climate Tipping Points — Too Risky to Bet Against,” Nature 575, no. 
7784 (November 2019): 592–95, https://doi.org/10.1038/d41586-019-03595-0; Michael Marshall, “The Tipping Points at the Heart of the Climate Crisis,” the Guardian, September 19, 2020, http://www.theguardian.com/science/2020/sep/19/the-tipping-points-at-the-heart-of-the-climate-crisis; Oliver Milman, “James Hansen, Father of Climate Change Awareness, Calls Paris Talks ‘a Fraud,’” the Guardian, December 12, 2015, http://www.theguardian.com/environment/2015/dec/12/james-hansen-climate-change-paris-talks-fraud; Fred Pearce, “As Climate Change Worsens, A Cascade of Tipping Points Looms,” Yale E360, December 5, 2019, https://e360.yale.edu/features/as-climate-changes-worsens-a-cascade-of-tipping-points-looms.


a risk has the potential to become a threat but if managed properly, that potential can be mitigated.

By riskifying an issue rather than securitising it, actors can reduce the negative public impact

sometimes associated with the use of extraordinary measures, and avoid the necessity of those

extraordinary measures that threat mitigation requires in an existential crisis. Given the socially

accepted permanency of the environment and the lack of immediate perceivable threat to humanity

emanating from environmental degradation, a securitisation may not be possible. This is not

to say that securitisation should not take place; merely that there may not be the political will to

expend the social, political, and economic capital required to securitise the environment. The

context is not extraordinary, and so does not admit the special circumstances that would justify extra powers of threat mitigation. In the event that the political context of environmental degradation

rises above and out of the ordinary, then arguments could be made for a securitising process with

a reasonable chance of success.

Regardless of classification, the environmental sector is considered a crucial field of security

studies.90 Given recent political practice, and the dangers of an unstable environment, this is rather

pragmatic. The environmental sector is also one of the broadest sectors, composed of both a

political and a scientific agenda that span a diverse array of security concerns, including: the

disruption or degradation of ecosystems; energy concerns; population growth; food production;

economic concerns; and civil strife.91 These factors also contribute to the outbreak and control of global disease, whether epidemic or pandemic, which, if sufficiently severe, would gravely threaten the security of a state.

Environmental concerns are also, arguably, among the most difficult to solve given that solutions

must, by their nature, force a change in human behaviour on a global scale. Buzan, Waever and de

Wilde also point out that the environment can only be securitised insofar as the

human race (or some facet of human organisation/community) is the referent object of the

existential threat.92 Short of a literal planet-destroying threat, what takes place in the crust of the

Earth is unlikely to endanger the existence of the planet itself. As such, for a successful

environmental securitisation to occur, we as a race must be existentially (and near-immediately)

threatened.

90 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 71–94. 91 Buzan, Waever, and de Wilde, 73–75. 92 Buzan, Waever, and de Wilde, 76.

Chapter 2 – Contextualising CCI


Economic

The third security sector identified by CST is that of the economy. Economic security is a

controversial area of security studies, given the inherently insecure nature of the current

international economic system (liberal capitalism), and the heavy politicisation of economic

issues.93 Within the current system, Buzan, Waever and de Wilde identify instability and inequality

as being the foci of economic security agendas.94 As the international economy is susceptible to

systemic crises, such as the 2008 Global Financial Crisis (GFC),95 the economic sector is

securitisable under the right conditions. Five issues are identified in Security: A New Framework for

Analysis96 as belonging to the economic security agenda:

a) The ability of an economy to support independent military mobilisation should the

need arise

b) Resistance to/ability to absorb market instability for resources such as oil

(economic dependencies)

c) Whether or not existing conditions will increase global inequality

d) The pressures of industrialisation on both international trade and the environment

e) International economic crisis in the case of insufficient political leadership,

protectionist trade policies, and/or an unstable global financial system.

Stable conditions in the international system are thus crucial to economic security, but also something that is inherently impossible to either predict or ensure. States can (and do) take certain measures intended to support international economic stability and security, but history has shown true stability to be improbable. For example, the increasing automation of global financial and stock markets adds a further margin of risk to economic stability – algorithms can trade at unbelievable speed and, without oversight, can cause enormous losses in mere seconds.97

As with all identified security sectors, the specific nature of an existential economic threat depends

entirely on the referent object in question. Generally speaking, while individuals and firms can

pose economic security threats at sub-state levels, the state has the greatest capacity for economic

securitisation. An existential economic threat to the state will have trickle-down effects for all

93 Buzan, Waever, and de Wilde, 95–118. 94 Buzan, Waever, and de Wilde, 95–118. 95 Reserve Bank of Australia, “The Global Financial Crisis | Explainer | Education,” Reserve Bank of Australia, April 17, 2019, Australia, https://www.rba.gov.au/education/resources/explainers/the-global-financial-crisis.html. 96 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 98. 97 Corporate Finance Institute, “Flash Crashes - Overview, Causes, and Past Examples,” Corporate Finance Institute, 2021, https://corporatefinanceinstitute.com/resources/knowledge/finance/flash-crashes/; Ben Hammersley, Approaching the Future: 64 Things You Need to Know Now for Then (Soft Skull Press, 2013), 151–52; 344–45.


actors within that state, and very likely on all states with whom the threatened party has a

relationship.98 However, modern states, for the most part, are peculiarly immune to threats that

would be existential to other actors in the economic sector. The continued existence of certain

states (particularly developed ones that participate fully in the international economic and political

systems) is in the interest of the existing international system.99 As seen with Greece in recent

years, it is highly unlikely that a modern economy in the current international system will be allowed

to fail.100 Given the inherent difficulties and diverse concerns in the economic sector, economic

securitisation is a challenging proposition.

Societal

The societal sector is arguably the sector furthest removed from traditional security studies. The

societal sector primarily concerns issues of identity. Communities, or societies, form and maintain

identities removed from their membership of a particular state.101 In certain communities, societal

boundaries are not coterminous with state boundaries, and so the societal security of one

community may involve many states and jurisdictions. One example of this situation would be the

Kurdish society, covering territory in several Middle Eastern states.102 Another example would be

the Palestinian nation.103 Buzan, Waever and de Wilde point out the importance of differentiating

between social security concerns and societal security concerns.104 Where social security refers

primarily to the individual and their economic concerns, societal security is “about collectives and

their identity.”105 There are ambiguities surrounding terms such as ‘societal’, and the associated

concept of a ‘nation’. For Buzan, Waever and de Wilde, this can be ascribed to the peculiarly self-constructed nature of such communities, which derive their existence from the self-association of individuals with an

imagined collective.106

98 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 104. 99 Failed or failing States, or States experiencing significant civil unrest can have an economic impact on neighbouring States, the general region, and the international system at large. Because the economy is tied to political and social security, an unstable or deficient economy can produce further security concerns such as piracy; kidnapping; and general disorder as individuals begin to act in a self-interested and survivalist manner. 100 Council on Foreign Relations, “Greece’s Debt Crisis Timeline,” Council on Foreign Relations, 2021, https://www.cfr.org/timeline/greeces-debt-crisis-timeline. 101 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 119–21. 102 Buzan, Waever, and de Wilde, 132–33; The Kurdish Project, “Where Is Kurdistan?,” The Kurdish Project (blog), 2021, https://thekurdishproject.org/kurdistan-map/. 103 Technically, the Palestinians have specified territory but it is not a state and the Palestinians themselves are widespread. United Nations, “History of the Question of Palestine,” Question of Palestine (blog), March 19, 2020, https://www.un.org/unispal/history/. 104 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 119–20. 105 Buzan, Waever, and de Wilde, 120. 106 Buzan, Waever, and de Wilde, 120–21.


Buzan, Waever and de Wilde identify three primary areas of concern to the societal security agenda:

migration; horizontal competition; and vertical competition.107 By the migration concern, they mean the cultural transformation of the receiving community as a result of

migration. Thus, identity shifts are a direct threat to extant societal identity. Horizontal competition is

herein understood as the cultural transformation of one society or community due to direct

influence of a proximate, dominant culture. In contrast, vertical competition refers to the way an

individual may view their own cultural identity due to either a centripetal or centrifugal cultural

shift; that is, whether they perceive a wider or narrower societal identity.108

In addition to these three primary societal concerns, depopulation is a possible fourth one as it

contributes to a loss of perceived identity. However, unless the depopulation threatens the

breakdown of society at large (as in ethnic cleansing), issues created by depopulation

should fall within the bounds of the three overarching issues identified above.109 In the case of

ethnic cleansing, a minimum of three sectors are involved: military, societal, and political.

Political

The fifth and final sector examined by CST is the political one, which concerns “the organisational

stability of social order(s).”110 In addition to non-military threats to political units, system-level

concerns are also possible referent objects in the political sector; Buzan et al. offer international

law as a political referent object, for example.111 The political sector is also where (normative)

principles such as human rights, or potential human rights, can be securitised.112 Due to the broad

and diverse nature of what can be considered a political threat, and admitting that all security

concerns are political to some degree, the political sector security agenda is paradoxical and

problematic to deal with. To avoid incoherence, security threats that involve “military,

identificational, economic or environmental means are necessarily excluded from this sector.”113

107 Buzan, Waever, and de Wilde, 121. 108 Buzan, Waever, and de Wilde, 121. 109 Buzan, Waever, and de Wilde, 121. 110 Buzan, Waever, and de Wilde, 121. 111 Buzan, Waever, and de Wilde, 121. 112 While some human rights are considered a settled matter both normatively and legally (those enshrined in the Universal Declaration of Human Rights (UDHR)), such as freedom of religion or the right to necessities of life, human rights issues continue to develop. For example, Estonia has declared Internet access to be an unassailable human right. This is not a widespread view, but the fact remains that it is a recent development in the human rights dialogue. And in an increasingly digital-dependent, technologised society, it is possible that the right of access to the Internet will be adopted in other States; Estonia one day may be considered the vanguard of a digital rights movement. M. Lesk, “The New Front Line: Estonia under Cyberassault,” IEEE Security Privacy 5, no. 4 (July 2007): 76–79, https://doi.org/10.1109/MSP.2007.98; Visit Estonia, “E-Estonia | Why Estonia?,” Visitestonia.com, 2021, https://www.visitestonia.com/en/why-estonia/estonia-is-a-digital-society. 113 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 121.


As such, the security agenda of the political sector is composed of those issues which are

predominantly political in character.114

Political threats are those seen to disrupt or undermine the organisational stability of the state. That

is to say, a political threat may involve something like a secessionist movement, which would

destabilise the existing state. It could be the targeting of a particular policy, or the total overthrow

of an existing political regime.115 According to Buzan et al.,116 “political threats are about giving or

denying recognition, support, or legitimacy.”117 These threats concern not just political actors such

as states, but also the institutions and processes involved in politics, assuming politics to be “the

shaping of human behaviour for the purpose of governing large groups of people”.118

In addition to the modern territorial/sovereign state, three further referent objects of the political

sector are identified as quasi-super states (for example, the EU); organised societal groups (such

as tribes); and transnational movements (such as the major religions, or ideological movements

like communism).119 Due to the organised and often hierarchical nature that such actors

characteristically possess, the securitising actors are also usually well-defined. Each of the

aforementioned actors has leaders, institutional processes, or other authorised representatives

that may speak security on behalf of the collective.

The Cyber Sector

Having identified the traditional sectors across which CST analyses security, I now briefly consider

the cyber sector – is this a new sector of security, further widening the analytical field as Buzan et

al. intended? Or is cyberspace simply a pathway that connects the traditional sectors of security in

new or faster ways, introducing a variety of security issues which are nonetheless bounded in

traditional fields? This is an area which the current thesis is not specifically designed to address;

however, in examining the ways in which cyberspace is elevated as a threat to security, it may be

114 Buzan, Waever, and de Wilde, 142. 115 Buzan, Waever, and de Wilde, 142. 116 Buzan, Waever, and de Wilde, 142. 117 One example in which both provision and acceptance of legitimacy posing a political threat is the case of Taiwan. By refusing to recognise the political independence and legitimacy of Taiwan, the People’s Republic of China (PRC) poses a political threat to the governing authorities of Taiwan. Any recognition of political independence or legitimacy of Taiwan by third parties is perceived as a political threat to the PRC, as it views Taiwan to be a rightful territory of China. This particular political threat is highly securitisable; there are societal, economic, and military implications to the securitisation of such a politically charged issue as Taiwan. It is proof positive, however, that perception and context play a significant role in any securitizing process, and that issues on the political agenda often have flow-on effects in additional sectors. 118 Jones & Little 1993, 35 in Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 145–46. 119 Buzan, Waever, and de Wilde, 145–46.


possible to ascertain whether or not it is in fact considered a new, sixth security sector, or if it is

simply seen by states as a vehicle for traditional threats. The view of cyberspace by

a modern state will be covered further in the examination of threat perception by the UK in chapter

four, but I would argue that cyberspace is neither simply a sixth domain nor merely a vehicle for threats to

previous, traditional domains. Instead, I view cyberspace as the network or system of pathways

that connect all five of the previously-considered sectors. Cyberspace can of course be used as a

vehicle for attack and exploitation, and is often referred to as the cyber domain. For the purposes

of this thesis, however, I consider the cyber sector to be both a unique sector of interaction and a ‘connective tissue’ connecting the five sectors considered by Buzan, Waever and de Wilde.120

Cyberspace depends on physical infrastructure, and affects all sectors of society. If removed,

society would undergo considerable upheaval and change, but the other five sectors would still

exist. It is a system, or system of systems, as well as a distinct sector. It is a connective overlay

through which risks and threats can be both elevated and attenuated, extending and boosting the

risk and threat potential of previously specified analytical sectors. Because cyberspace has this

signal-boosting capacity that can radically alter threat and risk relationships and calculations,

traditional securitisation theories are not well equipped to integrate cyberspace into their

analytical frameworks. I offer an alternative in chapter three.

Criticisms of Securitisation Theory

Securitisation theory has been subject to (occasionally strident) criticisms since Buzan, Waever and

de Wilde first offered CST. According to its detractors, securitisation theory suffers from a multitude of weaknesses. In my examination of the existing body of literature, several of these weaknesses appeared especially prevalent; I examine and address them in the following

sections. In an increasingly cyber-enabled international system, and in conditions where the

individual is an actor in the same space as the state (cyberspace), there are a number of elements

of CST that, in their current form, cannot explain the process of cyber ‘securitisation.’ Among

these criticised elements are the speech act; propaganda of the deed; the diffusion of legitimate

authority; the role of the international media and cyber-enabled communications; pre-existing

theoretical frameworks; post-Copenhagen theories; and framing.

120 The military; political; environmental; economic; and societal sectors.


Relevance of Speech Acts

The speech act as understood in CST is an illocutionary act; it is the performative act that initiates

the active process of securitising a threat.121 The illocutionary force of a speech act is

communicated by what an actor does, in saying something. Specifically, for the purpose of

traditional securitisation theory, in saying that a particular threat is existential in nature to an

identified referent object, that actor makes or actively constructs that issue as a threat. The speech act

is the discursive legitimation, the designation of existential threat. However, in the contemporary

digital era, the speech act as a vocalisation by a legitimate securitising actor may not hold the

primary importance it once did. I would argue that by returning to the base definition of discourse, it is possible to interpret the speech act requirement as extending beyond just the

vocalised statement. This reduces the applicability of criticisms calling into question the relevance

of CST in the digital era,122 allowing a wider interpretation of the securitising move as a whole. If,

for example, an official publication by a legitimate authority can be considered a discursive

legitimation of a securitising process, the reliance of CST on actual speeches is reduced. Discursive

legitimation, simply put, is the act of rendering something legitimate through a particular mode of

discourse.123 While discourse in popular understanding might be construed as a vocal

communication between parties, in fact it is defined as

a) a formal discussion of a topic in speech or writing; and

b) a connected series of utterances; text or conversation.124

Extrapolating further, Peoples & Vaughan-Williams state that “discourse commonly refers to

words, but can also include other data such as visual images, material objects, and social

institutions.”125 By accepting the validity of the communication of ideas and modes of thought

121 Hisham Abdulla offers a succinct explanation of the three variations of speech act: locutionary, illocutionary, and perlocutionary. The terms were originally explored by John Austin, still considered the authoritative work on the linguistic distinctions. “The locutionary act is the act of saying something with a certain sense and reference; the illocutionary act is the act performed in saying something, i.e., the act named and identified by the explicit performative verb. The perlocutionary act is the act performed by, or as a consequence of, saying something.” Austin, in Hisham Abdulla, “Locutionary, Illocutionary and Perlocutionary Acts Between Modern Linguistics and Traditional Arabic Linguistics,” Al-Mustansiriya Literary 55 (2012): 62. Accepting Judith Butler’s definition, Matt McDonald defines illocutionary acts as those performing a function at the moment of speech, and perlocutionary acts as those necessary for enabling particular actions. McDonald, “Securitization and the Construction of Security,” 572. 122 Balzacq, Léonard, and Ruzicka, “‘Securitization’ Revisited,” 510–17; McDonald, “Securitization and the Construction of Security,” 564–75; Williams, “Words, Images, Enemies,” 524–28. 123 Jens Steffek, “Discursive Legitimation in Environmental Governance | Elsevier Enhanced Reader,” Forest Policy and Economic 11 (2009): 314–15, https://doi.org/10.1016/j.forpol.2009.04.003; Eero Vaara, “Struggles over Legitimacy in the Eurozone Crisis: Discursive Legitimation Strategies and Their Ideological Underpinnings,” Discourse & Society 25, no. 4 (July 1, 2014): 502–3, https://doi.org/10.1177/0957926514536962. 124 Oxford Dictionary, “Discourse| Definition of Discourse by Oxford Dictionary,” Lexico Dictionaries | English, 2021, https://www.lexico.com/definition/discourse. 125 Peoples and Vaughan-Williams, “Introduction,” 6.


through media beyond vocalisation by a legitimate authority, including (tele)visual images, the

concept of the ‘speech act’ can be applied in a more contemporary manner. As an example of the

securitising power of images, I would argue that the broadcasted image of the Twin Towers,

billowing smoke just before they collapsed on September 11, 2001 performed just as successful a

discursive legitimation of extraordinary measures as a speech made in the aftermath – see Figure

4 - Photo: Robert Clark, Associated Press, 2001. By this interpretation, an authorised press release

or published statement from an authoritative figure (or organ of a particular collective) may also

act as a discursive legitimation. Accordingly, documents such as national security strategies, direct

political communications et cetera may be substituted for the traditional speech act.

Figure 4 - Photo: Robert Clark, Associated Press, 2001

Propaganda of the Deed

An image such as the Twin Towers billowing smoke immediately before falling serves an additional

purpose beyond a securitising image. It is also indicative of the securitising nature of propaganda

of the deed; the act of terrorism securitising an issue before a legitimate authority can do so. An

act of terror brings the motivations associated with that act to the forefront of the public attention

by way of instantaneous broadcast media, in a day and age where those broadcasts will reach tens

of millions of people in dozens of countries within minutes or even seconds of that terrorist act

occurring. While we as a people may have become inured to violence in the media, terrorism still


has the ability to shock. This shock, in addition to the brutality of terrorist acts and the deliberate

targeting of civilians for which terrorism is known, contributes to a sort of mass securitisation.126

The latter, one could argue, is the swiftest form of threat elevation. Rather than a political or

military leader having to convince a relevant audience (the public, in the case of liberal

democracies) of the necessity of extraordinary measures, after a significant act of terror, it is the

public who will elevate the perceived danger of a threat to the point at which extraordinary measures may be called for.

While this is not common (the events of September 11, 2001 being unique in many respects), I

would argue that according to the elevation variant of securitisation theory that I will offer in this

thesis, the securitisation remains valid despite the unusual process of elevation. In point of fact,

the use of terrorism as a political tool and the subsequent advantages provided by broadcast media

could be considered the instigating elements of a securitising process. There is nothing to suggest

that an actor may not undertake an action with the intent of utilising that action, or the

consequences thereof, to support a securitisation in the immediate aftermath of the act itself.

According to this interpretation, the attack on the Twin Towers by Osama bin Laden’s al-Qaeda

was the opening salvo in a securitisation of Western presence in the Middle East. The existence of

Western culture, persons and influence in territory traditionally under the influence of Islamic faith

came to be considered an existential threat.127 By way of broadcasts, meetings, and communications

passed verbally, through email and video, Osama bin Laden (the securitising actor) convinced the

relevant audience (those of sympathetic faith or feeling) that for the Muslim world to throw off the yoke of Western influence and emerge triumphant, the West must be attacked and brought

low.128 He successfully securitised the existence and the global reach of the US in particular, and

through extensive planning and training, utilised extraordinary measures to strike against the

perceived threat.129 In turn, the attack on the World Trade Centre in September of 2001 securitised,

almost instantaneously, both Osama bin Laden as an individual and al-Qaeda as an organisation

with hostile intentions against the Western world. Terrorism, as propaganda of the deed, was

employed as part of a securitisation. In addition to propaganda of the deed as a securitising event

or justification for a securitising move, black propaganda and disinformation campaigns are

contributing to the elevation of threat perception in various security sectors – this will be examined

closely in chapter six.

126 Bruce Hoffman, Inside Terrorism, 3rd ed., Columbia Studies in Terrorism and Irregular Warfare (New York: Columbia University Press, 2017), 2–5; 32–36. 127 Hoffman, 95. 128 Hoffman, 141–45. 129 Hoffman, 141–45.


Diffusion of Authority

In the past, it would have been reasonable to state that the only legitimate securitising actor would

have been a state or military leader, as the authoritative representative of their state. In modern

times, with the profusion of security threats to actors within the state as much as the state itself,

the number of actors who may legitimately speak security is growing. Beyond the ability of the

individual to speak security on their own behalf (somewhat subsumed by their social contract with

the state), those individuals can also speak security, when authorised, on behalf of various

collectives. While in itself this is not a particularly novel set of circumstances, it could be argued

that the security concerns of non-state actors are increasingly affecting the security concerns of

the state, in addition to receiving much attention for their own security status.

Non-governmental organisations (NGOs) such as Greenpeace and 350.org in the environmental

sector,130 and Médecins Sans Frontières (MSF) and similar organisations in the global health sector, are

speaking security on behalf of not just their (relatively) limited collective, but on an international

and even global scale.131 The various agencies within the United Nations (UN) framework also

speak security on an international to global scale,132 and because of their perceived authority and

social capital, these views are absorbed into multi- and transnational corporations such as Apple and Google. In turn, these corporations can also speak security, not simply regarding their own corporate concerns, but increasingly on behalf of the general public, including those who do not necessarily constitute

their customer base. One need only look at the conflict between the Apple Corporation and the

US Federal Bureau of Investigation (FBI) to see how these corporations are not only speaking

security, but how they are affecting security on a massive scale,133 with worldwide repercussions.

130 Greenpeace International, “Greenpeace International,” Greenpeace International, 2021, https://www.greenpeace.org/international; 350.org, “350.Org: A Global Campaign to Confront the Climate Crisis,” 350.org, 2021, https://350.org. 131 Médecins Sans Frontières, “Médecins Sans Frontières (MSF) International,” Médecins Sans Frontières (MSF) International, 2021, https://www.msf.org/. 132 UNAIDS, “UNAIDS | United Nations,” United Nations AIDS, 2021, https://www.unaids.org/en/Homepage; UNFPA, “UNFPA - United Nations Population Fund,” United Nations Population Fund, 2021, https://www.unfpa.org/; UNICEF, “UNICEF | United Nations,” United Nations Children’s Fund, 2021, https://www.unicef.org/; World Health Organization, “World Health Organization | United Nations,” United Nations World Health Organization, 2021, https://www.who.int. 133 Leander Kahney, “The FBI Wanted a Backdoor to the IPhone. Tim Cook Said No,” Wired, April 16, 2019, https://www.wired.com/story/the-time-tim-cook-stood-his-ground-against-fbi/; Arjun Kharpal, “Apple vs FBI: All You Need to Know,” CNBC, March 29, 2016, https://www.cnbc.com/2016/03/29/apple-vs-fbi-all-you-need-to-know.html; Jack Nicas and Katie Benner, “F.B.I. Asks Apple to Help Unlock Two IPhones,” The New York Times, January 7, 2020, sec. Technology, https://www.nytimes.com/2020/01/07/technology/apple-fbi-iphone-encryption.html.


With this diffusion of the capacity to speak security, the possibility arises of securitisation losing

intellectual coherence, as the number of actors within the framework grows. However,

securitisation is a process with a finite number of actors when examined in context and on a case-

by-case basis. As long as the parameters of the securitisation process are clearly designated and the

examined issue remains in focus, securitisation theory (particularly the elevation variant I offer in

this thesis) as an analytical framework remains applicable.

Role of the International Media and Cyber-enabled Communications

More so than in previous eras, the media is proving capable of not only supporting a securitising

move, but also enacting the securitising process and legitimating speech acts. The instantaneous broadcast of newsreaders and contemporaneous events has resulted in a greater public awareness of

national, regional, and international security concerns that previously may not have impacted the

public psyche.134 By broadcasting these images to a global audience, often live and therefore in

advance of any official communiqué, the international media are engaging in securitisation whether

knowingly or not. If, as I have argued here, widely broadcast images can be accepted as

discursive legitimation, then by that action news outlets have become securitising actors in and of

themselves. In a live-streaming culture like the one we now experience, whoever ‘breaks’ the news

has the opportunity to put the first ‘spin’ on whatever information (images or otherwise) they are

broadcasting.135 Thus, it is the media and not just political and social elites that are capable of either

maintaining the position of an issue within the bounds of normal politics, or elevating that issue

to a security threat by communicating that interpretation to a global audience. I refer again to the

historic image of the Twin Towers: in fact a still from what was a live, instantaneous global broadcast, one that immediately changed how national and international security, and its associated threats, were understood, and upended the assumptions that had underpinned global strategic defence frameworks.

134 Megan Le Masurier, “Slow Journalism in an Age of Forgetting,” Opinion, ABC Religion & Ethics (Australian Broadcasting Corporation, June 18, 2019), https://www.abc.net.au/religion/slow-journalism-in-an-age-of-forgetting/11221092. 135 Bob Burton, Inside Spin: The Dark Underbelly of the PR Industry (Crows Nest: Allen & Unwin, 2007), https://research.usc.edu.au/discovery/fulldisplay/alma999899902621/61USC_INST:ResearchRepository; Claes H. de Vreese and Matthijs Elenbaas, “Spin and Political Publicity: Effects on News Coverage and Public Opinion,” in Political Communication in Postmodern Democracy: Challenging the Primacy of Politics, ed. Kees Brants and Katrin Voltmer (London: Palgrave Macmillan UK, 2011), 75–91, https://doi.org/10.1057/9780230294783_5; Michelle Grattan, “The Politics of Spin,” Australian Studies in Journalism, 1998, 32–45; Aldo Antonio Schmitz and Francisco Jose Castilhos Karam, “The Spin Doctors of News Sources,” Brazilian Journalism Research 9, no. 1 (2013): 96–113.

Chapter 2 – Contextualising CCI


This role of the securitising actor (one among many, it must be noted) is representative of the

transformation of news media from the objective watchdogs they have historically been considered

to be.136 While a certain perspective or bias is inherent in all written works, traditionally, the

journalist endeavoured to communicate objective facts, so that the reader, listener or viewer may

form their own opinion on a foundation of well-researched objectivity.137 Increasingly, particularly

with the corporatisation and conglomeration of media outlets, certain publications and broadcasts

have a somewhat deserved reputation for framing their communications in a particular way (often

politically-motivated), used to sway or otherwise influence opinions rather than provide objective

facts.

With this transformation of the unbiased news agent into a political agent of change,138 the roles of securitising actor and conveyor of securitising acts have converged, drastically reducing the timeframe in which a securitisation process may occur while expanding the comparative size of the audience that performative act of discursive legitimation may reach. With live and near-instantaneous

broadcast on a global scale, discursive legitimation can be performed on that same scale, instigating

an international securitising process that by nature is much larger than any historic securitisations.

The rise of social media, enabled by cyberspace, has only served to hasten this process.

This not only regionalises and globalises security issues more rapidly than may have previously

been possible, but it will also become apparent almost immediately if there is a strong support for,

or opposition to, the securitisation by way of various media and social media outputs. The ever-

increasing reach and influence of the global media is an element that needs to be included in any

analysis of contemporary securitisation processes. One need only point to the 2016, 2018, and

2020 US elections to observe the effects of the media, to include social media platforms, on both

136 Sandrine Boudana, “A Definition of Journalistic Objectivity as a Performance,” Media, Culture & Society 33, no. 3 (April 1, 2011): 385–98, https://doi.org/10.1177/0163443710394899; Peter Galison, “The Journalist, the Scientist, and Objectivity,” in Objectivity in Science, ed. Flavia Padovani, Alan Richardson, and Jonathan Y. Tsou, vol. 310, Boston Studies in the Philosophy and History of Science (Cham: Springer International Publishing, 2015), https://doi.org/10.1007/978-3-319-14349-1; Einar Thorsen, “Journalistic Objectivity Redefined? Wikinews and the Neutral Point of View,” New Media & Society 10, no. 6 (December 1, 2008): 935–54, https://doi.org/10.1177/1461444808096252. 137 Boudana, “A Definition of Journalistic Objectivity as a Performance”; Galison, “The Journalist, the Scientist, and Objectivity”; Thorsen, “Journalistic Objectivity Redefined?” 138 I note that the role of the media is intended to be objective and unbiased, insofar as that is possible. It is important to clarify that many modern news sources are not objective or unbiased, and too few of those services clarify their biases. For the purposes of this thesis, I accept the theoretical definition of news media as unbiased. See Boudana, “A Definition of Journalistic Objectivity as a Performance”; Thorsen, “Journalistic Objectivity Redefined?”

Courteney O’Connor – PhD Thesis 2021


political and social life.139 Studies indicate that social media in particular affected both the

information people had access to and the political parties and candidates they were likely to

support. It had the effect of reinforcing preconceived opinions and further separating the political stances of the left and the right: social media platforms, in conjunction with traditional news sources, radicalised political views and resulted in chaotic, scandal-riven elections still being debated years later. Indeed, the global media has done much to securitise the presidency

of Mr. Donald Trump, 45th President of the US. As a case study, the 2016 election showcased a

number of theoretical elements of interest, such as the enmity/exclusion dynamic, the politics of

emergency, and the link between existential threats and the sovereign/decision-making apparatus.

These elements, found in CST, can also be observed in the Schmittian realism framework, leading

to the criticism that CST does not truly offer any unique analytical tools to the body of political

theory at large.140 The sixth chapter of this thesis examines the relationship between CCI and

disinformation. Among the primary vectors of disinformation in contemporary times are the major

social media platforms, on which ‘fake news’ travels up to six times faster than accurate

information, or ‘true news.’141 Disinformation can deepen enmity/exclusion dynamics, induce

political emergency and reduce the integrity of democratic structures.

Pre-existing Analytical Frameworks

Like many political theories, the points of analysis and certain elements of the CST framework are

similar to those employed by other frameworks. For several pertinent elements, however, there is

an undeniably strong correlation between the framework of traditional securitisation and so-called

Schmittian realism. This has contributed to the argument that in fact, CST does not contribute to

existing theoretical knowledge in any real terms. Michael Williams notes the links between

139 Sinan Aral and Dean Eckles, “Protecting Elections from Social Media Manipulation,” Science 365, no. 6456 (August 30, 2019): 858–61, https://doi.org/10.1126/science.aaw8243; Alessandro Bessi and Emilio Ferrara, “Social Bots Distort the 2016 U.S. Presidential Election Online Discussion,” First Monday 21, no. 11 (November 3, 2016), https://doi.org/10.5210/fm.v21i11.7090; Emilio Ferrara et al., “Characterizing Social Media Manipulation in the 2020 U.S. Presidential Election,” First Monday, October 19, 2020, https://doi.org/10.5210/fm.v25i11.11431; R. Kelly Garrett, “Social Media’s Contribution to Political Misperceptions in U.S. Presidential Elections,” PLOS ONE 14, no. 3 (March 27, 2019): e0213500, https://doi.org/10.1371/journal.pone.0213500; Sangwon Lee and Michael Xenos, “Social Distraction? Social Media Use and Political Knowledge in Two U.S. Presidential Elections,” Computers in Human Behavior 90 (January 1, 2019): 18–25, https://doi.org/10.1016/j.chb.2018.08.006. 140 Felix Ciutǎ, “Security and the Problem of Context: A Hermeneutical Critique of Securitisation Theory,” Review of International Studies 35, no. 2 (2009): 301–26; Floyd, “Can Securitization Theory Be Used in Normative Analysis? Towards a Just Securitization Theory”; Kalyvas, Democracy and the Politics of the Extraordinary; Rita Taureck, “Securitization Theory and Securitization Studies,” Journal of International Relations and Development 9, no. 1 (March 2006): 53–61, http://dx.doi.org.virtual.anu.edu.au/10.1057/palgrave.jird.1800072. 141 Peter Dizikes, “Study: On Twitter, False News Travels Faster than True Stories,” MIT News | Massachusetts Institute of Technology, March 8, 2018, https://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308; Larry Greenemeier, “False News Travels 6 Times Faster on Twitter than Truthful News,” PBS NewsHour, March 9, 2018, https://www.pbs.org/newshour/science/false-news-travels-6-times-faster-on-twitter-than-truthful-news.


securitisation theory and the idea of politics of the extraordinary as conceived by German political

philosopher Carl Schmitt.142 Schmittian realism has several notable elements: the friend/enemy

dichotomy or enmity/exclusion division, the exceptionalism of the decision-maker, and the

aforementioned politics of the extraordinary. The correlation between these elements and the

securitisation concepts of threat perception, the (legitimacy of the) securitising actor, and the

existence of extreme necessity is clear. While not the first to observe the distinct similarities

between securitisation and Schmittian realism, Williams offers a succinct and logical examination

of the relationship.

Despite the broad conception of securitisation and the politics of the extraordinary as negative phenomena, Williams states that both are in fact capable of positive impact. He goes on

to identify the politics of the extraordinary as “the declaration of existential threat and, if

successful, the generation of the capacity to break free of the rules of ‘normal’ politics.”143 The

positive possibilities of the politics of the extraordinary emerge when the focus of the securitisation

places “the question of constituent power at the centre of concern.”144 Accordingly, once you

move beyond the traditional (and Schmittian) friend/enemy dichotomy, or enmity and exclusion

calculation, the possibility exists for securitisation to become a valuable tool for positive social

change, particularly if constituent power in the form of the relevant audience can be harnessed in

order to de-securitise security issues. ‘Extraordinary’, in a political sense, does not necessarily need

to be negative. Using a global health example, both HIV/AIDS and Ebola were securitised in the

positive sense. Having identified an existential threat to the audience, governments securitised

these diseases and pursued mitigation in the form of therapies, treatments, and vaccines.145

The mobilisation of the polity is another notable similarity between securitisation and Schmittian

realism, along with the comparative mutability of the concept of security according to the

(subjective) views of the decision-maker (for Schmitt) or the actor that can ‘speak security’ (for

securitisation scholars). “As revolutionary movements demonstrate perhaps most clearly, the

142 Williams, “Securitization as Political Theory: The Politics of the Extraordinary.” 143 Williams, 115. 144 Williams, 115. 145 HIV/AIDS refers to Human Immunodeficiency Virus/Acquired Immunodeficiency Syndrome. Colin McInnes and Simon Rushton, “HIV/AIDS and Securitization Theory,” European Journal of International Relations 19, no. 1 (March 1, 2013): 115–38, https://doi.org/10.1177/1354066111425258; Michael J. Selgelid and Christian Enemark, “Infectious Diseases, Security and Ethics: The Case of Hiv/Aids,” Bioethics 22, no. 9 (2008): 457–65, https://doi.org/10.1111/j.1467-8519.2008.00696.x; Roxanna Sjöstedt, “Exploring the Construction of Threats: The Securitization of HIV/AIDS in Russia,” Security Dialogue 39, no. 1 (March 1, 2008): 7–29, https://doi.org/10.1177/0967010607086821; Marco Antonio Vieira, “The Securitization of the HIV/AIDS Epidemic as a Norm: A Contribution to Constructivist Scholarship on the Emergence and Diffusion of International Norms,” Brazilian Political Science Review (Online) 2, no. SE (December 2007): 0–0.


politics of the extraordinary can emerge from unexpected sources and through surprising

processes that overturn, disrupt or challenge prevailing structures or practices.”146 This sounds

remarkably similar to the extraordinary measures granted by a successful securitisation process.

Williams quotes Ole Waever’s assertion that for him, security is to be understood as exceptionality

and emergency as defined by Schmitt, but tempered by an “Arendtian understanding of politics”,147

or an understanding that extraordinary politics could have a positive impact.

If the logic of security is defined by a friend/enemy construction, then it is not unreasonable to

assume that securitisation will be negative: after all, the primary use of the securitisation framework

is the analysis of how a political concern is elevated to the level of a security threat, by nature a

negative condition. Additionally, the terms ‘politics of the extraordinary’ and ‘emergency’ carry a

negative cultural connotation that lends credence to the idea that securitisation is an inherently

negative process. However, if you move beyond the considerations of enmity and exclusion that

characterise Schmitt’s theoretical concept, and beyond the exceptionality of the decision-maker,

then the securitisation process does not necessarily have to be negative. Regardless of one’s

perception of the positive or the negative impact of securitisation, there is an unmistakable

similarity between securitisation theory and Schmittian realism. If that is the case, then

securitisation theory may not necessarily be solidly placed within the constructivist field of

philosophy. It is on this possibility that I first considered the structure of elevation theory (see

below), sharing in both realist and constructivist elements, and hopefully serving as a bridge

between fields.

State-centrism in Securitisation Research

One of the original purposes of securitisation theory was to reduce the traditional academic and

philosophical focus on the state within strategic studies as a discipline.148 However, throughout

Security: A New Framework for Analysis, and indeed the body of securitisation literature, that focus on the

state has remained.149 Often in examinations and discussions of securitisation theory, the state is

the referent object in question. While this does not negate the value of that research, it does call

into question whether or not securitisation theory should truly be considered as separate from

traditional strategic studies, or indeed the overall realist school of thought. In addition, while

Buzan, Waever and de Wilde purposefully examined sectors beyond the military, much of the

146 Williams, “Securitization as Political Theory: The Politics of the Extraordinary,” 116. 147 Williams, 118. 148 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 36–40. 149 Knudsen, “Post-Copenhagen Security Studies: Desecuritizing Securitization.”


existing work on securitisation theory by other academics not only holds the state as the referent

object but considers, specifically, various elements of security in the military sector and in the

military sense. This thesis also focuses primarily on an examination of the way in which a state

actor has observably elevated a particular threat. In this instance, the choice to examine state actor elevation was made in order to ascertain how, at the highest levels, these concepts and problems

are being understood. The ways that states legislate and draft policies are affected by threat

perception. In turn, these laws and policies will affect the ways in which substate actors like

corporate groups and individuals are able to engage and act themselves. Beyond the case specific

to this thesis, however, threat elevation analysis has been designed for and can be applied to

substate actors and security concepts and problems as well, though this is outside the scope of the

current research project and so must be undertaken in future.

Post-Copenhagen Securitisation Theories

Copenhagen securitisation theory, so named for the geographic location of the scholars by whom

the theory was conceived, is not the only variation of securitisation theory. Many scholars have

their own securitisation variants, including the one I offer in this thesis. However, there are two

main alternatives to the (traditional) CST: the Welsh School and the Paris School. As would be

expected, each of these Schools of thought focuses on a different element of securitisation. The

Welsh School, or Aberystwyth School, “linked the study of security to the … goal of human

emancipation”.150 This draws even further away from the traditional (realist) view of security being

focused on the state and the military concerns thereof than does Copenhagen securitisation theory.

Firmly within the social constructivist field of political theory, scholars such as Ken Booth and

Richard Wyn Jones are broadly considered to be representative of this particular School of

securitisation theory.151 The Paris School, on the other hand, focuses more on developing a

sociological approach to (the analysis of) “the conduct of everyday security practices ranging from

policing to border patrol”.152 While this is closer to the traditional state-based view of security, it

is distanced from realist conceptions of such by its focus on the micro-practices, or sub-state levels

of security. That is to say, on the various constituent practices that actually provide and maintain

the security of the state in real terms. In considering the strategic understanding and development

of the threat perception of cyberspace and parallel evolution of CCI, the more relevant framework

150 Peoples and Vaughan-Williams, “Introduction,” 9. 151 Peoples and Vaughan-Williams, 9; Floyd, “Towards a Consequentialist Evaluation of Security,” 328. See also Ken Booth, “Security and Emancipation,” Review of International Studies 17, no. 4 (1991): 313–26; Richard Wyn Jones, Security, Strategy, and Critical Theory (Boulder: Lynne Rienner Publishers, 1999). 152 Peoples and Vaughan-Williams, “Introduction,” 9.


is the state-level securitisation theory. While you could certainly examine cyber security issues from

the Welsh and Paris perspectives, these Schools of securitisation theory have been developed to

analyse very particular issues, and they do this very well. Because of that niche development,

however, they have less utility across the wider realm of strategic problems. They are better suited

to specific, bounded analyses than is the more state-level, systemic analytical tool of Copenhagen

securitisation.

While there are a great many scholars of securitisation theory, as Peoples and Vaughan-Williams

argued in Mapping Critical Security Studies, it is important not to become too attached to the concept

of identifying scholars and theories by this tripartite, geographically-oriented view of

securitisation.153 Each of these three main forms of securitisation has overlapping elements and

fields of analysis, which link them together just as much as their varying analytical foci separate

them from each other. Balzacq also notes that all three Schools are peculiarly Eurocentric, and this

is an important analytical bias to recognise when utilising (or critiquing) securitisation theory.154

While enumerating and detailing the differences and similarities of the various forms of

securitisation theory is not within the scope of this thesis, it is important to recognise that

securitisation is a widely-researched academic field. In addition, by providing a brief overview of

the main alternatives to CST, I am better able to place the elevation variant of securitisation within

the theoretical field at large (see below). As previously stated, according to Balzacq, there are

common elements across each variant of securitisation theory so far offered, and this is also true

of the elevation variant used in this thesis. Just as the Copenhagen, Welsh, and Paris Schools of

securitisation focus on certain areas of securitisation which have come to epitomise each of them,

so have I utilised themes common to each form of securitisation theory while employing an

alternate analytical view that has resulted in a unique theoretical framework within the overall field

of securitisation theory. It is my belief that this newer form of securitisation will also be better

suited to modern concerns over, and utilisation of, advanced communications technologies.

Framing

Scott Watson sees securitisation theory as a subfield of framing, or the way in which information

is communicated in order for it to be perceived in a certain way.155 Watson identifies securitisation

theory so conceived as a uniquely European mode of thought, noting that in American scholarship,

framing (rather than securitisation) is the preferred method of analysis and addresses much of the

153 Peoples and Vaughan-Williams, 9–10. 154 Peoples and Vaughan-Williams, 9–10. 155 Watson, “‘Framing’ the Copenhagen School.”


same material. Watson’s argument is that security is a ‘master frame’ in security studies, and that

securitisation theory should be considered a subfield of framing.156 Among the similarities between

the framing and securitisation frameworks is the assertion “that linguistic-grammatical

composition is essential to understanding political outcomes.”157 Construction of threat through

discursive legitimation aside, both frameworks also identify the importance of power structures

and relationships, and distinguish “audience, commentator and culture” as crucial elements in the

construction of threats.158 However, Watson admits that there is a drawback to conflating the two

approaches; framing focuses primarily on the public as the relevant audience in question and the

effect of framing in the media. Thus, any insights may not be applicable to securitisations in which

the relevant audience is not the general public. As the concept of framing does lend itself to my

argument that the media can be securitising actors (by framing information and thus guiding

perception), I would argue that framing is an element of the securitisation process, rather than that

securitisation is a subfield of framing.

Moving Beyond CST

Having examined much of the existing literature on CST, I would suggest that rather than view

CST as a form of analysing threat construction or conflating it with framing, it may be more useful

to reformulate securitisation and then relocate it closer to, if not within, the realist school as a

theory of threat elevation.159 I would further argue that, taking into account the various elements

of securitisation theory as laid out here, securitisation would be better utilised to explain how a

threat is elevated from a normal political issue to a security threat, rather than constructed into one.

It is not the manner in which an issue is discussed that makes that issue a threat, so assertions that

threats are socially constructed are arguable. Rather, the perception of an issue as threatening to a

particular object relies on reaching an understanding of the particular damage that threat could

accomplish. There must be either previous incidents wherein damage has been caused by that

threat and it must be treated accordingly, or there should be a reasonable belief with evidentiary

support that a particular identified ‘threat’ may or will cause damage in the future. This is an

156 Watson, 280. 157 Watson, 283. 158 Watson, 284. 159 Realism, according to Jack Donnelly, is “a tradition of analysis that stresses the imperatives states face to pursue a power politics of the nation interest….” and notes that realism is the oldest theory of international relations. Jack Donnelly, “Realism,” in Theories of International Relations, by Scott Burchill et al., 3rd ed. (Palgrave Macmillan, 2005), 29, http://dro.deakin.edu.au/view/DU:30010384. Realism focuses on the problems faced by, and methods of persuasion and mitigation available to, the sovereign state. As such, realism is mainly concerned with the military sector, and how that sector affects and is affected by other sectors.


acknowledgement of an existing reality rather than the construction of a novel threat. As such, the

result of a securitising process is the elevation of both the issue to the status of threat and the

measures in the correlating security policy, rather than the construction of the threat itself.

To use the Cold War as an example, the creation and deployment of nuclear weaponry was the

result of a security policy that evolved out of the recognition of hostile rivals and the threat they

represented.160 Once further parties obtained nuclear weapons and the concept of mutually assured

destruction (MAD) came into play, it could be argued that the normalisation of MAD and the

signing of nuclear accords and de-proliferation/non-proliferation treaties then desecuritised the

nuclear issue (at least to some extent).

It was after the advent of the Internet and the diffusion of access to cyberspace that the

vulnerabilities of the new domain began to become a security concern. It is then that we started to

see the securitisation of cyberspace; crucially, after it had been proven to be a threat vector.161

According to this more realist conception of securitisation, then, the speech act elevates an existing

perception of threat out of the bounds of normal politics. A successful securitising move would

be one that results in the creation or transformation of a security policy or strategy beyond the

bounds of those that already exist. In this context, the creation of national security policies and

strategies specific to cyberspace are measures resulting from the recognition of the threat elevation

of cyberspace.

I acknowledge that threat perception is socially constructed insofar as it is the securitising actor,

as a person or collective of people, that perceives something to be threatening to a given referent

object, and all perception is subjective and based on lived experiences.162 States are likely to identify

rising threats based on pre-existing conditions, so the particular context in which a threat is

identified and elevated is key.163 In particular, the historical relationships between the conflicting

160 See Elbridge Colby, “The Role of Nuclear Weapons in the U.S.-Russian Relationship,” Carnegie Endowment for International Peace, 2016, https://carnegieendowment.org/2016/02/26/role-of-nuclear-weapons-in-u.s.-russian-relationship-pub-62901; Carl Kaysen, Robert S. McNamara, and George W. Rathjens, “Nuclear Weapons After the Cold War,” June 21, 2018, https://www.foreignaffairs.com/articles/1991-09-01/nuclear-weapons-after-cold-war; David S. Meyer, “Framing National Security: Elite Public Discourse on Nuclear Weapons during the Cold War,” Political Communication 12, no. 2 (April 1, 1995): 173–92, https://doi.org/10.1080/10584609.1995.9963064; John Swift, “The Soviet-American Arms Race | History Today,” History Today, 2009, https://www.historytoday.com/archive/soviet-american-arms-race; Godfried van Benthem van den Bergh, “The Taming of the Great Nuclear Powers,” Policy Outlook (Carnegie Endowment for International Peace, 2009). 161 Refer to chapter four for an exploration of the development and riskification of cyberspace. 162 See Kenneth Boulding, The Image: Knowledge in Life and Society (Ann Arbor: University of Michigan Press, 1956), https://www.press.umich.edu/6607/image. 163 See Waltz, Man the State and War: A Theoretical Analysis.


or hostile parties will affect the outcome of a securitisation. This concept of the threat matrix is

not unique to the state, either. A current and evolving awareness of existential threats can be drawn

down through all levels, including the collectives and the individuals identified as actors within this

thesis. Thus, a riskifying or securitising move only elevates that which is already viewed as a

concern from the bounds of normal politics to a security risk or a security threat, requiring either

management or extraordinary measures for resolution and giving the securitising actor the impetus

to create or transform a security policy in a way that may not otherwise be acceptable to the

relevant audience.

Successful securitisation so understood can be seen in the passing of the Patriot Act164 and the creation and operation of mass surveillance programs such as the National Security Agency’s (NSA)

PRISM.165 The ongoing wars in the Middle East were once successfully securitised but can now be

argued to have lost the necessary ongoing consent of the relevant audience.166 The growing number

of agencies and government or military branches with dedicated cyber security forces speaks to

the successful elevation of cyberspace attack and exploitation to a serious threat to security;

whether it is an existential one depends on the body of literature and to which author or actor you

are referring. Additionally, treating securitisation as a method of elevation reduces reliance on the

key concept of audience acceptance, which is difficult to gauge. Relevant audience acceptance in

securitisation-as-elevation theory could be understood as either having been neutralised or at least

as having become non-oppositional. Given the ‘social’ construction of perceived threat, the existence

of that threat must be accepted, so the primary relevant audience for a successful policy creation

or transformation would in the first instance be politicians or security practitioners rather than the

general public.

Having examined traditional securitisation theory and identified the potential for development of

that framework, the next section provides an overview of the existing literature surrounding the

field of counterintelligence. Counterintelligence has a long history and is an accepted practice of

164 United States Department of Justice, “The USA PATRIOT Act: Preserving Life and Liberty,” United States Department of Justice, 2001, https://www.justice.gov/archive/ll/highlights.htm. 165 Gordon Carera, Intercept: The Secret History of Computers and Spies (London: Weidenfeld & Nicolson, 2015), 352–56; 371–76. 166 Andrew Bacevich, “Endless War in the Middle East,” Cato Institute, June 13, 2016, https://www.cato.org/catos-letter/endless-war-middle-east; Anthony H. Cordesman, “America’s Failed Strategy in the Middle East: Losing Iraq and the Gulf,” 2020, https://www.csis.org/analysis/americas-failed-strategy-middle-east-losing-iraq-and-gulf; Simon Jenkins, “The US and Britain Face No Existential Threat. So Why Do Their Wars Go On?,” the Guardian, November 15, 2019, http://www.theguardian.com/commentisfree/2019/nov/15/britain-and-us-wars-conflicts-middle-east; Bruce Riedel, “30 Years after Our ‘Endless Wars’ in the Middle East Began, Still No End in Sight,” Brookings (blog), July 27, 2020, https://www.brookings.edu/blog/order-from-chaos/2020/07/27/30-years-after-our-endless-wars-in-the-middle-east-began-still-no-end-in-sight/.


national security and statecraft. However, I contend that modern approaches to

counterintelligence have not evolved apace with developments in cyberspace and related high

technologies. Additionally, counterintelligence in cyberspace needs to be, at least in part, devolved

to the level of the individual to contribute to a greater degree of relative security for the state, as

individuals can be intelligence collection targets for foreign actors. Chapter four of this thesis

evaluates the British approach to CCI through process-tracing and discourse analysis; chapter six

examines disinformation through the lens of CCI and foreign interference. The next section of

this chapter is not an exhaustive examination, as the literature on traditional intelligence and counterintelligence studies is extensive; rather, it provides a broad, fundamental understanding of the field out of which CCI later developed.

Counterintelligence

Counterintelligence is an essential part of security, both as a mirror of and a component within intelligence practice. Both intelligence and counterintelligence have been evident elements of

statecraft for thousands of years.167 Perhaps the earliest example of counterintelligence in the history of the discipline is the creation and use of ciphers to pass messages in ancient times; the best-known example is the Caesar cipher. This cipher, named for Julius Caesar, who is said to have created it, consists of a simple three-letter shift used to encrypt messages between Caesar and his commanders and allies.168 In terms of state

security, secrets can mean the difference between victory and loss, life and death, peace and war.
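The mechanics of the Caesar cipher described above are simple enough to sketch in a few lines of Python (a modern illustration of the ancient scheme; the function name and alphabet handling are mine):

```python
def caesar(text: str, shift: int = 3) -> str:
    """Encrypt (or, with a negative shift, decrypt) text with a Caesar cipher.

    Each letter is shifted `shift` places along the alphabet, wrapping
    around at the end; non-letters are left untouched.
    """
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

# With the traditional shift of three, "ATTACK AT DAWN" becomes
# "DWWDFN DW GDZQ"; applying the inverse shift recovers the message.
```

The scheme’s weakness is equally visible: with only twenty-five possible shifts, an adversary can simply try them all, which is why the cipher serves here as a historical starting point rather than a model of modern cryptography.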

The practice of counterintelligence is active and, at times, proactive, while consistently crucial to

the continued security of the sovereign state. Counterintelligence as an academic field is more

recent, but references can be found to counterintelligence practices spanning back millennia, even

though it may not specifically be termed as such.169 There are many definitions of

counterintelligence,170 and many opinions on where it falls on the spectrum of importance to

167 See Sara Perley, “Arcana Imperii: Roman Political Intelligence, Counterintelligence, and Covert Action in the Mid-Republic” (Canberra, The Australian National University, 2016); Rose Mary Sheldon, “Caesar, Intelligence, and Ancient Britain,” International Journal of Intelligence and CounterIntelligence 15, no. 1 (January 2002): 77–100, https://doi.org/10.1080/088506002753412892; Rose Mary Sheldon, “A Guide to Intelligence from Antiquity to Rome,” The Intelligencer 18, no. 3 (2011): 49–51.
168 Christopher Andrew, The Secret World: A History of Intelligence (New York: Penguin Books, 2019), 46, https://www.penguin.com.au/books/the-secret-world-9780140285321.
169 Andrew, The Secret World: A History of Intelligence; Perley, “Arcana Imperii: Roman Political Intelligence, Counterintelligence, and Covert Action in the Mid-Republic.”
170 Stan A. Taylor, “Definitions and Theories of Counterintelligence,” in Essentials of Strategic Intelligence, ed. Loch K. Johnson, Praeger Security International Textbook (Santa Barbara: Praeger, 2015), 285–302; Loch K. Johnson and James J. Wirtz, Intelligence: The Secret World of Spies: An Anthology (New York: Oxford University Press, 2011), 70–71; Mark Lowenthal, Intelligence: From Secrets to Policy, 8th ed. (Thousand Oaks: SAGE Publications, Inc, 2020), 201.

Chapter 2 – Contextualising CCI


security, but I would argue that the essential purpose of counterintelligence is to actively deny your

opponent access to information that you are trying to keep secret.

Traditional Counterintelligence

In this section on traditional counterintelligence, I will briefly discuss three aspects of this field.

The first is the discipline of counterintelligence: in other words, how counterintelligence has been,

and is being, undertaken by intelligence practitioners. The second aspect I will consider is the

development of counterintelligence; here, I have found that counterintelligence is in large part both reactive to, and reliant on, the improvement or innovation of new technologies. It is imperative that counterintelligence practitioners be familiar with the technologies that could be used by hostile intelligence collectors, and I believe that the development of the field closely tracks the development of technologies of various types.

The last aspect I will consider, and perhaps the most important in terms of security, is the purpose

of counterintelligence. This encompasses what the express purpose is; some of the stated

counterintelligence goals of various intelligence agencies and the governments to which they

belong; and why counterintelligence is such an important discipline in the first place.

As a precursor to the discussion of counterintelligence, I need to draw attention to its vague and

complicated nature. According to Mark Lowenthal, counterintelligence is one of the most difficult

of the intelligence disciplines to discuss because it cannot be neatly pigeonholed into any one area:

it isn’t collection, but involves collection. It isn’t analysis, but involves analysis, and can create

analytical opportunities. It isn’t human intelligence, but it can involve human intelligence. It is not

solely a law enforcement issue, but it is crucial to the law enforcement process.171

Counterintelligence, as a practice and as a consideration, should pervade all aspects of (intelligence

and) security, and not be considered a separate or outlying step in the overall intelligence process:

to do so is to risk failure. Lowenthal identifies three branches of counterintelligence, none of which

are mutually exclusive: collection, defensive, and offensive.172 With that in mind, we can look to

counterintelligence’s history and development, and its purpose.

171 Lowenthal, Intelligence: From Secrets to Policy, 201–28.
172 Lowenthal, 201–28. These will also be considered further in relation to counterintelligence in cyberspace in chapter three.


History and Development

As stated above, counterintelligence is the practice of denying an adversary access to information

that they want, which you are trying to keep secret. This is a very basic definition, but it will form

the foundation of this section. One of the most famous of (known) counterintelligence agents,

James Angleton of the US’ Central Intelligence Agency (CIA), once described counterintelligence

as operating in a ‘wilderness of mirrors’,173 where the wilderness was the “myriad of stratagems,

deceptions, artifices, and all the other devices of disinformation which…intelligence services use

to confuse and split the West…producing an ever-fluid landscape where fact and illusions

merge.”174 This description remains one of the best I have ever found of the ground covered by

traditional counterintelligence, despite its less-than-academic formulation. It is the job of the

counterintelligence agent to be the professional paranoid in a community that makes a career and

a practice of paranoia.

Counterintelligence takes many forms, including communications encryption like the Caesar cipher, though modern cryptography has improved significantly.175 Counterintelligence is also a matter of human intelligence (HUMINT): of discovering intelligence agents reporting to foreign intelligence services and either flipping them176 or providing them with misleading information (disinformation) in order to confuse the actor to whom they report.177

Disinformation and misdirection are key practices of the counterintelligence discipline, and much

of what is publicly known about previous counterintelligence operations can be reduced to one of

these two practices, if not both.

173 Referring to the ‘looking-glass nature’ of counterintelligence and, particularly during the Cold War when Angleton ran the CIA’s counterintelligence branch, the difficult and often disliked practice of tracking and identifying moles within one’s own intelligence service; of constant suspicion of the loyalties of fellow agents. John Kimsey, “‘The Ends of a State’: James Angleton, Counterintelligence and the New Criticism,” The Space Between: Literature and Culture 1914-1945 13 (2017), https://scalar.usc.edu/works/the-space-between-literature-and-culture-1914-1945/vol13_2017_kimsey; see also David Robarge, “‘Cunning Passages, Contrived Corridors’: Wandering in the Angletonian Wilderness,” Studies in Intelligence 53, no. 4 (2009): 49–61.
174 Angleton here referred entirely to Soviet bloc intelligence services, but the definition can apply to all foreign intelligence services. David Robarge, “Moles, Defectors, and Deceptions: James Angleton and CIA Counterintelligence,” Journal of Intelligence History 3, no. 2 (December 2003): 31, https://doi.org/10.1080/16161262.2003.10555085.
175 For a history of intelligence and counterintelligence (including cryptography and cryptanalysis), see Andrew, The Secret World: A History of Intelligence.
176 A term for convincing (in one way or another) a foreign intelligence agent to turn against their employer and spy for you instead.
177 Miron Varouhakis, “An Institution-Level Theoretical Approach for Counterintelligence,” International Journal of Intelligence and CounterIntelligence 24, no. 3 (September 2011): 494–509, https://doi.org/10.1080/08850607.2011.568293; Frederick L. Wettering, “Counterintelligence: The Broken Triad,” International Journal of Intelligence and CounterIntelligence 13, no. 3 (October 2000): 265–300, https://doi.org/10.1080/08850600050140607.


Counterintelligence is an extremely difficult field in which to work. In addition to investigating,

misdirecting, misleading and protecting against (known) agents of adversarial intelligence services,

it is also the responsibility of counterintelligence agencies and agents to monitor and investigate

those within the domestic intelligence services and citizenry who may be operating for a foreign

actor from within their home state.178 This practice became particularly important during and immediately after the Cold War, when spies like Robert Hanssen and Aldrich Ames in the US, and

Klaus Fuchs in Britain were discovered to have been spying for the Soviet Union across decades

and from within important security-related programmes (Ames and Hanssen were CIA and FBI

agents, respectively, while Fuchs worked on the Manhattan Project).179

Of the examples available in the public domain, Christopher Andrew’s 2018 history of intelligence

and counterintelligence provides an authoritative examination, covering several thousand years of

known cases.180 World War II in particular provides several cases of exemplary counterintelligence

operations that changed the course of the war effort in favour of the Allies. Operation Mincemeat

in Britain;181 the breaking of the Enigma machine codes by the Bletchley Park codebreakers;182 and

the Bodyguard operations designed to divert Hitler’s forces during the D-Day landings are

incredible examples of the importance, utility, and effect of counterintelligence operations.183 Each

of these operations required extensive planning, considerable risk and extremely tight operational

security in order to achieve the desired outcomes and influence the overall war effort in favour of

the Allies.

Counterintelligence is, by nature, a largely reactive discipline; the practice of counterintelligence

consists of subverting the actions of others and preventing the loss of state secrets and exploitation

of that information, as well as protecting that information to begin with. Much like cyber security,

in the intelligence world, to be on the defensive is both more dangerous and more difficult than

being on the offensive, because you cannot defend against that which you do not know and have

178 Varouhakis, “An Institution-Level Theoretical Approach for Counterintelligence.”
179 Federal Bureau of Investigation, “Robert Hanssen,” Federal Bureau of Investigation, 2001, https://www.fbi.gov/history/famous-cases/robert-hanssen; Federal Bureau of Investigation, “Aldrich Ames,” Federal Bureau of Investigation, 2012, https://www.fbi.gov/history/famous-cases/aldrich-ames; MI5, “Klaus Fuchs,” Security Service | MI5, 2020, https://www.mi5.gov.uk/klaus-fuchs.
180 Andrew, The Secret World: A History of Intelligence.
181 Andrew, 256; see also Ben MacIntyre, “Operation Mincemeat,” Bloomsbury Publishing, 2010, https://www.bloomsbury.com/uk/operation-mincemeat-9781408809211/.
182 Andrew, The Secret World: A History of Intelligence, 616–17; see also Michael Smith, The Secrets of Station X: How the Bletchley Park Codebreakers Helped Win the War (Hull: Biteback Publishing, 2011).
183 Andrew, The Secret World: A History of Intelligence, 657; see also Ben MacIntyre, Double Cross: The True Story of the D-Day Spies (London: Bloomsbury Publishing, 2012), https://www.bloomsbury.com/uk/double-cross-9781408819906/.


not seen. The point of counterintelligence is to attempt to defend against the physical equivalent of zero-day exploits;184 to find the insider threat; to evaluate the likelihood of information leaks and to find

and subvert intelligence agents operating under orders from foreign (and often hostile) parties.

Following Lowenthal, there are three overlapping branches of counterintelligence: collection,

defensive, and offensive.185 Counterintelligence collection involves obtaining information about

the intelligence agencies, collection, and overall activities of a hostile or conflicting party when

those efforts are concentrated on oneself or one’s allies. Defensive counterintelligence comprises the efforts and operations undertaken to subvert another actor’s intelligence activities directed at the intelligence operations, activities, and/or agencies of oneself or one’s allies.

Offensive counterintelligence refers to the practice of manipulating an adversary’s intelligence

activities either by flipping agents, or through a disinformation campaign.186 Per Lowenthal,

counterintelligence “refers to efforts taken to protect one’s own intelligence operations from penetration and disruption by hostile nations or their intelligence services.”187 Paul Redmond provides definitions accepted by the US, Russia, and the UK, which bear similarities but are not identical.188 For example, the Russian and British definitions of counterintelligence contain reference to counter-subversion tactics and operations, whereas the American definition does not.

The general point is that a number of alternate and/or competing definitions exist; adding further activities to any one of them would enlarge the field of counterintelligence accordingly.189 In addition to outward-focused efforts, where the opposition is assumed to be foreign parties operating in or against the sovereign state, counterintelligence must also include the investigation and subversion of operations undertaken by domestic parties, whether on behalf of foreign states and actors or for the domestic groups for which those spies may be working.

184 A previously unknown vulnerability or flaw in a program or code that allows hostile parties to compromise a program, device, or network. Crowdstrike, “What Is a Zero-Day Attack?,” crowdstrike.com, 2021, https://www.crowdstrike.com/cybersecurity-101/zero-day-exploit/.
185 Lowenthal, Intelligence: From Secrets to Policy, 201–2. These will also be considered further in relation to counterintelligence in cyberspace in chapter three.
186 Lowenthal, 201–2.
187 Lowenthal, 201–2.
188 Paul J. Redmond, “The Challenges of Counterintelligence,” in Intelligence: The Secret World of Spies, ed. Loch K. Johnson and James J. Wirtz, 3rd ed. (New York: Oxford University Press, 2011), 295.
189 John Ehrman, “What Are We Talking About When We Talk about Counterintelligence?,” Studies in Intelligence 53, no. 2 (2009), https://www.cia.gov/resources/csi/studies-in-intelligence/volume-53-no-2/what-are-we-talking-about-when-we-talk-about-counterintelligence-pdf-172-0kb/; Paul Redmond in Lowenthal, Intelligence: From Secrets to Policy, 202.


Technically speaking, counterintelligence is also both a process and a product; according to

Johnson & Wirtz, the counterintelligence process is what raw information has to go through in

order to become actionable intelligence; a counterintelligence product is the final, actionable

intelligence on the foreign intelligence activities, operations and agencies that are operating against

the target state.190

Counterintelligence is also a term of organisational nomenclature, referring to the agencies,

departments and services whose mandate is to undertake counterintelligence on behalf of the state.

In real terms, counterintelligence is partially so difficult to understand because so many actors have

different understandings of what counterintelligence is; how it functions; and what it should be.

Irrespective of the accepted official and/or organisational definition of counterintelligence,

however, there is a general acceptance of how important counterintelligence is to the continued

security of the modern state. Every major text examining the history, the evolution, and/or the

discipline of intelligence contains a section on counterintelligence and the ways in which

counterintelligence protects the sovereign state.191

Purpose

The essential purpose of counterintelligence has already been examined in the sections above;

however, the requirements placed on counterintelligence practices and practitioners have changed observably, even compared with just twenty years ago. Because counterintelligence is a reactive discipline and evolves with the technology of the

times, just like many other sectors of modern life, counterintelligence has undergone (and

continues to undergo) a sea change in expectations and deliverables now that so much of the

world’s business is conducted in or through cyberspace. I refer elsewhere to the problems of

volume, velocity and variety in the amount of data to which intelligence agencies have access in

my brief examination of the Snowden leaks and the extent of the National Security Agency data

dragnet.192 Due to the volume of data both already available and constantly being created, there is

also a serious veracity problem: being able to verify that the data collected is accurate. Given also

the low threshold for entry into cyberspace and the ease with which an individual can create an

enormous volume of data both verifiable and false, this forms part of the argument for dragnet

surveillance: it is impossible to know what you will need until you need it, so it is best to collect

190 Johnson and Wirtz, Intelligence, 288.
191 Andrew, The Secret World: A History of Intelligence; Johnson and Wirtz, Intelligence; Lowenthal, Intelligence: From Secrets to Policy; Taylor, “Definitions and Theories of Counterintelligence.”
192 See chapter six, section on data collection and the Internet of Things.


and store as much as possible so that the information pool is there for exploitation once agencies

do have a target.193

In cyberspace as much as in the physical world (if not more so), counterintelligence activities and

operations are required as active defence measures: cyberspace is an offensively advantaged

domain, and considering that counterintelligence is already a reactive discipline, cyberspace

poses a great number of problems for counterintelligence practitioners. The zero-day exploit issue

is of constant concern to most users of cyberspace but particularly to the government agencies

and departments tasked with defence of the areas of cyberspace used regularly by state actors.

A zero-day exploit for software running on government devices, for example, could give a hostile actor unauthorised access to any number of sensitive or classified systems and the information they hold. Zero-days are also a concern for offensive purposes, as actors with knowledge of a current and unpatched exploit have leverage over every other actor in the international system that uses the affected software, of which there could be tens of millions.194

In addition, the same problems that cyberspace presents to those trying to secure the activities and

interests of the state in cyberspace are also posed to corporations, groups, and individuals. Identity

theft, credit card fraud, data exfiltration and email scams are now ubiquitous in cyberspace and

barely a day goes by without a new article being published detailing how this or that actor lost

millions of clients’ personal data.195 Despite the requirement for counterintelligence having

expanded beyond the sovereign state, however, I contend that the state is still the actor with the

greatest vested interest in the resilience and security of the cyber domain, and so in the practices

of offensive and defensive counterintelligence. While the state may not be the only significant actor

193 It is worth noting that dragnet surveillance is ethically, legally, and politically problematic.
194 The WannaCry ransomware, for example, exploited a known flaw in a Microsoft system, using an NSA hacking tool called EternalBlue, stolen and then leaked by the Shadow Brokers group in April 2017. Andy Greenberg, “The Strange Journey of an NSA Zero-Day—Into Multiple Enemies’ Hands,” Wired, May 7, 2019, https://www.wired.com/story/nsa-zero-day-symantec-buckeye-china/. That same vulnerability is now known to have been in the possession of at least three actors before the 2017 WannaCry operation: China, North Korea, and Russia. Greenberg; Lily Hay Newman, “How Leaked NSA Spy Tool ‘EternalBlue’ Became a Hacker Favorite,” Wired, March 7, 2018, https://www.wired.com/story/eternalblue-leaked-nsa-spy-tool-hacked-world/.
195 BBC News, “Sony Pays up to $8m over Employees’ Hacked Data”; Josh Fruhlinger, “Marriott Data Breach FAQ: How Did It Happen and What Was the Impact?,” CSO Online, February 12, 2020, https://www.csoonline.com/article/3441220/marriott-data-breach-faq-how-did-it-happen-and-what-was-the-impact.html; Fruhlinger, “Equifax Data Breach FAQ”; Haskell-Dowland, “Facebook Data Breach”; Brian Krebs, “Adobe Breach Impacted At Least 38 Million Users,” Krebs on Security (blog), 2013, https://krebsonsecurity.com/2013/10/adobe-breach-impacted-at-least-38-million-users/; Ragan, “Raising Awareness Quickly”; Rushe, “JP Morgan Chase Reveals Massive Data Breach Affecting 76m Households”; Cory Scott, “Protecting Our Members,” LinkedIn, May 18, 2016, https://blog.linkedin.com/2016/05/18/protecting-our-members; Martyn Williams, “Inside the Russian Hack of Yahoo: How They Did It,” CSO Online, October 4, 2017, https://www.csoonline.com/article/3180762/inside-the-russian-hack-of-yahoo-how-they-did-it.html.


in cyberspace,196 the positioning and responsibilities of the state with regard to domestic and

foreign policy making, as well as social order and law enforcement, ensure the centrality of the

state in analyses of cyberspace, intelligence and security. Particularly with respect to the function

of counterintelligence in cyberspace, the state is responsible for identifying and mitigating the

intelligence operations and actors of adversaries, both state and (increasingly) non-state. Given the

volume of interactions conducted in cyberspace, it is crucial that CCI be understood and

undertaken at the state level as well as the individual level. Moreover, given the ease with which disinformation and information operations can be conducted and diffused through cyberspace, the counterintelligence responsibilities of the state have expanded considerably. There is a

reasonable case to argue that individual contribution to aggregate national cyber security by way

of conducting individual CCI is necessary for cyber resilience and security. Disinformation as a

CCI issue is discussed further in the penultimate chapter.

Defining cyber counterintelligence

Despite the necessity of CCI practices, there has been relatively little academic investigation and

analysis of how CCI is actually defined, understood, and applied. While many states have now

published cyber strategies and designated new agencies, branches, or departments as responsible

for securing the state in cyberspace (to include, directly or indirectly, CCI measures and standards),

the definition problem found in traditional counterintelligence is multiplied in cyberspace. Because

of the low threshold of entry to cyberspace and the immense – and growing – number of

individuals that access and use cyberspace as part of everyday life, CCI practices are actually fairly

widespread, even if individuals may not realise they are conducting them. For example, something as simple as changing your email and social media passwords every eight to twelve weeks can be considered active, anticipatory CCI: doing so frequently means that even if a malicious actor has acquired your current password and is using your accounts or devices, there is a chance that their access will be revoked once the passwords are changed.197 However, an article published in the Washington Post in 2018 revealed that more than 5,000 passwords used by Western Australian government officials included a variation of the word “password”; more than 1,400 used the password “Password123”, a worrying breach of

196 Nicolò Bussolati, “The Rise of Non-State Actors in Cyberwarfare,” in Cyberwar: Law and Ethics for Virtual Conflicts, ed. Jens David Ohlin, Kevin Govern, and Claire Finkelstein (New York: Oxford University Press, 2015), 126, https://doi.org/10.1093/acprof:oso/9780198717492.003.0007.
197 Only a chance, as they may have already constructed back doors into your applications and systems. Conversely, other experts maintain that strong, unique passwords should not need to be changed as frequently, unless a breach is suspected. See Paul A. Grassi, Michael E. Garcia, and James L. Fenton, “Digital Identity Guidelines” (Gaithersburg, MD: National Institute of Standards and Technology, June 22, 2017), https://doi.org/10.6028/NIST.SP.800-63-3.


both common sense and security protocols.198 It is a demonstration of how important CCI is in

practice, and how much work remains in communicating to democratic audiences the importance

of good CCI and security practices.
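The scale of the “Password123” problem is easy to illustrate. The sketch below is hypothetical (the list contents and function name are mine; a real screen would consult a full breach corpus rather than a handful of examples), but it shows how trivially such passwords can be flagged:

```python
# Hypothetical screen against a tiny sample of known-breached passwords;
# a production check would query a full breach corpus instead.
COMMON_BREACHED = {"123456", "123456789", "qwerty", "password"}

def is_weak(candidate: str) -> bool:
    """Return True if the candidate matches a breached password
    (case-insensitively), or is a breached word padded with digits."""
    lowered = candidate.lower()
    if lowered in COMMON_BREACHED:
        return True
    # Catch "Password123" and friends: strip trailing digits, re-check.
    return lowered.rstrip("0123456789") in COMMON_BREACHED
```

A check of this kind, run at password-creation time, would have rejected every one of the 1,400-plus “Password123” entries reported above before they ever entered service.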

Returning to the problem of defining CCI: the difficulty stems in large part from the lack of a generally accepted definition of traditional counterintelligence. In addition, despite our great and growing reliance on cyberspace-enabled technologies, traditional counterintelligence practices and definitions focus almost exclusively on the state as both the victim and the instigator of intelligence operations, making counterintelligence a state-centric discipline. Taking into account that the focus of this thesis is on state-based CCI and the

conceptual complexities involved, I will not go into the various purposes, definitions and tools of

CCI as used by individuals or groups; it is beyond the scope of this thesis to adequately treat these

subjects. I will briefly examine these actors as they relate to state security in chapter five, but

engaging further on these actors in relation to CCI is a matter for future research.

State-based definitions and practices of CCI need to evolve beyond the traditional, pre-cyber

understandings of both aggressor and victim in terms of both intelligence and counterintelligence.

The state could conceivably be a victim of malicious cyber activity conducted by individuals or

groups, and the official understanding of CCI must evolve to deal with these relatively new attack

vectors. It is no longer true that the only threat to the security of the state is another state, though it must be said that few entities could pose an existential threat to a state without the backing of one. So, while CCI practices are, to an extent, generally applicable regardless of the

actor being combatted or the attack vector of the threat being mitigated, both official and academic

understandings of state security in relation to non-state parties must be developed to take into

account all threats to the state.

In Cyber counterintelligence: Concept, actors and implications for security I offered a working definition of

CCI intended to take into account all parties that could present a threat to the security of the state

198 Taylor Telford, “1,464 Western Australian Government Officials Used ‘Password123’ as Their Password. Cool, Cool.,” Washington Post, August 23, 2018, https://www.washingtonpost.com/technology/2018/08/22/western-australian-government-officials-used-password-their-password-cool-cool/. According to a 2019 report by the UK’s National Cyber Security Centre, the world’s most-breached passwords are “123456” and “123456789”. David Bisson, “‘123456’ Remains the World’s Most Breached Password,” Security Boulevard, April 22, 2019, https://securityboulevard.com/2019/04/123456-remains-the-worlds-most-breached-password/; National Cyber Security Centre, “Most Hacked Passwords Revealed as UK Cyber Survey Exposes Gaps in Online Security,” National Cyber Security Centre, April 21, 2019, https://www.ncsc.gov.uk/news/most-hacked-passwords-revealed-as-uk-cyber-survey-exposes-gaps-in-online-security.


requiring CCI to mitigate. That definition, first identified in chapter one and reiterated here, is the

one which will inform the analysis of CCI offered in this thesis:

cyber counterintelligence… concerns the methods, processes and technologies employed to prevent, identify, trace, penetrate, infiltrate, degrade, exploit and/or counteract the attempts of any entity to utilise cyberspace as the primary means and/or location of intelligence collection, transfer or exploitation; or to effect immediate, eventual, temporary or long-term negative change otherwise unlikely to occur.199

By defining CCI in this way, the intent is that all possible attack vectors are taken into account in

considering overall security in cyberspace, while retaining the intellectual coherency required for

this definition to apply. Non-state actors of various kinds and sizes are capable of affecting state

security through the collection and exploitation of intelligence, and in more ways than just a direct

attack on government servers, software, applications or data, though these are all possible targets

and will likely remain targets far into the future.200

The danger that non-state actors pose to the state in cyberspace, I would argue, is mainly through

the public loss of perceived security when democratic institutions, financial entities, the defence

forces, and health institutions suffer cyber-attacks like the ransomware attacks on hospitals in

the UK, Germany, and the US state of Illinois,201 or the loss of client data during the JP Morgan

Chase hacks.202 These sorts of losses affect not only the reputations and profit/loss balances of

the victim institutions, but for those with a perceived link to the state, there is also a loss to the

state itself, both in revenues and in reputation. After all, if important institutions are unable to

repel cyber-attacks, and if the government of their home state is further unable to either attribute

the attacks or aid in the defence of these institutions, then that state might be vulnerable to further attacks from better-equipped malicious actors.203 These sorts of security failures do not represent

199 O’Connor, “Cyber Counterintelligence,” 112.
200 Dale Beran, “The Return of Anonymous,” The Atlantic, August 11, 2020, https://www.theatlantic.com/technology/archive/2020/08/hacker-group-anonymous-returns/615058/; Bussolati, “The Rise of Non-State Actors in Cyberwarfare”; Nina Kollars, “Cyber Conflict as an Intelligence Competition in an Era of Open Innovation,” Texas National Security Review, Special Issue: Cyber Conflict as an Intelligence Contest (2020), https://tnsr.org/roundtable/policy-roundtable-cyber-conflict-as-an-intelligence-contest/.
201 Russell Brandom, “UK Hospitals Hit with Massive Ransomware Attack,” The Verge, May 12, 2017, https://www.theverge.com/2017/5/12/15630354/nhs-hospitals-ransomware-hack-wannacry-bitcoin; Nicole Wetsman, “Woman Dies during a Ransomware Attack on a German Hospital,” The Verge, September 17, 2020, https://www.theverge.com/2020/9/17/21443851/death-ransomware-attack-hospital-germany-cybersecurity.
202 Reuters Staff, “JPMorgan Hack Exposed Data of 83 Million, among Biggest Breaches in History,” Reuters, October 2, 2014, https://www.reuters.com/article/us-jpmorgan-cybersecurity-idUSKCN0HR23T20141002; Kim Zetter, “Four Indicted in Massive JP Morgan Chase Hack,” Wired, October 10, 2015, https://www.wired.com/2015/11/four-indicted-in-massive-jp-morgan-chase-hack/.
203 Cyber-attacks against defence contractors, for example, have resulted in states like China exfiltrating large volumes of sensitive information concerning military technologies like radar systems for fighter planes. Philip

Courteney O’Connor – PhD Thesis 2021

an existential threat to states, but the accompanying demonstration of vulnerability to malicious

actors can undermine the public perception of (and trust in) the ability of the state to ensure the

resilience and security of critical systems. As I argue in chapter three, rather than needing to

securitise the threats posed, what these events contribute to is an elevation of cyber-enabled attacks

to a security risk, allowing for a wider array of management options and techniques, the tone and

intent of which can be communicated to the public.204 They form part of a historical record upon

which state representatives can draw when justifying further elevation of security concerns or risks,

legitimising the request for further resources or powers. Because this record exists, the state is

better able to either institute management measures, or to draw upon the record for a justification

of extraordinary powers in order to mitigate an immediate threat. Elevation perspectives allow for

a more nuanced approach to understanding the evolution of risks and threats in an increasingly

connected and cyber-enabled international system.

Beyond the audience perception of state vulnerability to cyber-attacks, however, there is also the

more insidious vulnerability of public institutions to cyber-enabled threats such as disinformation

campaigns. Information is transmitted extremely rapidly through cyberspace and via cyber-enabled

platforms and devices, making disinformation campaigns both difficult to track and to counter.

Because disinformation campaigns involve the infiltration of false information into a particular

political ecology for the purpose of changing behaviour or policy in a target environment,

counterintelligence is a particularly useful framework of analysis. The decreasing integrity of the

information environment available to the general public should be of significant concern to the

state, and in fact to substate actors that also depend on information integrity to function.

Cyberspace challenges the notions of information integrity and control – cyber counterintelligence

offers a useful framework for understanding and processing the challenges of cyber-enabled

disinformation campaigns.

Disinformation

A vital aspect of counterintelligence is the production of, and response to, deliberately misleading,

false, or manipulative information: disinformation. Disinformation campaigns, defined as the

Dorling, “China Stole Plans for a New Fighter Plane, Spy Documents Have Revealed,” The Sydney Morning Herald, January 18, 2015, https://www.smh.com.au/national/china-stole-plans-for-a-new-fighter-plane-spy-documents-have-revealed-20150118-12sp1o.html; Tom Westbrook, “Joint Strike Fighter Plans Stolen in Australia Cyber Attack,” Reuters, November 10, 2017, https://www.reuters.com/article/us-australia-defence-cyber-idUSKBN1CH00F. 204 In some circumstances, public communications can include techniques for individuals to assess their own interactions with cyberspace and provide an array of tools for personal privacy assurance, or basic individual CCI practices.

Chapter 2 – Contextualising CCI

deliberate introduction of false information into a target environment in order to instigate

behavioural or political change, is a particular concern in an era where the transmission of

information can occur instantaneously, and where the potential reach of that information is global.

Where once it may have been possible to limit the spread of disinformation due to geographical

or linguistic differences, or through the difficulty of information transmission, cyberspace and

related technologies have removed those obstacles. According to Thomas Rid, the modern era of

disinformation began around the 1920s, when the practices that had previously been considered

‘political warfare’ started evolving. Rid traces this evolution through four major ‘waves’ of disinformation.205 The first wave he identifies as having formed in the interwar period

of the late 1920s and 1930s; it was characterised by influence operations and forgeries, perfected

and used by both Eastern and Western blocs.206 During the second wave in the post-World War

II years, disinformation became a profession: for the CIA, it meant the targeted revelation of truths and forgeries, though subversion was still called political warfare; for the Soviet Union, disinformation became the preferred term.207 Either way, “the goals were the same: to exacerbate existing tensions

and contradictions within the adversary’s body politic, by leveraging facts, fakes, and ideally a

disorienting mix of both.”208 Rid identifies the third wave of disinformation as having evolved in the late 1970s; during the late Cold War years, disinformation – by then termed active measures in the Soviet Union – became a well-resourced art form honed to what Rid characterises as “an

operational science of global proportions, administered by a vast, well-oiled bureaucratic

machine.”209 The fourth and final wave of disinformation as identified by Rid developed in the

205 Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (London: Profile Books, 2020), 6. 206 Rid, 6–7; See also Alma Fryxell, “Psywar By Forgery,” Studies in Intelligence, 1961, https://doi.org/10.1037/e740372011-001. 207 According to Garth S. Jowett and Victoria O’Donnell, disinformation is another term for black propaganda: “disinformation is usually considered black propaganda because it is covert and uses false information. In fact, the word disinformation is a cognate for the Russian dezinformatsia, taken from the name of a division of the KGB devoted to black propaganda.” Garth S. Jowett and Victoria O’Donnell, Propaganda & Persuasion (Newbury Park: SAGE Publications, 2018), 23. Jowett and O’Donnell also articulate three forms of propaganda which are classified according to the acknowledged relationship to the source of information and the accuracy of the information communicated: white, grey, and black. “White propaganda comes from a source that is identified correctly, and the information in the message tends to be true…Black propaganda is when the source is concealed or credited to a false authority and spreads lies, fabrications, and deceptions…Gray propaganda is somewhere between black and white propaganda. The source may or may not be correctly identified, and the accuracy of the information is uncertain.” Jowett and O’Donnell, 17–22. See also Jacques Ellul, Propaganda: The Formation of Men’s Attitudes (New York: Vintage Books, 1973); Bruce A. Arrigo, The SAGE Encyclopedia of Surveillance, Security, and Privacy, vol. 3 (SAGE Publications, 2016), 825–28. 208 Rid, Active Measures: The Secret History of Disinformation and Political Warfare, 7. 209 Rid, 7. See also Nicholas J. 
Cull et al., “Soviet Subversion, Disinformation and Propaganda: How the West Fought Against It - An Analytic History, with Lessons for the Present,” LSE Consulting (London: LSE Institute of Global Affairs, 2017), https://www.lse.ac.uk/iga/assets/documents/arena/2018/Jigsaw-Soviet-Subversion-Disinformation-and-Propaganda-Final-Report.pdf; Herbert Romerstein, “Disinformation as a KGB Weapon in the Cold War,” Journal of Intelligence History 1, no. 1 (June 2001): 54–67, https://doi.org/10.1080/16161262.2001.10555046.

mid-2010s, with novel technologies and the internet reshaping and breathing new life into

disinformation.210 According to Rid, however, this evolution is not necessarily a positive one – the

access to increasingly large swathes of foreign populations and the ease with which this is

accomplished has resulted in disinformation campaigns that are more active than they are

measured.211 While modern disinformation operations are rapid-fire and multi-level, they are also more easily identified and mitigated: where the carefully created and disseminated forgeries of the second and third waves might become institutional ‘knowledge’, modern disinformation is frequently detected with ease.

Rid makes a specific point which I discuss further in chapter six of this thesis, namely that

disinformation is particularly corrosive to liberal democracies because it not only reduces trust in

the democratic institutions we use and rely on to structure our societies, but it also damages our

ability to properly assess and discriminate fact from fiction when we make decisions as the relevant

audience of governments that we elect.212 Disinformation is therefore a concern at all levels of a liberal democracy. It is very much a concern for the governing administration, since disinformation is above all an issue for intelligence agencies (both to conduct and to mitigate); in a cyber-enabled system, CCI becomes a pillar of democratic integrity and resilience. Disinformation also becomes

a threat to the democratic audience, who can be more easily targeted by adversary intelligence

communities through participation in cyberspace and particularly via the massive social media

platforms. As such, and perhaps uniquely to cyberspace, counterintelligence that can contribute to

the aggregate security of the sovereign state becomes at least partially the responsibility of the

individual, to say nothing of how the personal cyber security of the individual will increase once

CCI practices are employed. For an in-depth examination of who uses (and who should use) CCI, see chapter five.

Successful disinformation operations also depend on the hostile actor’s understanding of the local

political climate and sentiments of the target state – in order to introduce a strategic narrative to

the target information ecology, the hostile actor must be in tune with ‘local’ sentiment and

emotion, have knowledge of the domestic political myths, and a good grasp of the nation’s values.

210 Bente Kalsnes, “Fake News,” in Oxford Research Encyclopedia of Communication (Oxford: Oxford University Press, 2018), https://doi.org/10.1093/acrefore/9780190228613.013.809. See also Alina Polyakova and Daniel Fried, “Democratic Defense against Disinformation 2.0,” Report (Atlantic Council, June 9, 2019), https://apo.org.au/node/242041. 211 Rid, Active Measures: The Secret History of Disinformation and Political Warfare, 7. 212 Rid, 7–8. See also Don Fallis, “A Conceptual Analysis of Disinformation,” in IConference 2009 Proceedings (iConference, University of Illinois, 2009), 8, http://hdl.handle.net/2142/15205.

Vladimir Petrovich Ivanov, former head of the Soviet Service A (the KGB disinformation unit)

gave this description of disinformation:

Based on the analysis of all the material…the officers are obliged to find the overwhelming outbreaks of crises, dissatisfaction, friction, disagreement, rivalry and struggle in the enemy camp. The discovery of such looming crises…and then the identification of the most sensitive vulnerabilities, required scientific knowledge and a scientific approach, knowledge of the objective processes in the world and in the country of residence.213

Disinformation is usually seeded by adversary intelligence communities in order to achieve a

particular objective. Whether that objective is a specific decision by a target audience (e.g. candidate

A over candidate B), or the sowing of confusion among a population, or the degradation of the

pillars of democracy – it is seeded for a purpose. It is the responsibility and function of

counterintelligence to identify and mitigate these information operations. Disinformation

undertaken at scale that is not mitigated can have national and even international effects – consider

the disinformation surrounding the COVID-19 vaccinations that is being circulated by the Russian

Federation, as an example.214 COVID-19 disinformation will be briefly considered in a vignette in

chapter six in the context of a broader discussion on the importance of audiences in efficient CCI.

Instantaneous communication and transmission of data has increased not only the reach of

disinformation campaigns, but the sectors in which it is possible for these campaigns to adversely

affect the target. That these campaigns are often informed and enabled through the exploitation

of intelligence gathered through prior operations makes them a particular concern to democratic states: in order to enhance the cyber security position of the state, the intelligence apparatus needs to elevate the response to disinformation through the instigation of CCI measures and education.

Misinformation

Misinformation is not the same as disinformation; while similar, the terms should not be used

interchangeably. Misinformation refers to information which is “unintentionally inaccurate or

misinterpreted,”215 whereas disinformation is “deliberately false or misleading information”216 that

is communicated in order to achieve a particular political goal. Misinformation can be dangerous

to liberal democracies in similar ways as disinformation, in that democratic audiences are as

213 Rid, Active Measures: The Secret History of Disinformation and Political Warfare, 315. 214 OECD, “Combatting COVID-19 Disinformation on Online Platforms,” July 3, 2020, https://www.oecd.org/coronavirus/policy-responses/combatting-covid-19-disinformation-on-online-platforms-d854ec48/; Scott, “In Race for Coronavirus Vaccine, Russia Turns to Disinformation,” POLITICO, November 19, 2020, https://www.politico.eu/article/covid-vaccine-disinformation-russia/. 215 Jennifer Hunt, “The COVID-19 Pandemic vs Post-Truth” (Global Health Security Network, 2020), 5, https://www.ghsn.org/resources/Documents/GHSN%20Policy%20Report%201.pdf. 216 Hunt, 5.

susceptible to believing misinformation as they are to disinformation: psychologically, on the side

of the target, there is no difference – the information is false. Instead, the difference lies in the

intent and the origin. Misinformation is false information, but it is unintentionally so –

misunderstanding or misinterpretation has resulted in the belief of the inaccurate information.

Those who communicate misinformation, then, do so in the belief that what they are conveying is

true: there is no inherent malicious intent. Regardless of malicious intent, however, the

communication of inaccurate or misleading information can still cause damage to institutional and

democratic legitimacy and integrity. There can also be far-reaching implications if false information

is transmitted to large segments of the population, as with global health or political conspiracy

theories. As such, misinformation may not be introduced maliciously into an information ecology,

but it still has the potential to damage political and democratic integrity through the acceptance of

misleading or inaccurate information into the belief structures of the democratic audience. It is

therefore a counterintelligence requirement to identify and mitigate the effects of misinformation

as soon as practicable. In relation to both disinformation and misinformation, it is crucial that

governments recognise the risks to audience psychology of such information operations and

manage those risks accordingly. I discuss this in detail in chapter six.

Conspiracy theories

Conspiracy theories are a form of misinformation,217 in that they are created, supported, and

communicated (for the most part) by communities of individuals who genuinely believe that their theory is grounded in evidence and, given sufficient explanation, within the bounds of belief. Some conspiracy theories are well known: that Area 51 contains aliens that crash-landed in Roswell, for

example.218 That Neil Armstrong never landed on the moon,219 or that climate change is an

imposed fiction are other examples.220 The individuals that engage in these communities do so out

217 Though I note here that conspiracy theories may be deliberately created, seeded, spread, and maintained as active disinformation programs. While ordinarily characterisable as misinformation, it is important to remember that conspiracy theories can just as easily have their origins in disinformation and influence campaigns: disinformation and misinformation are not mutually exclusive. 218 Shelley E. McGinnis, “Dallas, Roswell, Area 51: A Social History of American ‘Conspiracy Tourism.’” (Chapel Hill, University of North Carolina, 2010); Martin Parker, “Human Science as Conspiracy Theory,” The Sociological Review 48, no. 2 (2000): 191–207; Lucianne Walkowicz, “The Source of UFO Fascination,” Issues in Science and Technology 35, no. 4 (2019): 12–14. 219 Lauren Nowakowski, “The Moon Landing: Fake Movie Set or the Real Deal?,” The Psychology of Extraordinary Beliefs, March 29, 2019, https://u.osu.edu/vanzandt/2019/03/; Viren Swami et al., “Lunar Lies: The Impact of Informational Framing and Individual Differences in Shaping Conspiracist Beliefs About the Moon Landings,” Applied Cognitive Psychology 27, no. 1 (2013): 71–80, https://doi.org/10.1002/acp.2873. 220 Karen M. Douglas and Robbie M. Sutton, “Climate Change: Why the Conspiracy Theories Are Dangerous - Karen M. Douglas, Robbie M. Sutton, 2015,” Bulletin of the Atomic Scientists 71, no. 2 (2015): 98–106, https://doi.org/10.1177/0096340215571908; Joseph E. Uscinski, Karen Douglas, and Stephan Lewandowsky, “Climate Change Conspiracy Theories,” in Oxford Research Encyclopedia of Climate Science (Oxford: Oxford University Press, 2017), https://doi.org/10.1093/acrefore/9780190228620.013.328.

of belief. They can also be incredibly damaging, as recognised by Jennifer Hunt – “[i]n 2019, the

Federal Bureau of Investigation (FBI) identified conspiracy theories as a form of domestic

extremism in that they channel violence toward perceived enemies, writing that ‘these conspiracy

theories very likely encourage the targeting of specific people, places, and organisation.’”221

Particularly in the wake of the domestic unrest in the US and the contributions of senior political

leaders to the misinformation surrounding the 2018 and 2020 elections, as well as the ongoing

impacts of the novel coronavirus pandemic, conspiracy theories are taking on an increasingly

violent nature.222

Propaganda

Another related concept and political communication tool is propaganda. Recent research has

indicated that the Russian Federation is employing disinformation against Western-developed and

produced COVID-19 vaccines, with the presumed intent of garnering further support for and

purchase of Russia’s own Sputnik vaccine.223 Coronavirus conspiracy theories will be discussed

further in chapter six. According to Mark Lowenthal, propaganda is a synonym for

disinformation.224 While they are certainly related, I disagree that they are one and the same.

Whereas disinformation is false information communicated in the express pursuit of a political

goal, propaganda can just as easily (and often does) contain completely accurate information. I do,

however, agree with Lowenthal’s specific definition of propaganda, which he identifies as “the old

political technique of disseminating information that has been created with a specific political

outcome in mind. Propaganda can be used to support individuals or groups friendly to one’s own

side or to undermine one’s opponents.”225 Jason Stanley has a different, but complementary,

221 Hunt, “The COVID-19 Pandemic vs Post-Truth,” 6. 222 Arthur C. Brooks, “Opinion | Conspiracy Theories Are a Dangerous Threat to Our Democracy,” Washington Post, September 3, 2019, https://www.washingtonpost.com/opinions/2019/09/03/conspiracy-theories-are-dangerous-threat-our-democracy/; John Cook and Stephan Lewandowsky, “Coronavirus Conspiracy Theories Are Dangerous – Here’s How to Stop Them Spreading,” The Conversation, April 20, 2020, http://theconversation.com/coronavirus-conspiracy-theories-are-dangerous-heres-how-to-stop-them-spreading-136564; Graham Lawton, “Conspiracy Theories,” New Scientist, 2021, https://www.newscientist.com/definition/conspiracy-theories/; N.C., “Conspiracy Theories Are Dangerous—Here’s How to Crush Them,” The Economist, August 12, 2019, https://www.economist.com/open-future/2019/08/12/conspiracy-theories-are-dangerous-heres-how-to-crush-them. 223 Michael R. Gordon and Dustin Volz, “Russian Disinformation Campaign Aims to Undermine Confidence in Pfizer, Other Covid-19 Vaccines, U.S. Officials Say,” Wall Street Journal, March 7, 2021, sec. Politics, https://www.wsj.com/articles/russian-disinformation-campaign-aims-to-undermine-confidence-in-pfizer-other-covid-19-vaccines-u-s-officials-say-11615129200; Scott, “In Race for Coronavirus Vaccine, Russia Turns to Disinformation”; Mark Scott and Carlo Martuscelli, “Meet Sputnik V, Russia’s Trash-Talking Coronavirus Vaccine,” Politico, April 1, 2021, https://www.politico.eu/article/russia-coronavirus-vaccine-disinformation-sputnik/; Elise Thomas, “Covid-19 Disinformation Campaigns Shift Focus to Vaccines,” The Strategist, August 23, 2020, https://www.aspistrategist.org.au/covid-19-disinformation-campaigns-shift-focus-to-vaccines/. 224 Lowenthal, Intelligence: From Secrets to Policy, 235. 225 Lowenthal, 235.

understanding226 of propaganda, defining it as political rhetoric. Stanley identifies a significant

problem in relation to liberal democracy that is best described as ‘harmful propaganda’: if the

political rhetoric causing the harm cannot be mitigated, then the epistemic basis of liberal

democracy is flawed and in fact we cannot continue to use the descriptor of liberal democracy.227

He makes the link between successful propaganda and belief structures as well, which I examine

further in chapter six. That link also explains the growth of conspiracy theory communities and the subsequent rise in violence against those who are not members of those communities, an

example of in-group/out-group thinking (or enmity and exclusion dynamics, discussed earlier in

this chapter through the examination of Schmittian realism).228 According to Stanley,

…the mechanisms that underlie effective propaganda are implicated even in barriers to liberal democracy that seem not to involve them…underlying effective propaganda are certain kinds of group identities. Some group identities lead to the formation of beliefs that are difficult to rationally abandon, since abandoning them would lead them to challenge our self-worth. When our identity is tied up with that of a particular group, we may become irrational in these ways. When this occurs, when our group affiliates are such as to lead us to these kind of rigidly-held beliefs, we become especially susceptible to propaganda.229

Belief persistence in communities with group identities can result in irrational behaviour and in continued faith in information that can be credibly proven wrong. Adversaries and hostile

actors, with sufficient knowledge of the local political myths and the sentiment of the target

populations, can take advantage of the group identity to impose external strategic narratives that

benefit the adversary. In other words, if propaganda is understood as political rhetoric designed

and communicated to induce a specific political outcome, effective propaganda threatens both the

audience and the integrity of liberal democracies. Counterintelligence measures need to be

designed and undertaken to identify and mitigate propaganda operations, as they may have

negative effects on audience beliefs, belief structures and behaviour. Counterintelligence actors

can also use propaganda as a countermeasure in counter-propaganda operations, thus engaging in

proactive counterintelligence. Because cyberspace enables ease of access to target audiences, cyber-

enabled propaganda operations have the potential for pervasive effect and broad distribution,

requiring a concentrated and organised response.

226 Jason Stanley, How Propaganda Works (Princeton: Princeton University Press, 2015), 4. 227 Stanley, 10–11. 228 Stanley, 10–11. 229 Stanley, 19.

Computational propaganda

The widespread access to cyberspace and the internet, as well as the broad audience participation

in massive social media platforms, has encouraged a milieu in which computational propaganda is

capable of shaping public debate and inducing both psychological and behavioural change in

targeted populations. “Computational propaganda – the use of algorithms, automation and big

data to shape public life – is becoming a pervasive and ubiquitous part of everyday life.”230 This is

not a tool specific to hostile actors, in any respect – generic advertising uses computational

propaganda to induce customer purchases, for example. The use of computational propaganda,

however, can massively increase the reach and impact of disinformation and interference

campaigns. Computational propaganda can serve as a form of information control or information

laundering, where certain facts or sources (accurate or otherwise) can be signal-boosted by bot

accounts in order to gain traction in mainstream media, thus increasing the size of the recipient

population. The necessity for effective and efficient CCI becomes more pressing in the face of

computational propaganda tools, which can signal-boost and transmit disinformation faster and

more effectively than human agents or even networks of agents. Computational propaganda can

also be employed in counterintelligence operations, as it is itself a neutral set of tools and tactics.

Computational propaganda, and particularly the use of social media and communications

platforms in relation to disinformation and foreign interference, are themes I return to in the sixth

chapter of this thesis.

Conclusion

This chapter has provided a broad overview of the relevant bodies of literature necessary to answer

the question with which the research engages: how state threat perception of cyberspace has

developed, and how or whether that perception is influencing the evolution of CCI as a response

to cyber-enabled threats, such as disinformation. In assessing the literature on traditional

securitisation theories, I reached the conclusion that rather than viewing securitisation as a process

of social construction of threats, an evolution and modernisation of the framework is necessary in

order to answer my research question. Instead of social construction of threat, political concerns

undergo threat elevation processes that are based on, and justified by, pre-existing security

conditions.

230 Samantha Bradshaw and Philip N. Howard, “The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation” (Oxford: University of Oxford Computational Propaganda Research Project, 2019), i, https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf?utm_source=newsletter&utm_medium=email&utm_campaign=timestop10_daily_newsletter.

This more nuanced understanding of the elevation of threats also informs the second body of

literature addressed in this chapter: that of traditional counterintelligence, which must also evolve

in order to adequately counter information operations in the cyber age. I discussed the

development and purpose of traditional counterintelligence before using that literature review to

lay the foundation of future chapters by examining how CCI should be defined.

Counterintelligence provides a useful framework through which to understand the problems posed

by instantaneous communications technologies, like the decreasing integrity of information that

informs polities. The final section of this chapter reviewed the literature relating to disinformation,

providing a brief discussion of the development and use of disinformation, and how it is

differentiated from other forms of false or misleading information. Modern disinformation

campaigns are increasingly conducted through cyber-enabled technologies and platforms, and are

informed by and affect intelligence collection and exploitation, making them a particular problem

for counterintelligence actors. Having laid this foundation, the next chapter articulates the threat

elevation analysis framework which is used as the analytical lens for the remainder of the thesis.

3 - Threat Elevation Analysis

In this chapter, I offer an alternative conception of securitisation theory to that of the

Copenhagen School. As I subsequently argue, developing this alternative view allows for a more

practical and comprehensive view of cyber counterintelligence (CCI). Where traditional

counterintelligence was the explicit purview of the state actor, in a cyber-enabled information age,

there is a growing necessity for substate actors to also engage in CCI measures in order to

contribute to the overall cyber security of the state. Cyber counterintelligence is one of the four

branches of cyber security identified in the introduction to this thesis (see Figure 1, The Elements of Cyber Security). How then can we assess whether or not the importance of CCI has been

adequately recognised and supported in national security policies and documentation? A nuanced

analytical framework that recognises the broader array of securitising actors, methods of discursive

legitimation of threat elevation, and contemporary supra- and interstate security concerns such as

disinformation will inform the examination of CCI understanding and implementation.

In the traditional view, securitisation is a process by which an issue is constructed as a security

threat via a speech act, a performative act of discourse designed to convince a particular audience

of the existentially threatening nature of the chosen issue (designated threat) to the state. I argue

that, rather than accept that the process of securitisation constructs threats without relevant

evidence to support the claim of existential danger, securitisation should be viewed as a

process of elevation. As an elevation process, securitisation would thereby take into account such

variables as the historical relationship between the hostile or conflicting parties, and the evidence

for which a claim of existential threat is being made. There would need to be a record of risk, both

actual and perceived, such that there is a pre-existing reasonable belief that the hostile party in

question is capable of presenting an existential threat to the state or its interests. In this respect,

securitisation as a threat elevation engages more with realist understandings of the effects of pre-

existing structural conditions without negating the basic functions of the securitisation framework.

Moreover, this reconstruction of the notion of securitisation as threat elevation has significant

implications for CCI.

Securitisation as Threat Elevation Theory

Securitisation theory herein understood is not a method or theory of (social) construction of

threats within the field of critical security studies. Rather, it is a method, or theory, of elevation from

political issue to security threat, requiring and resulting in the creation or change of actionable

policy. With this overall concept in mind, this chapter will examine and build on the concept of

securitisation as elevation, reformulating existing securitisation theory into a framework that is

applicable and relevant in the modern context. It provides answers to the questions: what are the pillars of elevation theory? How do the changes made to the original securitisation framework contribute to a better understanding of the development, use, and exploitation of cyberspace?

Once established, elevation theory will be used to examine how, or if, cyberspace has been

securitised within an analysis of the history and development of the cyber domain. When did

cyberspace become an attack vector, and how? What have been the responses to the weaponisation

of cyberspace by actors within the identified collectives? How has all of this impacted the field of intelligence? Again, in addition to elevation being a relevant contribution to security studies, this analysis is in service of providing a better and more effective way of understanding and applying CCI.

Differences from Traditional CST

Traditional or Copenhagen securitisation theory (CST) forms the foundation upon which all

successive forms of securitisation were built. As such, I have made minimal changes to the number

or type of variables that must be taken into account for a securitisation process according to

elevation theory. One possible criticism of this approach is that by utilising a very similar analytical toolbox to construct my variant theory, the resulting framework will lack sufficient disparity to produce any ‘new’ knowledge. My answer to this criticism is that, broadly speaking,

most international relations theories draw from the same box of variables in their analysis. The

merit of any particular international relations theory, whether from realist, liberal, or constructivist

schools of thought, is based upon the utility of the conclusions that can be reached using that

framework’s particular analytical lens. Elevation theory is one such lens.

The primary difference between traditional securitisation theories and elevation theory is the

presumption of a historical relationship between parties in a state of conflict or pre-conflict. I argue that conflict at the international level, be it regional or global, cannot spark from nothing. Every country has historical relationships upon which decisions of alliance, amity, or enmity are based. This cannot be doubted, for these relationships influence and are influenced by political and military decisions every day, as they have since the creation of the modern Westphalian state

system in 1648.231 It is the change in balance of any one or more of these relationships that will

cause the actions of the state in question to be considered more or less hostile.

231 Geoffrey Wiseman and Paul Sharp, “Diplomacy,” in An Introduction to International Relations, ed. Anthony Burke, Jim George, and Richard Devetak, 2nd ed. (Cambridge: Cambridge University Press, 2011), 259, https://doi.org/10.1017/CBO9781139196598.022.

Chapter 3 - Threat elevation analysis

We should not confuse this concept with balance of power theory,232 because elevation theory is not concerned

with the construction or breakdown of alliances and bandwagons in the international system and

the consequent breakdown of power relationships. Elevation theory instead seeks to explain how

an issue becomes an existential security threat, the process through which it must go before it can be

the instigating factor of a securitising move by a relevant actor with legitimate authority to speak

security. This process, which traditional securitisation theory sees as one of (social) construction,

in the theoretical framework offered by this thesis becomes a process of elevation based on existing

variables, both provable and perceived.

Threat Elevation Process

Recall from chapter two that CST is the general theory that threats are securitised, and

extraordinary powers of mitigation engaged, when a legitimate actor speaks security before a

relevant audience. In contrast to securitisation, I am presenting here a process of threat elevation.

The threat elevation process begins, at base, with either an examination, or at least a general

understanding, of the historical relationship between hostile or conflicting parties, and how the

recent actions of those parties ought to be understood in light of that relationship. It could be

argued that any single existential threat is either an action, or the result of an action, by either an

adversary or the state that is securitising a perceived threat. If such is the case, then there must be

evidentiary support in the recent or historical relationship between conflicting parties that would

lend credence to the perception of malicious intent and subsequent threat of danger. It would be

difficult, if not impossible, to securitise an issue that has not previously been considered

threatening in one way or another.

This is where elevation theory departs from traditional securitisation theory. In CST, presumably,

any political issue or concern can be securitised as long as the securitising actor can convincingly

represent that issue or concern as existentially threatening to the referent object (that object

typically being the sovereign state).233 The introduction of the historical relationship as a variable, and the foundation of elevation theory, depend on a pre-existing acknowledgement of the threat that a party does, or could, represent. This acknowledgement must be based at least in part on the

historical record, or there would be no grounds for a securitisation. It is unlikely that a threat of

232 A discussion of balance of power theory is beyond the scope of this thesis. See Partha Chatterjee, “The Classical Balance of Power Theory,” Journal of Peace Research 9, no. 1 (1972): 51–61; T. V. Paul, James J. Wirtz, and Michel Fortmann, Balance of Power: Theory and Practice in the 21st Century (Stanford University Press, 2004). 233 Though as noted in chapter two, this is dependent on the referent object and in some significant part, the sector being securitised.


existential nature could appear from absolutely nothing, and I hold that no statesman would

attempt to securitise an issue if there is no record of past threat or danger. It would prove extremely

difficult to persuade a relevant audience to perceive danger where danger, or at least the threat of danger, has not previously existed. Any issue which could conceivably be securitised, then, must

already be on the radar of political and/or military authorities, and, to some degree, their audience.

If this is the case, then such an issue cannot be constructed as a security threat, because the threat

or the potential of threat has already been acknowledged to exist. If a threat already exists, albeit

as an object of concern or a political issue, then it must be elevated if the danger surrounding said

issue increases. Thus, securitisation is the process by which a political concern becomes a security

threat.

The political issues or concerns that are the subject of an elevation process can be separated into

three broad categories, which I have labelled first, second, or third order. Figure 5 - Securitisation

and the orders of concern is a pyramidal depiction of the three. Third order

concerns are those within the bounds of normal politics. These issues can be resolved within the

normal institutional (political) processes, and are not necessarily of military concern. Certainly, they

are not considered threatening enough to require an excess of attention or extraordinary measures of mitigation. Generally speaking, elevation theory is concerned with the second and third

orders, and the journeys and transitions between the three orders of concern. The second order

issues are those which have been riskified; they have been identified as risks or threats to

state security and are in need of more attention than third order issues but are not yet considered

existentially threatening. First order issues are those issues which have been fully securitised; they

are completely beyond the realms of normal politics and regular institutional processes; they

require extraordinary measures of threat mitigation and represent an existential threat to the state,

whether perceived or real. The journey from third order to second order is the riskification process,

and the journey from second to first order the securitising process. The whole process is one of

elevation, with riskification concerned with the transition from third to second order issues, and

securitisation concerned with the transition from second to first. Riskification was first identified

and elaborated by Olaf Corry, and his work has greatly influenced the elevation theory

framework.234 In order to properly understand elevation theory, I will now examine these separate

but related processes.

234 Corry, “Securitisation and ‘Riskification.’”


Riskification and Securitisation in Cyberspace

As defined by Olaf Corry, riskification is a process by which an issue is identified as a possible

security threat and mitigated as such.235 A security risk, of course, is not at the same level of danger

as a threat in the sense of securitisation theory (any variant), in which the threat is considered

existential. It is important, as stated in Corry’s work, to differentiate between those issues or

concerns which are risks, and those which are threats and thus likely to be the subject of a

securitising process. However, for the purposes of this thesis, Corry’s elaboration of the nature of

the process of riskification will be adopted as part of the securitisation framework. In riskification

according to Corry, it is perfectly acceptable to designate something as a risk and no social

construction of threat is required: it is assumed that the acknowledgement of the risk is a settled

matter amongst those who handle security concerns. As such, no speech act is required in order to

designate a risk, and threat construction only becomes a variable when the risk increases to the

point of requiring a securitisation process. This is one of the primary differences between

riskification and securitisation as theoretical frameworks — the focus on the element of risk is why

second order concerns in elevation theory have been nested within the riskification framework.

Riskification performs a crucial intermediary function between third order concerns (which can be

handled through regular institutional process and thus constitute normal politics) and first order

concerns (which are or have been the subject of securitisation and require extraordinary measures

of threat mitigation) – refer to Figure 5 - Securitisation and the orders of concern.

235 Corry.


Figure 5 - Securitisation and the orders of concern

The true differences between a security risk and a security threat, as outlined by Corry, are the

sense of immediacy of the problem at hand and the existential nature of that problem.236 A risk is

“a scenario necessarily located in the future which is connected to a policy proposal [that offers] a

way of preventing that risk from materialising into real harm,”237 whereas a threat is necessarily

more immediate a danger and thus a concern of the first order. Therefore, second order risks have

been acknowledged as having the potential to become a threat but do not (at present) pose enough

actual or perceived danger to be elevated to a first order threat. If, and only if, the nature of the

risk changes such that the dangers posed are imminent and potentially existential, can that risk

even qualify as a potentially securitisable concern.

This is also where the notion of historical relationship and record of malicious or hostile intent

must be considered. If a hostile party or a security concern has already been riskified, and this has

already been acknowledged as a potential threat, then while it can be elevated through securitisation,

it cannot be constructed. There must be evidence of potential danger in order for something to be

236 Corry, 244. 237 Corry, 244.


designated a risk in the first instance. And of course, without that acknowledgement of danger, a

third order concern is unlikely to be successfully securitised and thus raised to a first order concern,

by even the most legitimate and authoritative of actors.

After an issue has been riskified, if the potential danger posed is perceived to increase or be

extremely likely to increase, and it poses some (or can be presented as posing some) existential risk

to a given referent, then that issue becomes securitisable. The journey an issue undertakes from

second order risk (having been successfully riskified) to first order threat is the securitising process itself. The reverse of this process, wherein threats are de-securitised and risks are de-riskified, is called threat attenuation.238

This is where the interaction of the exigent variables occurs, where the legitimate actor authorised

to speak security performs the speech act before a relevant audience, with the intent of convincing

that audience of the existential nature of the threat in question. The ultimate goal of the securitising

process, a successful securitisation, sees that actor or a legitimate (institutional) body of their

choosing imbued with extraordinary powers (outside the norm for institutional processes) to

undertake extraordinary measures of threat mitigation in the face of that (perceived) threat. It

should be noted that even in the case of a successful riskification, there is no guarantee that any

given securitising process will be successful. There is also no guarantee that a securitisation that

might be considered morally necessary (such as environmental security) will be successful in the

face of economic, military, and political commitments, pressures and consequences. Elevation

theory does not seek to provide a moral explanation or analysis of how a political concern becomes

a security threat; it simply seeks to explain how the securitisation framework is best utilised to

analyse the process of elevation between third, second, and first order security concerns. In other

words, to elevate a third order concern to a second order risk, the issue must undergo riskification.

To elevate a second order risk to a first order threat, the issue must undergo securitisation.

238 Threat attenuation receives less attention in this thesis than threat elevation, in large part due to the premise that if threat mitigation and/or risk management measures are successful, then those threats/risks have by definition undergone threat attenuation. Mitigation and management are attenuation practices. I acknowledge that this is an area of threat elevation analysis which necessitates further development, but it is beyond the scope of this thesis, as my focus is on the perceived threat elevation of cyberspace.


Relevance and Applicability

As many scholars have noted, and as has been discussed in earlier sections of this thesis, the

securitisation framework as outlined by Buzan, Waever and de Wilde has several weaknesses.239 In

chapter two, one of the weaknesses that I identified was the assumption that the securitising

process examined and explained the way in which an individual or representative of a given group

constructed a security threat, as though from thin air. One would logically assume that for the

purposes of the speech act at least, evidence would be required to prove existential threat to the

relevant audience before extraordinary powers would be granted, or even to convince the audience

of the existential nature of a given threat in the first instance. However, I was unable to find an

examination of this particular issue in the literature surveyed, and it is to fill this apparent gap in

the literature that I have constructed elevation theory as an alternative. By introducing the variables

of historical relationship and evidentiary support of (perceived) threat or danger, the

securitising process becomes one of threat elevation rather than threat construction. In so doing,

elevation theory moves to occupy a type of middle ground between its original constructivist field

of theory, and the more structural, military-focused realist field of theory. The three-tiered

approach to understanding the various levels of threat, with the riskification process representing

the transition between the third and second orders of concern, and the securitisation process

representing the transition between the second and first orders of concern, offers a framework

which can be easily applied to a multiplicity of scenarios.

This applicability to a wide range of potential scenarios is particularly useful when analysing long-term securitising processes. When securitisation is considered an elevation process rather than a

bottom-up construction process, there is the possibility of analysing or examining greater time

spans. Particularly at the international and global scale, the rise of security threats from political

concerns often takes place over a period of months or even years, with a multitude of speech acts

and even securitising actors before the securitisation is complete and/or successful.240 It may be

possible to point to one particular speech act and identify it as the conclusive securitising move,

but that may also be inordinately difficult. In elevation theory, any number of securitising acts over

a given period of time can contribute to the elevation from political concern to risk, or from risk

to fully securitised threat. In the current digital age, with near-instantaneous media coverage of

239 Knudsen, “Post-Copenhagen Security Studies: Desecuritizing Securitization”; McDonald, “Securitization and the Construction of Security”; Williams, “Words, Images, Enemies”; Williams, “Securitization as Political Theory: The Politics of the Extraordinary.” 240 This is not to discount swiftly-rising or ‘surprise’ threats, though it could be argued that with the benefit of hindsight and the elevation framework, the growth of a threat can always be traced and analysed. Elevation theory is not a predictive framework; it is ex post facto analytic.


every newsworthy issue globally, we have become accustomed to hearing about the ‘threats’ almost

as soon as they become apparent. However, by becoming inured to the spectacularised threats,

people may have become less aware of the broad range of threats which have not necessarily been

deemed spectacular enough for mediatisation. In an age where almost everything is available

immediately, our view of security has narrowed both in focus and in time; fewer people are aware

of the ‘broader picture,’ as it were, because as a general rule we have a greater disregard for the

middle-to-distant past than previous generations. This is a clear and concerning analytical oversight

on the part of citizens, academics, policymakers, and the media that deliver us so much news, so

rapidly. Reintroducing the value of time (in elevation theory) is an attempt to extend analytic

perspicacity such that a greater understanding of security threats is possible.

In terms of applying elevation theory to cyberspace, and eventually CCI as the primary focus of

this thesis, the element of time is particularly important. While cyberspace (and the Internet) in its

current form is what we as a people are most familiar with, both cyberspace and the Internet have

existed for decades, albeit in different forms. The elevation of cyberspace from a technological

and communications miracle, to a political concern, to a military risk, to a security threat took place

over a period of decades rather than weeks or months. While there are highly mediatised instances

of cyber-attacks and exploitation that it would be possible to identify as having instigated

securitising moves, it is impossible to identify a singular instance to which the securitisation of

cyberspace (in the original terms of threat construction rather than elevation) can be traced.241 It has

been a process of elevation spanning decades. Like the juggernaut that cyberspace is, it has only been recently, as the use and exploitation of cyberspace and related technologies have increased monumentally, that the public at large have begun to realise the vulnerabilities that cyberspace can, does, and will represent to general understandings of security.

The Threat Elevation of Cyberspace

In many respects, it is possible to state that cyberspace has been elevated, whether or not this is acknowledged and admitted. Entire security industries have been built around the

opportunity and the vulnerability of cyberspace. The emergence at the end of the twentieth century

of official computer emergency response teams, or CERTs, is proof that cyberspace was moving

through an elevation process.242 CERTs indicate that cyberspace at the least has been successfully

241 This point is borne out in the case study of the UK, presented in chapter four. 242 European Union Agency for Cybersecurity, “CSIRT Inventory,” Topic, ENISA | European Union Agency for Cybersecurity, 2021, https://www.enisa.europa.eu/topics/csirts-in-europe/csirt-inventory; International Cyber


riskified, and is an issue no longer solely confined within the bounds of normal politics. It has only

been in recent years, however, that cyberspace has been subject to the securitising process: more

evidence is becoming available to suggest that the use of cyberspace as an attack vector is increasing

in both volume and severity.243 Information that previously would have been extremely difficult (if

not impossible) to obtain except through meticulous intelligence operations may now be easier to

acquire through cyberspace. In addition to both state and non-state actors having the capacity to

launch cyber-attacks on vulnerable infrastructures, cyberspace is an intelligence vulnerability as

well. Terabytes of information are exfiltrated and exploited every year, and it is often extraordinarily difficult to attribute attacks to a specific party.244 These factors, among others, have contributed to

the elevation of perceived danger, both real and potential, of cyberspace.

Having examined traditional securitisation theory, and offered an elevation variant that I believe

to be particularly applicable to cyberspace as well as being a relevant analytical tool for any conflict

situation, this section will now analyse the development of cyberspace as we know it today, through

the lens of elevation theory. If securitisation is a process of elevation, then we should be able to

examine the recent history of cyberspace in such a way that the process from technological marvel

to security threat is not only evident, but occurs in a logical sequence with the requisite evidentiary

support. In addition, the nature of the relationship between parties that have perpetrated and

suffered cyber-attacks and/or exploitation during the period examined will also bear upon our

understanding of the nature of the elevation process. I posit that the threat perception of

cyberspace has been elevated on two levels, albeit not mutually exclusive ones: elevation has occurred

at the individual state level, and it has also occurred at the international (global) level. The manner

in which individual states have elevated, or are elevating, the threat perception of cyberspace has

contributed to a global threat elevation process, which is in turn affecting the manner in which

Center, “C.E.R.T in Rest of the World,” International Cyber Center | George Mason University, 2021, http://www.internationalcybercenter.org/certicc/certworld; United Nations Institute for Disarmament Research, “The Cyber Index: International Security Trends and Realities” (Geneva: United Nations Institute for Disarmament Research, 2013), https://www.unidir.org/files/publications/pdfs/cyber-index-2013-en-463.pdf. 243 Adi Gaskell, “The Rise in State-Sponsored Hacking in 2020,” CyberNews, February 5, 2020, https://cybernews.com/security/the-rise-in-state-sponsored-hacking/; Interpol, “INTERPOL Report Shows Alarming Rate of Cyberattacks during COVID-19,” Interpol, August 4, 2020, https://www.interpol.int/en/News-and-Events/News/2020/INTERPOL-report-shows-alarming-rate-of-cyberattacks-during-COVID-19; Sean Joyce et al., “How to Protect Your Companies from Rising Cyber Attacks and Fraud amid the COVID-19 Outbreak,” PwC, March 2, 2021, https://www.pwc.com/us/en/library/covid-19/cyber-attacks.html. 244 Collin S. Allan, “Attribution Issues in Cyberspace,” Chicago-Kent Journal of International and Comparative Law 13, no. 2 (2013 2012): 55–83; David D. Clark and Susan Landau, “Untangling Attribution,” in Proceedings of a Workshop on Deterring Cyberattacks: Informing Strategies and Developing Options for U.S. Policy (Washington, D.C.: National Research Council of the National Academies, 2010), 25–40, https://doi.org/10.17226/12997; Thomas Rid and Ben Buchanan, “Hacking Democracy,” SAIS Review of International Affairs 38, no. 1 (2018): 3–16, https://doi.org/10.1353/sais.2018.0001.


states perceive the cyber domain. The self-reinforcing nature of this elevation is not

particular to the cyber domain, and could be observed in other contexts as part of the elevation

framework. Outside influence and perception are an integral part of the elevation framework, and

third-party perception of or response to the (perceived) threat of cyberspace could in fact be the

instigating event that begins a riskification or securitisation process.

Elements of Threat Elevation Analysis

I turn now to how the elevation process affects the elements of securitisation identified in chapter

two. In large part, the interaction of the elements of securitisation remains the same when examined through the lens of elevation theory. There will be some differences, usually to modernise an element

such that it is more applicable to a contemporary political system where digital technologies are

proliferating rapidly.

The Speech Act

As discussed in chapter two, there have been several criticisms of the concept of the speech act,

usually in terms of what differentiates a securitising speech act from a general political speech.245

In a digitalised society such as now exists across the majority of the world, it is unreasonable for a securitisation process to be valid only when instigated through a speech act performed by a legitimate actor. Whereas in the past a much smaller echelon of individuals dealt in concerns that

could threaten the state and public communication surrounding national security concerns was

less common, in modern times, communications are instantaneous and the broadcast media have

just as much chance of instigating a riskifying or securitising process as some elected officials. This

is due in part to the media having been given the legitimacy and authority to transmit, analyse, and

opine on issues of security almost immediately after those issues become apparent.

If we take once again the September 11 attacks as an example, mere moments after the first Tower

was hit, the disaster was being broadcast worldwide in real time; small children in New Zealand

kitchens watched the second Tower being hit at nearly the same time as those on the streets of

Manhattan.246 If, as previously discussed, a discursive legitimating act can include transmission of

ideas by more than one medium (i.e., not only the spoken word), then the image of the Towers billowing

smoke just before collapsing was the instigating discursive act securitising, across the world,

245 See chapter one, section concerning the speech act. 246 This author speaks from experience, having viewed the second collision on the small television in her New Zealand kitchen, which was tuned to the local news station.


Islamic terrorism against the West.247 The elevation of Islamic terrorism from second order risk to

first order threat was supported by the various declarations from news media organisations such

as Le Monde (Nous sommes tous américains, ‘We are all Americans’),248 through to state officials (former President George Bush, for example, in a number of speeches following the September attacks;249 Prime Minister Blair of the United Kingdom (UK);250 President Chirac of France;251 and Prime Minister Howard of Australia, among others).252 Or, to take a more radical view, it could be argued that Islamic

terrorism had been a second order risk for over a decade prior to the Twin Tower attack, and that

9/11 was the culminating act of the elevation process, ending in a fully securitised and legitimated

existential threat against the US, and Western allies as a matter of course. Regardless of how this

particular instance is interpreted, the point of this section is to underline that the speech act can

no longer be defined as an explicitly verbal discursive act if securitisation is to continue being an

applicable political theory. So, from the elevation perspective, the “speech act” should in fact be

termed “the discursive legitimation” in order to include all relevant examples of a discursively

legitimating act and will be so named hereafter.

The Referent Object

When talking about security in terms of elevation, the referent object in question does not undergo

any change. Regardless of whether you are analysing through a traditional securitisation lens or the

elevation lens offered in this thesis, the object deemed to be at risk is not affected. The nature of

247 Matea Gold and Maggie Farley, “World Trade Center and Pentagon Attacked on Sept. 11, 2001,” Los Angeles Times, September 12, 2001, https://www.latimes.com/travel/la-xpm-2001-sep-12-na-sept-11-attack-201105-01-story.html. See also Figure 4 - Photo: Robert Clark, Associated Press, 2001. 248 Jean-Marie Colombani, “Nous sommes tous Américains,” Le Monde.fr, September 13, 2001, https://www.lemonde.fr/idees/article/2011/09/09/nous-sommes-tous-americains_1569503_3232.html. 249 George W. Bush, “President Bush Addresses the Nation,” Washington Post, September 20, 2001, https://www.washingtonpost.com/wp-srv/nation/specials/attacked/transcripts/bushaddress_092001.html. 250 Tony Blair, “Full Text of Tony Blair’s Speech to Parliament,” The Guardian, October 4, 2001, http://www.theguardian.com/world/2001/oct/04/september11.usa3. 251 Jacques Chirac, “Allocution de M. Jacques Chirac, Président de la République, sur les attentats terroristes à New York et à Washington, la solidarité de la France avec les Etats-Unis et la coopération de la France dans la lutte contre le terrorisme, Washington, le 19 septembre 2001,” elysee.fr, September 19, 2001, https://www.elysee.fr/jacques-chirac/2001/09/19/allocution-de-m-jacques-chirac-president-de-la-republique-sur-les-attentats-terroristes-a-new-york-et-a-washington-la-solidarite-de-la-france-avec-les-etats-unis-et-la-cooperation-de-la-france-dans-la-lutte-contre-le-terrorisme-washington-le-19-sep; Jacques Chirac, “Lettre de M. Jacques Chirac, Président de la République, adressée à M. George Walker Bush, Président des Etats-Unis d’Amérique, à la suite des attentats ayant frappé les villes de New York et Washington, Paris le 11 septembre 2001,” elysee.fr, September 11, 2001, https://www.elysee.fr/jacques-chirac/2001/09/11/lettre-de-m-jacques-chirac-president-de-la-republique-adressee-a-m-george-walker-bush-president-des-etats-unis-damerique-a-la-suite-des-attentats-ayant-frappe-les-villes-de-new-york-et-washington-paris-le-11-septembre-2001. 252 John Howard, “Application of the ANZUS Treaty to Terrorist Attacks on the United States,” Prime Minister of Australia | John Howard, September 14, 2001, https://web.archive.org/web/20051022123644/http://www.pm.gov.au/news/media_releases/2001/media_release1241.htm; see also The Guardian Staff, “World Leaders Express Outrage,” The Guardian, September 11, 2001, http://www.theguardian.com/world/2001/sep/11/september11.usa10.


the perceived threat (second order risk or first order threat) will affect where on the elevation

spectrum the referent object is placed and what measures are allowable to secure it, but the object

is the same. The referent object, at all times, is that which requires protection.

The Existential Threat

Elevation theory differentiates between stages and processes. There is the stage at which a concern

is gauged to be either third order political concern, second order risk, or first order threat. There

are inherent processes; riskification elevates third order political concerns to second order risks,

and securitisation elevates second order risks to first order threats. Existential threats of the kind

examined by securitisation theories must first have been acknowledged as risks and then elevated

to threats, and so are only perceivable or examinable during the securitisation process or when

they are deemed to be a first order security threat. A threat cannot be existential until that point,

because by definition it must pose a risk of threatening the existence of the state in order to be

securitised in the first place. Whereas in traditional securitisation frameworks, the perception of

existential threat constitutes the beginning of the securitisation process, in the elevation

framework, an existential threat can only exist at the third and final stage of the threat elevation

pyramid, and an issue must undergo two (hierarchical) elevation processes before extraordinary

measures of threat mitigation can be considered or requested.

Extreme Necessity

Like the referent object, the variable of extreme necessity is self-explanatory and requires no changes between the securitisation and elevation frameworks. In order for the

securitising process to be successfully realised, there must be an environment of extreme necessity

for extraordinary powers of threat mitigation to be allowable. A sudden change in the level of potential risk, or a shift in the balance of power in the international system, such that the security

environment itself requires that a state employ more than the ordinary allowable security measures,

would be enough to start elevating a second order risk to a first order threat. Without satisfying

the requirement of extreme necessity, a risk cannot be elevated to a threat and so must (continue

to) undergo risk management rather than threat mitigation. By that same token, if a securitisation

has been instigated or successfully completed, and the security environment becomes less

threatening, then the loss of the element of extreme necessity would see a first order threat start

the desecuritising process back down to second order risk, and thence, through deriskification, back to

the third order political concern. Desecuritisation and deriskification are attenuation processes as

defined by the threat elevation analysis framework, attenuation being the ultimate goal of

Courteney O’Connor – PhD Thesis 2021


management and mitigation measures — see Figure 5 - Securitisation and the orders of concern.

As such, the element of extreme necessity is of crucial importance to the elevation framework, and

must always be at the forefront of considerations when analysing the stage or process of an

elevation.

Extraordinary Measures

In an elevation process (and, indeed, a securitising process), extraordinary measures are the end

result toward which the entire process leads. The extraordinary measures are the desired end of a securitisation: the granting of beyond-ordinary powers and means in order to defend against or

mitigate a perceived existential threat. One example of extraordinary measures employed after a

successful securitisation would be the passing of the Patriot Act in the US, which granted the

American government and military above-ordinary powers in order to combat terrorism and

terrorist suspects domestically and abroad.253 The granting of extraordinary measures is much the

same in both traditional securitisation and the elevation framework offered in this thesis, the

exception being that for a traditional securitisation the entire process leads toward the

extraordinary measures. By contrast, in elevation theory, the extraordinary measures can only be

sought in the third and final stage of the elevation process, when an identified second-order risk is being

actively elevated to a first-order threat, or has been successfully securitised. There is a narrower

window of opportunity for extraordinary measures of threat mitigation to be sought or utilised in

the elevation framework considering that riskification must occur first, and by definition will not

require extraordinary measures to mitigate; risk management will suffice unless the danger

associated with that risk increases. Risk management measures and processes will continue until

either the risk is attenuated (it abates as a result of the management measures, or by virtue of the

risk actor losing power or credibility), or a further elevating act is undertaken. While risk

management may take many forms, at the state level and in relation to perceived national security

risks, I posit that an action like the creation of a branch or agency specific to the identified risk

would be a likely management tool. Alternatively, the addition of task-specific mandates to an existing agency or branch would not be unlikely; such actions are designed for the long-term management of specific areas and issues.

The Elevating Actor

Like the elements of referent object and the existential threat, the securitising actor performs the

same function in both securitisation theory and the elevation framework. Elevations can be

253 United States Department of Justice, “The USA PATRIOT Act: Preserving Life and Liberty.”


undertaken at any level of analysis, from individual to group/community, corporate, state, regional,

international and global. The elevating actor will depend entirely upon the context of the

acknowledged risk or perceived threat in question. Unlike traditional securitisation theory, I would

go so far as to argue that entities, in addition to single actors (even as representatives of a state),

are now capable of performing the function of an elevating actor. It could be argued that the mass

media play a role in elevation processes, given that they collect and transmit much of the day-to-day knowledge of non-security specialists about the actual state of world, regional, or domestic security. When searching for evidence of a claimed security threat, or

information about a perceived risk, it is possible to look to news sources such as CNN, ABC, Al Jazeera, the Washington Post, Le Monde, and many other traditionally respected journalistic

sources. With that sphere of influence, and a higher ability to both generate and report on news

than almost any other global influencing actor, the broadcast media in particular are well placed to

play the part of the elevating actor, especially when doing so in conjunction with still images or

video footage with which to support an elevating move. This is a discursive legitimating act: if the news source performing it is reputable and has wide enough influence (if members of the political or military elite are liable to be influenced by that source), then the elevation process may

be started by that news source.

Legitimate Authority

As with traditional securitisation theory, the idea and element of legitimate authority is connected

with the elevating actor. If any actor tried to initiate a riskification or securitisation, there would

be no chance of success without the reasonable belief of the audience that the securitising actor

has the legitimate authority to do so. The context of the situation in question will also affect the

authority of the securitising actor, and this must be factored into the securitising calculations. As

discussed in the previous section, however, elevating actors are no longer necessarily confined to

the political or military elite of a state. Large corporations such as Apple and Google, as well as

news media organisations, are also capable of instigating a riskification or securitisation.

The question remains whether or not actors within or representing these organisations then have

the legitimate authority to validate an elevating move. There is no easy answer to this question. If

the elevating move is performed in such a way that the threat is made clear and the consequences

evident, and the threat and consequences will apply to the existence of the state (in its current

form), and those actors are perceived by the audience as being legitimate authorities, then I would argue that

the elevating move is valid. Authority, when not attached to a specific role in a hierarchy, is an


intangible notion, and depends largely upon situational context and audience perceptions, which

could result in gradations of authority. If, then, the relevant audience to an elevating move

perceives the securitising actor as having the authority to provide information on and riskify or

securitise a threat, then the actor has the legitimate authority to do so.

There is of course an argument against this point of view: if we expand the concept of legitimate

authority further than specific positions within the hierarchy of government and military, then we

threaten to dilute the concept, and the framework it operates within, to the extent that it will be

rendered analytically useless. However, elevation theory has been specifically formulated to deal

with the way threats are understood and responded to in the contemporary digital age: to forsake

the elevating capability of actors with immense influence in the digitalised society of today would

be to hobble the framework. If it cannot apply to all actors capable of elevating an issue today, then

it will not apply to all securitising, securitised, and securitisable threats currently in existence. And

that theory would, in fact, be analytically useless.

There are limitations to the concept of the securitising actor as conceived by elevation theory,

however. I maintain that the context of a situation will affect the legitimate authority of the

securitising actor: an individual with no relevant expertise or influence on the government or

military of the state is unlikely to find themselves in the position of securitising actor. Even if they

did acquire the attention of the relevant audience, it would be highly improbable that the audience would consider the individual a legitimate authority, such that an elevating process could begin.

Relevant Audience

The relevant audience is an element that does not change between traditional securitisation theory

and the new elevation framework offered here. The relevant audience is the group of individuals

before whom the securitising act must be performed, and by whom the discursive legitimation of

threat must be accepted for an acknowledged security risk to move through the securitising process

before being acknowledged as an existential threat. The relevant audience does not necessarily

mean the general public, as may be considered in the case of a modern democracy. The relevant

audience could be restricted to the highest-ranked military officers and government officials in a

given administration, before whom an actor could begin the securitising process. The same rules

would apply to an elevation in either case: the (would-be) securitising actor must be accepted as

having the legitimate authority to speak security; they must convince the audience of the extreme

necessity of mitigating a security threat, which they must also convince the audience through


discursive legitimation is an existential threat against the state (or, perhaps, its interests around the

globe). Once the securitising act has been performed, the relevant audience must then decide whether consent will be given for the securitising actor to set in motion extraordinary measures of threat mitigation, and whether to accord the perceived threat acknowledgement of its existential nature and imminent danger.

If the securitisation ultimately proves unsuccessful, and the threat is desecuritised once again to a second order risk, it must then be controlled through risk management techniques, without recourse to

emergency measures. In plain terms, it is the relevant audience that decides in the end whether an

elevation will be successful and the threat securitised. It is also the relevant audience that decides

whether support for extraordinary measures will be sustained past the initial moves toward

mitigating the security threat; and so, whether the relevant audience is one person, an entire populace, or a section thereof, it is arguably the second most important element in the entire

elevation framework (the first being the threat itself). Just as the audience must be considered in

any elevating process, however, in the cyber era, the audience is also a crucial variable in risk and

threat calculations. The audience, or the democratic public, are now potential targets for hostile

disinformation operations, and are thus themselves both a vulnerability and a threat vector. Cyber-

enabled communications technologies and social media platforms such as Twitter broaden the

audience to a potentially global size – messages can either be broadly transmitted and signal-boosted with computational propaganda tools, or narrowcast to specific audiences, such as with

Facebook groups.

Elevation and Sectors in Cyberspace

As in traditional securitisation theory, elevation theory can be applied across the full spectrum of

security sectors in which cyberspace plays a role. It is perhaps even more applicable than CST,

given that there are few, if any, areas of security in which cyberspace is not either a functional or

critical part. One of the original goals of securitisation theory was to introduce a framework which

would broaden the analysis of security beyond the concept of military security held by traditional security and strategic studies.254 Evidently, securitisation considers a broad range of problems

in a number of areas. Elevation theory, viewing securitisation as part of a process of elevation

rather than threat construction, is ideally suited to consider both sectors and problems of security

where cyberspace is concerned. The following sections are a brief analysis of the way in which

elevation theory can be applied to each of the sectors identified by the original securitisation

framework: the military sector; the environmental sector; the economic sector; the

community/society sector; and the political sector.

254 Buzan, Waever, and de Wilde, Security: A New Framework for Analysis, 1–5.


Military

As the traditional focus of security studies, military security has generated an enormous body of literature on its importance, on how to utilise military force, and on how to defend against the same.

Securitisation theory in all forms is about broadening the analytical field of security beyond military

concerns. As with many other sectors of life, both public and private, cyberspace has become

increasingly crucial to the effective organisation, maintenance, and deployment of military forces

worldwide. Not only is cyberspace the primary mode of communications for militaries and

administrations worldwide, but there are also entire defence industries constructed around the

requirements of using, exploiting and defending interests in cyberspace. It should be noted that

both cyber-based and cyber-aided tools and equipment are utilised by the military sector. Uniquely,

cyberspace is both a tool used to defend a state’s interests, and one of the state’s interests that

requires protection.

One of the most notable early examples of cyber-aided military operations occurred with the 1991 US-led Gulf War campaign, Operation Desert Storm, intended to expel the army of Saddam Hussein from Kuwait.255 The operation was a success, and the coordination and firepower displayed by the US military left much of the world reeling. In large part, the astonishing success

of Desert Storm was due to the unprecedented uptake by the American military of advanced

communications and targeting technologies made possible by the increasing use of cyberspace.256

This is an example of networked warfare, and Desert Storm in many ways changed the security

landscape in terms of traditional military concerns. It proved the advantages of advanced

communications technologies in an undeniable fashion. A decade later, the US military again

employed (ever more) advanced technologies during the 2003 American-led invasion of Iraq

following the terrorist attacks on the World Trade Center and Pentagon in September 2001.257 The technology of

the new millennium, in the sense of military security, was dual purpose: in addition to increasing

255 Oriana Pawlyk and Phillip Swarts, “25 Years Later: What We Learned from Desert Storm,” Air Force Times, August 7, 2017, https://www.airforcetimes.com/news/your-air-force/2016/01/21/25-years-later-what-we-learned-from-desert-storm/. 256 Franz-Stefan Gady, “What the Gulf War Teaches About the Future of War,” The Diplomat, March 2, 2018, https://thediplomat.com/2018/03/what-the-gulf-war-teaches-about-the-future-of-war/; Kris Osborn, “Shock and Awe: How the First Gulf War Shook the World,” The National Interest (The Center for the National Interest, October 31, 2020), https://nationalinterest.org/blog/reboot/shock-and-awe-how-first-gulf-war-shook-world-171659; Pawlyk and Swarts, “25 Years Later.” 257 John H. Cushman Jr and Thom Shanker, “A Nation at War: Combat Technology - A War Like No Other Uses New 21st-Century Methods To Disable Enemy Forces,” The New York Times, April 10, 2003, sec. U.S., https://www.nytimes.com/2003/04/10/us/nation-war-combat-technology-war-like-no-other-uses-new-21st-century-methods.html; David Talbot, “How Technology Failed in Iraq,” MIT Technology Review, November 1, 2004, https://www.technologyreview.com/2004/11/01/232152/how-technology-failed-in-iraq/.


the accuracy and efficiency of traditional-type weapons (explosives and so forth), cyberspace and

cyber-based or -aided technologies could be, and were, utilised to collect intelligence and both

deceive and entrap suspected terrorists on the ground.258 In the space of a decade, the military uses

of cyberspace were ably proven. During that decade, and in the years following, both the

importance and the vulnerability of cyberspace became more evident as military technologies and

administrations became dependent on a structurally insecure technology.259

Elevation is not always an obvious procedure, and, in fact, may not often occur in the public eye,

despite the requirements attendant on an existential threat. Indeed, it could be argued that the security of the

state would decrease significantly were every major threat to the modern state publicised. Given

that the requirement of a relevant audience can be fulfilled by relevant political and/or military

elites, it is by no means necessary for there to be a widespread awareness of current security threats,

or of the elevation process those threats undergo before truly becoming securitised.

In the case of cyberspace, its elevation appears to have proceeded on par with its publicised (and

feted) utility to the public. I think it fair to say, however, that specific security vulnerabilities have

not precisely been made available for public consumption, either. In part, that may have been due

to the fact that for many years, it was difficult for cyberspace engineers and scientists to convince

military and/or political elites of the dangers that cyberspace introduced into the national and

international security environments.260 At times, this may have been due to alternative, and overwhelming, traditional security concerns, such as the ground wars in Korea and Vietnam, and the

worsening Cold War between Western bloc allies and the Soviet Union (USSR). In a time when

nuclear conflict was a very real concern, interest in any cyberspace-based or -carried attack would

have been difficult to generate.

With the progression of technology, however, and the increasingly networked nature of

government, military, and civilian hierarchies and bureaucracies, cyberspace took on a greater

character of volatility and threat as other, more physical threats either faded, were neutralised, or,

through deterrence, were normalised. With these issues having been, in various ways,

258 Mark Mazzetti, The Way of the Knife: The CIA, A Secret Army, and a War at the Ends of the Earth (New York: Penguin RandomHouse, 2014), https://www.penguinrandomhouse.com/books/311164/the-way-of-the-knife-by-mark-mazzetti/. 259 Defense Advanced Research Projects Agency, “Paving the Way to the Modern Internet,” DARPA | Defense Advanced Research Projects Agency, 2021, https://www.darpa.mil/about-us/timeline/modern-internet; Kim Ann Zimmermann and Jesse Emspak, “Internet History Timeline: ARPANET to the World Wide Web,” Live Science, June 27, 2017, https://www.livescience.com/20727-internet-history.html. 260 See Mark Bowden, Worm: The First Digital World War (Atlantic Books Ltd, 2012).


desecuritised,261 the horizon was clear for military and political elites to start considering the

consequences of the (now-global) communications network, the foundations of which were never

designed with security as a primary concern.262

Once the vulnerabilities of cyberspace started to become clear to state decision-makers, several

tests were undertaken in order to gauge the potential threat of a cyber-borne or cyber-enabled

attack, usually with critical national infrastructure systems as the expected target. Perhaps the most

illustrative of these tests, or demonstrations, was the “attack” (in a simulated environment) against

an electrical generator that, in a matter of minutes, completely destroyed a perfectly functional

machine, to the enduring shock of the onlooking policymakers.263 Another illustrative episode

was the discovery of the Moonlight Maze network penetration of US defence systems, the extent

of which to this day has not been fully understood, and the perpetrators of which can only be

guessed at and suspected.264

The elevation of cyberspace was, and is, a process that has taken place across decades. There have

been a number of events which seem to have instigated, or supported, the elevation of cyberspace

from the realm of third-order political concerns to the realms of second-order risks and first-order threats. The vulnerability of cyberspace in the military sector has increased commensurate with the

military use and requirement of cyberspace. In part, the elevation process that cyberspace has

undergone has been in many instances supported by military actors, as the understanding of

cyberspace needs has improved among military decision-makers. Importantly for this chapter, this

process of elevation occurred over a period of decades, and did not have the immediacy typically

associated with securitisation.

261 Desecuritisation of the threat is the ideal situation. Desecuritisation is essentially the securitisation process in reverse, whereby a threat is de-escalated through strategic and political processes until it once more becomes a matter for normal politics. This is part of a larger process of attenuation, captured in Figure 5 - Securitisation and the orders of concern. 262 Defense Advanced Research Projects Agency, “Paving the Way to the Modern Internet”; Ronald J. Deibert, “Black Code Redux: Censorship, Surveillance, and the Militarization of Cyberspace,” in Digital Media and Democracy: Tactics in Hard Times, ed. Megan Boler (London: The MIT Press, 2008), 137–64; Zimmermann and Emspak, “Internet History Timeline: ARPANET to the World Wide Web.” 263 Andy Greenberg, “How 30 Lines of Code Blew Up a 27-Ton Generator,” Wired, October 23, 2020, https://www.wired.com/story/how-30-lines-of-code-blew-up-27-ton-generator/. 264 Juan Andres Guerrero-Saade et al., “Penquin’s Moonlit Maze: The Dawn of Nation-State Digital Espionage” (Kaspersky Lab, 2018), https://media.kasperskycontenthub.com/wp-content/uploads/sites/43/2018/03/07180251/Penquins_Moonlit_Maze_PDF_eng.pdf; Charlie Osborne, “Ancient Moonlight Maze Backdoor Remerges as Modern APT,” ZDNet, April 3, 2017, https://www.zdnet.com/article/ancient-moonlight-maze-backdoor-remerges-as-modern-apt/; Nikolay Pankov, “Moonlight Maze: Lessons from History,” Kaspersky Daily, April 3, 2017, https://www.kaspersky.com.au/blog/moonlight-maze-the-lessons/6713/.


Environment

At first glance, one might not suspect that the environmental sector has played a large part in the

elevation of cyberspace from a political issue to a security risk, and eventually a security threat.

However, in an increasingly globalised and interdependent world, cyberspace has been integrated

into all sectors. The most evident integration, and the most crucial to every area of life both private and public, is the global communications network upon which every actor relies. In addition, the technologies utilised to study the environment, and the risks and threats to environmental security such as global warming and consequent rising sea levels, depend upon advanced networked

technologies. Any recourse to satellite technologies is also dependent on the continued efficient

functioning of cyberspace and cyber-aided or enabled technologies, so any debilitating attack on

the global communications infrastructure will have flow-on effects in the environmental sector.

Beyond the technologies that assist in the examination and analysis of the environmental sector,

the technologies themselves also have a physical presence within that sector. While cyberspace

seems to exist both everywhere and nowhere, the infrastructure upon which cyberspace is built is

physical. Cyberspace infrastructure is subject to degradation or attack, just as with all other (critical)

infrastructures. The cables on the sea floor along which global communications travel are crucial not just to the continued operation of the Internet, but also to the control and function of similar conduits, both undersea and over land, that deliver electricity and oil to those states without native

supplies of their own.265 Should these forms of infrastructure suffer attack or physical degradation,

there will be flow-on effects not just in the environmental sector, but in other security sectors as

well.

Economy

After the military sector, the economic sector is one of the most closely integrated with and

dependent on cyberspace for basic functioning. Enormous quantities of money exist solely in

cyberspace; they have no physical representation, and, as such, should anything happen to the data

stored in cyberspace, the global financial system would be unable to continue functioning

efficiently. In the worst case, and if enough data were lost, it may not be able to continue

functioning at all. Actors that handle a large volume of financial transactions, such as states,

265 NATO CCDCOE, “Strategic Importance of, and Dependence on, Undersea Cables” (Tallinn: NATO Cooperative Cyber Defence Centre of Excellence, 2019); Nadia Schadlow and Brayden Helwig, “Protecting Undersea Cables Must Be Made a National Security Priority,” Defense News, July 1, 2020, https://www.defensenews.com/opinion/commentary/2020/07/01/protecting-undersea-cables-must-be-made-a-national-security-priority/.


banks, and multinational corporations, have an explicit interest in the security of cyberspace; their

continued existence depends on the perceived security of making transactions with that particular

actor. Trust certificates, for example, increase the confidence of a user in relation to a particular

online presence.266 Other dangers also exist for financial institutions in cyberspace: for example,

credit card fraud has dogged banking institutions since credit card information could first be used

online. Cyber-attacks of sufficient complexity and sophistication are capable of exfiltrating

extraordinary quantities of money, often for a significant period of time before discovery if the

quantities in question are taken piecemeal from a number of accounts.267

Beyond the civilian and business concerns in cyberspace, the state itself and associated agencies

and militaries are dependent on the continued confidence in the economic sector, both nationally

and globally. States require an enormous amount of money simply to keep themselves running,

and both military services and civilian agencies depend on that funding for their continued

functioning and existence. In addition, most (if not all) states have overseas interests, investments,

and debts, which depend on both the continued functioning of, and the global confidence in, the

global economy. All of this now further depends on cyberspace, the infrastructure for which was

not originally designed with security as the first (and certainly not overriding) concern.268

Given these considerations, it is unsurprising that cyberspace concerns have been elevated to

significant risks by most actors in the current international economic system. In recent years, there

have been a multitude of cyber-attacks against banks and financial institutions, many of which

have also involved the theft of confidential information on the clients of those institutions, as well as, or instead of, simple monetary theft.269 That information can be very valuable on the dark web,270

266 Though trust certificates can also be ‘spoofed’, as seen with the case of DigiNotar, wherein a certificate provider was hacked and false trust certificates issued. That provider was subsequently bankrupted following the loss of confidence in their service. Tom Espiner, “Hack Attack Forces DigiNotar Bankruptcy,” ZDNet, September 20, 2011, https://www.zdnet.com/article/hack-attack-forces-diginotar-bankruptcy/. 267 Tom Bergin and Nathan Layne, “Special Report: Cyber Thieves Exploit Banks’ Faith in SWIFT Transfer Network,” Reuters, May 20, 2016, https://www.reuters.com/article/us-cyber-heist-swift-specialreport-idUSKCN0YB0DD. 268 Craig Timberg, “The Real Story of How the Internet Became so Vulnerable,” Washington Post (blog), 2015, http://www.washingtonpost.com/sf/business/2015/05/30/net-of-insecurity-part-1/; Zimmermann and Emspak, “Internet History Timeline: ARPANET to the World Wide Web.” 269 Fruhlinger, “Equifax Data Breach FAQ”; Haskell-Dowland, “Facebook Data Breach”; Ragan, “Raising Awareness Quickly”; Reuters Staff, “JPMorgan Hack Exposed Data of 83 Million, among Biggest Breaches in History”; Rushe, “JP Morgan Chase Reveals Massive Data Breach Affecting 76m Households”; Zetter, “Four Indicted in Massive JP Morgan Chase Hack.” 270 According to Kaspersky, the dark web is “the hidden collective of internet sites only accessible by a specialized web browser. It is used for keeping internet activity anonymous and private, which can be helpful in both legal and illegal applications. While some use it to evade government censorship, it has also been known to be utilized for highly illegal activity.” Kaspersky, “What Is the Deep and Dark Web?,” www.kaspersky.com, January 13, 2021,


where such information can be sold and used in instances of credit and identity fraud. While it is

uncertain whether or not cyberspace has been securitised in the economic sector, I would argue

that it has been successfully riskified, and is intermittently being engaged in the securitising process.

Thus, we are witnessing the gradual elevation of the economic sector by reference to its reliance

on cyberspace.

Societal

As originally conceived, securitisation theory identifies several main dangers to the community/identity sector of security: migration, horizontal competition and vertical

competition.271 While cyberspace may not seem immediately relevant to issues such as migration and cultural identity, cyber-enabled technologies now play a substantial part in

the recording and examination of large migration patterns, including the registration and

transportation of refugees, both within and away from conflict zones. In her 2015 publication The Politics of Humanitarian Technology, Katja Lindskov Jacobsen examines the use of advanced

technologies on actors such as refugees, stateless individuals and internally displaced persons

(IDPs).272 Cyber infrastructure and cyber-enabled technologies, in addition to aiding in the

registration and tracking of these individuals, are also used to record medical data, biometric

identifiers, and whether or not these individuals have been in receipt of food, goods or services

provided by the United Nations, Red Cross, or other third-party aid organisations.273 While in this sector cyberspace does not pose the level of existential threat that would (at this point in time) necessitate a securitising process, such use of technologies could give rise to the abuse of the

information they take from disadvantaged peoples. In addition, if that data were to become

corrupted, as Lindskov Jacobsen points out, the individuals to whom this data pertains might suffer

for it.274 Beyond additional suffering to these actors, there is also the possibility of this data being

stolen or exploited by unknown actors, with heretofore unknown motivations or intent. As such,

while cyberspace may not at this stage pose an active threat to the community sector, there are risks associated with an insecure or vulnerable cyberspace, whether real or perceived.

https://www.kaspersky.com/resource-center/threats/deep-web. The ACSC expands this definition slightly by asserting that the dark web is “made up of sites that are not indexed by search engines…” Australian Cyber Security Centre, “Dark Web,” Australian Cyber Security Centre, 2021, https://www.cyber.gov.au/acsc/view-all-content/glossary/dark-web; See also Darren Guccione, “What Is the Dark Web? How to Access It and What You’ll Find,” CSO Online, November 18, 2020, https://www.csoonline.com/article/3249765/what-is-the-dark-web-how-to-access-it-and-what-youll-find.html. 271 See chapter two section on Copenhagen securitisation theory pertaining to the societal sector of security 272 Katja Lindskov Jacobsen, The Politics of Humanitarian Technology: Good Intentions, Unintended Consequences and Insecurity (Routledge, 2015). 273 Jacobsen. 274 Jacobsen.

Courteney O’Connor – PhD Thesis 2021


There is also a risk to cultural history, should cyber infrastructure fail in some way. Cultures in

danger of dying out, and indeed cultures that have died out and that we know only from historical or archaeological artefacts, would be in danger of total destruction if the cloud in which we now store so much data were to be corrupted or damaged.275 While in the case of cultural historical

documents and artefacts it is likely that there will be backups of any records made, there is no

guarantee that those backups will not also be corrupted. It is also the case that, if a document is

copied, and a copy made of the copy and so forth, then eventually the quality of successive copies will degrade such that the value may ultimately be lost. We have already seen the proven value of storing records of documents and artefacts, including three-dimensional scans, in cyberspace:276

since the birth and aggressive growth of the terrorist group ISIS in Iraq and Syria, there has been

a documented loss of enormously valuable relics that have been utterly destroyed by ISIS

radicals.277 There remain only crumbling vestiges and rubble where buildings and monuments had

previously endured for millennia; now, the only full records we have of these are digital. If the

digital data is corrupted, so are those relics. So, while the instability or insecurity of cyberspace may

not seem debilitating to community and identity security in the first instance, here as in all other areas of security cyberspace has been riskified, albeit in a less spectacular manner than in other security sectors, though one arguably equally important in a societal and collective memory sense. By elevating

the risks or threats to these cultural artefacts, governments could direct resources more efficiently

in order to preserve cultural heritage which in many cases amounts to cultural and identity security.

While the national security of the state may not be explicitly impacted by cultural insecurity, the

perceived insecurity of its populations (which, in the case of tribal and ethnic minorities, can be

considered sub-state nations) could result in political instability.

275 ‘The cloud’ refers to “software and services that run on the Internet, instead of locally on your computer.” Bonnie Cha, “Too Embarrassed to Ask: What Is ‘The Cloud’ and How Does It Work?,” Vox, April 30, 2015, https://www.vox.com/2015/4/30/11562024/too-embarrassed-to-ask-what-is-the-cloud-and-how-does-it-work; Cloudflare, “What Is the Cloud? | Cloud Definition,” Cloudflare, accessed June 19, 2021, https://www.cloudflare.com/learning/cloud/what-is-the-cloud/. 276 Sarah Griffiths, “Artefacts Destroyed by ISIS Restored in 3D Models by ‘Cyber Archaeology,’” Daily Mail, May 20, 2015, https://www.dailymail.co.uk/sciencetech/article-3087731/Cyber-archaeology-rebuilds-lost-treasures-Public-project-uses-photos-create-3D-models-artefacts-destroyed-ISIS.html; NPR Staff, “Cyber Archaeologists Rebuild Destroyed Artifacts,” NPR.org, June 1, 2015, https://www.npr.org/sections/alltechconsidered/2015/06/01/411138497/cyber-archaeologists-rebuild-destroyed-artifacts. 277 Emma Cunliffe and Luigi Curini, “ISIS and Heritage Destruction: A Sentiment Analysis,” Antiquity 92, no. 364 (August 2018): 1094–1111, https://doi.org/10.15184/aqy.2018.134; Andrew Curry, “Here Are the Ancient Sites ISIS Has Damaged and Destroyed,” National Geographic, September 1, 2015, https://www.nationalgeographic.com/history/article/150901-isis-destruction-looting-ancient-sites-iraq-syria-archaeology; Christopher W. Jones, “Understanding ISIS’s Destruction of Antiquities as a Rejection of Nationalism,” Journal of Eastern Mediterranean Archaeology & Heritage Studies 6, no. 1–2 (2018): 31–58, https://doi.org/10.5325/jeasmedarcherstu.6.1-2.0031.


Political

The political sector is tightly intertwined with the military one and, increasingly, the economic

sector of security. Cyber security and all connected issues and concerns are a significant challenge

for politicians at all levels, and while there are other security concerns as well, it is likely that

cyberspace will continue to be a point of contention and a rallying point for many years to come.

In the contemporary age, politicians (or their media and public relations teams) must have a

familiarity with cyberspace and, of course, social media. However, in terms of infrastructure,

cyberspace is a high political priority; when critical national infrastructure of any kind is dependent

upon the continued reliability and efficiency of cyberspace and cyber-enabled technologies,

political actors are expected to be familiar with both the benefits and the vulnerabilities of those

technologies. In addition, it is political actors who are often the first in line for questioning when

infrastructure of any kind suffers malfunction, from design to hostile attack; it is therefore in the

best interest of those actors that cyberspace is, or is at least perceived by the public to be, as secure

as possible.

Beyond the concern of current political actors to be familiar with the benefits and vulnerabilities

of major infrastructural technologies, the political sector today is also largely dependent on

cyberspace for administration. Elections in particular depend in large part on cyber-enabled

technologies in states where voting is digitally enabled; the recent scandal dogging the election of

Donald Trump as the 45th President of the US and the involvement of the Russian Federation in

recent American elections are proof enough that cyber insecurity is a real and enduring contemporary concern.278 Moreover, the way that social media and similar platforms were used to undermine confidence in the outcome of the 2020 US Presidential election shows the ways that cyber-technologies play increasingly important roles in political processes.279

278 Brian Barrett, “DNC Lawsuit Against Russia Reveals New Details About 2016 Hack,” Wired, April 20, 2018, https://www.wired.com/story/dnc-lawsuit-reveals-key-details-2016-hack/; Heather A. Conley and Jean-Baptiste Jeangene Vilmer, “Successfully Countering Russian Electoral Interference,” Center for Strategic & International Studies, June 21, 2018, https://www.csis.org/analysis/successfully-countering-russian-electoral-interference; Josh Gerstein, “U.S. Brings First Charge for Meddling in 2018 Midterm Elections,” POLITICO, October 19, 2018, https://politi.co/2Ajbubq; Adam Goldman, “Justice Dept. Accuses Russians of Interfering in Midterm Elections,” The New York Times, October 19, 2018, sec. U.S., https://www.nytimes.com/2018/10/19/us/politics/russia-interference-midterm-elections.html; Andy Greenberg, “Hack Brief: As FBI Warns Election Sites Got Hacked, All Eyes Are on Russia,” Wired, August 29, 2016, https://www.wired.com/2016/08/hack-brief-fbi-warns-election-sites-got-hacked-eyes-russia/; Jonathan Masters, “Russia, Trump, and the 2016 U.S. Election,” Council on Foreign Relations, February 26, 2018, https://www.cfr.org/backgrounder/russia-trump-and-2016-us-election; Office of the Director of National Intelligence, “Assessing Russian Activities and Intentions in Recent US Elections”; Senate Select Committee on Intelligence, “Report of the Select Committee on Intelligence United States Senate on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election Volume 2: Russia’s Use of Social Media With Additional Views,” Russian Active Measures Campaigns and Interference in the 2016 U.S. Elections (Washington, D.C.: United States Senate, 2019), https://www.intelligence.senate.gov/sites/default/files/documents/Report_Volume2.pdf.

Governments also use

cyberspace for bureaucratic administration and to serve functions such as population tracking,

logging and storage of medical records, domestic and international communications, and so on. In

order for any political actor to function in the modern age, cyberspace is a necessary element of

most actions and certainly a day-to-day requirement.

In the political sector, cyberspace in itself has been undergoing elevation for decades; it could be

argued that ever since cyberspace spread beyond the initial US-based communications network between academics at university campuses to become a global phenomenon,280 political actors have been,

quietly at first perhaps, repositioning themselves in terms of how cyberspace is expected to be

received by the public in any given set of circumstances.

Event-based Threat Elevation

The threat elevation framework accepts events as the instigating acts of a riskifying or securitising

process. Event-based elevation could be considered the more rapid path to threat elevation, given

that the more widespread the coverage of an event, the more likely the relevant audience is to grant

consent to extraordinary measures; the more serious the consequences of the instigating event, the

more likely the relevant audience is to sustain their consent. Event-based elevation is a more

spectacularised (in the sense of mediatised, or more widely broadcast and communicated) form

of elevation than the traditional performative speech act, and therefore has a greater emotional

influence. To reference the September 11 attacks once more: those attacks were an example of event-based

elevation; there was widespread media coverage of the attack and the consequences thereof, and

there was an incredible emotional impact on victims and viewers, not only in the US, but

worldwide.281 In the sections below, I will examine several events that have influenced, and

continue to influence, the elevation of cyberspace out of the bounds of normal politics to second

order risk, and potentially first order threat.

Stuxnet

Stuxnet is an oft-cited example of the increasing impact of cyber-attacks. An incredibly complex

attack, it was also the first known cyber-attack to have a kinetic effect; that is, to produce real-world, physical

279 Chapter six looks at these issues in more detail. 280 Vint Cerf, “A Brief History of the Internet & Related Networks,” Internet Society (blog), 2021, https://www.internetsociety.org/internet/history-internet/brief-history-internet-related-networks/; Defense Advanced Research Projects Agency, “Paving the Way to the Modern Internet”; Zimmermann and Emspak, “Internet History Timeline: ARPANET to the World Wide Web.” 281 Colombani, “Nous sommes tous Américains”; The Guardian Staff, “World Leaders Express Outrage.”


consequences. The target of Stuxnet was a particular type of command-and-control system

manufactured by Siemens,282 and used in the Iranian nuclear processing facility at Natanz.283

Stuxnet performed both deception and degradation functions; by adjusting the mechanisms of the

centrifuges, it degraded their components such that they required many replacement units. By

transmitting false normal-function data to the technicians who controlled the centrifuges, Stuxnet

deceived the controllers and contributed to the loss of trust in both the technicians and the

technologies. While opinions vary, experts have estimated that the effects of the Stuxnet virus set

the Iranian nuclear program back by between several months and several years.284 Unfortunately

for its creators, the virus was released into the wild,285 eventually being discovered and analysed by

computer experts. Analysts concluded that the virus, too complex to have been the creation of anything

but a state, was attributable to the US and Israel, both states with a keen interest in preventing the

potential weaponisation of the Iranian nuclear energy program.286

The use of a cyber-borne virus against a state which had observable effects in the physical world

was evidentiary proof of the threat potential of cyberspace, and while perhaps not an instigating

event, was certainly a supporting event in the elevation of cyberspace beyond the bounds of normal

politics. While (to my knowledge) there were no fatalities or injuries resulting from the Stuxnet

virus,287 its release and operation are a clear indication that states at least view cyberspace as a viable domain for (militarised) conflict. While I would not argue that Stuxnet led to a securitisation of cyberspace, I would argue that it did contribute to the successful riskification of

cyberspace as a threat vector and security vulnerability. Stuxnet proved that cyber operations were

capable of affecting physical infrastructure in a demonstrably destructive way. However, the effects

were specific and, by design, limited. This was not an existential danger, but it did indicate that

282 Kim Zetter, “An Unprecedented Look at Stuxnet, the World’s First Digital Weapon,” Wired, November 3, 2014, https://web.archive.org/web/20151207190234/https://www.wired.com/2014/11/countdown-to-zero-day-stuxnet/. 283 Zetter. 284 Zetter. 285 A term used to describe when a virus distributes beyond its intended target and people are able to obtain the source code for analysis or manipulation and use. Australian Cyber Security Centre, “In the Wild,” Australian Signals Directorate | Australian Cyber Security Centre, 2021, https://www.cyber.gov.au/acsc/view-all-content/glossary/wild. 286 Nate Anderson, “Confirmed: US and Israel Created Stuxnet, Lost Control of It,” Ars Technica, June 1, 2012, https://arstechnica.com/tech-policy/2012/06/confirmed-us-israel-created-stuxnet-lost-control-of-it/; Ellen Nakashima and Joby Warrick, “Stuxnet Was Work of U.S. and Israeli Experts, Officials Say,” Washington Post, June 2, 2012, sec. National Security, https://www.washingtonpost.com/world/national-security/stuxnet-was-work-of-us-and-israeli-experts-officials-say/2012/06/01/gJQAlnEy6U_story.html; David E. Sanger, “Obama Order Sped Up Wave of Cyberattacks Against Iran,” The New York Times, June 1, 2012, sec. World, https://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html. 287 Though several engineers were fired from the facility after supervisors and managers lost confidence in their capabilities. Thomas Rid, Cyber War Will Not Take Place (London: Hurst, 2013), 32.


cyberspace and cyber operations would be able to affect the physical domain going forward. The

lack of existentially threatening danger to Iran posed by Stuxnet prevented a securitisation, but it did serve to elevate the problem of cyberspace as a potential threat vector, prompting the

design and implementation of risk management measures and practices. It also serves to illustrate

the importance of both technical and psychological (human) aspects of CCI. There was a

psychological element to the malware, fooling the technicians and their supervisors into losing

trust in both the machinery and their own skills in operating it, thereby extending the period of

time before malware was suspected. The intelligence operation thus had an irrevocably human

element in its design and deployment. Efficient responses to the elevation of the threat of

cyberspace, then, require a recognition of the importance of the human factor in the practice of

CCI.

GhostNet

GhostNet is one of the more well-known cases of cyber-attack and exploitation in recent history.

Using a combination of social engineering and malware, attackers suspected of being members of the PLA’s Unit

31968 infiltrated the computer systems of the Free Tibet Movement and of the Dalai Lama, the

spiritual leader of the Tibetan people.288 The offices of the Dalai Lama had come to suspect cyber exploitation after several incidents, including one wherein a state with which the offices of the Dalai Lama had been trying to organise an official visit was notified by the Chinese authorities that it would be unwise to allow such a visit to occur.289 When experts were invited to examine the

computer systems of the offices of the Dalai Lama, they discovered a surveillance system in place

on many of the computers in the office which allowed remote control access, infiltration, and

exfiltration. After some investigation, the file containing the virus which infected the computers

was found; as it was called Gh0stRat, the network of compromised systems and the operation

itself came to be known as GhostNet.290

The cyber-spying network had, by the point of discovery, compromised more than twelve hundred

machines in over one hundred countries, many of which were in physical locations that would

288 The PLA is the People’s Liberation Army of the PRC. Nigel Inkster, “China in Cyberspace,” Survival 52, no. 4 (September 2010): 55, https://doi.org/10.1080/00396338.2010.506820; Dimitar Kostadinov, “GhostNet - Part I,” InfoSec, 2013, https://resources.infosecinstitute.com/topic/ghostnet-part-i/. 289 Shishir Nagaraja and Ross Anderson, “The Snooping Dragon: Social-Malware Surveillance of the Tibetan Movement,” Technical Report (Cambridge: University of Cambridge, 2009), 5, https://web.archive.org/web/20130122081850/http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-746.pdf. 290 Information Warfare Monitor, “Tracking GhostNet: Investigating a Cyber Espionage Network” (The Citizen Lab & The SecDev Group, 2009), 5, https://citizenlab.ca/wp-content/uploads/2017/05/ghostnet.pdf.


offer the infiltrators both sensitive and classified material.291 Beyond the initial discovery of the

infiltrated systems of the offices of the Dalai Lama, compromised machines were identified in

embassies, media offices, foreign ministries, and other government offices around the world. The

command and control structures of the network were traced to mainland China, and further to

Beijing, though authorities fell short of officially accusing the Chinese government of cyber

espionage in this case.292 Further, the government in Beijing denied all knowledge and involvement

and no conclusive evidence was ever discovered to definitively attribute responsibility.293

The Gh0stRat Trojan was spread through emails, socially engineered for the target (a strategy

usually referred to as whaling)294 to create an environment of trust wherein the target felt

comfortable opening an email attachment that appeared to be a legitimate communication.295

Depending on the documents and information exfiltrated from previous targets, the documents

in this sort of attack can actually be legitimate and intended for the target; the virus is simply piggy-

backed onto an already legitimate communication. Once the virus is installed, the machine

becomes part of the compromised network and can accept commands from, and send information

back to, the command and control server.296

In this particular instance, and in many similar others, there are multiple failures (rather than a single failure) in counterintelligence. First, the security of the targeted networks is in some way deficient;

they have not identified the emails as carrying viruses or otherwise being less-than-legitimate traffic

and have essentially 'waved them through.' Second, there is also a human failure in

counterintelligence. While it is possible that there is no indication in the emails carrying the Trojan

that the contents may not be legitimate, in many cases there are indications that the provenance

of an email may be suspect. A slowdown of the computer; ‘glitches’ in the operating system, and

lack of following up on emails with the apparent sender are all potential identifiers of computer

exploitation. While the target of GhostNet was not a state actor, there are several reasons I have

included it here. First, it was highly likely a state that engaged in the attacks and exploitation for

291 Information Warfare Monitor, 48–49; Inkster, “China in Cyberspace”; Kostadinov, “GhostNet - Part I.” 292 Information Warfare Monitor, “Tracking GhostNet: Investigating a Cyber Espionage Network,” 5–6; Nagaraja and Anderson, “The Snooping Dragon: Social-Malware Surveillance of the Tibetan Movement,” 5–6. 293 BBC, “Cyber Attacks Blamed on China,” BBC News, January 31, 2013, sec. China, https://www.bbc.com/news/world-asia-china-21272613; Voice of America, “China Denies Any Role in ‘GhostNet’ Computer Hacking | Voice of America - English,” Voice of America, November 2, 2009, https://www.voanews.com/archive/china-denies-any-role-ghostnet-computer-hacking. 294 Kaspersky, “What Is a Whaling Attack?,” Kaspersky, 2021, https://www.kaspersky.com/resource-center/definitions/what-is-a-whaling-attack. 295 Information Warfare Monitor, “Tracking GhostNet: Investigating a Cyber Espionage Network”; Nagaraja and Anderson, “The Snooping Dragon: Social-Malware Surveillance of the Tibetan Movement.” 296 Information Warfare Monitor, “Tracking GhostNet: Investigating a Cyber Espionage Network,” 30–38.


the purposes of surveillance and intelligence collection: this was a demonstration of both intent

and capability which illuminated at least part of what the attacker intended to accomplish, and the

methods with which they were able to pursue their goal. Second, this thesis acknowledges the

importance of non-state actors in any assessment related to the function or exploitation of

cyberspace and in the use of CCI. Third, this thesis emphasises the importance of the human

element in both CCI and in cyberspace: the human element in this case was both important for

the success of the exploitation operation, and in terms of the specific individuals targeted. And

fourth, the Gh0stRat Trojan did not stay confined to a specific target: it was found in multiple

systems across a wide geographic region and did affect government systems, such as those of the Indian Ministry of Defence and Foreign Ministry.297 The initial targeting of an intelligence operation

against a non-state actor does not preclude the consequences affecting state actors.

Relative to the threat elevation of cyberspace, I would argue that GhostNet was a contributing

factor to riskification, if not securitisation. While by the early and mid-2000s a greater understanding of cyberspace as a threat vector had been achieved, it had perhaps not been suspected (by the general population and authorities, though likely excluding intelligence

agencies and personnel) that computer exploitation could extend across as many countries as had

been found to be compromised by the Gh0stRat Trojan. In a different way than Stuxnet,

GhostNet elevated the risk potential of cyberspace by illustrating how widely malware could spread

and the effect that it could have, not only on systems and infrastructure, but also on the behaviour and

reality of individuals, once the consequences of the espionage operation became clearer. GhostNet

did not represent an existential threat to the governing administration of a state or quasi-state, nor

was there an immediate and pressing potential for danger. It did, however, serve to draw attention

to the necessity of CCI as a direct response to the espionage and surveillance operations of a

(probable) state actor. It also demonstrated once more that the technical and human elements of

CCI are irrevocably linked.

Flame

Discovered by researchers in 2012, Flame was an extremely complex and sophisticated piece of malware.298 With twenty modules (not all installed immediately) that the controller can add or remove as they see fit, Flame has a multi-pronged intelligence-gathering capability. It seems purpose-built solely

297 James P. Farwell and Rafal Rohozinski, “Stuxnet and the Future of Cyber War,” Survival 53, no. 1 (February 2011): 26, https://doi.org/10.1080/00396338.2011.555586. 298 Ravish Goyal et al., “Obfuscation of Stuxnet and Flame Malware,” Latest Trends in Applied Informatics and Computing, 2012, 5.


for espionage purposes; it has multiple functionalities that turn the compromised machine into a

spying device for whoever controls the virus. Flame is capable of activating the on-board camera

and microphone of an infected device; taking screenshots while the machine is in use; logging

strokes on the keyboard and thus acquiring access credentials; monitoring network traffic and

scraping data; recording Skype conversations by recognising when the on-board microphone is in

use; exfiltrating and infiltrating data and documents; and altering files without removing them from

the system.299 In addition, Flame is capable of extracting geolocation data from image stills, and

using Bluetooth wireless technology for data transfer and to both send and receive commands.300

Upon examination by researchers, including experts at Kaspersky Lab, certain parallels were drawn

between the code for Flame and the code of Stuxnet, the previously-discovered malware thought

to be the collaboration of the US and Israel, and part of an overall program called Olympic

Games.301 However, some think that Flame was developed circa 2008, and so would actually predate Stuxnet; it has been suggested to be the forerunner of, and the platform from which, Stuxnet was kickstarted.302

While there were commonalities suggesting a shared heritage, Flame differed from Stuxnet in several ways. First, Flame had an in-built ‘suicide’ module that was actually successful

in neutralising the virus on an infected machine and then completely wiping all traces of it from

existence. In addition to the built-in module, however, experts also traced a new command, sent

out following the discovery of Flame, which was essentially another ‘uninstall’ application that

performed the same function.303 Experts were only able to study Flame by using honeypot computers to capture the virus or by examining computers known to have been infected

299 Benjamin Gottlieb, “Flame FAQ: All You Need to Know about the Virus,” Washington Post, June 20, 2012, https://web.archive.org/web/20170225103056/https://www.washingtonpost.com/blogs/blogpost/post/flame-faq-all-you-need-to-know-about-the-virus/2012/06/20/gJQAAlrTqV_blog.html; Goyal et al., “Obfuscation of Stuxnet and Flame Malware”; Kate Munro, “Deconstructing Flame: The Limitations of Traditional Defences,” Computer Fraud & Security 2012 (October 1, 2012): 8–11, https://doi.org/10.1016/S1361-3723(12)70102-1; Kim Zetter, “Meet ‘Flame,’ The Massive Spy Malware Infiltrating Iranian Computers,” Wired, May 28, 2012, https://web.archive.org/web/20141014203532/http://www.wired.com/2012/05/flame/all. 300 Gottlieb, “Flame FAQ: All You Need to Know about the Virus”; Goyal et al., “Obfuscation of Stuxnet and Flame Malware”; Munro, “Deconstructing Flame”; Zetter, “Meet ‘Flame,’ The Massive Spy Malware Infiltrating Iranian Computers.” 301 Olympic Games is the effort by US and Israeli intelligence agencies to delay the Iranian nuclear energy program in the attempt to both prevent weaponisation of nuclear energy, and also to provide delays which would allow diplomats more time to liaise. 302 Dan Goodin, “Discovery of New ‘Zero-Day’ Exploit Links Developers of Stuxnet, Flame,” Ars Technica, June 12, 2012, https://arstechnica.com/information-technology/2012/06/zero-day-exploit-links-stuxnet-flame/; Elinor Mills, “Shared Code Indicates Flame, Stuxnet Creators Worked Together,” CNET, June 11, 2012, https://www.cnet.com/news/shared-code-indicates-flame-stuxnet-creators-worked-together/. 303 Rossi Fernandes, “Flame Malware Controllers Send Uninstall Command,” Tech2, https://www.firstpost.com/tech/news-analysis/flame-malware-controllers-send-uninstall-command-3601111.html.


before the kill module was activated; any machine on which the kill switch had been used was

clean of all traces of the virus.

Flame was also purpose-built for espionage, unlike Stuxnet. The aggressive functionality for which

Stuxnet became so well-known affected physical elements of the Iranian nuclear program at

Natanz and also performed something of a psychological warfare operation; it both degraded the

centrifuges utilised for uranium separation in the plant and encouraged the impression of

incompetence among the workers.304 Flame, however, has no such aggressive functionality; it is a

tool for intelligence collection, and even lacks Stuxnet’s uninhibited propagation. Flame can only spread when the controller activates the propagation module and orders it to do so, and on infection of a new machine that feature is disabled by default. Had Stuxnet been built along the same

lines, it may not have been discovered so quickly; Flame was very much a purpose-built and tightly

controlled tool.

Flame is also a much larger piece of malware than Stuxnet, indicative of both the extensive

functionality and the effort involved in its creation. Like Stuxnet, Flame spreads via Local Area

Network (LAN), intranet, or USB drive rather than attached to emails or through file downloads

like more common varieties of malware. Unlike Stuxnet, its targeting was not limited to SCADA

industrial control systems: the virus is known to have targeted and compromised machines

belonging to individuals and groups.305 While these two examples of malware were written in

different programming languages and a majority of the code is different, there are enough

commonalities and shared features for experts to suggest that the two have common authors at

least in part. In particular, the obfuscation and detection prevention utilised by the authors of

Flame in combination with the extremely advanced code suggest that only “crypto mathematicians,

such as those working at NSA” would have the capability of authoring such a virus.306 Flame

masqueraded as a Microsoft update, with signed (though fake) Microsoft trust certificates, thus

defeating traditional anti-virus software.307

304 Zetter, “Meet ‘Flame,’ The Massive Spy Malware Infiltrating Iranian Computers.” 305 Gottlieb, “Flame FAQ: All You Need to Know about the Virus”; Zetter, “Meet ‘Flame,’ The Massive Spy Malware Infiltrating Iranian Computers.” 306 Ellen Nakashima, “US and Israel Credited with Shooting Iran down in Flame,” The Sydney Morning Herald, June 20, 2012, https://www.smh.com.au/technology/us-and-israel-credited-with-shooting-iran-down-in-flame-20120620-20ok3.html; Nakashima and Warrick, “Stuxnet Was Work of U.S. and Israeli Experts, Officials Say.” 307 Munro, “Deconstructing Flame.”

Chapter 3 - Threat elevation analysis

The alleged unilateral decision by Israel to deploy the virus onto Iranian networks, including oil

field systems, led to the discovery of Flame.308 Had this not occurred, it is not clear whether

Flame would have continued to function undetected and, if so, for how long. In

terms of counterintelligence, Flame certainly was more sophisticated than other malware thus far

discovered; its authors had actively considered both methods of obfuscation and concealment

while the virus was operational, and methods of deletion and trace removal upon the virus’s

discovery. The fact that it functioned for years without becoming known is proof enough that the

effort put into countering and overcoming cyber security functions on target machines was

considerable. However, that same evidence is proof of the failure of the cyber security and CCI

functions of both the target machines and publicly available, widely used software such as Microsoft Windows.

While it is difficult to suggest alternatives to the current, reactionary nature of cyber security and

counterintelligence practices, it is clear that the offensive advantage of cyber explorations and

cyber-attacks has only grown. It is difficult to combat and subvert that which you cannot see and

of which you do not know. I would argue that Flame, perhaps even more so than Stuxnet, alerted

security and intelligence agencies and entities that cyberspace was an increasing risk and a serious

threat vector in the contemporary and digitalised era. While Flame posed no existential risk and

thus did not contribute to the securitisation of cyberspace, it is my opinion that it did (and continues

to) contribute to its successful riskification, and the increase of risk management efforts. Flame

was a tool that was purpose-built for espionage, unintended for the sort of physical degradation

for which Stuxnet was designed. Like GhostNet, Flame was an explicitly intelligence-linked tool,

and once more brought to the fore the kinds of risk associated with the widespread diffusion of

access to cyberspace and the flow of information and data through global networked structures.

It identified the necessity of a greater understanding of, focus on, and utilisation of CCI measures

and processes.

Estonia 2007

As with Stuxnet, many studies have been made and articles written about the cyber-attacks against

Estonia in 2007.309 These attacks were an overt example of a sustained campaign against the cyber

308 Nakashima, “US and Israel Credited with Shooting Iran down in Flame.” 309 C. Czosseck, Rain Ottis, and Anna-Maria Talihärm, “Estonia after the 2007 Cyber Attacks: Legal, Strategic and Organisational Changes in Cyber Security,” International Journal of Cyber Warfare and Terrorism 1 (2011): 24–34, https://doi.org/10.4018/ijcwt.2011010103; Samuli Haataja, “The 2007 Cyber Attacks against Estonia and International Law on the Use of Force: An Informational Approach,” Law, Innovation and Technology 9, no. 2 (July 3, 2017): 159–89, https://doi.org/10.1080/17579961.2017.1377914; Rain Ottis, “Analysis of the 2007 Cyber Attacks against Estonia from the Information Warfare Perspective” (Tallinn: NATO Cooperative Cyber Defence Centre of Excellence, 2008), https://ccdcoe.org/library/publications/analysis-of-the-2007-cyber-attacks-against-estonia-from-the-information-warfare-perspective/; Richard Stiennon, “A Short History of Cyber Warfare,” in Cyber Warfare: A Multidisciplinary Analysis, ed. James A. Green (Oxon: Routledge, 2015), 7–32.

systems of a sovereign state; considering that Estonia was then, and remains, one of the

most highly networked states in the international system, the loss of efficiency and access that the

campaign caused was of great concern. While the North Atlantic Treaty Organisation (NATO)

did not invoke the Article 5 defence clause,310 the attacks did generate significant interest in the

relative vulnerabilities of states to attacks with a cyber vector, and one of the eventual results of

the Estonia attacks was the NATO Cooperative Cyber Defence Centre of Excellence

(CCDCOE), headquartered in Tallinn, Estonia.311 Arguably, the Estonian attack and subsequent

response(s) are the most obvious example of a particular cyber event leading to the elevation of

a risk toward a threat, with long-lasting geopolitical impacts.

The attacks themselves were not necessarily particularly threatening; defacement of government

websites and a massive distributed denial of service (DDoS) attack against the financial

infrastructure of Estonia saw citizens unable to use online banking systems for several days.312

Estonia, one of the most Internet-dependent states in the world given its size, considers access to

the Internet a basic human right, and has enshrined that right in its national constitution.313 The

cyber-attacks that it suffered in 2007, precipitated by a political decision to remove a Soviet-era

statue from a Tallinn city centre square, therefore proved most debilitating for the small state.314

Estonia, a former republic of the Soviet Union, has a significant percentage of native Russian

speakers in its population. These individuals protested the removal of the statue from where it had

stood since the days of the USSR, and these protests were seconded by the Russian government.315

Given the precipitating act and the disapproval of the Russian government, it was (and has been)

generally assumed that the Russian administration or those representing the administration

(officially or otherwise) were responsible for the cyber-attacks against Estonia, though they were

never fully and undeniably attributed. What these attacks did prove, however, is that cyberspace is

necessary for the daily function of life in a modern, networked state: it is a significant risk to have

access to that structure removed or made difficult. The defacement of the websites and the

310 See Joshua Davis, “Hackers Take Down the Most Wired Country in Europe,” Wired, August 21, 2007, https://www.wired.com/2007/08/ff-estonia/; Stephen Herzog, “Revisiting the Estonian Cyber Attacks: Digital Threats and Multinational Responses,” Journal of Strategic Security 4, no. 2 (2011): 49–60; Damien McGuinness, “How a Cyber Attack Transformed Estonia,” BBC News, April 27, 2017, sec. Europe, https://www.bbc.com/news/39655415. 311 NATO News, “NATO News: NATO Opens New Centre of Excellence on Cyber Defence,” NATO News, 2008, https://www.nato.int/docu/update/2008/05-may/e0514a.html. 312 In fact, the Estonian systems were so besieged that the government took the Internet offline for several days, incurring further losses, in an attempt to subvert the attacks. 313 Visit Estonia, “E-Estonia | Why Estonia?” 314 Ottis, “Analysis of the 2007 Cyber Attacks against Estonia from the Information Warfare Perspective”; Stiennon, “A Short History of Cyber Warfare.” 315 Ottis, “Analysis of the 2007 Cyber Attacks against Estonia from the Information Warfare Perspective”; Stiennon, “A Short History of Cyber Warfare.”

breakdown of access to financial institutions suffered by the Estonians over the period of the

attacks also point to the general insecurity of both government and corporate websites and

structures, which is concerning in the extreme and also contributes to the elevation of cyberspace

out of the bounds of normal politics through the riskification process. The use of cyberspace and

cyber-enabled technologies, even in light of the knowledge that these technologies and structures

are vulnerable to hostile acts, makes it even more important that we analyse and understand the

security threats and vulnerabilities inherent in cyberspace, whether perceived, potential, or actual.

Moreover, with the development of the NATO CCDCOE and the subsequent release of

influential reports such as the so-called Tallinn Manuals on the application and relevance of

international law to cyber warfare and cyber operations,316 the attacks on Estonia provide one of

the most tangible and recognisable examples of how a specific cyber-attack led to an ongoing and

evolving elevation of cyberspace as a threat vector.

Georgia 2008

The year following the cyber-attacks aimed at Estonia, the neighbouring state of Georgia also

suffered a series of debilitating cyber incursions. Unlike in the Estonian case, however, the

Georgian government was subject to a series of cyber-attacks in conjunction with a ground

invasion of South Ossetia, a region of Georgia with a significant native Russian population.317 The

Russian government, under the pretence of protecting the native Russian inhabitants of South

Ossetia, crossed the Georgian border and took over the region.318 The cyber-attacks against

Georgia accomplished two things. First, they made it difficult for the Georgian government and

military to communicate and coordinate, as their communication lines were degraded or unusable

throughout the cyber-aided ground campaign. Second, they demonstrated that utilising cyber-attacks as one

element of a military campaign improved the efficacy and the impact of ground invasions. It also

proved that the Russian government was willing to conduct cyber-attacks when deemed necessary

316 NATO News, “NATO News: NATO Opens New Centre of Excellence on Cyber Defence”; Michael N. Schmitt, ed., Tallinn Manual on the International Law Applicable to Cyber Warfare, Tallinn Manual 1 (Cambridge: Cambridge University Press, 2013), https://www.cambridge.org/au/academic/subjects/law/humanitarian-law/tallinn-manual-international-law-applicable-cyber-warfare, https://www.cambridge.org/au/academic/subjects/law/humanitarian-law; Michael N. Schmitt and Liis Vihul, eds., Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, Tallinn Manual 2 (Cambridge: Cambridge University Press, 2017), https://www.cambridge.org/au/academic/subjects/law/humanitarian-law/tallinn-manual-20-international-law-applicable-cyber-operations-2nd-edition, https://www.cambridge.org/au/academic/subjects/law/humanitarian-law. 317 Miroslav Mareš and Veronika Netolická, “Georgia 2008: Conflict Dynamics in the Cyber Domain,” Strategic Analysis 44, no. 3 (May 3, 2020): 224–40, https://doi.org/10.1080/09700161.2020.1778278; Stiennon, “A Short History of Cyber Warfare,” 18–20. 318 Mareš and Netolická, “Georgia 2008”; Stiennon, “A Short History of Cyber Warfare,” 18–20.

in order to achieve the stated goals of the government, particularly in areas once part of the Soviet

Union, and still considered within the Russian sphere of influence and security.319

Given this operation came on the heels of the Estonian cyber-attacks of 2007, the Georgian

situation may not have securitised cyberspace outside the state being attacked, but as with the

Estonian attacks I would argue that Georgia contributed to the initial elevation of cyberspace in

military terms. It brought into sharp relief the ways that cyber-attacks could contribute in real

terms to military operations, and the kinetic difficulties that would ensue for the victim state if

their cyber defences were not, in turn, sufficient to the task of repelling a cyber assault. It also

made clear, if it had not been before, that whether or not all parties agree that cyberspace is

a war-fighting domain, governments and militaries are in practice treating

it as such; all states must be prepared to defend their networks from malicious actors and attacks,

and also to utilise cyberspace as a tool of state power in offensive operations against actors with

hostile intent. To do otherwise would be foolhardy in the digitalised society of today.

In terms of the elevation of cyberspace to a security threat, as stated above, I would not consider

the Georgian attacks the instigating act of a securitisation in the sense of moving from second-

order acknowledged risk to first-order security threat. What the Georgian operation did was to

elevate the risk of future cyber-enabled military campaigns by bringing the existence of such into

the light, as it were, and making political and military elites aware of the dangers involved in conflict

when advanced communications technologies are both used and undermined by an opposing

party. In addition, attacks such as those on Estonia and Georgia do not just affect the victim states

in terms of military and political security; the loss of crucial financial and communications

infrastructures to the attacks can incur huge costs to the governing administrations of both states.

PRISM

Perhaps nothing in the history of cyberspace development has so effectively contributed to its

riskification and securitisation as the revelation of the NSA surveillance web. Unlike the cyber-

attacks associated with the Estonia and Georgia conflicts, the US was not in a conflict relationship

with the majority of parties on which it was discovered to have been conducting

319 Ronald J. Deibert, Rafal Rohozinski, and Masashi Crete-Nishihata, “Cyclones in Cyberspace: Information Shaping and Denial in the 2008 Russia–Georgia War,” Security Dialogue 43, no. 1 (February 1, 2012): 3–24, https://doi.org/10.1177/0967010611431079.

surveillance.320 Through various cyber structures, the NSA was discovered to have a number of

programs running that enabled it to surveil and collect data on tens of millions of people,

including citizens of the US, across a broad range of sovereign states.321 Dragnet surveillance, as it

has come to be termed in the aftermath of the so-called NSA scandal, ensured that the NSA had

a vast quantity of data within which to search for data points relevant to its national security

intelligence requirements.322

Unfortunately for the NSA, when the program was discovered through the leaking of significant

quantities of information by ex-NSA contractor Edward Snowden, the public and private backlash

was immense.323 The sheer volume of data that the NSA was sweeping up in its dragnets concerned

citizens both foreign and domestic, and those affected by the surveillance ranged from the ordinary

individual up to world leaders, including, spectacularly, German Chancellor and American ally

Angela Merkel.324 As further information was released in the days and weeks following the initial

breaking of the story, the true extent of the American intelligence network came to the fore, and

brought intelligence into the public eye in a spectacular and negative light. Following the leaks, the

NSA is now one of the most well-known intelligence agencies in the world.

The Snowden leaks also exposed the sheer strength and extent of the intelligence collection

capacity of the US and its allies, as pertaining directly to digitalised data. Using a variety of

techniques, the NSA had (and likely still has) access to more information than it could ever possibly

analyse. Given the volume of digitalised information that could become actionable intelligence,

combined with the velocity at which the information is created and the inordinate variety of

information that is available, it is unrealistic to assume that dragnet surveillance, especially at the

scale the NSA was conducting it, will in fact make the US any more secure at the domestic or

international level.

320 Richard Lempert, “PRISM and Boundless Informant: Is NSA Surveillance a Threat?,” Brookings (blog), November 30, 1AD, https://www.brookings.edu/blog/up-front/2013/06/13/prism-and-boundless-informant-is-nsa-surveillance-a-threat/; T. C. Sottek and Janus Kopfstein, “Everything You Need to Know about PRISM,” The Verge, July 17, 2013, https://www.theverge.com/2013/7/17/4517480/nsa-spying-prism-surveillance-cheat-sheet. 321 Glenn Greenwald, No Place to Hide: Edward Snowden, the NSA and the Surveillance State (Melbourne: Hamish Hamilton, 2014). 322 Julia Angwin, Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance (New York: Henry Holt and Company, 2014). 323 Greenwald, No Place to Hide: Edward Snowden, the NSA and the Surveillance State; Lempert, “PRISM and Boundless Informant.” 324 Reuters in Berlin, “NSA Tapped German Chancellery for Decades, WikiLeaks Claims,” the Guardian, July 8, 2015, http://www.theguardian.com/us-news/2015/jul/08/nsa-tapped-german-chancellery-decades-wikileaks-claims-merkel; Ian Traynor, “Angela Merkel: NSA Spying on Allies Is Not On,” the Guardian, October 24, 2013, http://www.theguardian.com/world/2013/oct/24/angela-merkel-nsa-spying-allies-not-on.

One of the arguments used to justify dragnet surveillance is the practice of hopping, or networking:

tracing the communications of a known (and presumably wanted) individual in order to map their

network, the idea being that access to all possible information will enable the mapping of

terrorist networks. The intelligence community being what it is, it is highly unlikely that the general

public will ever discover or be informed of whether or not this practice has actually succeeded in

increasing the (perceived) security of the USA, or in tracking down terrorist suspects and cells.325
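
The mechanics of hopping are simple to sketch: treat the communications records as a graph and perform a breadth-first search outward from the seed individual, stopping at a fixed hop limit. The toy contact graph and the `n_hop_network` helper below are purely illustrative assumptions for exposition, not a representation of any actual NSA tooling.

```python
from collections import deque

# Hypothetical call-record graph: each key is a person, each value the set of
# people they have communicated with. All names and links are invented.
contacts = {
    "seed": {"a", "b"},
    "a": {"seed", "c"},
    "b": {"seed", "d"},
    "c": {"a", "e"},
    "d": {"b"},
    "e": {"c"},
}

def n_hop_network(graph, seed, max_hops):
    """Breadth-first search returning everyone within max_hops of the seed."""
    seen = {seed: 0}  # person -> hop distance from the seed
    queue = deque([seed])
    while queue:
        person = queue.popleft()
        if seen[person] == max_hops:
            continue  # do not expand beyond the hop limit
        for contact in graph.get(person, ()):
            if contact not in seen:
                seen[contact] = seen[person] + 1
                queue.append(contact)
    return seen

# Two hops from the seed sweeps in "a", "b", "c", and "d"; "e" sits three
# hops out and is excluded only by the hop limit.
two_hop = n_hop_network(contacts, "seed", 2)
```

Even this sketch shows why the practice alarmed observers: if each person has on average k contacts, an n-hop rule can sweep in on the order of k to the power of n people, almost all of whom have no connection to the original target beyond proximity in the graph.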

What the NSA leaks did manage to do was prove beyond reasonable doubt that cyberspace was

most definitely exploitable, and was most definitely being exploited. This is a risk not just to states

but to all actors therein; the privacy debate came to the forefront of both public and government

consciousness, and has remained there in some fashion since.

In terms of riskification and securitisation, the Snowden leaks revealed to allies, enemies, and

parties neutral to the US that cyberspace was being actively navigated and exploited by the largest

intelligence community in the world. As further information was released and the extent of the

NSA's capabilities in cyberspace became clearer, so too did the political consequences of the

practice of dragnet surveillance. The suspicion of US intelligence agencies continues to this day,

and many books, articles and documentaries have been released that examine and analyse both the

material released by Snowden, and the direct and indirect consequences of those revelations.326

The Snowden revelations also showed that the UK’s Government Communications

Headquarters (GCHQ) and other Five Eyes partners were heavily engaged in global surveillance

in partnership with the NSA. Both the value placed on global communications data and the

vulnerability of that data became starkly clear, and both have only grown since.

There can be no doubt, after the revelation of the exploitable capacity of cyberspace, that

increasing reliance on digital networks and cyberspace infrastructure without similar attention to

increasing the security of that infrastructure, as well as the applications and programs run on that

infrastructure, is increasingly dangerous. The practices that the US was conducting in terms of

exploiting the relatively insecure structure of cyberspace to the detriment of both enemies and

allies were unexpected in the extreme. Rather than increasing the security of the US through the

extension of intelligence networks and raw information sources, I would argue that, upon

discovery, the NSA dragnet practice decreased the overall security of the US as other actors in the

325 Quantifying something like this would be extremely difficult, if not impossible, as it would involve assessing people’s changed perceptions. 326 Angwin, Dragnet Nation; Greenwald, No Place to Hide: Edward Snowden, the NSA and the Surveillance State; Luke Harding, The Snowden Files: The Inside Story of the World’s Most Wanted Man (New York: Vintage Books, 2014).

international system began strengthening cyber programs in response to the (perceived) aggressive

stance of the US government on the use of cyberspace. Snowden’s actions in revealing the extent of

surveillance and the quantity of data being collected elevated the vulnerability of cyberspace in

terms of audience susceptibility to intelligence collection. It also showcased the lack of CCI

practices in place at individual through to state levels, highlighting the importance of increased

attention to CCI as well as the value of investing in the security and resilience of individuals and

governments alike against cyber-attack and exploitation.

Threat Elevation of Cyberspace

According to the elevation framework, there are a number of crucial elements to a securitisation

process, which I examine above. In addition, these elements contribute to a process which is

completed in stages, separating third order political issue from second order risk, and second order

risk from first order threat. The two processes that must be undertaken to move between these

orders are riskification (to go from political concern to risk), and securitisation (to go from risk to

(existential) threat). As has been discussed in the sections above, there have been a number of events

over the course of the last several decades that have contributed to the overall perception of

cyberspace as a growing vulnerability and probable attack vector, as we become an increasingly

digitalised and informed society. Cyberspace is undoubtedly a domain in which intelligence

collection and exploitation occurs, and so efforts must be made not just to increase cyber security

in the form of defensive measures such as anti-virus software and firewalls, but also to adopt counterintelligence measures

in order to protect sensitive and proprietary information. The combination of cyber defence and

cyber counterintelligence will enhance the resilience of the state to cyber-attack and exploitation

(see Figure 1, The Elements of Cyber Security). It would be difficult to deny the fact that

cyberspace, while an integral part of everyday life for most, now presents almost as much a risk as

it does a benefit. And the key word to focus on there is risk. Despite the many cyberspace strategies,

agencies and departments within both civilian administrations and military hierarchies, and a

growing number of international agreements and treaties concerning cyberspace, I would argue

that cyberspace has not yet been securitised. It has, however, been successfully riskified.

This is not to say that the elevation process will stop at successful riskification. The validity of an

analysis performed according to the elevation framework does not expire upon the eventual

designation of risk rather than threat. Because there is no inbuilt mechanism in the elevation

framework that requires an analysis to be performed within a specified timeframe, or that the

elevation process be conducted within a certain period of time in order to be valid,

it is possible to simply designate an elevation process as ongoing and worthy of further analysis.

In the case of cyberspace, this is exactly the conclusion at which I have arrived. While cyberspace

is admitted by many parties in the international system to present valid security risks, I would

hesitate to say that any actor with the legitimate authority to speak security for the state has framed

cyberspace (or the use/exploitation thereof) as an existential threat to the modern state. If

anything, despite cyberspace being acknowledged as a risk to state security, the uptake of cyber-

enabled technologies has actually increased rapidly, and it is highly probable that this will continue.

Conclusion: Riskified but Not Securitised

It may never be known whether securitising moves have been made out of the public

eye; much of state capacity in and exploitation of cyberspace is highly classified – and rightly so.

If this argument were made for every international security issue, however, there would be no

political theory or framework that could produce conclusions worth anything at all. Every analyst

must work with, and produce conclusions from, the intelligence that is available to them at that

point in time; it is the same for social scientists. According to the precepts I have outlined for

elevation theory and the information available to the public, cyberspace has not been successfully

securitised. It has been riskified, and may in the future be securitised, but at this point has not been

identified and acknowledged as representing an existential threat to the state. No extraordinary

measures have been requested to mitigate the (potential) threats posed by the existence and

(foreign) exploitation of cyberspace, and, at present, the element of extreme necessity has not been

fulfilled. As cyberspace has developed and been riskified, actors that have chosen to utilise and

exploit cyberspace (from the casual individual user through to state representative entities) must

become familiar, at a minimum, with the basic concepts of cyber security, and act in such a way as

to maximise their own safety and security in cyberspace. A large part of that familiarity and practice

is the employment of the principles of counterintelligence (though not necessarily an explicit

understanding thereof), particularly at the level of state interaction.

Cyber counterintelligence practices will contribute to the overall cyber security of the state (or

other actor being analysed) through denying the adversary access or opportunity for intelligence

collection and exploitation, in addition to subverting intelligence operations undertaken through

cyberspace or cyber-enabled means. The identification and subversion of adversary intelligence

operations separate cyber counterintelligence from cyber defensive measures such as system

protection and the repulsion of attacks against systems, such as distributed denial of service. As

identified in chapter two, cyber counterintelligence consists of “the methods, processes and

technologies employed to prevent, identify, trace, penetrate, infiltrate, degrade, exploit and/or

counteract the attempts of any entity to utilise cyberspace as the primary means and/or location

of intelligence collection, transfer or exploitation; or to affect immediate, eventual, temporary or

long-term negative change otherwise unlikely to occur.”327 One such example of the use of

cyberspace for intelligence exploitation to effect negative change will be considered in the chapter

six discussion of disinformation, and the current failure to adequately elevate the risk that cyber-enabled

disinformation campaigns pose to democratic integrity.

This chapter has developed the threat elevation analysis framework, articulating the pyramidal

process of elevation through riskification and securitisation and identifying the applicability of a

modernised framework by which to trace the development of threats. It has contextualised the

elevation of cyberspace as a threat by providing examples of events through which cyberspace has

been elevated, concluding that at present cyberspace has been riskified. The next chapter uses the

threat elevation analysis framework to analyse how the perception of cyberspace as a risk to

national security has developed in the UK as a case study, and whether specific counterintelligence

practices or responses have developed in concert with the increased level of perceived risk.

327 O’Connor, 112.

4 - The United Kingdom: Tracing the Threat Perception of Cyberspace

How has the state threat perception of cyberspace developed? Is any evolution of threat perception

correlated to the development of counterintelligence responses to cyber-enabled disinformation?

In order to begin answering these questions, this chapter uses threat elevation analysis to trace the

development of the threat perception of cyberspace in the United Kingdom (UK). By identifying

and tracing the evolution of the way the UK frames cyberspace in national security documentation

(the ‘official view’), this chapter seeks to identify the extent to which the threat of cyberspace has

been elevated in the UK. An assessment of the threat elevation of cyberspace provides a partial

explanation for attitudes in the UK not only to cyberspace, but to its perception of risk and threat

in the national security space. If the threat perception of cyberspace has been either riskified or

securitised, then there should also be an evolution in the approach to cyber security and cyber

counterintelligence (CCI) practices that can be ascertained through process tracing and discourse

analysis. This will lay the foundation of, and inform, subsequent chapters, which will examine CCI

and establish the audience as a threat vector and security vulnerability in considering cyber-enabled

disinformation as a threat to democratic integrity.

The examination of Stuxnet in chapter three identified the importance of the human factor both

in conducting espionage operations, and in considering counterintelligence measures to combat

them. The psychology of the audience – their perception of the cyber domain and the risks therein

– will directly affect not only implementation of general cyber security and specific cyber

counterintelligence measures, but their susceptibility to adversary intelligence operations such as

disinformation campaigns. CCI practices, to be best designed and implemented, need to recognise the

influence of perception – psychology – on how CCI practices can and will be understood and

employed. One way of assessing how these practices develop is the case analysis of how an actor

has elevated the threat perception of cyberspace and whether management and mitigation

measures such as CCI have developed contemporaneously.

The UK was selected as the case study for analysis for a variety of reasons, chief amongst which

was its history of publishing its national security strategy documentation for public

consumption and analysis. I have selected nine official documents that the UK published across

the selected period of analysis – 2008-2018 – related specifically to national defence and security,

which are analysed in the following sections. A second reason for choosing the UK as the subject

of analysis is its status as a fully developed Western-style liberal democracy; by analysing its

response to specific issues, potential lessons can be derived from the experience and responses of

Chapter 4 – The United Kingdom

118

the UK to best inform the national security and intelligence policies and frameworks of nations

developing along the same political path. While not all states will be subject to the same national

security risks and threats or have the same intelligence and security requirements, there will still be

valuable lessons to learn in terms of how the UK responded to threats which do apply to most

modern states, such as those enabled by cyberspace and cyber-physical networks and technologies.

Finally, the UK is part of the Five Eyes intelligence cooperation network, comprised of the UK,

the United States of America (US), Canada, Australia, and New Zealand (NZ). By analysing how

a mature democracy like the UK has responded to and countered (or not) risks and threats in

cyberspace, its intelligence and security partners, and other allies, can judge the success or failure

of various countermeasures and strategies to build upon or develop their own. In analysing the

national security documentation of the UK, we see that the threat perception of cyberspace has

been elevated. This chapter thus shows the utility of threat elevation as an analytical tool, as well

as showcasing the changing treatment of cyberspace in the national security area.

Document Selection and Justification

In selecting which sources to examine for a case study that analyses the threat elevation of

cyberspace and the evolution of a concept like counterintelligence, and how these affect

disinformation, it is readily apparent that analysis must occur at the state level. How governments

understand these concepts and enshrine them into policy directly affects the deployment of

resources to resolve the problems that arise. Directives issued at the state level serve the purpose

of both dictating domestic policy and foreign engagement as well as signalling state position to

adversaries. Official government documents, in the form of national and cyber security policies

and frameworks, are the best fit for such an evaluation. They represent both the contemporary

understanding of the governing Administration of the threatscape at the time the document was

published, as well as what the forecasted risks and threats would be and, to an extent, how the

state would respond to those risks and threats. As such, for the period under review (2008-2018),

nine documents were selected. These documents include national security strategies; strategic

defence and security reviews; a national security capability review; and cyber security strategies.

Nine documents across a period of one decade should provide sufficient documentation to prove

whether or not threat elevation or attenuation has occurred as expected, and whether or not there

have been risk management and/or threat mitigation measures put into place to support either an

elevation or an attenuation.

Courteney O’Connor – PhD Thesis 2021

119

CCI and the Current Threat Environment

As will be seen in chapter five, counterintelligence has had to evolve in order to treat security

concerns in a cyber-enabled age. The threat landscape has shifted, and citizens of target states are

regularly the intended audience of cyber-enabled disinformation campaigns designed to instigate

political or behavioural shifts that will benefit the adversary. There are a greater (and increasing)

number of capable actors in cyberspace with which the intelligence and security apparatuses must

contend, increasing the requirement on citizens to contribute to the overall resilience and security

of the state in cyberspace. Because CCI is a relatively recent field in comparison to traditional (non-

cyber) intelligence and counterintelligence, a significant amount of inference needs to be done in

order to ascertain how the discipline is understood and how it has developed. Additionally,

audience perception and psychology have a significant degree of importance in terms of the uptake

and implementation of CCI measures at all levels of analysis, so an examination of CCI in the

current threat environment must take into account several elements.

First, how the state (or other actor) in question perceives cyberspace on the spectrum of concern

through to threat, and whether that has evolved across a selected period of time. Second, whether

there is a recognition or acceptance of the role of intelligence and security agencies in the resilience

and security of cyberspace. Third, whether there has been a recognition that in the cyber-enabled

age, the audience (i.e., the citizenry) of a highly-networked society is both a vulnerability and a

threat vector, particularly in terms of information warfare and intelligence collection and

exploitation. There are few states with sufficient commitment to policy transparency and a high

degree of network connectivity for which there is sufficient national security documentation to be

able to infer answers to these questions. The UK has sufficient commitment to both, as well as

recent exposure to adversary information operations, and as such is a good candidate for an

analysis of the evolution of perceived threat of cyberspace and contingent development of cyber

counterintelligence. This will provide an understanding of how a state is perceiving and reacting

to the current threat environment.

Analysis: National Security Strategies and Strategic Defence and Security Reviews

This qualitative documentary analysis is split into two broad sections: general national security

documentation, followed by cyber-specific security documentation, with both sections presented

in chronological order of publication. This is intended to show the elevation (if any) of cyberspace

as a threat in general national security terms before moving into cyber-specific documentation,

which will necessarily present a greater perception of risk and threat from one vector than would


a general national security document. Within each documentary analysis, the examination is

presented in terms of identifiable threats and drivers. Considered in this section are the 2008

National Security Strategy; the 2009 Update to the 2008 National Security Strategy; the 2010

National Security Strategy; the 2010 Strategic Defence and Security Review; the 2015 National

Security Strategy and Strategic Defence and Security Review; and the 2018 National Security

Capability Review.

2008: The National Security Strategy of the United Kingdom: Security in an Interdependent World

The 2008 National Security Strategy of the United Kingdom was the first articulated, published

national security strategy of that country.328 Considering the recent and contemporary history at

the time the 2008 Strategy (henceforth ‘the Strategy’) was written, it is expected that there will be

several overarching themes such as terrorism/counterterrorism; transnational crime; and failed or

failing states. The assumption going forward will be that the overarching identified ‘threats’ to

national security will peak and, according to the elevation theory framework upon which this thesis

is based, then be attenuated as the threat is downgraded through mitigation and (risk) management.

Threats and Drivers

The Strategy identifies five particular threats or areas of threat: international terrorism; weapons

of mass destruction; conflicts and failed states; pandemics; and transnational crime. These threats

are then assumed to be related to; caused by; or otherwise exacerbated by five drivers: climate

change; competition for energy; poverty and poor governance; demographic changes; and

globalisation.329 There is a particular focus on (international and/or transnational) terrorism as the

greatest threat faced by the UK at the time of publication, and presumed into the near-middle

future. This assumption is threaded throughout the 60-page document, coupling the dangers and

consequences of (varieties of) terrorism with the other identified threats and drivers.

Importantly for this thesis, in talking about the threat of terrorism against the UK, the Strategy

mentions for the first time the potential for ‘electronic attack’ in Chapter 3 (Security Challenges).330

Interestingly, and somewhat frustratingly, there follows no articulation or definition of the term,

and it appears only three times throughout the course of the document. What does follow is a

328 Cabinet Office of the United Kingdom, “The National Security Strategy of the United Kingdom: Security in an Interdependent World” (The Stationery Office, 2008), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/228539/7291.pdf.
329 Cabinet Office of the United Kingdom, 3.
330 Cabinet Office of the United Kingdom, 10 para 3.6.


discussion of the merits and dangers of technological innovation and complexity, as well as the

dangers of complex interdependence in terms of global communications. It is within this section

that we see the first use of the term “cyber-attack”; once again, however, there is neither a

preceding nor subsequent discussion of how the term is understood and applied by the UK

government, nor how the threat of cyber-attack is being/will be mitigated.331 Additionally, there is

no recognition of the intelligence collection and exploitation potential of cyberspace, from which

we can infer that recognition of the necessity of CCI within the United Kingdom at this time was

likely negligible.

A strong theme that emerges in the Strategy is the acknowledgement of non-state actors as a

growing (and prominent) element of the national security threatscape. While the 2008 Strategy is

the first of its kind from the UK, the inference can be made that the appearance of non-state actors

on the national radar as threatening to the state is a novel occurrence.332 It should be noted here

that, while non-state actors are acknowledged in this Strategy as a legitimate threat against the state,

the broad inference that can be made of the national security discussion therein is that the potential

for hostile action that could present an existential threat against the state (the UK) remains in large

part with hostile states rather than non-state actors, regardless of reputation or combative history.

Throughout the sections concerning the threat capacity of both states and non-state actors is a

pervasive theme relating to the varieties of terrorism, the overall existence of which is deemed to

be the greatest contemporary threat to the security of the UK. Additionally, it is assumed (forecast)

that terrorism (regardless of variety) will continue to be one of, if not the, greatest threats to be faced

by the UK in the foreseeable future.

There seems to be an ambiguity of meaning and a lack of concentration on cyber-attacks or

electronic attacks in the Strategy itself. Both terms are used, but there is no differentiation between

or articulated understanding of either. While this could have been an oversight, considering there

was no (limited or otherwise) summary or dictionary of terms published with the Strategy, there is

no way to be certain whether or not these terms were used to refer to the same thing, or to different

concepts altogether. Additionally, the term “technical attacks” is used in Section 3.26,333 but it

seems to be used as a synonym for electronic attacks or cyber-attacks; again, there is no articulation

331 Cabinet Office of the United Kingdom, 21 paras 3.47-8.
332 Cabinet Office of the United Kingdom, 22 para 3.53.
333 Cabinet Office of the United Kingdom, 16.


of meaning attached to the phrase.334 Given the growing importance of cyberspace even in the

mid-2000s, it is somewhat confusing that a clarified definition of national security terms was

neither provided at the time of the Strategy’s release, nor in the updates that followed.335 This is

despite the fact that information and communications systems were considered (at minimum) to

be one of the UK’s vital security interests.336

While there is no explicit reference in the Strategy to counterintelligence (either cyber- or

traditional), it can be inferred from paragraph 4.66 in Chapter 4 that the UK considers cyber-

attacks to be part of the intelligence field:

On Intelligence, in addition to the major effort required to tackle the current level of terrorist threat, the security and intelligence agencies will continue to protect the United Kingdom against covert activity by foreign intelligence organisations aimed at political, economic and security targets, including cyber-attack [sic].337

“Cyber-attack” as used here could be inferred as a catch-all phrase intended to include espionage

and intelligence operations given the context in which the phrase appears. Per the definition

provided in chapter two of this thesis, cyber counterintelligence is certainly intended to subvert

the covert cyber activities of foreign intelligence services, though the recognition above is a limited

acknowledgement of adversary cyber intelligence and exploitation operations. It is worth noting,

however, that even where the discussion turns to the intelligence and security services (and their

practices and mandates), which is not at all frequently, the discussion is often linked or in relation

to issues of terrorism and/or counterterrorism. For example, there is reference to “covert

intelligence” for the purposes of detecting and disrupting “the terrorist threat” on page 25 of the

Strategy, but for no other purposes throughout.

Based on the themes and drivers discussed (and inferred) from the 2008 Strategy, there is potential

to examine the UK security strategies for both threat elevation and threat attenuation of

cyberspace.338 Consideration of the perceived threat from cyberspace was low in the 2008 Strategy,

334 While the US Department of Defense has released (and semi-regularly updates) their Dictionary of Military and Associated Terms, to my knowledge they are the only one of the Five Eyes states to have done so. See Joint Chiefs of Staff: http://www.jcs.mil, “DOD Dictionary of Military and Associated Terms (June 2018)” (United States. Joint Chiefs of Staff, June 1, 2018), https://www.hsdl.org/?abstract&did=.
335 Considered in subsequent sections.
336 Cabinet Office of the United Kingdom, “The National Security Strategy of the United Kingdom: Security in an Interdependent World,” 44 para 4.63.
337 This section of the Strategy considers cyber-attacks in the explicit context of intelligence (and thus counterintelligence). Cabinet Office of the United Kingdom, 44.
338 There is also the potential to analyse the elevation and attenuation of threat through the treatment of terrorism, though an in-depth analysis is beyond the scope of this thesis and will be restricted to minor commentary in favour of the threat elevation analysis of cyberspace.


particularly in comparison to threats such as terrorism. Given this low starting position, there is

scope to observe whether or not the threat perception of cyberspace was elevated in subsequent

national security documentation.

There is a commitment to a ‘high-technology’ approach to national security in the Strategy, without

specific articulation of what this necessarily means or what the vulnerabilities of this approach

might be. Despite this ambiguity, we can infer that complex information and communications

technologies, in conjunction with the growing need for and reliance on cyberspace, will influence

and affect the ways in which the UK understands and approaches the protection of national

security. The Strategy does detail the threat levels according to which the UK views risks and

threats; these can be found in the Strategy endnotes,339 and roughly correlate to the

elevation/attenuation structure in Figure 5 - Securitisation and the orders of concern.

The UK threat levels consist of low (attack unlikely); moderate (possible but unlikely); substantial

(strong possibility); severe (highly likely); and critical (imminent danger). I consider these threat levels

roughly equivalent to the stages of threat elevation: something is low risk when it is a third order

political concern, though if elevating acts or moves occur, it riskifies (moves to moderate threat)

and either stabilises at substantial (second order risk), or securitises (becomes a severe threat) and

becomes a first order threat (critical danger).340 That the UK has a structure for understanding

threats that accords roughly with the threat elevation analysis framework I consider to be a

validation of the framework itself, and justification of continued research using the elevation

structure for ex post facto analysis of rising threats.

In terms of specific intelligence and counterintelligence foci or requirements, paragraph 4.39

admits to the need for an expansion of analytical capacity over and above current levels, claiming

that the Administration would “continue to strengthen our national analytical capacity for early

warning and strategy development...”341 Understandably for a public document, the focus on

intelligence and security agencies, capacities, and shortcomings is brief and vague; inference is a

subjective practice, but where this has occurred for the purposes of qualitative analysis, I will

acknowledge this explicitly. Overall, the 2008 Strategy forms a good foundation for the analysis of

the UK’s threat perception of cyberspace and whether it has been elevated across the period under

339 Cabinet Office of the United Kingdom, “The National Security Strategy of the United Kingdom: Security in an Interdependent World,” 61.
340 Cabinet Office of the United Kingdom, 61.
341 Cabinet Office of the United Kingdom, 37.


examination, 2008-2018. There also appears to be scope for an analysis of the development of CCI

in the UK. Subsequent chapters will develop the concept of CCI and how the human factor in

CCI is crucial to building resilience to threats such as the negative effects of disinformation.

2009: Update to the National Security Strategy of the United Kingdom: Security for the Next

Generation

In the 2009 Update (henceforth “’09 Update”) to the original 2008 National Security Strategy,

there is an immediate acknowledgement in the executive summary of the increasing importance

of ‘cyberspace’ to the national security of the UK – the first time the term is used in either

document.342 Following the practice in the 2008 Strategy, the term is used without an

accompanying definition. The ’09 Update follows the original structure and complement of themes

and drivers set out in the 2008 Strategy, though goes into considerably more detail: whereas the

2008 Strategy comprised 60 pages, the ’09 Update contains 112. In the consideration of ideology

as a driver of security threats, the ’09 Update links ideology to violent extremism and makes a

point of identifying both ethnic and nationalist ideologies, as well as religious ideologies, as having

the potential to threaten the security of the UK.343 In general terms, the ’09 Update seems markedly

more comprehensive and in-depth than the original 2008 Strategy, which can perhaps be explained

by lessons learned from the original drafting process and the feedback received since the

publication of the first.

Threats and Drivers

Important for the purposes of this thesis, and the test of threat elevation theory, is the fact that

the ’09 Update contains a section specific to cyber security,344 which, as previously mentioned, only

appeared infrequently in the 2008 Strategy, and then mostly by inference or oblique reference.

Regarding the discussion of cyberspace generally and cyber security specifically, an interesting

theme that emerges in the ’09 Update is the tendency toward analogising these with global and

national health, at one point specifically referring to the “health of cyber space.”345 Whether this is

a conscious or unconscious decision, perhaps to make the cyber domain and the concerns therein

more comprehensible to non-experts, is unclear. The health analogy does not continue in later

342 Cabinet Office of the United Kingdom, “The National Security Strategy of the United Kingdom: Update 2009” (The Stationery Office, 2009), 7, http://www.cabinetoffice.gov.uk/media/216734/nss2009v2.pdf.
343 Cabinet Office of the United Kingdom, 10.
344 Cabinet Office of the United Kingdom, 13.
345 Cabinet Office of the United Kingdom, 13 para 47.


strategies and reviews; I infer that it was used initially to encourage understanding based on existing

frames of reference before being discarded in favour of area-specific terminology and analysis.

The ’09 Update also recognises the instantaneous nature of information and public opinion in the

digital era, tying that concern into the information operations domain.346 More so than the 2008

Strategy, the ’09 Update goes into further detail on the different elements of, and risks/threats to,

the national security of the UK, and information operations both by and against the UK. This is

also the earliest acknowledgement I have found which explicitly recognises the effect — perhaps

circular — of information operations on public opinion and therefore the national security and

political sovereignty of the UK. It can be considered as early recognition of the impending security

issues concerning psychological operations and democratic resilience and integrity.

Overall, the ’09 Update seems to articulate and operate on the idea that cyber space is a domain

through which threats (including other actors, both state and non-state) are enabled or delivered,

rather than a disparate threat.347 This seems to correlate to the hypothesis (outlined earlier in this

thesis) that cyberspace is an enabling overlay to the physical domains of land, sea, air, and space.

Subsequent strategies elevate the perceived threat posed by cyberspace in terms of the possible

consequences to national infrastructure and interests, but the

continuing understanding is of cyberspace as a vehicle for traditional threats rather than a threat

itself. In the context of the current focus on the vulnerabilities of cyberspace and ongoing instances

of cyber-attacks and exploitations, it is an intriguing position to take. Despite the heightened focus

on cyber space in terms of national security, however, there is a continued emphasis on, and

awareness of, the terrorist threat/s against the UK and the probability of (further) kinetic attacks.348

Interestingly, despite cyberspace being identified as a new “area through which (the UK) may be

threatened,”349 there was no concurrent identification of cyber or electronic warfare as a specific,

disparate threat. Nor was there an explicit recognition that beyond destructive options, cyberspace

is also a domain in which intelligence collection and exploitation is likely to occur. The

announcement is made that, accompanying the ’09 Update to the 2008 National Security Strategy,

there would be a side-by-side publication of the first UK national cyber security strategy (see later

sections of this chapter for the analysis of that document).350 This announcement, in combination

346 Cabinet Office of the United Kingdom, 15 para 50.
347 Cabinet Office of the United Kingdom, 20 para 2.6.
348 Cabinet Office of the United Kingdom, 22 para 2.22.
349 Cabinet Office of the United Kingdom, 27 para 3.6.
350 Cabinet Office of the United Kingdom, 31 para 3.11.


with the link made between, and articulation of, the UK’s intelligence services and the development

of defences against cyber threats, does lend credence to the idea that CCI measures are an

increasingly important item on the British national security agenda.351 This is the first explicit

evidence we see of elevation occurring in the ’09 Update that identifies an evolution in threat

perception.

Another term that appears for the first time in the ’09 Update is “cyber technology”,352 used in

relation to the concept of technological innovation and how that has/will have negative as well as

positive consequences for national security, particularly for the UK, but also globally. The Strategy

identifies the potential exploitation of such technologies by “terrorists and organised criminal

groups, or others with malicious intent as a negative consequence of technological innovations in

cyberspace”.353 Once more, this is linked to the overarching theme of terrorism/counterterrorism

as relates to the UK, but this concept can be applied to all sectors of security. In fact, the ’09

Update refers frequently to advances in (communications) technologies in the economic and

environmental sectors, as well.354 The application of CCI measures and practices is particularly

relevant to multi-sectoral security problems such as disinformation, much of which is now

transmitted on advanced telecommunications and social media platforms. Pandemic

disinformation in relation to the COVID-19 pandemic, for example, affects multiple security

sectors and will be considered further in chapter six. This change from the previous Strategy

suggests that we are seeing the first evidence of the threats from cyberspace being elevated. With

respect to the terrorist use of cyberspace, cyber counterintelligence operations are a likely

consequence given that one purpose of CCI is to trace, identify, and subvert actors using cyberspace

to effect negative outcomes. While this is not identified, it is implicit according to an understanding

of counterintelligence in cyberspace.

This contributes to the evolving idea that although it is unlikely that the UK will/would face hostile

military action from another state that would “threaten the independence, integrity and self-

government of the UK”, there is a potential for non-military state threat in less traditional domains,

though these aren’t specifically articulated.355 State-sponsored or -supported actors, for example,

have engaged in or signal-boosted disinformation campaigns with direct and negative

consequences for British infrastructure, democratic integrity, and future security – chapter six will

351 Cabinet Office of the United Kingdom, 34 para 3.31.
352 Cabinet Office of the United Kingdom, 40 para 4.16.
353 Cabinet Office of the United Kingdom, 40 para 4.16.
354 Cabinet Office of the United Kingdom, 40 para 4.16.
355 Cabinet Office of the United Kingdom, 41 para 4.20.


consider this in-depth. While there is an articulation of the potential of some states looking “to

develop cyber-attack capabilities” later in the ’09 Update,356 it would be difficult to say that the

recent and current situation could have been foreseen or mitigated. Indeed, foreign interference in

democratic processes, and particularly in elections, has become one of the foremost security issues

for Western democracies.357

One of the elements of UK national security that lends force to the threat of foreign interference

is that, largely as a consequence of globalisation and complex interdependence with the

international community of states358 (which is only in part due to previous membership with the

European Union) it is practically impossible to separate the domestic security of the UK from

international security.359 The interconnected nature of the international economy, the alliance

structures that have developed over decades, and even the contemporary nature of social and

cultural interchange, have resulted in an international system wherein the security of one state can

be (and often is) influenced and affected by other states, just as those states are also influenced

and affected. Nor is it necessary for those states to be geographically contiguous to have complex

security interdependencies; consider the relationship of the UK’s security to that of the US. The

economic, political, and strategic stability and integrity of the US impacts many states, including

the UK, by virtue of complex economic, socio-political, and security relationships.

These elements all add to the development within the ’09 Update of the understanding that

technology is a driver of both innovation (in a positive sense) and insecurity (in a negative sense),

and thus should be both exploited, and mitigated against, for the purposes of security. One

particular threat that is articulated and discussed is that electronic communications and

information systems are a threat vector (vulnerable to attack); given the contemporary reliance on

such systems and even the levels of reliance in the mid-2000s, this is a significant vulnerability and

a particular cyber counterintelligence problem.360 In terms of both system and information

integrity, there needs to be an evolution of understanding in how to both identify and subvert

356 Cabinet Office of the United Kingdom, 42 para 4.23.
357 See Adam Henschke, Matthew Sussex, and Courteney O’Connor, “Countering Foreign Interference: Election Integrity Lessons for Liberal Democracies,” Journal of Cyber Policy 5, no. 2 (2020): 180–98; Karen Yourish, Larry Buchanan, and Derek Watkins, “A Timeline Showing the Full Scale of Russia’s Unprecedented Interference in the 2016 Election, and Its Aftermath,” The New York Times, September 20, 2018, https://www.nytimes.com/interactive/2018/09/20/us/politics/russia-trump-election-timeline.html.
358 Bilateral and multilateral treaties and agreements as well as the economic complexities of the global liberal capitalist system have resulted in close (and often fraught) ties between nations.
359 Cabinet Office of the United Kingdom, “The National Security Strategy of the United Kingdom: Update 2009,” 49 para 5.4.
360 Cabinet Office of the United Kingdom, 50 para 5.8.


adversary attempts to exploit these systems without degrading or destroying them (CCI). What is

not discussed, however, is what should or could be done in order to harden and stabilise these

critical systems against interference or outright attack (overall cyber security). Moreover, while we

are seeing the first evidence of cyberspace being elevated, the focus in the ’09 Update is on the

technology more broadly. As will be discussed in detail in my examination of the rise of

disinformation in chapter six, effective CCI needs to include human factors as essential to any

effort to harden vulnerable systems.

Directly relevant to the discussion within this thesis on the development and evolution of CCI,

the UK’s ’09 Update does view the amalgamation of espionage with cyberspace as a possible non-

military action that can be used against the state to threaten stability, though this is not widely or

deeply considered.361 Accordingly, the acknowledgement of the necessity for CCI can be inferred

from statements that confirm that “the UK already faces a sophisticated and pervasive threat from

hostile foreign intelligence activity, much of which is conducted in the cyber domain.”362 The

recognition of a ‘sophisticated and pervasive threat’ emanating from the cyber domain is an

important elevation in published threat perception of cyberspace as a location for intelligence

collection and exploitation. It links the cyber domain explicitly with adversarial intelligence activity,

which implies the necessity of a body or function to guard against such activity: by definition, this

elevation requires a management response of CCI.

There is a much greater focus in the ‘09 Update on the role and function of the intelligence and

security services than appeared in the original 2008 Strategy. In keeping with the ‘09 Update’s

theme of internationalism, there is also explicit mention of the role and function of the security

and intelligence services of the UK’s allies in the safekeeping of British national security:

Some hostile state actions, particularly in times of conflict or heightened tension, could be carried out overtly but many will be covert. Our security and intelligence agencies, together with those of our allies, play a critical role both in providing insight and early warning on states’ capabilities and intent and in enabling us to best mitigate or defend ourselves against specific threats.363

Again, the importance of counterintelligence can be inferred here, particularly in the articulation of ‘specific threats’; safeguarding the nation’s security and secrets is the key responsibility of the counterintelligence practitioner. Though there is no particular mention of

cyber or cyberspace in relation to the preceding quote or the inferential discussion on

361 Cabinet Office of the United Kingdom, 65 para 6.6. 362 Cabinet Office of the United Kingdom, 66 para 6.8. 363 Cabinet Office of the United Kingdom, 68 para 6.12.

Courteney O’Connor – PhD Thesis 2021

counterintelligence, considering the focus on a ‘high-technology approach’ and the repeated mention of

information and communications technologies, I believe it to be a fair assumption that some of

the foreseen ‘specific threats’ either take place in, or are enabled by, cyberspace. While there is a

statement on the importance (and utility) of aggressive forms of intelligence and

counterintelligence to national security, it is once more articulated in specific reference to the

terrorist threat against the UK rather than as a general assumption of practice.364 There is also an

announcement for significantly increased funding towards counterterrorism and intelligence,

though unqualified (as may be expected in an unclassified document).365

In the ‘09 Update, cyberspace is recognised as a domain within which threats to national security

can arise and through which they can be delivered or enabled. However, it is not considered a

specific threat in and of itself.366 Linked to the concern surrounding future terrorist threat and the

ongoing recognition of weapons of mass destruction (WMD) as a national security threat, the fear

of terrorists gaining knowledge of chemical, biological, radiological or nuclear (CBRN) technology

is articulated in Chapter 7 in connection with widespread use of and access to the internet.367 That

access to the Internet gives individuals and groups access to potentially dangerous information,

and has made information on CBRN devices potentially more readily available to interested parties.

In combination with dual-use technologies, the UK assesses the threat of terrorists gaining CBRN

capabilities through greater informational access via the Internet as concerning.368 In situations like

this, the state can and should utilise CCI, as knowledge of who is interested in, and displays an intent to acquire or use, these technologies is relevant to national security. Identifying and tracking

these actors through (their use of) cyberspace is a direct form of CCI as defined in this thesis.

Admittedly, the assertion is made that “[i]n the technological domain, the need for security in cyber

space [sic] is now a pressing concern.”369 However, there is no subsequent discussion of the

potential elements or requirements of how this problem could or should be solved. Lacking even

a definition of what a (more) secure cyberspace actually looks like, the reader is once again left

with more questions than answers. What is a secure cyberspace? How is the UK defining either

the space, or the characterisation of ‘secure’? What are the building blocks of a more secure

cyberspace? How will the UK address the threats and risks that it has identified or forecast, and in

364 Cabinet Office of the United Kingdom, 78 para 6.37. 365 Cabinet Office of the United Kingdom, 80 para 6.42. 366 Cabinet Office of the United Kingdom, 93 para 7.2. 367 Cabinet Office of the United Kingdom, 95. 368 Cabinet Office of the United Kingdom, 95 para 7.7. 369 Cabinet Office of the United Kingdom, 99 para 7.27.

what ways? The 2009 Update undoubtedly contains the first steps toward the threat elevation of

cyberspace in the UK national security documentation, following on from the 2008 national

security strategy. The security authorities of the UK government are building their awareness and

understanding of the threat potential of cyberspace. That these questions are unanswered only

points to the need for a greater focus on, and understanding of, CCI at the state level, as well as

the necessity of a holistic approach to the management of cyberspace-enabled risks.

The ’09 Update differs significantly from the original 2008 Strategy in that it does devote a section

of the document explicitly to cyber security: paragraphs 7.35 through 7.44 deal explicitly with

observations about the relative (in)security of cyberspace. According to the ’09 Update, the

“cyber space encompasses all forms of networked, digital activities; this includes the content and

actions conducted through digital networks.”370 While this may be a workable definition of cyber-enabled actions, as a description of cyberspace itself it is somewhat lacking: it does not include

the cyber-physical infrastructure upon which cyberspace rests and through which information and

communications are conveyed. This definition could, however, be read to capture disinformation as both content and activity conducted through digital networks, which would engender both an overall cyber security requirement and specific cyber counterintelligence measures (in regard to particular campaigns and operations). A particularly insightful section dealing

with the potential threats that are enabled by cyberspace identifies the ease of entry and the

varieties of attacks that can be perpetrated at relatively low cost, including both espionage and

disinformation:

The asymmetric, low cost, and largely anonymous nature of cyber space makes it an attractive domain for use by organised criminals, terrorists, and in conventional state-led espionage and warfare. Those with hostile intent can, for whatever reason, use a variety of methods of attack…to gather intelligence, spread false information, interfere with data or disrupt the availability of a vital system.371

Left unarticulated is the fact that while all of these elements are potential threats against the UK (and against all states with a digital presence), the UK can also take advantage of them in its own pursuit of national security. The uses of the Internet to gather intelligence, spread false information, interfere with data, or disrupt the availability of a vital system have all been demonstrated over recent decades (and increasingly in contemporary times), and all have

370 Cabinet Office of the United Kingdom, 102 para 7.35. 371 Cabinet Office of the United Kingdom, 102 para 7.37.

moreover been demonstrated against the UK.372 However, the same systems and methods used to

conduct operations against the UK can also be used by the UK against hostile parties; the same

exploits and malware used against the UK can in turn be exploited and used by the UK. This is

one of the key features of counterintelligence, and evidence of the importance of CCI being

systematically understood and employed. While active and proactive CCI (see chapter five) are

more relevant to state actor operations, the education of the public (the audience) in basic CCI

practices will contribute to the aggregate cyber resilience and security of the UK, which will be

considered further in chapters five and six.

One particularly important instance of an elevating act directly relevant to the cyber security of the

UK arises in the latter paragraphs of the cyber section, specifically in 7.41, with the creation of the

Office of Cyber Security (OCS), intended to coordinate a cyber framework and policy.373

According to elevation theory, the creation of an agency, department or branch is an elevating act

most likely to occur in an attempted risk management process (following the elevation of perceived

risk of an issue, but lacking the elements of existential danger and immediacy of a securitisation).374

While cyberspace has been framed as an increasing risk in the ’09 Update, it remains to be seen

whether any single act can be said to support the claim of a securitisation process having been

instigated. The additional creation of the Cyber Security Operations Centre (CSOC) is further

evidence of the increasing acknowledgement of the threatening nature of activities enabled or

conducted by and through cyberspace.375 The highly interconnected nature of public opinion and

(the security of) cyberspace is also specifically acknowledged.376 While the potential effects of that

relationship on the national security of the UK are briefly considered, with the benefit of hindsight,

this could have been more deeply explored and the public more broadly educated on the

interactions of cyber-enabled technologies and the public space. A holistic approach to conducting

CCI for the benefit of individual and state security requires the recognition of threats to audiences

as well as to infrastructure, given that the human element is an irrevocable part of cyber resilience and

security. The importance of considering the education, as well as the protection of the public in

372 Disinformation will be discussed further in chapter six. See also Karim Amer and Jehane Noujaim, The Great Hack, Documentary, 2019, https://www.netflix.com/au/title/80117542; Bradshaw and Howard, “The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation”; Fallis, “A Conceptual Analysis of Disinformation”; Rid, Active Measures: The Secret History of Disinformation and Political Warfare. 373 Cabinet Office of the United Kingdom, “The National Security Strategy of the United Kingdom: Update 2009,” 103 para 7.41. 374 See the discussion of extraordinary measures and risk management in chapter three. 375 Cabinet Office of the United Kingdom, “The National Security Strategy of the United Kingdom: Update 2009,” 103 para 7.43. 376 Cabinet Office of the United Kingdom, 105 para 7.54.

relation to cyber-enabled threats, will be examined in relation to the rising threat of disinformation

in chapter six.

2010: A Strong Britain in an Age of Uncertainty: National Security Strategy

The 2010 National Security Strategy (henceforth “the 2010 Strategy”) speaks in the foreword of

the vulnerabilities of increasingly networked and open societies.377 In terms of identifying the

primary and grave threats to the security of the UK, cyber[space] is second only to terrorism,

followed by “chemical, nuclear, and biological weapons as well as large scale accidents or natural

hazards...”378 There is also an explicit recognition of the dangers posed to the state by non-state

actors, an important divergence from traditional understandings of danger at the systemic level –

the continued focus on the threat of terrorism dovetails with this expansion of hostile actor

identification. The 2010 Strategy was published simultaneously with the Strategic Defence and

Security Review, which details the methods of dealing with the current and future threats examined

in the 2010 Strategy, and which will be the focus of the subsequent section.

Following severe criticisms of the prior Administration’s handling of both the national budget and

various national security matters – “in an age of uncertainty, we are continually facing new and

unforeseen threats to our security…the last Government took little account of this fact” – the

2010 Strategy identifies clearly the national security priorities of the incoming Administration, with

terrorism and cyber taking priorities 1 and 2, respectively.379 There is an overt acknowledgement

that British interests are best served by continued international engagement, namely as a member

of the European Union (EU) and a permanent (P5) member of the United Nations Security

Council (UNSC). Alliances and alliance networks also feature in the introduction to the ‘Age of

Uncertainty,’ so-called due to the modern era of rapidly transforming and proliferating threats.380

In line with the discussion from chapter three and what we would expect to see when a threat is

being elevated, the introductory material of the 2010 Strategy contains a discussion of the

importance of identifying risks and engaging in risk management to prevent those risks developing

into crises. Accordingly, the 2010 Strategy is claimed to be the first such document produced

377 Cabinet Office of the United Kingdom, “A Strong Britain in an Age of Uncertainty: The National Security Strategy” (The Stationery Office, 2010), 3, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/61936/national-security-strategy.pdf. 378 Cabinet Office of the United Kingdom, 3. 379 Cabinet Office of the United Kingdom, 4–5. 380 Cabinet Office of the United Kingdom, 3.

alongside a prioritisation framework and capabilities assessment in pursuit of a holistic view of the

UK’s potential to achieve the state’s defence and security goals.

Threats and Drivers

While the “top priority” is identified as “countering the threat from terrorism at home and

overseas,”381 there is an adoption of the concept of resilience as an aspirational description of what

British security should look like. Cyber-attack is identified as the second highest-priority risk to

UK national security. However, it is tied to terrorism as a potential tool for terrorist attackers.382

While there is no doubt significant danger involved in terrorist use of cyberspace, to tie the inherent

risks of cyberspace to the risk of terrorism is to fail to consider the variety and severity of risks

and threats possible in, and through, cyberspace beyond the range of terrorist acts. While there is

an extensive focus on the risks of terrorism and insurgency, with particular focus on Afghanistan

as an example of instability and insurgent warfare and Al-Qaeda as a direct threat, a significant

development in the 2010 Strategy is the first appearance of the term ‘hostile espionage activity’ in

the same section as ‘cyber-attack.’383 While there is an either/or format utilised in the examination

of these two interrelated risks, one can infer a growing acknowledgement of the (potential)

connection between the two, itself a minor elevating act for future riskification and risk

management. Importantly, there is a specific articulation of the multiplying effect of cyberspace

on the dangers and potential consequences of (foreign, hostile) espionage activity.384 There is a real, if implicit, need here for cyber counterintelligence to evolve in response to the force-multiplication

and geographic reduction effects of cyberspace as a location of intelligence collection and

exploitation operations. In terms of elevating both the risks of cyberspace and the importance of

CCI, this is an important continuation of the elevation process begun in the ’09 Update. This

signals a continuity of conceptual and strategic development, as well as an elevation of the

perception of the risk.

The 2010 Strategy does note a lack of specific “existential threat to…security, freedom or

prosperity,” which is a clear statement against the genuine securitisation of any issue; in the context of the discussion of identified (less than existential) threats to British security, however, this is taken as evidence

of advanced riskification of those identified threats and drivers.385 In combination with the

381 Cabinet Office of the United Kingdom, 9 para 0.7. 382 Cabinet Office of the United Kingdom, 11 para 0.18. 383 Cabinet Office of the United Kingdom, 14 para 1.5. 384 Cabinet Office of the United Kingdom, 14 para 1.9. 385 Cabinet Office of the United Kingdom, 15 para 1.11.

recognition that access to the Internet (and to social networking sites in particular) is allowing

geographically disparate groups to coalesce over relevant issues and exert influence over

‘international governments and organisations,’ one potential inference is that while psychological

dangers to democratic audiences are not overtly acknowledged to be engendered by adversarial

operations, the possibility exists for those operations to affect decision-making and policy-making

capacity.386

There is also a subtle identification of the importance of counterintelligence in the recognition of

hostile intelligence-gathering activities and cyber-attacks, as well as the potential of malign

influence operations, though I note that this is a qualitative inference based on documentary

context.387 Paragraph 1.32 states that the adversaries faced by the UK “will change and diversify as

enemies seek means of threat or attack which are cheaper, more easily accessible and less

attributable than conventional warfare. These include gathering hostile intelligence, cyber-attack,

the disruption of critical services and the exercise of malign influence over citizens or

governments.”388 This is an important initial recognition of the intent of malicious actors to affect

audience psychology, and CCI is needed to both identify these influence campaigns, subvert them,

and identify the adversaries undertaking them. It also builds upon a similar recognition in the ’09

Update that the UK is subject to hostile intelligence-gathering activity, of which a significant

percentage is undertaken through cyberspace.389 This is an important element of the continuous

elevation of threat by the British government in relation to cyberspace. While no one event has

occurred which could be used as a securitising act, there is an increase in threat perception which

is communicated through the evolution in language throughout the national security

documentation thus far. As discussed in chapter three, there has been a gradual discursive

riskification, which requires risk management measures; there has not yet been a securitisation,

which requires more direct mitigation.

Interestingly for intelligence-gathering and counterintelligence in cyberspace, the 2010 Strategy

explicitly identifies the free flow of information on the Internet as crucial to the British information

economy and as such to the general (national) interest of the UK, without noting or recognising

the potential drawbacks of that same free flow of information; namely, the free flow of

disinformation and foreign propaganda. The rise and dangers of disinformation will be considered

386 Cabinet Office of the United Kingdom, 16 para 1.21. 387 Cabinet Office of the United Kingdom, 18 para 1.32. 388 Cabinet Office of the United Kingdom, 18 para 1.32. 389 Cabinet Office of the United Kingdom, “The National Security Strategy of the United Kingdom: Update 2009,” 66 para 6.8.

further in chapter six, following the chapter five definition, typology, and examination of cyber

counterintelligence. The identification of the ‘real and present threats’ of terrorism and cyber-attack speaks more to the elevation of (critical national) assets than to the security and integrity of

information to which the public has access, despite the statement and examination of a risk

identification, prioritisation, and management framework as would be expected according to the

threat elevation framework.390 According to the 2010 Strategy, this framework will help to

“prioritise the risks which represent the most pressing security concerns in order to identify the

actions and resources needed to deliver our responses to these risks,” which is a perfect

representation of a riskification process and the elevation of identifiable risks to security into a

management framework.391

Unlike the ’09 Update, where cyberspace is considered a domain in which national security threats

can arise, but is not a threat in and of itself, in the 2010 Strategy, cyberspace is the second-ranked

Tier 1 threat (following terrorism).392 While elevation theory does not recognise the validity of

speaking a threat into existence as in traditional securitisation theory, between the ’09 Update and

the 2010 Strategy there is nonetheless an elevation of the relative threat level of cyberspace to a

genuine national security threat. This inference is strengthened by the acknowledgement in the

2010 Strategy that the UK must “take action precisely to prevent risks that are in Tier Two or

Three from rising up the scale to become more pressing and reach Tier One,” thereby recognising

the substantially higher threat profile of cyberspace from one Strategy to the next. The tiered threat

matrix aligns with the elevation framework proposed in chapter three,393 and introduces the

advanced risk management processes intended to stabilise cyberspace according to UK

requirements.

This is the clearest indication yet that states approach risk and threat recognition in a structured,

tiered way, as suggested by threat elevation analysis. The UK has created a native version of the

threat elevation pyramid; even though threat elevation analysis is designed for post hoc examination

of threat recognition and related policy decisions, the similarities between the threat elevation

analysis framework and the UK version suggest that threat elevation analysis could also be

developed for prescriptive use. Despite the status of cyberspace as a Tier One threat, there is no

further evidence to indicate a successful securitisation of cyberspace such that extraordinary

390 Cabinet Office of the United Kingdom, “A Strong Britain in an Age of Uncertainty,” 25 para 3.1. 391 Cabinet Office of the United Kingdom, 26 para 3.13. 392 Cabinet Office of the United Kingdom, 27. 393 Cabinet Office of the United Kingdom, 28.

powers of mitigation can or should be claimed. The continued focus on the link between terrorism and cyberspace obscures the variety of alternative threats that cyberspace has the potential to both represent and transmit. In the absence of a significant elevating event or claim by the UK government, then, according to threat elevation theory cyberspace in the UK had been successfully riskified by this point, though heavily weighted toward systems and infrastructure rather than holistically.

Significantly, there is a differentiation in the 2010 Strategy between risks and threats, with the

expected terminology (according to threat elevation analysis) of mitigation and management

utilised: “…preventing and mitigating the Tier One risks are the top priority ends of the

Strategy.”394

2010: Securing Britain in an Age of Uncertainty: The Strategic Defence and Security Review

The Strategic Defence and Security Review that accompanied the 2010 National Security Strategy

(published at the same time as the 2010 Strategy) is intended to outline the means and ways of

enacting and securing the national security priorities identified in the 2010 Strategy. The Foreword

of the 2010 Strategic Defence and Security Review (hereafter the “2010 SDSR”) once more

confirms the threat to the national security of the UK from terrorist actors, though for the first

time begins to expand the discussion of terrorism into the drivers of radicalisation and the

importance of counter-radicalisation measures. In addition to announcing the creation of the

National Security Council,395 the Foreword takes up the concept and framework of national

resilience in broad terms, a continuation of the trend seen in the 2010 Strategy. Importantly for

the examination of the threat elevation of cyberspace in the UK, the 2010 SDSR also recognises

the increasing risk of cyberspace to UK national security and prosperity, a promising development

so early in the document.396

While much of the 2010 SDSR is taken up with a discussion of the national defence and security

budget, and the necessity and means of bringing it to a more controllable and responsible state,

there is also an early acknowledgement of the continued importance of counterterrorism, appearing comparatively earlier here than in prior national security documents. At approximately twice

the length of the 2010 Strategy (76 pages to 37), the 2010 SDSR provides more of the tactical,

operational statements, purposes, and methods of achieving the national security priorities. The

394 Cabinet Office of the United Kingdom, 31 para 3.45; 33 para 4.09. 395 Cabinet Office of the United Kingdom, “Securing Britain in an Age of Uncertainty: The Strategic Defence and Security Review” (The Stationery Office, 2010), 3, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/62482/strategic-defence-security-review.pdf. 396 Cabinet Office of the United Kingdom, 4.

early identification of terrorism as a security threat is consistent with prior strategies, and the

accompanying recognition of the threat potential of cyberspace indicates the acknowledged

increase in threat assessment. That it appears alongside a high-tier threat such as terrorism in the

foreword to national security documentation is a clear indication of a considerable elevation of

perceived risk or threat.

Threats and Drivers

The concept of resilience appears early in the 2010 SDSR, contained within the ‘National Security

Tasks and Planning Guidelines’.397 Despite this, and consistent with previous national security

documentation, while the SDSR does not explicitly define resilience, p.13 does provide a short

description of what resilience looks like as a deliverable, which is the provision of “resilience for

the UK by being prepared for all kinds of emergencies, able to recover from shocks and to

maintain essential services.”398

There is also early encouragement for the development of counterterrorism and cyber security

measures; the contiguous placement of these two threats is a strong indication that the perception

of cyberspace as a threat vector has significantly increased in the UK since the publication of the

2008 National Security Strategy, and that cyber security might receive similar attention as does

terrorism in terms of countermeasures and funding. This SDSR also begins to emphasise the use

and importance of the state intelligence services and assets in relation to national security.

Nonetheless, as expected according to the nature of intelligence services and assets, there is little

discussion or examination of those services and assets. However, their mention and

acknowledgement are an important development for the elevation of cyber-based or -enabled

threats, particularly in terms of the importance of the intelligence services in both active and

proactive counterintelligence in cyberspace. While no overt link is made between the importance

of the intelligence services and the acknowledged risk of cyberspace, the fifth guideline articulated

in the National Security Tasks and Planning Guidelines identifies the necessity of using the

intelligence services to protect the UK against both “physical and electronic threats from both

state and non-state actors.”399

While there is a subsequent explicit link made between the intelligence services and protecting the

UK against terrorist threats specifically, once more associating terrorism and cyberspace without

397 Cabinet Office of the United Kingdom, 9. 398 Cabinet Office of the United Kingdom, 13 para 7. 399 Cabinet Office of the United Kingdom, 12.

consideration of alternate threats, this is an important development in conceptual understanding

of the operational context of cyberspace. Here we see further evidence of the progressive and

consistent threat elevation of cyberspace as a risk to the national security of the UK. That this is

occurring in parallel to the growing recognition of the role and necessity of the intelligence

community in the protection against such risks is implicitly indicative of both the requirement and

the practice of CCI. Where cyber security concerns the overall defence of cyberspace, and cyber

defence concerns the protection of networks, cyber counterintelligence is (at least in part) about

the defence of information and tracking those who attempt to appropriate and exploit that

information through cyber-enabled means. It is also proof that political concerns often undergo

an incremental increase in threat perception based on the relationship between parties and

evidence of danger, rather than being spoken into existence (as with traditional securitisation).

Interesting for this thesis is the characterisation of the use of ‘cyber’ as an asymmetric tactic, and

that future conflict spaces will require resilient communications; the inference here is that future

communications, as well as conflict between hostile parties, will take place at least in part on or

through cyberspace, and that it may be difficult to mitigate or respond to them given the asymmetric nature

of cyber-attacks or exploits.400 The linking of cyberspace to resilience is an interesting development

in that it indicates an acknowledgement that true security in cyberspace is an unreasonable goal,

given the profusion of potential adversaries and increasing variety of threat vectors and

vulnerabilities. This section also sees the first use of the term ‘cyber operations’ as a unique activity,

which speaks to both recognition of adversarial movement in cyberspace and, potentially, to the

ability of the UK to undertake such operations and activities.401

The 2010 SDSR describes an overarching adaptable strategic posture, one specific aspect of which

is how to “manage the risks” to the national security of the UK, which aligns perfectly with the

expected path of elevation according to the threat elevation framework offered in this thesis.402

Management of risks in order to prevent further elevation (securitisation) of that risk to the level

of a true threat to national security that would require extraordinary measures should be the

operative goal of any national security strategy; the ultimate aim of such management is the

attenuation of that risk (or threat), such that it no longer requires additional (or, in the best case,

continuing) mitigation, management, or other-than-ordinary consideration. There is an extended

400 Cabinet Office of the United Kingdom, 16. 401 Cabinet Office of the United Kingdom, 17. 402 Cabinet Office of the United Kingdom, 19 para 2.16. For a discussion of threat elevation analysis, see chapter three.

examination of Defence Force posture and capacity, expected for a Defence and Security Review;

notable is the emphasis on intelligence capability throughout.

The 2010 SDSR also announces the creation of the Defence Cyber Operation Group (DCOG),

which represents a cross-government approach and the integration of capabilities across both

cyber and physical space.403 This is a significant development in terms of the threat elevation

analysis framework, which posits the creation of a specific agency or group as an elevating event,

given that such a creation could only occur if the subject of that agency or group had been

acknowledged as significant enough a risk or threat that a dedicated team is required.404 Given the

consistency of incremental elevation of concern in previous Strategies, it is fair to assess that the

creation of the DCOG is an elevating act that finalises the (successful) riskification of cyberspace

as a threat vector. A dedicated agency or group is an explicit measure of risk management. It is

worth noting that despite the creation of a dedicated team within Defence that is intended to

represent a cross-government approach, there is an ongoing confinement of cyber threat as

pertinent to systems and data. There is no recognition that there exists a threat to democratic

audiences (people and decision-making) as there is to assets, which are explicitly recognised as

being vulnerable in each of the national security documents thus far analysed.

In relation to the development and understanding of CCI, observations are only possible through

inference. In addition to the presumably classified nature of assets and methods, there is at best a

nascent link between the existence and vulnerabilities of cyberspace and the function and

operation of the intelligence services, at least in the public domain as represented by the national

security documentation selected for analysis in this thesis. There is an articulation in the SDSR

discussion on science and technology of the uses of unmanned aerial vehicles (UAVs) and

surveillance technologies designed to inhibit the ability of hostile actors to move in secrecy,405 but

considering the overall focus on the threat of terrorism throughout the UK national security

documentation, it is inferred that this is more in reference to counterterrorism and counter-

insurgency operations than in cyberspace. Regardless, it is an important acknowledgement (if

implicit) of the importance of counterintelligence to operational and national security. The 2010

SDSR does articulate the “implications of the Strategic Defence and Security Review for

intelligence,” and though the overriding focus remains on terrorism as a threat, there is also an

acknowledgement of the intelligence agencies providing a “sufficient technical platform for the

403 Cabinet Office of the United Kingdom, 27 para 2.A.15.
404 See the “Extraordinary Measures” discussion in chapter three.
405 Cabinet Office of the United Kingdom, “Securing Britain in an Age of Uncertainty,” 28 para 2.A.16.

Chapter 4 – The United Kingdom


cyber security program.”406 There is another implicit reference to CCI capabilities and requirements

in the subsequent section in reference to communications interception, but, again, it is only in

relation to counterterrorism operations.407 The focus on the technical aspects of CCI without

contemporaneous recognition of the human factor is at least consistent, though a serious oversight

in terms of continued democratic integrity and the desired national resilience of the UK.

Unlike previous national security documents, the 2010 SDSR has a significant section on cyber

security, extending across three pages.408 The discussion of cyber security is entwined with the

utility of cyberspace to British national interests, and here we see the use of new terms specific to

the examination of cyberspace and cyber security, including cyber infrastructure (further focus on

assets and infrastructure) and cyber products/services (in relation to aforementioned assets and

infrastructure).409 Further links are also made between the concept of national resilience and the

cyber domain, identifying so-called ‘vital networks’ as being critical to the security and interests of

the UK. Where the DCOG was created in order to “mainstream cyber security throughout the

Ministry of Defence and ensure the coherent integration of cyber activities across the spectrum of

defence operations,”410 the SDSR also announces the creation of the Cyber Infrastructure Team

within the Department for Business, Innovation and Skills,411 and the Office of Cyber Security and

Information Assurance within the Cabinet Office,412 which are intended to secure the integrity of

cyber-related infrastructure and digitally created and stored information, respectively. This is

further clear evidence of the riskification of cyberspace and cyber-enabled adversarial activity, but

also continues the focus on assets and infrastructure while failing to recognise the vulnerability of

the British public to cyber-enabled operations intended to influence or affect their decision-making

and behaviour.

2015: National Security Strategy and Strategic Defence and Security Review: A Secure and

Prosperous United Kingdom

The 2015 iteration of the National Security Strategy of the United Kingdom also contains the

‘2015 Strategic Defence and Security Review’, where the 2010 iterations were published as separate

documents. The foreword of the 2015 Strategy and Review (hereafter “2015 NSS/SDSR”) makes

406 Cabinet Office of the United Kingdom, 43.
407 Cabinet Office of the United Kingdom, 44 para 4.A.5 point 5.
408 Cabinet Office of the United Kingdom, 47–49.
409 Cabinet Office of the United Kingdom, 47.
410 Cabinet Office of the United Kingdom, 47 para 4.C.3 point 3.
411 Cabinet Office of the United Kingdom, 48 para 4.C.3 point 4.
412 Cabinet Office of the United Kingdom, 49 para 4.C.3 point 8.


an immediate link between the economic security of the UK and its national security. Following

the 2010 documents, which identified the economic imbalance and undertook to address that

imbalance, it is unsurprising that a similar emphasis appears in the subsequent documentation.

There is also an early recognition of the danger of cyber-attacks relative to previous iterations,

appearing on the first page of the foreword of ‘A Secure and Prosperous United Kingdom’.413 Also

in the foreword is a strong recognition of the importance of the work of the intelligence and

security agencies – while this is once more closely related to the counterterrorism mandate, it is

the earliest articulation of their work and purpose of the national security documents thus far

analysed.414 Also consistent with previous documents is the reliance on the concept of national

resilience as an aspirational characteristic of British security.

While the primary security threat to the UK is still identified as terrorism, in the 2015 NSS/SDSR

cyber security and the status of the UK as a world leader in that field is a close second. This is

tightly linked to the goal of a resilient UK, though the specifics of resilience are once more left to

inference from context and general understanding.415 The 2015 NSS/SDSR identifies three

National Security Objectives which should drive the UK; “protect [British] people; protect [British]

global influence; promote [British] prosperity.”416 There is considerable emphasis on the global

nature of the UK, the relationships and influence it maintains and the relationships it would like

to build or strengthen in the future.417 Dovetailing that, however, is the recognition that state-based

threats are an increasing concern in the national security context, the direct specification of which

has been absent from previous documents.418 While state-based threats are often considered in

terms of military engagement, it is important to point out here that intelligence collection by states

takes place commonly and consistently. It is also likely to increase in the context of potential

conflict. In this respect, the recognition of state-based threats must also be a recognition of danger

beyond the militaristic: cyber security becomes more important in the general sense, and cyber

counterintelligence in the particular sense as state and citizens seek to protect their own data and

information from the adversary.

413 Cabinet Office of the United Kingdom, “National Security Strategy and Strategic Defence and Security Review 2015: A Secure and Prosperous United Kingdom,” 2015, 5, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/555607/2015_Strategic_Defence_and_Security_Review.pdf.
414 Cabinet Office of the United Kingdom, 6.
415 Cabinet Office of the United Kingdom, 10.
416 Cabinet Office of the United Kingdom, 11–12. Written in the Strategy as “protect our people; protect our global influence; promote our prosperity.”
417 Cabinet Office of the United Kingdom, 14.
418 Cabinet Office of the United Kingdom, 15.


Threats and Drivers

Terrorism is once more listed as the primary security concern of the UK, though in this document,

domestic extremism is specified as a related concern.419 Again, there is an explicit link between the

dangers of terrorism and cyberspace, the connection being the recognition that terrorist actors are

utilising social media platforms and networks for the diffusion of propaganda.420 While this is an

important link and has been the subject of significant research,421 the lack of development of the

threat elevation of cyberspace in terms of people beyond terrorist actors (both as a vector for

attack by adversaries and a vulnerability for the population, the audience) is a failing of the national

security conception of the UK in the documents analysed thus far. Chapter six will examine the

issue of audiences as security vulnerabilities more thoroughly, in the context of democratic integrity

and the rise of disinformation. The argument that will be developed there, and that I introduce

here, is that a holistic approach to national security is impossible to achieve unless the threat to

audiences is elevated to the same level as critical national infrastructure, and that CCI is crucial to

any such endeavour.

A more significant development in terms of elevating the risks of cyberspace is the greater

emphasis on the dangers of hostile intelligence activities by state actors than was seen in previous

iterations. The explicit identification of cyber operations undertaken as intelligence activities is

particularly noteworthy as it is the first time that the activities have been so overtly linked and

419 Cabinet Office of the United Kingdom, 15 para 3.3.
420 Cabinet Office of the United Kingdom, 16 para 3.8.
421 See Maura Conway et al., “Disrupting Daesh: Measuring Takedown of Online Terrorist Material and Its Impacts,” Studies in Conflict & Terrorism 42, no. 1–2 (February 1, 2019): 141–60, https://doi.org/10.1080/1057610X.2018.1513984; Christina Archetti, “Terrorism, Communication and New Media: Explaining Radicalization in the Digital Age,” Perspectives on Terrorism 9, no. 1 (2015): 49–59; Monika Bickert and Brian Fishman, “Hard Questions: How Effective Is Technology in Keeping Terrorists off Facebook?,” About Facebook (blog), April 23, 2018, https://about.fb.com/news/2018/04/keeping-terrorists-off-facebook/; Kurt Braddock and John Horgan, “Towards a Guide for Constructing and Disseminating Counternarratives to Reduce Support for Terrorism,” Studies in Conflict & Terrorism 39, no. 5 (May 3, 2016): 381–404, https://doi.org/10.1080/1057610X.2015.1116277; Adam Henschke and Alastair Reed, “Toward an Ethical Framework for Countering Extremist Propaganda Online,” Studies in Conflict & Terrorism, March 8, 2021, 1–18, https://doi.org/10.1080/1057610X.2020.1866744; Haroro J. Ingram, A Brief History of Propaganda During Conflict: Lessons for Counter-Terrorism Strategic Communications, ICCT Research Paper (The Hague: The International Centre for Counter-terrorism, 2016), https://books.google.com.au/books?id=XRWTAQAACAAJ; Rita Katz, “To Curb Terrorist Propaganda Online, Look to YouTube. No, Really. | WIRED,” October 20, 2018, https://www.wired.com/story/to-curb-terrorist-propaganda-online-look-to-youtube-no-really/; Alastair Reed, “IS Propaganda: Should We Counter the Narrative?,” The International Centre for Counter-Terrorism, March 17, 2017, https://icct.nl/publication/is-propaganda-should-we-counter-the-narrative/; Alastair Reed, “Counter-Terrorism Strategic Communications: Back to the Future, Lessons from Past and Present,” The International Centre for Counter-Terrorism, July 4, 2017, https://icct.nl/publication/counter-terrorism-strategic-communications-back-to-the-future-lessons-from-past-and-present/; Kent Walker, “Four Steps We’re Taking Today to Fight Terrorism Online,” Google, June 18, 2017, https://blog.google/around-the-globe/google-europe/four-steps-were-taking-today-fight-online-terror/; Amy-Louise Watkin and Joe Whittaker, “Evolution of Terrorists’ Use of the Internet,” Text, Counter Terror Business, October 20, 2017, https://counterterrorbusiness.com/features/evolution-terrorists%E2%80%99-use-internet.


acknowledged, a clear elevation of the risks associated with cyber-enabled intelligence collection

and exploitation capacities.422 While there is no explicit examination of the nature of the

intelligence activities thus far identified as having been conducted against the UK, the appearance

of the linked activities in the 2015 NSS/SDSR indicates that there is or will be a program of

counterintelligence activities undertaken through cyberspace as a response to identified foreign

action. This is a significant development in the acknowledgement of CCI as necessary to state

security and resilience, even if that acknowledgement remains implicit. The noticeable increase in uses of the terms

‘management’ and ‘mitigation’ in terms of risks and threats to security is also in line with the threat

elevation framework offered in this thesis, showing a division of perceived levels of concern and

methods of response.423

Regular focus on the purposes and uses of the intelligence and security agencies in terms of the

way in which the UK will respond to Objective 1 (Protect Our People) is both in line with the

increasing requirement for assessment of ongoing and emerging threats in an era of rapid

transformation, and indicative of the methods by which the UK intends to manage these risks.

Two short paragraphs deal explicitly with the dangers posed to the UK by ‘hostile foreign

intelligence activity’; though consistent with previous iterations of the national security strategy,

these sections are not developed in any significant way.424 Potentially more significant, with the

developing perception of the UK’s administration of the growing risks associated with cyberspace,

is the declaration that cyber-attacks (and presumably exploitation) would be treated with the same

gravity as would be ‘an equivalent conventional attack.’425

Missing from this declaration and the surrounding paragraphs is an exploration of what is meant

by the phrase ‘equivalent conventional attack,’ or indeed how equivalent measures for attacks in

the cyber and physical domains have been constructed. Without signalling to adversaries (or allies)

how exactly the UK government is assessing cyber-attacks, and by what measures they are judging

the gravity of these attacks, the 2015 NSS/SDSR leaves the intelligence and security services,

domestic audience, and international allies of the UK without a yardstick by which

to assess whether treaty alliances may be called upon in the event of a serious attack. It is possible

that these signals or communications may have been broadcast by more covert means, or by simple

bilateral relations; it is, however, impossible to judge whether such events have occurred based on

422 Cabinet Office of the United Kingdom, “National Security Strategy and Strategic Defence and Security Review 2015,” 18 para 3.23.
423 Cabinet Office of the United Kingdom, 21.
424 Cabinet Office of the United Kingdom, 25–26.
425 Cabinet Office of the United Kingdom, 24 para 4.10.


open-source material. Given the global audience for cyberspace and cyber security-related

research, however, as well as the constantly evolving threatscape of the cyber domain and the

profusion of actors therein, I judge it unlikely that such communications have taken place. It is

outside the scope of this research to consider bilateral or multilateral communications in relation

to the development of cyber norms, laws, or rules of engagement.426 This makes it more difficult

to gauge whether this language is a further riskification, or an attempt at securitisation through

event (pre-emptive identification of elevating event). Equating a cyber-attack with a conventional

attack requires some sort of measure; there must be a designed equivalence matrix, both integrated

domestically and communicated (overtly or covertly) externally, to know whether such an event

serves to engage further risk management measures, or whether there would be an immediate

securitisation and implementation of mitigation measures. This would also decide whether active

or proactive CCI will suffice as a countermeasure, or whether cyber or kinetic offensive

countermeasures will be deployed. This is an issue with which the Tallinn Manual processes have

engaged, but international consensus has yet to be reached and the Tallinn 3.0 process is in the

planning stages.427

The Armed Forces are identified as having the responsibility of contributing to the defence and

resilience of the UK in cyberspace, a notably overt linkage between a publicly accessible domain

and a national military, which seems to be an implicit recognition of the explicit requirement of

both defensive and offensive capacities.428 Beyond even that, paragraph 4.44 of the 2015

NSS/SDSR not only identifies the exploitation potential of cyber-enabled information collection,

but the capacity of the UK to both protect networks and attack the networks of others:

Joint Forces Command will lead the MOD’s work to improve its understanding of our security environment, using new information technology and greater analytical power to exploit big data, social media and both open source and classified material. We will maintain our satellite communications and invest in

426 I note that there are international fora wherein such issues are discussed, such as the Tallinn Manual scholars and the United Nations Group of Governmental Experts (GGE) assigned to the cyber domain. However, there is also significant disagreement among States party within and beyond participants to these and similar fora. It is unlikely that such issues will be settled comprehensively or soon, and as such research in this field would be a valuable contribution to the creation of knowledge in this space. See Schmitt, Tallinn Manual on the International Law Applicable to Cyber Warfare; Schmitt and Vihul, Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations; United Nations Office for Disarmament Affairs, “Group of Governmental Experts - Cyberspace,” United Nations Office for Disarmament Affairs, 2021, https://www.un.org/disarmament/group-of-governmental-experts/.
427 Schmitt, Tallinn Manual on the International Law Applicable to Cyber Warfare; Schmitt and Vihul, Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations; See also NATO CCDCOE, “CCDCOE to Host the Tallinn Manual 3.0 Process,” CCDCOE, 2020, https://ccdcoe.org/news/2020/ccdcoe-to-host-the-tallinn-manual-3-0-process/. Types of CCI (passive, active, proactive) will be articulated in chapter five.
428 Cabinet Office of the United Kingdom, “National Security Strategy and Strategic Defence and Security Review 2015,” 40–41 paras 4.103-4.112.


cyber and space capabilities. We will protect our networks, and attack those of our adversaries if necessary…429

In terms of the development of CCI, this statement identifies explicitly the intent of the

Government to be able to exploit and thus, to collect significant quantities of data from the cyber

domain using new and emerging technologies. By definition, the capabilities will need to be

protected from the intelligence collection of adversaries in order to protect national security assets

(active CCI); the statement also explicitly identifies attacking the networks of others, which,

depending on the circumstances of the event, could be either proactive CCI or outright cyber-attack. What

this Strategy does not go on to develop is the point regarding the reversal of this capability or

intent in terms of the vulnerabilities of the British population to foreign intelligence collection

and/or psychological manipulation.

The 2015 NSS/SDSR also acknowledged the establishment of the Centre for Cyber Assessments

and the UK Computer Emergency Response Team (CERT), clear examples of both elevated

perception of risk and implementation of measures for management of and response to that

perceived risk.430 There is also an additional commitment to the creation of a National Cyber

Centre, under the leadership of the Government Communications Headquarters (GCHQ)

with the mandate of managing “future operational response to cyber incidents, ensuring that [the

Government] can protect the UK against serious attacks and minimise their impact,” and that all

British citizens have a role to play in the protection of “computers, networks and data.”431 The

proliferation of cyber-specific government units continues with the creation of an intelligence unit

dedicated to analysing and mitigating the criminal use of the dark web, elevating the perceived risk

to British interests of cyber-crime.432 Consistent with previous national security documents,

however, the strategic focus remains on the protection of assets (computers, networks and data)

over the users of those computers and networks and the producers of that (exploitable) data: the

audience. Without focusing on the psychological integrity of the audience, which will affect the

information integrity of the data they produce, as well as potentially affecting democratic behaviour

in terms of political decision-making,433 this is a one-sided approach to national resilience with

regards to cyberspace. There is a notable development in this strategy in relation to cyber education

429 Cabinet Office of the United Kingdom, 30 para 4.44. Edward Snowden revealed much of the GCHQ and Allied surveillance capacity, though the impacted programs belonged primarily to the NSA. See Greenwald, No Place to Hide: Edward Snowden, the NSA and the Surveillance State; Harding, The Snowden Files: The Inside Story of the World’s Most Wanted Man.
430 Cabinet Office of the United Kingdom, “National Security Strategy and Strategic Defence and Security Review 2015,” 40 para 4.104.
431 Cabinet Office of the United Kingdom, 41 paras 4.109-4.110.
432 Cabinet Office of the United Kingdom, 41 para 4.114.
433 This is something I discuss in Chapter six.


programs and funding for defence and cyber innovation and research,434 but this requires both

significant development, as well as a more near-future action plan to secure audience psychological

resilience to cyber-enabled threats.

2018: National Security Capability Review

Immediately noticeable in the 2018 National Security Capability Review (hereafter “the NSCR”)

is the recognition in the foreword of the document of the rise of cyber-attacks against the UK.

Placed directly after recognition of state-based threats to the liberal international rules-based order,

and prior to identifying the rise in homeland terrorist/extremist threats, this signals a change in

the government’s perception of the dangers posed by actors in cyberspace.435 This is an explicit

elevation in the perceived potential of cyberspace to present a danger to national security and

interests, but is more accurately categorised as a riskification than a securitisation, given that there

is no statement of existential threat or any sort of temporal justification for extreme measures of

threat mitigation, which would be required for a securitising move.

While the document neither revises existing principles nor introduces new ones, there is a

continued emphasis on British resilience to the existing and varied threats to national security,

continuing the movement from static defence strategies to risk mitigation and defensive resilience

of the last several strategies and national security documentation. In respect to those threats, the

NSCR does recognise two challenges faced by the UK that were not focused on in previous

documents: the growth of serious organised crime (SOC), and the potential of serious diseases to

affect the security of the UK.436 It is worth noting that there is a continued emphasis on and

recognition of extremisms and/or terrorism against the security of the UK. Against the backdrop

of recognised threats and the expansion of potential consequences of each, however, terrorism

and the dangers posed by extremism and its supporters have dropped to more of an identified ongoing

risk. This requires long-term management rather than special powers of mitigation, which should

necessarily be shorter-term than anything intended to manage an assessed long-term challenge like

(international or domestic) terrorism.

434 Cabinet Office of the United Kingdom, “National Security Strategy and Strategic Defence and Security Review 2015,” 74–78.
435 Cabinet Office of the United Kingdom, “National Security Capability Review” (The Stationery Office, 2018), 2, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/705347/6.4391_CO_National-Security-Review_web.pdf.
436 Cabinet Office of the United Kingdom, 5.


Threats and Drivers

The articulation of the growth of state-based threats presents an interesting devolution toward

more combative international relations than has necessarily been the case for several decades,

through the identification of historically hostile entities (such as the Russian Federation) as

upcoming threats to the national security of the UK; this is in line with the expectations of threat

elevation theory.437 Having a record of hostile or combative interactions with Russia’s prior

incarnation, the Soviet Union (USSR), the UK is able to successfully identify and riskify action

taken by Russia, and to adequately marshal resources for resilient defence against a known risk and

rising threat. That the Russian Federation is also singled out during the NSCR articulation of the

technological and/or cyber threats to the UK is telling: the UK may not anticipate more traditional

kinetic attacks from Russia, but has identified the cyber domain as the most likely vector for

current and forthcoming attacks from the identified hostile party.438 Certainly, the use of the phrase

“Hostile State Activity” is a concerning indication of the UK’s perception of Russian action and

intent in cyberspace and represents an elevation of the language used to identify potential threat

actors.439

The short discussion of ‘counterespionage powers’ on p.8 of the NSCR is the most explicit

reference to counterintelligence requirements or responsibilities in any of the national security

documentation thus far analysed for the UK.440 It is notable that the reference to counterespionage

powers is in the context of human agents entering and/or operating within the UK who may

be a threat to British national security, but considering the apparent reluctance to identify

counterintelligence in explicit terms prior to this NSCR, it is a significant step.441 Again,

Russian agents and diplomats are explicitly identified as elements of specific concern.442 The

strengthening of cyber defences and the development of space technologies for the purposes of

individual and collective self-defence is noted in the same section, though removed enough so as

not to relate directly to the importance or practice of (cyber) counterintelligence in those same

sectors.443 Investment in the defence, intelligence agencies, counterterrorism, and cyber sectors is

also stated,444 linking the dangers of terrorism and cyber, in particular, with deterrence. The UK

framework for national security identified and outlined in SDSR 2015 continues to guide national

437 Cabinet Office of the United Kingdom, 6.
438 Cabinet Office of the United Kingdom, 6.
439 Cabinet Office of the United Kingdom, 6.
440 Cabinet Office of the United Kingdom, 8.
441 Cabinet Office of the United Kingdom, 8.
442 Cabinet Office of the United Kingdom, 8.
443 Cabinet Office of the United Kingdom, 8.
444 Cabinet Office of the United Kingdom, 9.


security policy in the NSCR 2018, with three overarching objectives: i) protect [the British] people,

ii) project [British] global influence, and iii) promote [British] prosperity.445 It is worth noting that

in identifying what each of these goals means (in short), much of the discussion links back both to

the traditional concept of territorial defence and to the importance of infrastructure in

protecting the British way of life. It also becomes a multi-sectoral risk elevation analysis, as ‘way

of life’ includes both the society/identity and economy sectors, to say nothing of political and

military (defence) sectors. While it can be assumed that the ‘way of life’ very much involves the

British citizenry, there is no indication of any intent to educate the British public about threats

emanating from cyberspace, nor the ability or capacity to protect themselves from undue influence

resulting from (so-called) hostile state activity conducted through cyberspace and/or cyber-

enabled technologies. This is a concerning indication that there is a lack of formal (or any)

recognition that the audience in a democracy is vulnerable to foreign interference through cyber-

enabled information warfare, and as such is a threat vector for democratic governments. The

audience needs to engage in individual CCI practices, which will contribute in aggregate to national

security. They also need to be made aware of, and educated about, foreign interference through

psychological manipulation such as disinformation campaigns. This will be discussed further in

chapter six in a short vignette considering a successful elevation of the foreign interference threat

to audiences, with regard to the practices of the Swedish government. The Swedish administration

has recognised disinformation as a threat and the audience as vulnerable, and engaged in

appropriate countermeasures which I will cover in due course.

Terrorism and counterterrorism remain cornerstone issues for British national security, which is

both understandable and expected following the London stabbing and vehicle attacks and the Manchester

concert bombing – recall from chapter three that according to threat elevation analysis, in addition

to events themselves being elevating acts, the legitimacy of threat elevation processes requires

(historical) evidence.446 An interesting aspect of the discussion surrounding the UK

445 Cabinet Office of the United Kingdom, 9.
446 BBC News, “Parsons Green: Underground Blast a Terror Incident, Say Police,” BBC News, September 15, 2017, sec. UK, https://www.bbc.com/news/uk-41278545; BBC News, “Manchester Arena Blast: 19 Dead and More than 50 Hurt,” BBC News, May 23, 2017, sec. Manchester, https://www.bbc.com/news/uk-england-manchester-40007886; Danny Boyle, Chris Graham, and David Millward, “Finsbury Park Mosque Attack Latest: Theresa May Vows Hatred and Evil Will Never Succeed as Labour Warns of Rise in Islamophobia,” accessed May 15, 2021, https://web.archive.org/web/20170621045319/https://www.telegraph.co.uk/news/2017/06/19/finsbury-park-mosque-latest-terror-attack-london-live/; CBS News, “UK Police: 22 Confirmed Dead after Terror Incident at Ariana Grande Concert,” CBS News, May 23, 2017, https://www.cbsnews.com/news/ariana-grande-concert-manchester-arena-explosion/; Lizzie Dearden and May Bulman, “Isis Claims Responsibility for London Terror Attack,” The Independent, June 4, 2017, https://www.independent.co.uk/news/uk/home-news/london-terror-attack-isis-claims-responsibility-borough-market-bridge-a7772776.html; Vikram Dodd, “Palace Terror Suspect Was

Courteney O’Connor – PhD Thesis 2021


counterterrorism strategy is the stated intent to make “the internet as hostile an environment as

possible for terrorists to use for propaganda, and ensure we have the critical access we need to

information on their communications.”447 At a minimum, this indicates an existing practice of

active CCI, and, depending on the measures utilised, could potentially mean the use of proactive

CCI operations. The typology of cyber counterintelligence will be discussed in the next chapter.

While these elements are very specifically linked to terrorist use of the Internet, the capacity for

operationalising against one actor in cyberspace can just as well be turned to another; it is no great

leap to assume that this same concept is in play concerning state actors or organised (aligned or

non-aligned) groups or individuals active in the cyber domain. Regardless, it is confirmation that

the UK is moving (or has moved) beyond an active defence of ‘their’ cyberspace to more proactive

CCI in support of national security. Thus, here we see a direct connection between the gradual

elevation of cyberspace as an increasingly important risk to UK national security, and the increased

role and need for effective CCI to monitor, mitigate, and respond to these threats, particularly

against intelligence collection and exploitation in cyberspace.

Coverage of, and attention to, cyber security could certainly be extended; in the NSCR, only two

pages are dedicated specifically to ‘cyber’ issues;448 cyber-crime and cyber threats from hostile states

are dealt with in the same paragraph. There seems to be a general recognition that UK cyber

security has not kept pace with the variety and variance of threats, which is certainly a step in the

right direction. However, for all the consideration, personnel recruitment and training, and

commitment to education programs like Cyber Aware449 for businesses, a greater focus is necessary

on current professional and older generations for cyber security ‘now’, not just on the youth

Uber Driver Who Had Tried to Get to Windsor Castle,” the Guardian, August 31, 2017, http://www.theguardian.com/uk-news/2017/sep/01/buckingham-palace-terror-suspect-had-tried-to-get-to-windsor-castle; Vikram Dodd and Matthew Taylor, “London Attack: ‘Aggressive’ and ‘Strange’ Suspect Vowed to ‘Do Some Damage,’” the Guardian, June 20, 2017, http://www.theguardian.com/uk-news/2017/jun/19/several-casualties-reported-after-van-hits-pedestrians-in-north-london; Alexandra Ma, “Parsons Green: Details on London Underground Bomb,” Insider, September 18, 2017, https://www.businessinsider.com/parsons-green-london-underground-attack-bomb-not-worked-properly-experts-2017-9?r=AU&IR=T; Kim Sengupta, “The Final WhatsApp Message Sent by Westminster Attacker Khalid Masood Has Been Released by Security Agencies,” The Independent, April 28, 2017, https://www.independent.co.uk/news/uk/crime/last-message-left-westminster-attacker-khalid-masood-uncovered-security-agencies-a7706561.html; The Irish Times, “London Terror Attack: Who Was Khuram Shazad Butt?,” The Irish Times, accessed May 15, 2021, https://www.irishtimes.com/news/world/uk/london-terror-attack-who-was-khuram-shazad-butt-1.3109174. 447 Cabinet Office of the United Kingdom, “National Security Capability Review,” 19. 448 Cabinet Office of the United Kingdom, 21–22. 449 Cyber Aware is the British Government’s published advice to citizens on increasing cyber security and which particular steps can be taken, 3 of which involve password security – classified in this thesis as (individual) active CCI. See National Cyber Security Centre, “Cyber Aware,” Cyber Aware, 2021, https://www.ncsc.gov.uk/cyberaware/home. See also “The National Cyber Security Centre,” The National Cyber Security Centre, 2021, https://www.ncsc.gov.uk/.

Chapter 4 – The United Kingdom


for future cyber security. The use and concept of Active Cyber Defence (ACD) is an interesting

point of reference in terms of specific measures the UK is undertaking, claiming to block “tens of

millions of attacks every week”.450 While there is no specification of what ACD might actually

comprise, it does appear to closely resemble what I have called active CCI,451 perhaps even

proactive CCI given the stated reduction in time that phishing websites are online.452 The

encouragement for industry and global partners to follow suit in implementing such measures that

make cyber-crime “more risky” does indicate a more aggressive, offensive approach to cyber

defence than has previously been observed.453 This is both a clear elevation in threat perception of

cyberspace comparative to previous strategies (indicating continuous evolution of strategic threat

perception), and a clear elevation of the necessity for active defence and countermeasures in

cyberspace. While the audience has yet to be developed as either vulnerable to foreign interference

or as a threat vector, language like that used to describe ACD lends credence to the typologies of

CCI I offer in chapter five, particularly in relation to active and proactive operationalisation, as

well as the strategic vs. tactical view on CCI. Throughout the section on cyberspace, as well as the

entirety of the NSCR, the concept of resilience underpins the strategy of the UK in terms of

defence. The ongoing commitment to infrastructure investment and education of technical

operators is a good first step, but further involvement in public education in and employment of

CCI is critical to ensuring the success of the current British system of democracy.

Analysis: Cyber Security Strategies

This section analyses the three cyber security strategies the UK has so far released, covering the

periods 2009-2011, 2011-2015 and 2016-2021. While there are considerably fewer strategic policy

frameworks for cyber security than there are for national security in general, given their more specialised

nature, these three documents present a more detailed analysis of cyberspace as a threat vector.

Additionally, they contain a more developed framework for countermeasures and responses to

risks and threats in relation to cyberspace than is present in the national security strategy

documentation detailed in the preceding sections.

450 Cabinet Office of the United Kingdom, “National Security Capability Review,” 21. 451 See chapter five. 452 Cabinet Office of the United Kingdom, “National Security Capability Review,” 21. 453 Cabinet Office of the United Kingdom, 21.


2009: Cyber Security Strategy of the United Kingdom: safety, security and resilience in cyber space

Released in tandem with the 2009 Update to the National Security Strategy,454 the 2009 Cyber

Security Strategy of the United Kingdom (hereafter “the 2009 CSS”) is the inaugural cyber security

strategy of the UK.455 This document, shorter than subsequent strategies at 26 pages including

appendices, lays much of the foundation that the 2011 and 2016 cyber security strategies build

upon. There is immediate recognition of the reliance of the UK on the continued and efficient

functioning of cyberspace, as well as the increasing reliance on cyberspace of all sectors of British

society. Particularly pertinent to this study is the acknowledgement that not only do cyberspace

and cyber-enabled activities “affect organisations, individuals, critical infrastructure, and the

business of government,”456 but the fact that it points out that “the UK needs a coherent approach

to cyber security, and one in which the Government, organisations across all sectors, the public, and

international partners all have a part to play.”457 Given that historically, national security is the realm and

remit of the sovereign state, the recognition of the British government that an effective approach

to security and resilience in cyberspace requires participation from multiple sectors of society

beyond the state itself indicates that both approaches to, and the practice of, CCI will be requisite

for security at individual, collective, and national levels. It is also an indication that to properly

riskify cyberspace and introduce management measures for that risk, the government will need to

appeal to a broad audience (i.e., the public) given the requirement for public participation in, and

contribution to, efficient cyber counterintelligence practices.

The strategic objectives of the 2009 CSS are laid out explicitly and early. The UK Government

“will secure the UK’s advantage in cyberspace…by reducing risk from the UK’s use of cyber

space [sic]…and exploiting opportunities in cyber space… through improving knowledge,

capabilities and decision-making.”458 These are broad strategic objectives that set lofty goals, but

given that this is the inaugural cyber security strategy of the UK and frameworks for cyber security

and resilience are nascent, this is not unexpected. Specifically because this is the inaugural cyber

security strategy, there is an excellent opportunity to assess the development of the UK’s threat

perception of cyberspace, as well as the approach to CCI and cyber-enabled threats from the outset

of official documentation.

454 Examined earlier in this chapter. 455 Cabinet Office of the United Kingdom, “Cyber Security Strategy of the United Kingdom: Safety, Security and Resilience in Cyber Space” (The Stationery Office, 2009), 4, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/228841/7642.pdf. 456 Cabinet Office of the United Kingdom, 3. 457 Emphasis added; Cabinet Office of the United Kingdom, 3. 458 Cabinet Office of the United Kingdom, 4.


Threats and Drivers

The 2009 CSS sets an immediate understanding of the increasing risks posed by cyberspace with

the creation of two new agencies in order to properly address the UK’s perceived cyber challenges:

the Office of Cyber Security (OCS), and the Cyber Security Operations Centre (CSOC).459 As

briefly mentioned previously, appearing as early as it does in the Strategy, the creation of two new

agencies is a clear and overt acknowledgement of the threat elevation of cyberspace by the

governing Administration of the UK. The 2009 CSS also contextualises the British approach to

cyber security and resilience by analogising the security of cyberspace in the 21st century with

maritime security and air security in the 19th and 20th centuries, respectively.460 The Strategy also

identifies the strategic goals specified as part of a ‘vision for cyber security’ that will require a cross-

government and whole-of-society approach in order to be effective and efficient.461 Paragraph 1.2

explicitly requires contribution to the ongoing security and resilience of cyberspace from multiple

sectors – “[t]his document is aimed at all those people who work, communicate or interact using

cyber space, and therefore have a responsibility for maintaining and improving its security; this

includes individual members of the public, organisations across all sectors, and the

Government.”462 This paragraph indicates that there is not only a recognition that cyber security

cannot be addressed by the sovereign state alone, but that there is an implicit acknowledgment

that CS and CCI practices will need to be undertaken by all sectors of society to contribute in

aggregate to the overall level of cyber resilience and security of the state as a collective.463

The 2009 CSS also offers a definition of cyberspace in the introductory chapter specifying that

“cyber space encompasses all forms of networked, digital activities; this includes the content of,

and actions conducted through digital networks.”464 This is a very broad definition. By specifying

the content of, and actions conducted through digital networks as ‘being’ cyberspace, the 2009

CSS inherently acknowledges that cyber-enabled information threats — like disinformation —

459 Cabinet Office of the United Kingdom, 5; p. 17 para 3.8. 460 “Just as in the 19th century we had to secure the seas for our national safety and prosperity, and in the 20th century we had to secure the air, in the 21st century we also have to secure our advantage in cyber space.” Cabinet Office of the United Kingdom, 5. 461 Cabinet Office of the United Kingdom, 7 para 1.1. 462 Cabinet Office of the United Kingdom, 7 para 1.2. 463 Where cyber security (CS) refers to the overall approach to the defence of cyberspace and connected networks, cyber counterintelligence (CCI) refers to the identification and subversion of adversary intelligence activity in and through cyberspace. CCI is defined in chapters two and five, and the relationship between cyber security and cyber counterintelligence is illustrated in Figure 1 The Elements of Cyber Security 464 Cabinet Office of the United Kingdom, 7. Note that where cyberspace is written as two separate words (cyber space), this is how the term is used in the 2009 CSS and I have preserved the official terminology in direct references to the text.


must be mitigated by the state. When combined with the insistence on cyberspace and cyber

security being whole-of-society responsibilities, I infer that cyber security and CI practices are

expected to be conducted at all levels – individual, group/corporate, and government; and that

disinformation is one cyber-enabled threat that must also be mitigated at all levels.

The definition also fundamentally links critical national infrastructure with the efficiency,

resilience, and security of cyberspace. The “physical building blocks of cyber space” are considered

to be individual computers (machines) and communications systems – this is a basic definition of

cyberspace and future iterations of national security documentation do have a more informed,

complex, and sophisticated understanding of the physical and virtual infrastructure of

cyberspace.465 Those building blocks, or “technical elements underpin many of our daily activities,

both at work and in our personal lives, and also fundamentally support much of our national

infrastructure and information.”466 This is an early indicator of the schism between recognition of

threats to infrastructure versus threats to audiences through cyberspace and cyber-enabled

technologies like social media platforms; the UK government recognises the danger to assets like

infrastructure and information, but does not develop that understanding further in terms of how

cyberspace and the information it is used to convey could be used to affect democratic audience

psychology and behaviour. The Strategy does reiterate the reliance of the UK on cyberspace in

relation to consumer transactions and individual communications and social media interactions,

but more in the sense of the technologies and efficiency of platforms than in any sense of

threats to perception and behaviour through information warfare or disinformation.467 Notably,

there is a single appearance of the word ‘disinformation’ in the 2009 CSS, in paragraph 2.6,

concerning state actor use of cyberspace; unfortunately that point is not developed any further,

and this is one of the analytical and knowledge gaps that this thesis seeks to address.468

There is also an identification of what the UK government means when the term ‘cyber security’

is used, though it is less a definition than a loose description of a concept – “[c]yber security

embraces both the protection of UK interests in cyber space and also the pursuit of wider UK

security policy through exploitation of the many opportunities that cyber space offers.”469

Interestingly, the point is made that cyber security is not considered an end in and of itself; it is

465 Cabinet Office of the United Kingdom, 7 para 1.5. 466 Cabinet Office of the United Kingdom, 7 para 1.5. 467 Disinformation as a CCI problem will be examined in the final chapter of this thesis. 468 Cabinet Office of the United Kingdom, “Cyber Security Strategy of the United Kingdom: Safety, Security and Resilience in Cyber Space,” 13 para 2.6. 469 Cabinet Office of the United Kingdom, 9 para 1.8. See also Figure 1 The Elements of Cyber Security, which illustrates the relationship between cyber security and cyber counterintelligence and neatly aligns with this general definition.


more of a process and something to work towards, which is perhaps where the emphasis on

resilience in this and subsequent cyber security strategies originates.470 In terms of the continuing

reliance on and use of cyber space and the Internet, paragraph 1.12 states that the free flow of

ideas and the openness of the Internet are fundamental underpinnings of, and ways to strengthen,

democratic ideals and economic health.471 Given that there is a recognition of the use of

disinformation (even if explicitly by state actors only) in subsequent sections of the 2009 CSS, it is

surprising that no caveat is made for ensuring the accuracy and integrity of the information that

does flow through the open internet and across cyberspace. Recognition of that danger, and of the

consequent dangers to democratic integrity if audience psychology and behaviour are manipulated

through the use of information operations, is a necessary precursor to an efficient and holistic

approach to CCI, which will contribute to the aggregate cyber security of the state.472 Espionage is

also identified as a threat that can be undertaken through cyberspace, though little development is

given to this point.473 Related to espionage but also underdeveloped in the 2009 CSS is the

identification of ‘influence’ as an evolving threat to the UK – this is a specific danger to audience

psychology and potential manipulation of perception, whether that audience is officials of

government or the democratic public, but there is only one use of the word in the Strategy and the

point is once again not pursued further.474

The section on state actors explicitly identifies cyberspace as a threat vector for intelligence and

information operations, indicating at least a fundamental (if implicit) acknowledgment of the split

between assets and audiences in relation to cyberspace security and resilience:

The most sophisticated threat in the cyber domain is from established, capable states seeking to exploit computers and communications networks to gather intelligence on government, military, industrial and economic targets, and opponents of their regimes. The techniques used by these state actors go beyond “traditional” intelligence gathering and can also be used for spreading disinformation and disrupting critical services.475

I also infer from this an acknowledgement of the profusion of actors in cyberspace capable of

actuating damage to the state or its representative agencies. The caveat of the state being the most

sophisticated actor in cyberspace serves as an additional justification for the choice to focus on the

threat perception of the state and the threats to the state of disinformation and inefficient threat

470 Cabinet Office of the United Kingdom, 9 para 1.9. 471 Cabinet Office of the United Kingdom, 10 para 1.12. 472 CCI is examined in chapter five of this thesis. 473 “Industry partnerships - Case Study” Cabinet Office of the United Kingdom, “Cyber Security Strategy of the United Kingdom: Safety, Security and Resilience in Cyber Space,” 11; 12 para 2.4. 474 Cabinet Office of the United Kingdom, 12 para 2.4. 475 Cabinet Office of the United Kingdom, 13 para 2.6.


elevation. I note the specification by the 2009 CSS that “computer systems, networks and

applications all rely on people for their development, delivery, operation and protection.”476 This is

overt acknowledgement of the importance of the human factor in cyber security, and thus in CCI;

explicit identification of the audience as a crucial element in a resilient and secure cyberspace.

This point is not developed further in the 2009 CSS, a trend which continues in subsequent cyber

strategies. The separation of assets (as infrastructure and data) and audiences (as the democratic

public) is an inefficient approach to threat elevation and leaves the UK unprepared for concerted

psychological manipulation campaigns, which will be considered in the sixth chapter of this thesis.

2011: The UK Cyber Security Strategy: Protecting and promoting the UK in a digital world

The second cyber security strategy of the UK acknowledges early in the introduction that there is

a growing dependence on cyberspace, a situation that creates new risks.477 Consistent with the

more general national security strategies and reviews, the cyber security strategy (hereafter “the

2011 CSS”) links these new risks specifically to data and systems. Particularly relevant to this

thesis, one of the acknowledged concerns of the risk to critical data and systems is from hackers

and foreign intelligence services – this represents one half of the holistic elevation of cyberspace,

lacking still the focus on democratic audiences that would promote the resilience of British society

to foreign interference, including (and perhaps especially) through cyberspace, a point covered in

chapter six. Increasing the resilience and security of cyberspace is identified as part of the

overarching purpose of the 2011 CSS over the four-year term to 2015, followed almost

immediately by acknowledging the ‘new normal’ of cyber threat to the public.478 This is an

important acknowledgement of the role the public plays in terms of being targeted by cyber threats,

but the 2011 CSS remains heavily weighted toward consideration of the technical over the

human/psychological.

The 2011 CSS contains an explicit acknowledgement of the split between action and policy — that

which is unclassified and can be considered in the public domain, and that which is classified —

restricted to those personnel and Service members acting to ensure the resilience and security of

the state.479 Such a statement identifies that there is likely considerable development of policy and

476 Emphasis added; Cabinet Office of the United Kingdom, 14 para 2.9. 477 Cabinet Office of the United Kingdom, “The UK Cyber Security Strategy: Protecting and Promoting the UK in a Digital World” (The Stationery Office, 2011), 5, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/60961/uk-cyber-security-strategy-final.pdf. 478 Cabinet Office of the United Kingdom, 8–9. 479 Cabinet Office of the United Kingdom, 9.


methods being undertaken out of the public domain. We can infer that some of these policies and

methods will involve aggressive and possibly even offensive action against adversaries through

cyberspace and/or cyber-enabled technologies. The inherent unknowability of these elements does

not diminish the analytic value of the observations of this thesis, given that much research and signalling occur

in the public domain, but this may prove to be an area of interest for future research.

Threats and Drivers

The 2011 CSS also provides an updated definition of cyberspace, modified from the 2009 Cyber

Security Strategy. According to the 2011 governing Administration of the UK, cyberspace is an

“interactive domain made up of digital networks that is used to store, modify, and communicate

information.”480 It also identifies cyberspace as a repository of knowledge and claims that access

to it strengthens open societies. The assertion about the free flow of ideas and information is an

interesting one – while it is true that access to ideas and information can be a nett positive for

developing and developed nations, it is also true that this free access is available to hostile

actors.481 An actor with a working knowledge of cyberspace, and particularly social networking

platforms, can use that access and flow of information to affect target populations, either directly

or by polluting the information ecology such that the integrity of the available information is

suspect or outright untrustworthy. This is the cyber era equivalent of ‘poisoning the well’.482

Following the explicit acknowledgement of the increasing value of cyberspace to the interests and

security of the UK,483 four cyberspace-specific classes of threat are identified: criminals, states,

terrorists, and hacktivists.484 The initial focus on the dangers of cyber criminals is consistent with

previous national security documentation, in that it is assessed to be against money, data, and

systems (assets), more than against the people that create and use said money, data, and systems.

There is also a development of threat understandings in section 2.5 of the 2011 CSS, wherein the

danger of foreign espionage in cyberspace is identified.485 This section specifically articulates the

spread of disinformation as a danger, though qualifies the danger as emanating from patriotic

480 Cabinet Office of the United Kingdom, 11 para 1.3. 481 Cabinet Office of the United Kingdom, 12 para 1.10. 482 ‘Poisoning the well’ originally meant to introduce doubt about the commitment to truth of a person making an argument, such that the audience might not trust what they say regardless of the factual accuracy of the discourse. In this context, I take it to mean introducing doubt about the integrity of the discourse itself (the information) to which the layperson or decision-maker has access, thus influencing the decisions they may make without exerting specific efforts against any single individual. For an examination of the poisoning the well fallacy, see D.N. Walton, “Poisoning the Well,” Argumentation 20, no. 3 (2006): 273–307, https://doi.org/10.1007/s10503-006-9013-z. 483 Cabinet Office of the United Kingdom, “The UK Cyber Security Strategy: Protecting and Promoting the UK in a Digital World,” 15 para 2.1. 484 Cabinet Office of the United Kingdom, 15–16 para 2.4-2.8. 485 Cabinet Office of the United Kingdom, 15 para 2.5.


hackers.486 While it is important both for this analysis and the overall threat conception of the UK

that such a danger has been identified and articulated, it is not enough to restrict the focus of the

defence and security forces to disinformation produced by patriotic hackers. Moreover, the

operating assumption here is that it will be possible to identify and isolate only those producers of

mis- and disinformation that were operating on the specific behalf or orders of a state adversary.

This is simply not the case; disinformation can be produced and disseminated through both

internal and external actors and must be identified and recognised as a multi-vector risk. The

following section, 2.6, does identify the terrorist use of cyberspace to spread propaganda;487 again,

this is important but there need to be fewer restrictions on the parties identified as producing these

unique forms of information warfare. The danger of bounding a risk in this way is that alternate

actors may also be conducting such operations, and they may not be identified in time to manage

or mitigate, because the actors were not on ‘the list’ of accepted originators. While the risk has

been elevated, both the articulation and proposed management of that risk are problematic and

require further consideration.

The idea that cyberspace is important for the free flow of information and ideas is reiterated after

another articulation of the dangers posed to infrastructure and critical systems by cyberspace and

cyber-enabled technologies. Section 2.19 discussed the importance of cyberspace as a means for

the spread of ideas, specifically in the context of less-liberal states that attempt (and succeed at)

restricting civil liberties and access to cyber-enabled platforms.488 Once more, the alternative view

of the dangers to liberal societies of the free and relatively unrestricted flow of ideas and

information does not take into account that illiberal states can infiltrate false information and

subversive ideas into that flow of information just as easily (if not more so) as citizens of liberal

democracies. While the acknowledgement of the flow of information is important in terms of

analysing the spread of disinformation and the elevation of the audience in terms of cyber risk,

there is insufficient consideration of how that information is used and whether the integrity of that

information is assured or enforceable. The free flow of information is one thing; the accuracy and

consequences of the information stream are entirely another, and have not been properly considered

in relation to the integrity of modern democracy or national security strategic thinking.

What the 2011 CSS does offer in terms of the elevation of the audience for cyber security is a

recognition that individual contributions to cyber security will increase the overall resilience and

486 Cabinet Office of the United Kingdom, 15 para 2.5. 487 Cabinet Office of the United Kingdom, 15 para 2.6. 488 Cabinet Office of the United Kingdom, 17 para 2.19.


security of the state, though this is a brief articulation at best.489 The employment of CCI practices

on the part of individual citizens and residents will reduce the overall vulnerability of British society

to the risks enabled or increased by cyberspace and cyber-enabled technologies, which makes it

doubly surprising that the importance of the (democratic) audience to ongoing resilience and

security in cyberspace has not been appropriately elevated. Despite this recognition, the

subsequent sections return to a focus on the importance and vulnerability of critical infrastructure,

reducing the elevation impact of the preceding recognition. The failure to appropriately consider

the audience aspect of cyber resilience and security will prevent a truly holistic and efficient

approach being possible.

The 2011 CSS announces the creation of the Defence Operations Group, as well as the Global

Operations and Security Control Centre within the Ministry of Defence, which will “act as a focus

for cyber defence for the armed forces.”490 While centralisation of focus will hopefully prevent

overlapping of mandate and jurisdiction issues, for the purposes of threat elevation analysis, the

creation of further agencies and groups is a good indication of recognised risks and management

intent. In designating an existing agency, or creating one, and tasking it with a new or emerging

problem, a government indicates recognition of the growing threat or importance of the problem

area. In selecting a method of countermeasure and an actor to engage in those countermeasures,

the state has acknowledged the elevated importance of that issue. In terms of how CCI might be

practiced by these new actors, the 2011 CSS uses the phrase ‘proactive measures’ in terms of the

Control Centre’s intent and capacity to disrupt threats to British information security.491 On the

face of it, this sounds like proactive CCI and could reasonably be inferred to involve proactive

pursuit of threat actors. Whether such action is in accordance more with the offered definition of

proactive CCI or simply active CCI is likely to remain classified for the foreseeable future. The

methods used by state actors for active cyber defence or proactive cyber defence will depend on

the context of the situation, as well as political feeling at the time. Moreover, and particularly in

relation to proactive CCI, it may be unclear where it ends, and offensive (retaliatory) measures

begin. This is an area of future research that will require considerable time and effort, considering

the national security impact of cyber methods and practices.

In terms of education and protection of the audience as a function of CCI, the 2011 CSS does

identify public awareness and the importance of best practice, as well as providing goals for

489 Cabinet Office of the United Kingdom, 22–23 para 3.8.
490 Cabinet Office of the United Kingdom, 26–27 para 4.9-4.10.
491 Cabinet Office of the United Kingdom, 27 para 4.10.

Courteney O’Connor – PhD Thesis 2021


individual resilience and security in cyberspace – these all pertain to (simple) technical security,

without a recognition of cyber-enabled disinformation as something against which the individual

should guard, or a directive to be aware of information accuracy and integrity.492 Despite the earlier

mentions of disinformation (by patriotic hackers) and propaganda (by terrorists), the 2011 CSS

does not develop the understanding of these risks for the audience, nor the idea that those same

threats may in fact emanate from state actors as much as non-state actors. The potential risks to

the audience will be considered further in chapter six, along with an examination of success in

practice.

2016: National Cyber Security Strategy 2016 – 2021

The foreword of the most recent cyber security strategy of the UK acknowledges that cyber-attacks

are becoming more frequent, sophisticated, and damaging. It acknowledges that to ensure national

cyber security and resilience to the greatest possible degree, there is a requirement for a whole-of-

society capability that specifies and includes individuals. This is the most explicit identification yet

of the necessity of individual participation in the pursuit of national security and resilience in

cyberspace; it is as much an admission of the overly broad scope of cyber security as an acknowledgement of individual importance. The 2016 Cyber Security Strategy (hereafter “the 2016

CSS”) identifies in the document preface the necessity of “managing and mitigating” threats, consistent with previous national security documentation; however, there is a continued focus on systems and data security despite acknowledging the place of the individual (the audience) in

practicing security.493 The recognition of a tiered process of risk management and threat mitigation

aligns closely with the framework of understanding suggested by elevation analysis, and, despite

the continued split focus in favour of technical systems over a holistic view of cyber security and

resilience, the natural development of the British national security documentation along the

framework of threat elevation is encouraging. We thus see not only the elevation of cyberspace

occurring, but also the increased need for CCI to play a role in responding to these threats.

The 2016 CSS contains an explicit admission that the UK possesses the capability and “means to take offensive action in cyberspace, should [the UK] choose to do so.”494 Given the contested nature

of cyberspace and the profusion of cyber-enabled technologies, it is unsurprising that developed

492 Cabinet Office of the United Kingdom, 31 para 4.37-4.39.
493 Cabinet Office of the United Kingdom, “National Cyber Security Strategy 2016-2021” (The Stationery Office, 2016), 7, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/567242/national_cyber_security_strategy_2016.pdf.
494 Cabinet Office of the United Kingdom, 9 para 1.5.

Chapter 4 – The United Kingdom


nations would pursue an offensive capability. For the UK to admit and publish this capability is

perhaps unusual, given the statements of previous national security documentation that

international law, the expectation of civility, and the importance of international partnerships and

relationships apply in cyberspace as in the physical realm; to admit an offensive capacity and a

willingness to use it is a strong signal to send.495 This is perhaps tied to the recognition of the

increasing vulnerability of the state to cyber-enabled exploitation, given the growth of cyber

physical systems like the Internet of Things (IoT) and the capacity of those devices to extend the

“threat of remote exploitation,”496 with the Strategy aiming “to increase cyber security at all levels

for [the] collective benefit [of the United Kingdom].”497

Threats and Drivers

In the 2016 CSS, cyber security is defined as “the protection of information systems, the data on

them, and the services they provide.”498 This definition is consistent with previous documents’

conception of cyber security and aligns with the general understandings of security and resilience

in cyberspace produced by the analysis of those documents. However, there is also room in this

definition for it to be applied to the human element in both cyber security and CCI practices.

While information systems and the data they contain are clearly meant to describe the ‘assets’ pillar

of the assets vs. audiences understanding of this analysis,499 the above definition also identifies the

services provided by those technical systems as necessitating security measures. This could be

developed into an understanding that cyber-enabled services and platforms like social networking

and mass communication sites need to be secured and made more resilient to interference to

protect the audience. While I judge it unlikely that this is the intended meaning of the definition,

there is definite scope for the broadening of the definition and thus the understanding of what

exactly cyber security is, or should look like, in the UK.

The 2016 CSS acknowledges a broadening of threat in cyberspace, identifying “an expanding range

of devices” as a specific vulnerability and specifying the IoT as creating new opportunities for

495 Though this is in line with an increased elevation and arguably securitisation of cyber security. The 2017 Australian International Cyber Engagement Strategy, for instance, was more explicit about Australia’s willingness to use offensive cyber capacities. Department of Foreign Affairs and Trade, “Australia’s International Cyber Engagement Strategy” (Australian Government, 2017), 54–55, https://web.archive.org/web/20180216140850/https://www.dfat.gov.au/international-relations/themes/cyber-affairs/aices/preliminary_information/introduction.html.
496 Cabinet Office of the United Kingdom, “National Cyber Security Strategy 2016-2021,” 13 para 2.4.
497 Cabinet Office of the United Kingdom, 14–15 para 2.10.
498 Cabinet Office of the United Kingdom, 15 para 2.11.
499 This will be examined in-depth in chapter six.


exploitation.500 While this is most likely an identification of the ways in which insecure devices can

be attacked, co-opted, or otherwise exploited, this is another area where an unbounded definition

could be interpreted as being permissive of measures intended to aid in the resilience and security

of audience psychology and decision-making capacity. Consistent with the 2011 CSS,501 the

particular threats identified in the 2016 CSS are cyber criminals; states and state-sponsored threats;

terrorists; hacktivists; and script kiddies.502 In addition to these, insiders are recognised as a threat

to national security.503

The 2016 CSS contains an interesting modification of the 2011 CSS identification of state threats, evolving it into the current category of states and state-sponsored threats. From this, I infer a greater understanding

of non-state actors operating in cyberspace either at the behest or on the behalf of state actors as

presenting a higher (perceived) threat than in previous iterations of national security documents.

The definition of CCI offered in chapters two and five of this thesis acknowledges the risks from

and to non-state actors specifically because of the decreasing threshold of entry into cyber conflict

and the growing capability of these actors to affect behaviour and policy at the interstate level.

While there are non-state actors in cyberspace that do not act at the behest or on the behalf of

states, it is also true that these groups can be categorised as criminal enterprises and, as such, are

already identified by the UK national security documentation as a current threat. This is a sign of

development in the UK understanding of the cyber threatscape and a clear elevation of both

understanding and risk perception; the understanding among the British Administration of what

kinds of actors can constitute a threat to national resilience and security in cyberspace is evolving

to consider a greater array of threat vectors.

The best recognition thus far seen in the national security documentation of the UK that the

audience as a collective of individuals is important in any consideration of security and resilience

in cyberspace is the declaration in Section 3.19 of the 2016 CSS that “the public is also insufficiently

cyber aware.”504 This simple statement serves as a recognition that the threat to the audience of

500 Cabinet Office of the United Kingdom, “National Cyber Security Strategy 2016-2021,” 17–22.
501 Cabinet Office of the United Kingdom, “The UK Cyber Security Strategy: Protecting and Promoting the UK in a Digital World,” 15–16 para 2.4-2.8.
502 A script kiddie, for example, is an individual lacking the knowledge or skill to write their own malware; they use the malware written by others to engage in attack and exploitation. Australian Cyber Security Centre, “Script Kiddie,” Australian Signals Directorate | Australian Cyber Security Centre, 2021, https://www.cyber.gov.au/acsc/view-all-content/glossary/script-kiddie; Patrick Putnam, “Script Kiddie: Unskilled Amateur or Dangerous Hackers?,” United States Cybersecurity Magazine, September 14, 2018, https://www.uscybersecurity.net/script-kiddie/.
503 Cabinet Office of the United Kingdom, “National Cyber Security Strategy 2016-2021,” 17–20 para 3.2-3.14.
504 Cabinet Office of the United Kingdom, 22 para 3.19.


being insufficiently aware of or prepared for the cyber threatscape has been perceived by the

Administration in their examination of the security and resilience requirements of the UK. This is

a clear elevation of the (perceived) risk to the public of cyber-enabled threats. It is also noteworthy

that the 2016 CSS considers roles and responsibilities for all sectors of society, categorising groups

as individuals/businesses, organisations, and government, which broadly align with the identified

actor aggregations for CCI used in this thesis.505

However, the 2016 CSS then calls on individuals, as citizens, to take all reasonable steps to

safeguard hardware, software and systems.506 The reductionist logic of this concept of individuals

needing to take steps to protect systems but not the information that resides within those systems,

nor their own perceptions and understanding of that information is concerning at best, and

negligent at worst.507 While there is the affirmation that the primary duty of government is to

protect citizens from attacks by other states,508 there is an inherent assumption of the (cyber-)

physical nature of any expected attack, leaving unaddressed any psychological operations that may

be targeting the (democratic) audience, and therefore the voting public. The point must be made

that without the understanding and aggregate contribution of the British audience to cyber

resilience through individual CCI, true cyber resilience will remain an aspiration. The human

element of cyber is inextricable from the technical-physical structures: until the human element is

aware of and guarded against psychological manipulation and information pollution, weaknesses

will remain inherent in any cyber security strategies. This, I suggest, will only occur when there is

a greater integration between the processes of elevation, and the capacities and tools of CCI, a

point I explore in detail in the following chapter.

There is some interesting ambiguity in section 4.16, which announces a broadened intelligence and law

enforcement focus on cyber, and that these institutions would “expand their efforts to identify,

anticipate, and disrupt hostile activities by foreign actors, cyber criminals and terrorists.”509 That

there is no specification as to what ‘activities’ will be identified and disrupted, nor that ‘foreign

actors’ are restricted to states, leaves room for interpretation, and a mandate for psychological

defence could be read into ambiguous statements such as these.

505 Cabinet Office of the United Kingdom, 26–27 para 4.6-4.11. See chapter five for a discussion on actor aggregates (broad categories) that can or should use CCI.
506 Cabinet Office of the United Kingdom, 26 para 4.7.
507 The implications of this are considered further in chapter six, where I examine disinformation as a threat to democratic integrity and analyse the importance of the human element in a holistic approach to CCI.
508 Cabinet Office of the United Kingdom, “National Cyber Security Strategy 2016-2021,” 26 para 4.9.
509 Cabinet Office of the United Kingdom, 28.


There is a consistent focus on cyber-crime as a persistent threat to cyber security and resilience,

but an explicit acknowledgement of the importance of the human factor to cyber security does not

appear until Section 5.3.6, when the 2016 CSS commits the Administration to the development of

cyber skills and expertise – the caveat to this development is that this commitment is specific to

government service,510 without addressing the public requirement for the same skills and expertise

any further than the ongoing commitment to education and innovation funding. While these are

excellent commitments and will no doubt serve the UK well in future, there is nothing that

addresses the current, real-time dearth of skills, expertise, and understanding of the audience that,

through voting decisions and behaviour, chooses and affects the functions of the governing

administration.

I note that there is a dedicated section on “countering hostile foreign actors” in cyberspace that

contains an acknowledgement that many countermeasures will not be undertaken or admitted in

the public space,511 and given the classified nature of many cyber operations and their importance

to national security and resilience, this is understandable. It is important, however, that the

audience of a democratic nation both know and understand that their government is aware of and

(capable of) responding to the threat of cyber-enabled interference that directly targets the

audience rather than just the systems that they use to generate exploitable data. Public trust in the

ability of the administration to both govern and protect is a fundamental pillar of democratic

integrity. In addition to recognising and elevating the threat to the audience of cyber-enabled

foreign interference, the administration must also be able to communicate a specific management

plan or proposal to strengthen public trust in government. Without that trust, the risk is that the

public will turn to alternate sources of information, thus becoming more vulnerable and

susceptible to disinformation and reducing overall democratic integrity, a point I return to in

chapter six.

I also note the acknowledged requirement for further education and training of cyber security

professionals.512 Further evolution of this stance is encouraged, however; while there is an

indubitable need for a larger cohort of cyber security professionals both now and in the future, it

is still, and will continue to be, necessary to encourage a more informed general public (a more

informed audience), which will decrease the opportunities of which adversaries can take advantage.

510 Cabinet Office of the United Kingdom, 38 para 5.3.6.
511 Cabinet Office of the United Kingdom, 49–50 para 6.3.
512 Cabinet Office of the United Kingdom, 51–55.


More specifically, and especially given the affirmation that the UK will itself “seek to influence the

decision-making of those engaging in…cyber espionage,” the elevation of the importance of the

human factor in any consideration of cyber intelligence/counterintelligence and security can only

be affirmed as critical.

Case Summary

In analysing the selected national security documentation of the UK, I attempted to ascertain

several different things. First, do the assumptions of threat elevation analysis hold true? Then, has

the threat perception of cyberspace evolved in the UK? And finally, has there been a development

in the understanding of, and need for, CCI?

Across the analysed period, namely 2008-2018, the UK engaged in several elevating acts that

indicated a heightening of their perceived threat of cyberspace. In addition to a number of agencies

being created to engage with different aspects of cyberspace, criminal use thereof, and military and

intelligence engagement with cyberspace infrastructure, the publication of three national cyber

strategies indicates the seriousness with which cyberspace has come to be viewed by successive

UK administrations. In addition, the national security documentation of the UK showed a distinct elevation in the perceived threat of cyberspace.513

513 Given the scope of this thesis it is not possible to conduct a thorough investigation of threat attenuation here using the UK perception of terrorism as a study. In brief, while the threat perception of cyberspace was elevated, the national security documentation of the UK consistently identified (international) terrorism as the primary threat faced by the UK. While earlier experiences and documentation may have contained the securitisation of terrorism (outside the bounds of the 2008-2018 period under examination), the collection of documents analysed here contained neither an elevation nor an attenuation of the perceived terrorist threat. In fact, each document examined represented the continued risk management of the threat of terrorism rather than any elevating or attenuating language or measure. The threat perception of terrorism has not attenuated. While not indicative of security elevation practices more broadly, this is an indication that once security concerns have been securitised to the level of threat, the process of threat attenuation is likely to be significantly long-term and tends to hold at risk management rather than continuing down to normal politics. While mentions of terrorism and counterterrorism decreased considerably in the two cyber security strategies so far produced by the UK (both within the analysed period), terrorism still represents a considerable percentage of the national security and intelligence effort, focus, and funding of the UK. Violent extremism does not consistently receive the attention in the national security documentation that terrorism does, but it is important for the identification of community efforts involved in countering violent extremism; these efforts could be patterned in future public education and awareness efforts surrounding psychological defence in cyberspace. Given these conclusions, the process of threat attenuation according to the threat elevation analysis framework will be the subject of future research.

In contrast to terrorism, which remained a Tier One threat, cyberspace underwent a considerable elevation of perceived risk across the period analysed. From barely appearing at all in the 2008 National Security Strategy, by the end of the analysed period cyberspace (or the vulnerability of cyberspace as a threat vector) was considered one of the primary threats to the security of the UK, and an obstacle in the pursuit of national security and resilience. As the understanding of cyberspace developed and the terminology of threats, actions, and actors in cyberspace evolved, so too did the conception within the UK Government of how cyberspace should be treated and managed in terms of resilience and security by the defence and intelligence communities. There is clear riskification here.

What developed, however, was a split view of cyberspace in terms of threats and vulnerabilities that I call the assets vs. audience divide and develop further in chapter six. The focus of the UK on the digital infrastructure of cyberspace – on the systems, processes, data, and physical technologies that support and are produced by and in cyberspace (the assets) – has left unresolved the problems surrounding adversarial targeting of the general population of the UK (the audience). While the integrity of cyberspace assets and infrastructure is crucial to the continued normal function of everyday life, not only in the UK but globally, there has been a fundamental failure to acknowledge that at every stage of analysis and deployment, the human factor in cyber security is both integral to success and, in fact, the reason for which security is pursued. Humans make decisions (whether emotionally or rationally) based in large part on the information available to them at any given time (snapshot intelligence), or based on what they have known and integrated into their belief framework over time (fundamental intelligence). Much like short-term and long-term memory, the beliefs either introduced or encouraged by snapshot intelligence feed into the long-term belief and perception frameworks (fundamental intelligence) that affect ongoing behaviour, such as civil interaction and voting patterns. The resulting national and cyber security strategy documentation fixates almost entirely on technical security, leaving the human factor unaddressed. Individuals produce the data collected on systems, and they create and utilise those systems, producing further data, and so on in a constant loop of production and consumption. As I will argue in chapter five, we can close this gap by drawing from


counterintelligence and properly elevating the threat to the audience by encouraging broad uptake

of CCI practices that contribute to the overall security of the state and its constituents.

In terms of how an overall approach to the understanding or practice of CCI may have evolved in

the UK in relation to the threat elevation of cyberspace, while the intelligence and security services

were referenced in the documentation analysed, there was a general lack of specificity with regard to

CCI. While this can in part be explained by the classified and highly sensitive nature of these

capabilities and investments, it does mean that the analysis undertaken throughout the previous

sections has been significantly inferential. In the cyber security strategies especially, it is possible to see an increased understanding of counterintelligence requirements in relation to disinformation. This understanding is closely connected to state actor activities in relation to

disinformation, and to terrorist actors when considering propaganda. There is also the heightened

recognition that the intelligence and security services will be increasingly required to operate in, and mitigate hostile actions through, cyberspace, as foreign actors engage in attempts at interference.

Conclusion

This analysis has shown that there is still a significant focus on technological aspects of cyber

security, with limited elevation of the human factors in cyber security. While there has been a

successful riskification in terms of risks to assets, I suggest that there is a need to hybridise the

technical side of cyberspace operations with the human aspect of psychological manipulation or

protection. CCI, as I will argue in chapter five, offers a way to connect these two elements, making it the ideal concept with which to recognise, monitor, and respond to the elevated national security

threats enabled by cyberspace, with appropriate risk management or threat mitigation measures.

Moreover, elevating the role of the audience as a vulnerability and a threat vector will enable the

allocation of resources to contribute to the protection and resilience of the audience against

disinformation and foreign interference campaigns. An increase in audience resilience to

disinformation will reduce the ability of hostile actors to degrade democratic integrity through the

sowing of confusion and consequent reduction in trust.

Beyond the technical considerations, though, is the audience: made up of the individuals that the

cyber security strategies indicate are at least partly responsible for information security, the

audience is still susceptible to deception through disinformation. The consequences of such

deceptions (as cyber-enabled information operations) affect both current and future behaviour,

such as voting. The integrity of the information to which they have access, the flow of ideas which


the UK identifies as crucial to the function of a modern democracy, has the potential for enormous

impact on the perceptions and behaviours of the audiences. There needs to be a serious effort to

elevate the risks to the audience of disinformation: an effort to elevate the risks to the audience of

cyber-enabled exploitation in general terms, beyond specifically technical problems.

Disinformation presents significant risks not only to the security of individuals, but to the overall

integrity of democracies and democratic institutions, such as election infrastructure. Because there

are severe potential consequences of the successful infiltration of disinformation into political

information ecologies, there needs to be an active response to such efforts. Given the ubiquitous capacity for use of cyberspace, it is beyond the capacity of any individual state to secure

this domain. As such, performing counterintelligence measures in cyberspace is not the sole remit

of the state: individuals and organised groups also bear a responsibility to engage in CCI measures

and practices to contribute to the aggregate security of the state, and thereby the international

system at large. This separates CCI from more traditional modes of counterintelligence, bringing

the individual into a realm where previously only states or quasi-state actors were acknowledged

as legitimate actors. The next chapter examines the nature and field of CCI before offering a

typology of CCI, contextualising the examination of disinformation in chapter six.


5 – Counterintelligence and Cyberspace

A fundamental outcome of the analysis in chapter four is that, while cyberspace has been elevated in the United Kingdom (UK), the UK still treats cyber security as primarily a technological issue. Yet, a

fundamental aspect of mitigating, and ideally attenuating, threats in cyberspace involves humans.

In this chapter, given its embrace of human factors in intelligence and security, I will argue that

counterintelligence is a highly relevant and useful resource that those working in cyber security

should draw on: counterintelligence is a crucial part of the discipline and practice of intelligence,

and needs to play more of a role in cyber security. As long as spies and spying techniques have

been utilised to try and gain viable intelligence from hostile or oppositional parties, there have

been efforts to deny that intelligence collection and degrade that party’s ability to collect or use

further intelligence. For all intents and purposes, this is the function of counterintelligence: all

modern enhancements or changes to practice and understanding build on this foundation, rather

than renovate it. As outlined earlier in this thesis, where cyber security refers to the overall state of, and methods of assuring, the security of cyberspace or an entity operating therein, cyber counterintelligence concerns the specific practices intended to obviate attempts at intelligence collection (espionage) or exploitation that then enable disruption or influence operations such as

disinformation campaigns. In modern times, considering the ubiquity of cyberspace to all areas of

life, the study, analysis, and practice of counterintelligence in cyberspace is both particularly

important and particularly complicated.

In chapter three, I examined whether cyberspace had undergone threat elevation, concluding that

cyberspace has been successfully riskified but has not yet been successfully securitised. In chapter

four, through examination of the national security documentation of the UK I demonstrated that

it is possible to identify and trace the elevation of the threat of cyberspace at the state level,

identifying successful riskification and accompanying risk management measures, though it should

be noted that the potential future elevation of cyberspace from risk to threat is not precluded by

this analysis. The UK by 2018 had successfully riskified the cyber threat to assets – critical

infrastructure, systems, and data. It failed to elevate the cyber threat to the audience, who are

vulnerable actors in cyberspace and can be a threat vector to the state in terms of both cyber

security and democratic integrity. One 2018 study found that individuals are just as likely to curate

and disseminate disinformation as is a state actor, making the audience both a risk to be managed


and a vulnerability to be secured.514 A comprehensive threat elevation of cyberspace needs to

recognise the role, and the importance, of the human factor in cyber security. Drawing on the

conclusions reached in chapter four, this chapter identifies cyber counterintelligence (CCI) as a

risk management and threat mitigation measure before applying this concept to the problem of

audience vulnerability to disinformation in chapter six.

The threat elevation of cyberspace should provoke a counterintelligence response at all levels, but

particularly at the state level, as the state is responsible for the security of its citizens.

which an actor addresses CCI concerns and the methods by which it manages or mitigates security

risks and threats in cyberspace accord with the general threat elevation analysis framework.

mature actor in cyberspace is expected to have a more strategic outlook on CCI concerns and

would be expected to engage largely in active or proactive CCI, having constructed defences

designed to successfully repel or deter most malicious actors. This is the case with the UK. Less

mature actors might still be utilising tactical CCI means and methods and would likely be elevating

the threat of cyberspace as cyber-attacks and exploitation attempts are experienced. This is not to

say that tactical and strategic approaches to CCI are mutually exclusive – simply that there are

different levels and approaches by which actors undertake CCI. This chapter develops the concept

of CCI – how it is defined, how it has evolved, and to what purpose it is, and should be, used.

Accepting that cyberspace is an acknowledged risk to the security of the sovereign state, it becomes

the responsibility of all governments to undertake such actions and processes designed to manage

the risks associated with cyberspace as a conflict domain. If risk management fails, then a

multilateral approach to cyber threat mitigation is the next required step. Given that no state (or

corporation registered therein) is the sole owner or operator of the infrastructure underpinning

cyberspace, minimisation of threat potential is the duty of multiple actors. Risk management is

only possible if there is no immediate, overriding danger which would precipitate a further

elevating process and the identification of an existential threat to the state.515

So how has the riskification of cyberspace affected counterintelligence, and more specifically, cyber

counterintelligence? While CCI is a subset of counterintelligence, the requirements and limitations

of the cyber domain necessitate both new technologies and new analytical tools to properly

514 Yevgeniy Golovchenko, Mareike Hartmann, and Rebecca Adler-Nissen, “State, Media and Civil Society in the Information Warfare over Ukraine: Citizen Curators of Digital Disinformation,” International Affairs 94, no. 5 (2018): 975–94.
515 Threat elevation analysis is examined in depth in chapter three, as the framework underpinning this thesis.

Chapter 5 – Counterintelligence and Cyberspace


understand what cyberspace is and will be to state security. The ability to comprehend, navigate,

and exploit cyberspace is as essential to practitioners of counterintelligence as it is to intelligence

collectors; possibly more so, as to practice counterintelligence is to defend against those malicious

actors who are actively trying to obtain classified (or otherwise private) information for their own

(presumably exploitative) purposes. It is also worth noting that given the ubiquity of cyberspace

and the interconnected networks that form its global structure, counterintelligence is increasingly

required and utilised by parties beyond the state, to include individuals, groups, and corporations

of various sizes. Individuals and civil society groups, for example, engaged in significant disinformation and counter-disinformation dissemination during investigations into the cause of the downing of MH17 over Ukraine in 2014.516 The Russian Federation initially denied involvement,

attempting to manipulate the narrative by implying Ukrainian actors had downed the plane.517

Bellingcat, an open-source investigative group, engaged in the identification and tracking of data

and information in a stunning display of active and proactive CCI in direct competition with actors

intending to cover Russian involvement by attempting to conceal information and avoid

attribution.518 However, to analyse the ways in which each of these types of actor employs CCI is

beyond the scope of this thesis and will therefore be only briefly examined to contextualise the

importance of CCI to the security of the state.

Despite the many kinds of cyber-enabled technologies, intelligence collection and analysis still

fundamentally revolve around humans. Signals intelligence (SIGINT) may be collected by one

agency and protected by another, but the intelligence is collected in order to ascertain the probable

or actual actions and decisions of people – foreign leaders, policymakers, and agents of foreign

intelligence services. The hacks of the Democratic National Committee (DNC) servers and the

exploitation of John Podesta’s email account, for example, provided a significant quantity of

information about what people had done and were planning or considering doing.519 The human

516 Golovchenko, Hartmann, and Adler-Nissen, “State, Media and Civil Society in the Information Warfare over Ukraine: Citizen Curators of Digital Disinformation”; Sebastiaan Rietjens, “Unraveling Disinformation: The Case of Malaysia Airlines Flight MH17,” The International Journal of Intelligence, Security, and Public Affairs 21, no. 3 (September 2, 2019): 195–218, https://doi.org/10.1080/23800992.2019.1695666; Matt Sienkiewicz, “Open BUK: Digital Labor, Media Investigation and the Downing of MH17,” Critical Studies in Media Communication 32, no. 3 (May 27, 2015): 208–23, https://doi.org/10.1080/15295036.2015.1050427.
517 Golovchenko, Hartmann, and Adler-Nissen, “State, Media and Civil Society in the Information Warfare over Ukraine: Citizen Curators of Digital Disinformation,” 978; Philip Pangalos, “Russia Denies Claims of Involvement in Malaysia Airlines MH17 Crash,” euronews, June 19, 2019, https://www.euronews.com/2019/06/19/russia-denies-claims-of-involvement-in-malaysia-airlines-mh17-crash.
518 Bellingcat, “MH17,” Bellingcat, 2021, https://www.bellingcat.com/tag/mh17/.
519 Katiana Krawchenko et al., “The John Podesta Emails Released by WikiLeaks,” CBS News, November 3, 2016, https://www.cbsnews.com/news/the-john-podesta-emails-released-by-wikileaks/; Raphael Satter, Jeff Donn, and Chad Day, “Inside Story: How Russians Hacked the Democrats’ Emails,” AP NEWS, November 4, 2017,

Courteney O’Connor – PhD Thesis 2021


factor is an irreducible element of intelligence, and the increasing technologisation of modern

society has not changed that. So, it is important to bear in mind that all discussions of CCI must

also pay due attention to human intent, motivation, and perception. One of the chief conclusions

from chapter four was that though the UK had effectively elevated cyberspace, they had done so

in a way that significantly overlooked the human factors in cyber security. In what follows, I

develop CCI as a way to build a more fundamental resilience into cyberspace.

The utility of this approach becomes obvious in chapter six, where I cover the effects of

disinformation campaigns through the lens of failed CCI practice and understanding at the

systemic level. Fundamentally, I argue that CCI can be broadly understood to apply to two

categories: assets and audiences. Of these, governments globally have successfully elevated the

concerns surrounding assets, focussing particularly on critical infrastructure and data.520 Policies

have been instituted and practices undertaken to protect data and to ensure the defence and

(hopefully) future resilience of national critical infrastructure.521

Until recently, however, there has been insufficient consideration given to the importance or

protection of the second category: audiences. This point was borne out in the analysis of the UK’s

elevation of cyberspace. Given the ubiquity of cyber-enabled technologies, CCI requires

participation from the citizenry – a considerable departure from more traditional, covert forms of

counterintelligence. The greater the percentage of a population that actively engages in cyber

security measures (to include counterintelligence practices, either consciously or unconsciously),

the greater the levels of cyber security and digital integrity of the state in question. While the state

is the primary unit of analysis in this thesis, CCI shows us the importance of the human factor.

https://apnews.com/article/hillary-clinton-phishing-moscow-russia-only-on-ap-dea73efc01594839957c3c9a6c962b8a; Zurcher, “Hillary Clinton Emails - What’s It All About?,” BBC News, November 6, 2016, sec. US & Canada, https://www.bbc.com/news/world-us-canada-31806907.
520 See chapter three for extended discussion on threat elevation.
521 Centre for the Protection of National Infrastructure, “Centre for the Protection of National Infrastructure | CPNI,” Centre for the Protection of National Infrastructure | CPNI, 2021, https://www.cpni.gov.uk/; Steven Chabinsky and F. Paul Pittman, “Data Protection 2020 | Laws and Regulations | USA,” International Comparative Legal Guides, 2020, https://iclg.com/practice-areas/data-protection-laws-and-regulations/usa; Bernard Haemmerli and Andrea Renda, “Protecting Critical Infrastructure in the EU,” Centre for European Policy Studies (blog), December 16, 2010, https://www.ceps.eu/ceps-publications/protecting-critical-infrastructure-eu/; Information Commissioner’s Office, “Guide to the UK General Data Protection Regulation (UK GDPR),” Information Commissioner’s Office (ICO, 2021), https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/; Intersoft Consulting, “General Data Protection Regulation (GDPR) – Official Legal Text,” General Data Protection Regulation (GDPR), September 2, 2019, https://gdpr-info.eu/; UK Public General Acts, “Data Protection Act 2018,” Legislation.gov.uk (Queen’s Printer of Acts of Parliament, 2018), https://www.legislation.gov.uk/ukpga/2018/12/contents/enacted; United States Government, “National Infrastructure Protection Plan | CISA,” Cybersecurity and Infrastructure Security Agency, 2021, https://www.cisa.gov/national-infrastructure-protection-plan.


Moreover, it explains how the individual is paramount to the security of the state. A state is but an

aggregate of individuals, and as the saying goes, a group is only as strong as its weakest member.

Put into context, any given aggregate (like the state) can only acquire and maintain a significant

level of security in cyberspace if the constituent members of the aggregate model the correct

behaviours and engage in the appropriate practices across the board. The fewer the individuals who understand the importance of, and engage in, “cyber hygiene” and CCI practices, the greater the likelihood that a virus will be able to move past existing defences.522 This is why the failure to

allocate the appropriate weight and importance to both systemic (state) and human

(individual/group) levels of CCI practice and understanding is a serious and ongoing national

security concern. Until audiences are considered, supported, and invested in at the same level as

assets, this problem is likely to continue unmitigated.

Greater investment in, and education of, democratic audiences in relation to CCI practices will not

just contribute to the security of cyber infrastructure, but to the security and resilience of

democratic infrastructure as well. Disinformation is a growing threat to security among Western

democracies – it is important to consider the effects that false information will have not only on

levels of comprehension but on the trust that democratic audiences have in their elected leaders

and the institutions that support them. Trust in both mainstream media and in democratic

institutions like elected governments has been trending downward, and this is in part due to a

stated distrust of information provided by those entities.523 The security and resilience of

infrastructure is intertwined with the strength of the social contract between democratic

governments and audiences, and disinformation that is effectively transmitted has the potential to

disrupt and degrade that social contract. Disinformation as a security threat will be discussed

further in chapter six in the context of the assets vs. audiences dilemma and the ways in which

false information can affect democratic integrity. Understanding and using CCI at strategic and

tactical levels, and the practice of CCI by states, organised groups, and individuals, will aid in

522 As pointed out in chapter four, while the UK initially used the cyber health analogy in relation to cyber security practices, this did not continue past the 2009 national security strategy. Australia does use cyber hygiene in certain contexts. See Australian Cyber Security Centre, “The Commonwealth Cyber Security Posture in 2019,” Australian Signals Directorate | Australian Cyber Security Centre, 2019, https://www.cyber.gov.au/acsc/view-all-content/reports-and-statistics/commonwealth-cyber-security-posture-2019.
523 Golovchenko, Hartmann, and Adler-Nissen, “State, Media and Civil Society in the Information Warfare over Ukraine: Citizen Curators of Digital Disinformation,” 981; OECD, “Trust in Government - OECD,” OECD, 2021, https://www.oecd.org/gov/trust-in-government.htm; Pew Research Center, “Public Trust in Government: 1958-2021,” Pew Research Center - U.S. Politics & Policy (blog), May 17, 2021, https://www.pewresearch.org/politics/2021/05/17/public-trust-in-government-1958-2021/; Jesper Strömbäck et al., “News Media Trust and Its Impact on Media Use: Toward a Framework for Future Research,” Annals of the International Communication Association 44, no. 2 (April 2, 2020): 139–56, https://doi.org/10.1080/23808985.2020.1755338.


mitigating the threat of disinformation and strengthening the social contract between government

and audience such that democracy becomes more resilient to false information.

This chapter consists of two main sections. Section one will examine the specific field of CCI, and

how the understanding and use of counterintelligence has evolved as the perceived risks associated

with the broad use of cyberspace have increased. Section two will then examine three broad groups

that undertake CCI to contextualise the relationship between state security, cyberspace, and the

actors that engage in CCI: states, organised groups, and individuals. CCI is a multilateral effort,

and greater efficacy of CCI by individuals and groups will contribute to the security of the state;

greater state resilience can shield individuals and groups; groups can shield citizens and contribute

to the state’s security. The most efficient approach to CCI will be one in which all three actor

groups engage in CCI measures and practices, which will require a comprehensive threat elevation

of cyberspace and recognition of audience threat and vulnerability.

Cyber Counterintelligence

CCI is a branch of counterintelligence. It requires a high level of technical skill and investment by

parties that seek to implement CCI practices – primarily, states. As a subset of the general

counterintelligence field, CCI has developed along a similar line to both counterintelligence and

cyberspace itself. Like its parent field, CCI is largely reactive, and utterly crucial to the (perceived and actual) security of states in cyberspace and of the cyberspace infrastructure itself. CCI is also difficult to define, as the traditional, analogue security concerns that require

counterintelligence have proliferated as the world becomes increasingly digitised. In addition to

the increasingly low threshold for instigation of or participation in conflict in cyberspace for those

that are digitally literate, it is also possible for cyber-attacks to emanate from the devices of

individuals who are unaware they have been compromised. Similarly, attacks can also originate from people who wish to play a contributing role in cyber conflict but are not as capable in cyberspace

as hackers; it is possible to download and run computer viruses and other malicious programs with

little or no intimate knowledge of computer programming or coding. Organised criminal groups

are highly capable actors in cyberspace that have the potential to cause significant damage at

national and international levels. Consider the 2021 DarkSide ransomware operation against


Colonial Pipeline on the East Coast of the US, which caused a days-long shutdown of fuel supplies that was relieved only after Colonial paid the requested ransom.524

Relative to its importance to state and global security, there is a startling lack of academic analysis and examination of CCI, whether as a field of study or as a practical endeavour. Having seen in chapter four that countries like the UK have riskified cyberspace, a greater understanding of

the interaction between cyberspace and security, cyberspace and intelligence, and the field of CCI

specifically is crucial to future security. The first section of this chapter will consider the history

and development of the CCI field, before considering the purpose and practice of CCI having

taken these elements into account. The final section of this chapter will contain a brief overview

and examination of the three main categories of actor utilising CCI practices, and how the security

of the state relates to these parties’ use of CCI.

CCI as a Discipline

CCI is a branch of the general field of traditional counterintelligence, which is itself within the

remit of the overall field of intelligence. As with traditional modes of intelligence and

counterintelligence, CCI is not necessarily prohibited under international law, though as

with every other field of international interest, the intention exists to further control and regulate

cyberspace.525 Much like other fields of intelligence and security that have been translated to the

cyber domain, counterintelligence in cyberspace is at this point undertaken in an ad hoc manner and with little oversight or regulation compared with other modes of international interaction. Cyberspace is a domain in which the costs of entry into conflict are significantly lower than would be expected of alternative, traditional modes of conflict. In an offensively-advantaged

domain, the costs of cyber defence are (often significantly) higher than those associated with

building a flexible and effective offence; this makes conflict both more probable and more difficult to manage.526 In a battle space where it is relatively easy for conflict

524 Associated Press, “Colonial Pipeline Confirms It Paid $4.4m Ransom to Hacker Gang after Attack,” The Guardian, May 20, 2021, http://www.theguardian.com/technology/2021/may/19/colonial-pipeline-cyber-attack-ransom; Andy Greenberg, “The Colonial Pipeline Hack Is a New Extreme for Ransomware,” Wired, accessed July 3, 2021, https://www.wired.com/story/colonial-pipeline-ransomware-attack/; Lily Hay Newman, “DarkSide Ransomware Hit Colonial Pipeline—and Created an Unholy Mess,” Wired, accessed July 3, 2021, https://www.wired.com/story/darkside-ransomware-colonial-pipeline-response/.
525 Schmitt, Tallinn Manual on the International Law Applicable to Cyber Warfare; Schmitt and Vihul, Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations; United Nations Office for Disarmament Affairs, “Group of Governmental Experts - Cyberspace.”
526 Keir Lieber, “The Offense-Defense Balance and Cyber Warfare,” in Cyber Analogies, ed. Emily O. Goldman and John Arquilla (United States Cyber Command, 2014), 96–107; Andrea Locatelli, “The Offense/Defense Balance in Cyberspace” (Working Paper, Milan, 2013), https://publires.unicatt.it/en/publications/the-offensedefense-balance-in-cyberspace-10.


to take place, where multiple potential actors can take part, and where both the space and the potential for conflict are no longer limited to the sovereign state, conflict becomes incredibly difficult to prevent or

mitigate.

In such a battle space (and cyberspace has been termed such for several years, in particular by the

United States’ Department of Defense),527 the importance of intelligence and counterintelligence

cannot be overstated. It is crucial to know and understand both the (quantitative) capacity and

capability of (potential) hostile parties, and also the motives that those parties may have for

undertaking cyber operations in the first instance. This is where the intelligence discipline comes

in; many (if not most) of the tenets and practices of traditional intelligence operations can be

translated almost directly from the physical world to cyberspace. The point of intelligence is to

know one’s enemy, and given the combination of the volume of information that can be accessed, the frequency of communications, and the many security vulnerabilities open to attack, this

can be done as well, or better, through cyberspace or via cyber-assisted means as it can through

traditional tradecraft. Counterintelligence serves the explicit purpose of preventing and/or

disinforming the intelligence operations of hostile parties, as well as tracking down any possible

‘turncoat’ individuals in domestic positions.528 While this, too, translates almost seamlessly into the

cyber domain, it must be pointed out that operations in cyberspace are not delimited or restricted

by geographic or political borders, and in fact are more likely than not to cross multiple

jurisdictions and involve individuals of multiple nationalities.

While it is true that the physical infrastructure supporting cyberspace is located in specific

jurisdictions, it cannot be denied that short of actually ‘turning off’ certain sections of that

supportive infrastructure, there is little that can be done about settling jurisdictional disputes if the

states party to the dispute are uncooperative.529 Consider the following example: state A is in pursuit of an individual or group that created a virus in state B, spoofed the IP (Internet Protocol) address or used a VPN (virtual private network) so that the device of origin geolocated to the

527 William J. Lynn III, “Cybersecurity - Defending a New Domain,” U.S. Department of Defense, 2010, https://archive.defense.gov/home/features/2010/0410_cybersec/lynn-article1.aspx; Lesley Seebeck, “Why the Fifth Domain Is Different,” The Strategist, September 4, 2019, https://www.aspistrategist.org.au/why-the-fifth-domain-is-different/; United States Department of Defense, “Department of Defense Strategy for Operating in Cyberspace” (United States Department of Defense, 2011), https://csrc.nist.gov/CSRC/media/Projects/ISPAB/documents/DOD-Strategy-for-Operating-in-Cyberspace.pdf.
528 Lowenthal, Intelligence: From Secrets to Policy, 201.
529 American Society of International Law, “Beyond National Jurisdiction: Cyberspace,” American Society of International Law, 2017, /topics/signaturetopics/BNJ/cyberspace; Wolff Heintschel von Heinegg, “Legal Implications of Territorial Sovereignty in Cyberspace,” in 4th International Conference on Cyber Conflict, ed. C. Czossek, K. Ziolkowski, and R. Ottis (NATO CCDCOE Publications, 2012).


jurisdiction of state C, diffused that virus through multiple other jurisdictions and eventually

reached the targeted systems in state A. Which state is ultimately responsible for the effects of the

attack? With which states would state A need to cooperate in order to arrest the attackers? Short

of regional or international agencies with overarching jurisdiction (like INTERPOL),530 the

bureaucratic requirements would make such operations costly both in terms of financing and

effort. It may be impossible to prevent all attacks like this, but it should be possible to deter all but

the most determined of adversaries by increasing the cost of conducting the attack. Appropriate

and efficient use of CCI by the individuals and government in state A may have reduced the severity of the consequences of the virus; similar practices in state C may have reduced its potential to be used as a vector by hostile parties. Alternately, more efficient use of CCI practices by the hostile actor may have prevented state A from being able to trace the trajectory of the virus

in the first place.

Counterintelligence, in essence, is the effort to make such attacks more costly and less likely to occur; if counterintelligence officers and operators can make the opportunity cost of entering cyber conflict higher than the benefit of doing so, then maintaining a neutral cyber presence becomes the more attractive option and the likelihood of cyber conflict diminishes. By actively countering attempts to

collect intelligence on both domestic and allied cyber systems, individuals, organisations and

governments, CCI operators are reducing the overall offensive advantage of hostile parties in

cyberspace, by negating or decreasing the benefit to be gained if conflict (exploitative) operations

are undertaken. Effectively, good CCI is a form of deterrence by denial.

The key here is to realise that in addition to the immediate consequences of data corruption and

exfiltration, it is also the responsibility of the CCI discipline to subvert the efforts and operations

of hostile parties to effect negative change in a particular polity, be it individual, organisational,

national, regional, or international. In these terms, the attempts on the part of Russian hackers to

influence the outcome of the US’ 2016 and 2018 elections can be termed a failure of US CCI:

malicious actors were able to infiltrate sensitive systems, manipulate individuals for further access,

530 INTERPOL, “INTERPOL Member Countries,” INTERPOL, 2020, https://www.interpol.int/en/Who-we-are/Member-countries. I do note that the European Convention on Cybercrime (Treaty No. 185 with the Council of Europe, also known as the Budapest Convention) came into force in 2004 and has been signed by 46 States, ratified by 44. It is worthwhile to note that the Convention is limited in the crimes against which it is applicable, and being ratified by less than one third of sovereign States, needs to be developed further. Council of Europe, “Chart of Signatures and Ratifications of Treaty 185,” Treaty Office, November 4, 2021, https://www.coe.int/en/web/conventions/full-list.


exfiltrate data, and introduce disinformation into the political conversation,531 reducing the overall

trust in the electoral processes in the US. Moreover, the debates over foreign influence on the

results of those elections are still ongoing.532

In an increasingly digitised era, counterintelligence in cyberspace is rising in both importance and

necessity: in many respects, it is also failing to keep up with demand. This may be due to education; in general, when CCI can be said to have failed, the failure can in large part be attributed to human error rather than to systems that did not work as designed.533 Consideration of the human factor is crucial to the

understanding and successful practice of CCI. The future resilience of cyberspace systems and

cyber-enabled technologies will depend upon the acknowledgement that humans design, build,

program, and use these systems and technologies. Cyber resilience and security starts with, and

rests on, the audience understanding and practicing CCI. As humanity moves further into the

realms of machine learning and artificial intelligence, our ability to differentiate true from false

information and engage in risk management processes will only increase in importance.

It has already been demonstrated that early-stage artificial intelligences exhibit learned racial and

gender biases, inherited from their human designers.534 When individuals do not admit to or

recognise inherent psychological models and biases, these cannot be mitigated against in thought

and decision-making, affecting how those individuals will act and react in certain situations. Without

training to recognise biases or false information, and without techniques to mitigate against that false

information and the belief echoes it can cause,535 the (counterintelligence) system will not work as

531 CNN Editorial Research, “2016 Presidential Campaign Hacking Fast Facts,” CNN, October 28, 2020, https://www.cnn.com/2016/12/26/us/2016-presidential-campaign-hacking-fast-facts/index.html; Gerstein, “U.S. Brings First Charge for Meddling in 2018 Midterm Elections”; Donie O’Sullivan, “Facebook: Russian Trolls Are Back. And They’re Here to Meddle with 2020,” CNN, October 22, 2019, https://www.cnn.com/2019/10/21/tech/russia-instagram-accounts-2020-election/index.html; Alina Polyakova, “The Kremlin’s Plot Against Democracy,” Foreign Affairs, 2020, https://www.foreignaffairs.com/articles/russian-federation/2020-08-11/putin-kremlins-plot-against-democracy.
532 Henschke, Sussex, and O’Connor, “Countering Foreign Interference: Election Integrity Lessons for Liberal Democracies.” See also chapter six for a detailed analysis of this problem and its relation to CCI.
533 Rishi Bhargava, “Human Error, We Meet Again,” Security Magazine, December 6, 2018, https://www.securitymagazine.com/articles/89664-human-error-we-meet-again?v=preview; Ross Kelly, “Almost 90% of Cyber Attacks Are Caused by Human Error or Behavior,” ChiefExecutive.Net (blog), March 3, 2017, https://chiefexecutive.net/almost-90-cyber-attacks-caused-human-error-behavior/; Daniel Miller, “The Weakest Link: The Role of Human Error in Cybersecurity,” Secure World Expo, January 14, 2018, https://www.secureworldexpo.com/industry-news/weakest-link-human-error-in-cybersecurity; Bob Violino, “Want to Help Stop Cyber Security Breaches? Focus on Human Error,” ZDNet, January 23, 2019, https://www.zdnet.com/article/want-to-help-stop-cyber-security-breaches-focus-on-human-error/.
534 Daniel Cossins, “Discriminating Algorithms: 5 Times AI Showed Prejudice,” New Scientist, April 27, 2018, https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/; Hannah Devlin, “AI Programs Exhibit Racial and Gender Biases, Research Reveals,” The Guardian, April 14, 2017, https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals.
535 See chapter six for a discussion of belief echoes.


intended. This is not to say that technology does not occasionally spontaneously malfunction:

anyone who has attempted to use a computer program of any kind will be well aware of the capacity

of technology to simply ‘not work’ as it is supposed to. The essential point here is that CCI

necessarily involves human factors in cyber security. In fact, this embrace of the human factors in

CCI is one of the significant contributions that engagement with counterintelligence can provide

to discussions of cyber security. This is, in part, what this thesis is designed to mitigate: to extend

enquiry into a field where little academic research has occurred, in order to better understand an

area of crucial importance to both individual and national security.

Development

The history of CCI is thus far a comparatively short one; cyberspace has existed for only a few

decades,536 and its past is thus fairly easy to examine. However, this chapter is not concerned with

the history and development of cyberspace itself; this was briefly considered in the chapter three

discussion on the riskification and potential securitisation of cyberspace, and so will not be

explored again here. CCI specifically can only really be traced as a distinct subset of the

counterintelligence field for approximately the last twenty to thirty years, and even so has rarely

been referred to in such specific terms as “CCI.” In the academic field, CCI has only truly begun

to be explored within the last decade.537 Compared with the counterintelligence field as a whole

and to intelligence in general, and the burgeoning development of the interdisciplinary study of

cyber security, there is a relative dearth of literature available discussing CCI as a practice, as a field,

or in academic terms for examination and analysis. Much of the recent development in the field

of CCI is due to the (reactive) creation of the computer emergency response teams (CERTs) that have been formed in many states, including the UK, as part of the governmental and institutional/corporate response to potential cyber-borne threats.538

CCI practices have become increasingly crucial as the number of people with access to cyberspace and cyber-enabled technologies grows. As in the Western

536 Cerf, “A Brief History of the Internet & Related Networks”; Defense Advanced Research Projects Agency, “Paving the Way to the Modern Internet.”
537 See, among others, Duvenage and Solms, “The Case for Cyber Counterintelligence”; Duvenage and von Solms, “Putting Counterintelligence in Cyber Counterintelligence: Back to the Future”; Sigholm and Bang, “Towards Offensive Cyber Counterintelligence.”
538 Also known as computer security incident response teams (CSIRTs). European Union Agency for Cybersecurity, “CSIRT Inventory”; Forum of Incident Response and Security Teams, “FIRST Teams,” FIRST — Forum of Incident Response and Security Teams, 2021, https://www.first.org/members/teams; International Cyber Center, “C.E.R.T in Rest of the World”; United Nations Institute for Disarmament Research, “UNIDIR Cyber Policy Portal,” UNIDIR Cyber Policy Portal, 2021, https://unidir.org/cpp/en/.


Australia password example,539 many individuals do not take security protocols seriously enough;

while that may or may not cause significant grief for a private citizen, violating both common sense and security practice (such as using easily guessed passwords) as a civil servant or member of a defence force can have far greater consequences for the security of the state.540 This, perhaps, is one of the reasons that Hillary Clinton’s 2016 Presidential campaign

suffered so much damage from the revelation that she had been using a private email server for

the duration of her time in office as Secretary of State.541 Combined with the Russian hack of the Democratic National Committee (DNC) and the exfiltration and subsequent publication of sensitive documents, accomplished through the targeted phishing of former and current DNC staffers, these cases show clearly that the human factor is where CCI problems and vulnerabilities arise.542 One of the most important elements of future CCI development will be education.

Educating the general populace in basic cyber security (including basic cyber counterintelligence) practices, further educating military forces and civil servants, and investing in greater CCI innovation will be among the most crucial elements of any future CCI policy. The vulnerability in cyber security and CCI identified and highlighted by the research

undertaken for this thesis is the human factor, which must be managed in order for there to be a

resilient cyberspace going forward.
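As a purely illustrative aside, the kind of basic screening that would have flagged the passwords discussed above can be sketched in a few lines of Python. The breached-password sample here is a tiny invented stand-in for the millions of entries held in real breach corpora of the kind cited in this chapter; this is a sketch, not a recommended implementation.

```python
# Illustrative sketch only: screening a chosen password against a small,
# invented sample of known-breached values. Real screening services hold
# millions of entries.
BREACHED_SAMPLE = {"123456", "password", "password123", "qwerty", "letmein"}

def is_easily_guessed(password: str) -> bool:
    """Return True if the password matches a known-breached value
    (case-insensitive), i.e. it should be rejected outright."""
    return password.lower() in BREACHED_SAMPLE

# The Western Australian audit found 'Password123' in wide official use.
print(is_easily_guessed("Password123"))                  # True
print(is_easily_guessed("correct-horse-battery-staple"))  # False
```

Even this trivial check would have caught the 1,464 ‘Password123’ accounts in the Western Australian audit, which is precisely the point: the human factor fails at the most basic level first.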

How should we define cyber counterintelligence in terms of acts and actions? In addition to the

definition introduced in my 2017 chapter Cyber Counterintelligence: Concept, actors and implications for

security and articulated in chapter two, I also introduced a limited typology of intelligence collection

in cyberspace: passive, active, and proactive. The second half of this chapter will build on the

typology introduced in prior research, elaborating on the differences between active, passive, and

proactive cyber counterintelligence practices to best understand the discipline and the research

field.543

539 See chapter two, section “Defining cyber counterintelligence.” 540 In 2019, a NATO research group tested the vulnerability of professional soldiers to intelligence collection efforts and social engineering. Using social media and other information available online, the researchers identified approximately 150 soldiers that were ‘recruited’ to various Facebook pages; their battalions and movements were identified, and those personnel were compelled into a series of undesirable actions. Issie Lapowsky, “NATO Group Catfished Soldiers to Prove a Point About Privacy,” Wired, February 18, 2019, https://www.wired.com/story/nato-stratcom-catfished-soldiers-social-media/. 541 Zurcher, “Hillary Clinton Emails - What’s It All About?” 542 Barrett, “DNC Lawsuit Against Russia Reveals New Details About 2016 Hack”; Crowdstrike, “Our Work with the DNC: Setting the Record Straight,” Crowdstrike (blog), June 5, 2020, https://www.crowdstrike.com/blog/bears-midst-intrusion-democratic-national-committee/; Satter, Donn, and Day, “Inside Story.” 543 O’Connor, “Cyber Counterintelligence,” 117–18.

Chapter 5 – Counterintelligence and Cyberspace

180

Purpose

CCI is generally intended to subvert the actions of hostile parties that seek to negatively impact

the state (or the entity in question, if not a state) through collection of intelligence or exploitation

of cyber security vulnerabilities, as well as to effectively function as the ‘police’ for one’s own

personnel and assets, to avoid defection or provision of intelligence to a hostile or otherwise

unauthorised party.544 In this respect, the purpose of CCI is much the same as that of traditional counterintelligence.

Translated into cyberspace, however, there are a number of variables which make

operationalisation and function of counterintelligence more difficult, or at least challenging in

different ways. In the physical domain, there are actions that can be performed which will provide

a great degree of certainty in security measures, relative to the perceived security when these actions

are not performed, such as bug sweeps, perimeter checking and surveillance, identification

verification, the use of white-noise generators to stifle sound that could be picked up by mid- to

long-range microphones, and so on.545

There is a certain level of the ‘known known’ and the ’known unknown’ that can be assertively

mitigated and combatted to increase both perceived and actual levels of security.546 In cyberspace,

however, there is a far lesser degree of confirmable knowledge; that is to say, the technology is still

new enough and uncertain enough that we cannot conclusively say that bug sweeps are effective, that the perimeter is capable of being checked and surveilled (what is a perimeter in cyberspace?), or that identification can be verified beyond doubt. This is in part what makes attribution so difficult

in cyberspace and speaks to what Nina Kollars calls the “spaghetti nature – interconnected,

overlapping, and messy” of cyber and intelligence – “the tools of intelligence competition are

available to everyone [in cyberspace]. The battle space for intelligence competition grows by the

minute. The holder of the tools and the space itself is everyone.”547

544 The original typology classifications were passive, active, and aggressive; the typology has evolved over the conduct of research for this thesis, and the version contained herein is updated from the original. O’Connor, 110. 545 Hank Prunckun, Counterintelligence Theory and Practice, 2nd ed. (Lanham: Rowman & Littlefield Publishers, 2019), 128–29; 160–65, https://rowman.com/ISBN/9781786606884/Counterintelligence-Theory-and-Practice-Second-Edition. 546 According to former Secretary of Defense Donald Rumsfeld, "There are known knowns. There are things we know we know. We also know there are known unknowns. That is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don't know we don't know." Donald Rumsfeld, “Defense.Gov Transcript: DoD News Briefing - Secretary Rumsfeld and Gen. Myers,” U.S. Department of Defense, February 12, 2002, https://web.archive.org/web/20190428122842/https://archive.defense.gov/Transcripts/Transcript.aspx?TranscriptID=2636. 547 Kollars, “Cyber Conflict as an Intelligence Competition in an Era of Open Innovation.”


Traditionally, counterintelligence practitioners are responsible for finding and subverting the

actions of parties foreign or domestic that seek to undermine the actions of security agencies as they relate to securing the state against threats.548 This has translated fairly directly into cyberspace: it

remains a primary obligation for counterintelligence agencies and operatives, though again, the

sophistication of modern technologies and the sheer incomprehensible size of the Internet in the

first instance, as well as the complexity of the cyber domain and cyber-physical infrastructure in

the second, have made this an exceedingly difficult duty to fulfil. In addition, given the opportunity

provided by the relatively low cost of entry into and use of cyberspace, the number of actors that may be exfiltrating data on behalf of foreign or domestic parties is exponentially larger than it

would be traditionally.549 The definition of cyber counterintelligence accepted in this thesis assigns

several responsibilities to the CCI discipline: preventing, identifying, tracing, penetrating,

infiltrating, degrading, exploiting and/or counteracting the attempts of any entity to utilise

cyberspace as the primary means and/or location of intelligence collection, transfer or exploitation;

or to affect immediate, eventual, temporary or long-term negative change otherwise unlikely to

occur. Evidently, intelligence gathering on behalf of foreign/hostile entities is not the only concern

of CCI operatives: in addition to data exfiltration, those same parties are perfectly capable of data

infiltration and corruption, of depositing viruses and logic bombs for later deployment, and of

leaving themselves ‘back doors’ hidden from CCI practitioners in order to re-enter the space in

question at a later date. Perhaps functionally equivalent to the Cold War-era listening devices and

short-range transmitters, in cyberspace these are exceedingly difficult to locate or attribute, and as

we have seen, some malware can persist and function within one or multiple systems for months or years without being detected.550 This is the abiding danger of cyber-based intelligence operations, and one that must be constantly guarded against: a battle that must be waged persistently but in which it is difficult, if not impossible, to be victorious. Moreover, as we see in

chapter six, CCI now has to consider widespread disinformation campaigns launched and

coordinated by adversarial state actors.

548 Lowenthal, Intelligence: From Secrets to Policy, 201. 549 Jason Healey and Robert Jervis, “The Escalation Inversion and Other Oddities of Situational Cyber Stability,” Texas National Security Review, September 28, 2020, http://tnsr.org/2020/09/the-escalation-inversion-and-other-oddities-of-situational-cyber-stability/; Kollars, “Cyber Conflict as an Intelligence Competition in an Era of Open Innovation.” 550 Moonlight Maze, for example, targeted US military and government systems. Active from 1996-2003, sections of the original code have been found in several updated exploits in the last decade. See Guerrero-Saade et al., “Penquin’s Moonlit Maze: The Dawn of Nation-State Digital Espionage”; Osborne, “Ancient Moonlight Maze Backdoor Remerges as Modern APT”; Pankov, “Moonlight Maze.”


In terms of securing and maintaining one’s own systems, counterintelligence practitioners are

responsible for both repelling and routing attempts of second parties to gather intelligence on the

state’s capabilities, personnel, technologies, economy, and so on. This focus on cyberspace as locus

for intelligence collection and exploitation operations separates CCI from cyber security, broadly

defined as the protection of the integrity and availability of systems and networks. For an

illustration of the relationship between CCI and cyber security, see Figure 1 The Elements of Cyber

Security. James Angleton’s description of counterintelligence as a wilderness of mirrors seems an

apt description for the current state of counterintelligence in cyberspace, wherein attribution is

notoriously difficult.551 As in traditional counterintelligence, denial and deception are part and

parcel of the CCI discipline. The problem, however, is the peculiarly well-suited nature of

cyberspace to both denial and deception: it is relatively simple to ‘spoof’ your location and provide

fraudulent identification credentials, complicating tracing and identification.552 The growth of

government bureaucracies and the exponential increase in official personnel, contractors, and consultants have also increased the size and value of the target environment for intelligence collectors, making subversion operations more difficult to operationalise. The laissez-faire attitude of a large percentage of individuals toward device and network security, and their lack of education or knowledge in the field, is also a significant setback to CCI operatives and agencies.553

Finally, the increased reliance of people and institutions on information accessed through the

Internet requires counterintelligence practices to be increasingly engaged with identifying and

combating disinformation, misinformation, and conspiracy theories.

As we have seen, CCI efforts are inextricably intertwined with both cyber intelligence and cyber

security. The multi-use nature of many cyber-assisted and -enabled technologies, and the

innovative purposes to which they can be turned, has resulted in opportunities and vulnerabilities

that serve in multiple security fields. As such, the same operations that might be designed to repel

or subvert the attempts of hostile parties to collect intelligence (or CCI) may also be used to test

551 For discussions on attribution in cyberspace, see Allan, “Attribution Issues in Cyberspace”; Robert K Knake, “Untangling Attribution: Moving to Accountability in Cyberspace,” § Subcommittee on Technology and Innovation of the United States House of Representatives (2010); Erik M. Mudrinich, “Cyber 3.0: The Department of Defense Strategy for Operating in Cyberspace and the Attribution Problem,” Air Force Law Review 68 (2012): 167–206; Joseph S. Nye Jr., “Deterrence and Dissuasion in Cyberspace,” International Security 41, no. 3 (January 2017): 44–71, https://doi.org/10.1162/ISEC_a_00266; Michael N. Schmitt and Liis Vihul, “Proxy Wars in Cyberspace: The Evolving International Law of Attribution Policy,” Fletcher Security Review 1, no. 2 (2014): 53–72. 552 Made more difficult with the use of tools such as ToR (The Onion Router). One issue inherent in these practices is that individual CCI would encourage ToR; state CCI would be made more difficult by same. TOR, “The Tor Project | Privacy & Freedom Online,” TOR, 2021, https://torproject.org. 553 Bisson, “‘123456’ Remains the World’s Most Breached Password”; National Cyber Security Centre, “Most Hacked Passwords Revealed as UK Cyber Survey Exposes Gaps in Online Security”; Telford, “1,464 Western Australian Government Officials Used ‘Password123’ as Their Password. Cool, Cool.”


new or repaired cyber security software (cyber security), or even to collect intelligence on the entity

whose own attempts at collection are being subverted (cyber counterintelligence). However, the

interconnected nature of cyber intelligence, counterintelligence and security is also a vulnerability

due to those same elements: if a hostile party succeeds in avoiding domestic counterintelligence

efforts, then they themselves are in a position to collect intelligence, to perform counterintelligence

activities aimed at subverting domestic intelligence efforts, and to otherwise negatively impact

domestic cyber security by planting logic bombs or further malware which can be activated in

(potential) future conflicts.

CCI technologies are not really a novel field: existing technologies are simply being repurposed for ends other than those for which they were designed. While it is true that actors in cyberspace may develop programs or applications with counterintelligence purposes in mind, this is a reaction to similar programs that are already in existence and operating at cross-purposes. One effect of new technologies is that counterintelligence, and CCI in particular,

are constantly on the back foot: counterintelligence is often a reactive discipline, responding to

events rather than pre-empting or anticipating them, and this is unlikely to change in the near

future given the increasing velocity of technological innovation and change. Implementation of

new technologies often takes precedence over security concerns, thus requiring more of

counterintelligence practices and practitioners at a later date.554 While this means that CCI

practitioners and users are being forced to innovate almost as quickly as other parties, you cannot

guard against what you do not know. Once more, the offensively-advantaged nature of cyberspace

lends force to whichever party innovates an attack over a defence.

Practice

CCI is not uniquely within the purview of states. While states are among the greatest (and most

important) practitioners of CCI, its practices are also available to (and utilised by) a number of

other entities in the international system, and in the upcoming sections on tactical and strategic

CCI, I have generalised these entities into three subgroupings for the purposes of this thesis: states,

organised groups, and individuals. I further explore these aggregates in the final section of this

554 Mark Fenwick, Wulf A. Kaal, and Erik P. M. Vermeulen, “Regulation Tomorrow: What Happens When Technology Is Faster Than the Law?,” American University Business Law Review 6, no. 3 (2016): 561–94, https://doi.org/10.2139/ssrn.2834531; Camino Kavanagh, “New Tech, New Threats, and New Governance Challenges: An Opportunity to Craft Smarter Responses?,” Carnegie Endowment for International Peace, August 28, 2019, https://carnegieendowment.org/2019/08/28/new-tech-new-threats-and-new-governance-challenges-opportunity-to-craft-smarter-responses-pub-79736.


chapter, enumerating subtypes of actor within each actor class and how they might or should use

CCI.

While many CCI practices are generalisable to a particular subgroup and/or to all users of cyberspace, it can also be said that the practice of CCI is different for every one of these entities.

For example, while individuals and states both have reasons to protect private information, the

information under consideration and the lengths to which the two entities would go to ensure that

security are different. More importantly, when you disaggregate the groupings and look at

individuals within the subgroup, each of those entities (two states, two groups, two individuals,

etc.) would undertake security and counterintelligence practices in a different manner, according

to a unique set of principles, and with a specific set of dedicated resources. While states are the

principal focus of this research, it is important to realise that conclusions drawn about the

importance, efficacy, and utility of CCI practices do not just apply to states, which, according to

Nina Kollars, “are not the primary agents in cyberspace.”555 In addition, conflict in cyberspace has

not been, is not, and will not be limited to state contra state situations: in cyber conflict where the

state may be one of the involved parties, it is quite likely that the opposing party will be an

individual or an organised group, and, as such, a general understanding of the ways in which

alternate entities engage in CCI practices is advised. It is also quite likely (and in fact more

common) that cyber conflict will not involve a state at all, though state representatives may find

themselves involved in a cyber conflict or exploitative situation irrespective of their professional

status.556

Further to this, note that counterintelligence (including CCI) practices become more important

the closer an individual or organised group/institution is to a critical institution, such as

government or the military; as the term counterintelligence suggests, the idea is to prevent hostile or opposing parties from collecting intelligence that may be employed against yourself or (perhaps) your allies. This does not necessarily even involve specifically ‘cyber’ practices and is in part

dependent on the human element. The uptake in the popular use of ‘wearables,’ or cyber-enabled

555 Kollars, “Cyber Conflict as an Intelligence Competition in an Era of Open Innovation.” 556 When the Democratic People’s Republic of Korea (DPRK or North Korea) hacked Sony Pictures Entertainment, for example, the US government responded. See BBC News, “Sony Pays up to $8m over Employees’ Hacked Data”; David Brunnstrom and Jim Finkle, “U.S. Considers ‘proportional’ Response to Sony Hacking Attack | Reuters,” Reuters, December 19, 2014, https://www.reuters.com/article/us-sony-cybersecurity-northkorea-idUSKBN0JW24Z20141218; Ellen Nakashima, “Why the Sony Hack Drew an Unprecedented U.S. Response against North Korea,” Washington Post, January 15, 2015, sec. National Security, https://www.washingtonpost.com/world/national-security/why-the-sony-hack-drew-an-unprecedented-us-response-against-north-korea/2015/01/14/679185d4-9a63-11e4-96cc-e858eba91ced_story.html.


devices that track variables such as heart rate, calories burned, and distance travelled, and are also

GPS-enabled, has added to the security risks facing the military, for example. A recent case illustrated the damage potential of a concentration of such devices in a specific geographic locale: US military personnel using Fitbits and other fitness trackers uploaded their data to ‘the cloud’, and the aggregation of that data quite literally provided a highly accurate map of a classified military base.557 This was mitigated on discovery, but not before a highly publicised series of tweets (complete with photographs and diagrams of the base in question) announcing the previously unconsidered dangers of such devices had circulated.558 Beyond the importance of protecting personally identifiable information and

movement data, there is the ongoing potential that disinformation campaigns by hostile actors will

be utilised against all of the classes of actor identified in this chapter, and the relationships of those

actors with each other will affect both the targeting of disinformation campaigns, which can be

very specific, and the potential severity of the consequences of those campaigns. Practising CCI

will enable earlier and more efficient mitigation of disinformation, which will increase, in turn, the

resilience of the democratic state as well as the audience itself.
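The aggregation effect at the heart of the fitness-tracker case can be sketched simply: individually innocuous GPS fixes, once binned into grid cells and counted, make concentrated activity (such as a perimeter running route) stand out from background noise. The coordinates below are invented for illustration only.

```python
from collections import Counter

# Hypothetical GPS fixes uploaded by fitness trackers (lat, lon); values invented.
fixes = [
    (34.1201, 63.8002), (34.1202, 63.8003), (34.1203, 63.8001),  # running track
    (34.1201, 63.8002), (34.1202, 63.8003),
    (34.5000, 63.2000),  # a lone user elsewhere
]

def heatmap(points, precision=3):
    """Bin GPS fixes into grid cells by rounding coordinates; dense cells
    expose concentrated activity even though no single fix is sensitive."""
    return Counter((round(lat, precision), round(lon, precision))
                   for lat, lon in points)

hot = heatmap(fixes)
# Cells visited repeatedly stand out against the background noise.
dense = {cell: n for cell, n in hot.items() if n > 1}
print(dense)  # {(34.12, 63.8): 5}
```

The point of the sketch is that no de-anonymisation is required: simple counting over public uploads is enough to reconstruct a route, which is why CCI must consider the aggregate as well as the individual datum.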

The succeeding sections will offer definitions and discussions on both the utility classifications and

the operational classifications offered in this research as elements of a basic typology of CCI. It is

crucial to understand how, why, and in what context different CCI practices are used or misused

in order to further our understanding of the discipline as a whole, and to contribute to the

elucidation and education concerning the general fields of intelligence and counterintelligence.

This typology is by no means exhaustive, and is intended to be a foundation of future research as

well as an integral pillar of the understanding of CCI, which this thesis is designed to provide.

A Typology of Cyber Counterintelligence

In the modern era, the state is far from the only actor innovating, exploring, and employing CCI

practices. As noted, whether consciously or not, most people with access to a cyber-enabled device

employ CCI. In its simplest forms, this includes the aforementioned password example; it can even

be such a small thing as actively covering the keypad to disguise your PIN when you use an ATM to check your balance or withdraw cash. Every single person who has at any point utilised

557 Jeremy Hsu, “Strava Data Heat Maps Expose Military Base Locations Around the World,” Wired, January 29, 2018, https://www.wired.com/story/strava-heat-map-military-bases-fitness-trackers-privacy/. 558 Hsu; Nathan Ruser, “Strava Released Their Global Heatmap,” Twitter, January 28, 2018, https://twitter.com/Nrg8000/status/957318498102865920; Jon Russell, “Fitness App Strava Exposes the Location of Military Bases,” TechCrunch (blog), January 29, 2018, https://social.techcrunch.com/2018/01/28/strava-exposes-military-bases/.


a machine that interfaces with cyberspace infrastructure is almost certain to have employed, to

some degree, CCI practices. One does not have to be actively aware of this to do it. However, there

are levels to CCI practices; they are not all the same, and not all practices will suit all problems.

How an actor approaches CCI dovetails with the threat elevation analysis framework detailed in

chapter three – if cyberspace is viewed as an increasing threat, for example, the expectation is that

there would be a greater emphasis on active and proactive CCI by state actors, and potentially by

groups and individuals depending on their understanding and context. If an actor is confident in

their achievement of cyber security or resilience and has managed to repel or deter cyber-attacks

and exploitation attempts, it may be that there is a threat attenuation process, be that de-

securitisation or de-riskification, and the expectation would be that management would continue

without further innovation or investment. In this context, we might expect to see only passive or

active CCI practices. The maturity of an actor’s approach to cyber security and CCI practices will

also be reflected in whether there is an overall strategic understanding and operationalisation at play, or

whether that actor is still engaging in reactionary CCI on a tactical level.

In this section, I identify three broad utility classifications: passive, active, and proactive. None of these three categories is mutually exclusive; in fact, any actor operating in cyberspace can employ any combination thereof. In addition to defining each of these classifications, I will give examples of the practices under consideration and the particular technologies employed at each level and by the different identified subgroups. These classifications, practices, and technologies are mutually reinforcing rather than mutually exclusive, and are employable in any given combination. It may be helpful to

consider the utility classifications as a circular spectrum rather than a stepladder: a particular

technology or practice in the hands of one entity will be of different use and efficacy than in the

hands of another, but (theoretically) under no circumstances would it be useless.559 In addition to

the utility classifications, in later sections I will also offer two broad operational classifications,

identifying and examining tactical CCI and strategic CCI. Broadly speaking, most entities will at

some point utilise one or both, whether consciously or unconsciously. The utility classes of CCI

interrelate with the operational classes, so it is possible to have tactical proactive CCI and strategic

active CCI. As with the utility classes, the operational classes of CCI are mutually reinforcing rather

than mutually exclusive, though unlike the utility classes, the operational ones are more likely to

be observed at the organised group/institution or state level, rather than the individual level.

559 Consider a dartboard: as long as the player is skilled enough to land the dart on the board, anywhere on that board will be worth points. Some areas are worth more (higher degree of utility) and some are worth less (lower degree of utility) but no matter where the player lands the dart it will be worth something (degree of security).


Passive, Active, Proactive Intelligence Collection

In order to best understand the various types of counterintelligence practices that apply to

actions and intelligence collection in cyberspace, it is first necessary to understand the nature of

those actions themselves. Making a study of the way that various actors (hostile or otherwise)

utilise cyberspace will offer insight into (potential) current and future threats, and may aid in the

development of security models. If we are able to discover the manner of utilisation and/or the

method by which operations in cyberspace are undertaken, it may then be possible to discover the

motivation behind the action as well. It is important, then, to examine and analyse cyberspace

operations on a case-by-case basis so that the aggregate classification of actions can offer specific and usable conclusions. Counterintelligence practices do not exist without there

first being intelligence actions and collection efforts, so the aggregate model discussed here refers

mainly to intelligence collection efforts and their broad categorisation into passive, active, or

proactive. Following a brief examination of each of these categories, and a discussion of the

potential actions within each category, I will continue with an exploration of how these aggregate

categories may also be applied to counterintelligence actions in cyberspace and what these actions

may look like.

Passive

Passive collection in cyberspace refers to the broad class of actions by which an entity is engaged

in “the continual absorption of information by the actor of freely available information”.560 That

is to say, there is no pursuit of information by the entity in question: all information arrives freely

and without (undue) influence by the intelligence ‘collector’. In other words, raw information

received by passive collection in cyberspace could also be referred to by the intelligence phrase

‘background noise’.561 Passive collection in cyberspace could include (but is in no way limited to)

reading news headlines as you scroll through social media feeds; listening to radio news in the car

or on the train during your daily commute (see Table 1 – Passive, Active, Proactive Intelligence

Collection).562 Passive collection is differentiated from active collection in large part by intent. It

involves those measures or actions which are taken sub- or unconsciously, or without active

attention being paid – without the intent to collect information.

560 O’Connor, “Cyber Counterintelligence,” 117. 561 Background noise is an intelligence term, also known as ‘chaff,’ which refers to the abundance of information that is collected or available which is not usually directly relevant to the collector’s interests, but becomes apparent in the course of operations anyway (as opposed to the ‘wheat,’ or valuable intelligence). Lowenthal, Intelligence: From Secrets to Policy, 87. 562 Original form appeared in O’Connor, “Cyber Counterintelligence,” 118.; updates made to the current table.


Table 1 – Passive, Active, Proactive Intelligence Collection

Active

Moving beyond passive collection, active (intelligence) collection in cyberspace is herein

understood as “the pursuit of freely available information concerning a specific topic of

enquiry”.563 Whereas passive collection involves the ad hoc and largely unconscious absorption and

filtering of raw information, active collection requires that not only that conscious effort be made

on the part of the entity in question, but also that there be a specific end toward which the entity

is working. An important facet of this category is the nature of the intelligence which is collected:

it cannot be, in any way, restricted, classified or otherwise secret. Active intelligence collection, like

passive intelligence collection, can only be employed in the pursuit of freely available information

but, to make it distinct from passive collection, requires a specific intent of discovery. Collection

efforts may include (but are not restricted to) searching (for) specific social media accounts; seeking

out and exploiting particular news sites, sources, or feeds; interviewing specific individuals (from

which the resulting intelligence is not restricted access); and contacting particular individuals about

specific topics in search for further sources.564 The practice of dragnet surveillance is considered

active collection because the information is gathered with the intent of having it available for

intelligence, law enforcement and security actors to use.

563 O’Connor, 116. 564 O’Connor, 117. Information which is paywalled but is created for the purpose of dissemination, such as scholarly works written for journals or journalistic reporting that requires a subscription for access, is still considered freely available as that access is not subject to clearance.

Possible Actions:

Passive Collection – Checking recent newsfeeds on social media. Reading news headlines. Listening to radio news in the car.

Active Collection – Searching for specific social media accounts. Searching for specific news items or searching particular news sites. Interviewing specific individuals.

Proactive Collection – Hacking private accounts. Stealing information. Writing/using malware. Utilising keyboard loggers. Cookie distribution and tracking. Decryption of communications and data.

Courteney O’Connor – PhD Thesis 2021


Proactive

The highest tier of the CCI model and the category that presents the highest degree of (potential)

danger to a targeted entity, proactive collection in cyberspace refers to those actions which are

undertaken in order to acquire or reveal restricted, classified, or otherwise secret information that

efforts have been made to conceal; or, information that would by common understanding be

considered private, such as personally identifiable information (PII).565 Proactive intelligence

collection in cyberspace usually involves actions which have become fairly well known through

popular media, such as (but not limited to) the hacking of private accounts (email, social media,

various websites); the theft and/or distribution of private information (health/medical, financial);

the development and creation of malware designed to obtain protected information; and the

decryption of communications and data.566 Proactive collection in cyberspace is arguably most

highly utilised by states or state-supported parties due to the expense and complexity of the above-

mentioned actions. In addition, the fact remains that the modern state has both more to protect

in cyberspace and also more to discover (for the purposes of self-interest) than most other active

entities operating in cyberspace today.

Passive, Active, Proactive CCI

The aggregate utility classifications in this section have so far referred to intelligence collection

efforts, largely because it is those actions that counterintelligence operations must be designed to

thwart. Now that we have a basic understanding of the typology of intelligence collection in

cyberspace, we can apply the aggregate model to CCI practices, and begin to understand how these

actions and their potential motivations impact the discipline and practice of CCI itself. The next

sections will discuss passive, active, and proactive CCI before moving on to the operational

applications of CCI.

Passive CCI

As presented here, passive CCI is the set of practices undertaken by individuals on a regular basis

that maintain or improve their cyber security, which they may or may not realise are counterintelligence

practices. A large part of the aggregate model of utility classification used here involves intent.

Passive CCI includes those measures which are undertaken without the specific intent to engage in

565 O’Connor, 117. It should be noted that there are multiple conceptions of what constitutes a breach of privacy. While beyond the remit of this thesis, see Adam Henschke, “Privacy, the Internet of Things and State Surveillance: Handling Personal Information within an Inhuman System,” Moral Philosophy and Politics 7, no. 1 (May 26, 2020): 123–49, https://doi.org/10.1515/mopp-2019-0056.
566 O’Connor, “Cyber Counterintelligence,” 117.

Chapter 5 – Counterintelligence and Cyberspace


CCI practices. They include those practices which, sub- or unconsciously, thwart passive

intelligence collection, such as choosing not to post personal information or family photos on

social media (see Table 2 – Passive, Active, Proactive Cyber Counterintelligence). In keeping with

the general practice of analogising cyber practices and events with the medical field, passive CCI

might be akin to passive bodily functions such as blinking or breathing; they are things that we as

individuals do without actually thinking about the action or the purpose. We simply perform the

function.

Table 2 – Passive, Active, Proactive Cyber Counterintelligence

Younger generations, or those individuals in professions which educate them about ‘cyber hygiene’ practices, are more likely to have habituated themselves to such practices, having either grown up with, or become accustomed to, constant utilisation of cyber-assisted or -enabled devices. Such

practices form the foundation of what we would consider basic cyber security, or what has been

termed cyber hygiene.567 Such practices would include not entering passwords in front of people,

or in such a way that it becomes obvious what your password is; changing your password regularly;

not using the same password for multiple accounts; not using easily guessed passwords;568 logging

out of an account once you have finished using it instead of just closing the browser window;

deleting (unopened) suspicious emails or emails from unknown people; and password or PIN-

567 Australian Cyber Security Centre, “The Commonwealth Cyber Security Posture in 2019.”
568 Such as ‘password’ (or similar), ‘12345678’ (or similar), or your name/pet’s name and date of birth.

Possible Actions:
Passive CCI – Covering your password when entering it on a shared computer. Logging out of accounts when not actively using them. Deleting (unopened) suspicious emails. Deleting (unopened) emails from unknown persons. Password or PIN-protecting your devices.
Active CCI – Regularly changing device and account passwords. Logging out of operating accounts on shared computers. Reducing the use of/avoiding public WiFi. Enabling firewalls on computers. Concealing of (personal) information. Verification of information found through secondary or tertiary sources.
Proactive CCI – Engaging in honeypot operations to trap adversary actors. Creation of purpose-built software for espionage or exploitation. ‘Poisoning the well’. Hack-backs and reversal/implementation of exploits. Identification and shaming campaigns when disinformation is identified.


protecting your devices. These are actions that many people undertake every single day, most

without realising that they are, in fact, employing CCI practices in their normal life. Unfortunately,

many people do not employ even passive CCI and thus we still have people who use ‘password’

as their password, still click on attachments in emails from unknown people, and do not password

or PIN-protect their devices, among several other common and basic mistakes.569
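The weak-password problem described above can be illustrated with a minimal sketch. This is not a prescription for any particular tool: the blocklist entries, length threshold, and helper name are illustrative assumptions, loosely reflecting the breached-password reporting cited in note 569.

```python
# Minimal sketch: screening a candidate password against a small,
# illustrative blocklist of commonly breached passwords and two basic
# hygiene rules discussed in the text. All entries are assumptions.

COMMON_PASSWORDS = {"password", "password123", "12345678", "123456", "qwerty"}

def is_weak(candidate: str, personal_terms: tuple[str, ...] = ()) -> bool:
    """Return True if the candidate fails basic passive-CCI hygiene."""
    lowered = candidate.lower()
    if lowered in COMMON_PASSWORDS:
        return True      # matches a commonly breached password
    if len(candidate) < 8:
        return True      # too short to resist casual guessing
    if any(term.lower() in lowered for term in personal_terms):
        return True      # contains a name, pet's name, or similar
    return False

print(is_weak("Password123"))                          # True: blocklist match
print(is_weak("rex2019!", personal_terms=("Rex",)))    # True: pet's name
print(is_weak("cormorant-battery-staple-41"))          # False
```

Even this toy check captures the passive-CCI point: the habit matters more than the mechanism, and once such a check is routine it requires no conscious effort.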

Education in the use, misuse, and abuse of cyberspace is at best nascent, and at worst non-existent.

Much like the rest of cyberspace, education in relation to the cyber domain has been ad hoc and

largely disorganised, more a practice of dealing with ‘it’ when it occurs, whatever ‘it’ may be,

according to situation and context. Again, the offensively-advantaged nature of cyberspace

becomes relevant here: those who seek to exploit cyberspace become familiar with, and more educated in, the technical infrastructure of cyberspace and cyber-enabled technologies because that knowledge yields an exploitation advantage – consider the hunt for, and use of, zero-day

exploits, and the construction and use of botnets.570 As cyber defence is in large part reactionary,

there tends to be a lag in defensive knowledge, education, and innovation. In addition and as a

defensive mechanism, those seeking to exploit cyberspace profitably (and at length) will learn and

employ cyber security and CCI practices by necessity. As such, I contend that there are a greater

number of individuals and groups with the ability to exploit cyberspace through various means

than there are those who are familiar with even the most passive of CCI practices. This adds to

the offensive advantage currently dominant in the cyber domain.

At the risk of repetition, this is why education and research in the area of CCI are not only important, but crucial to our security at various levels. The more the individual and the population at large know about the security practices that they can employ to increase both their own security and the security of the state, the likelier it becomes that passive CCI, practised across the population, will free greater resources for larger issues of security concern. Eventually, practices and processes that initially require greater effort and mindfulness, such as protecting your password, changing it regularly, employing common sense when checking your

569 Bisson, “‘123456’ Remains the World’s Most Breached Password”; National Cyber Security Centre, “Most Hacked Passwords Revealed as UK Cyber Survey Exposes Gaps in Online Security”; Telford, “1,464 Western Australian Government Officials Used ‘Password123’ as Their Password. Cool, Cool.”
570 A compromised device is often called a zombie, as the actions taken by the device are ‘mindless.’ A botnet is a network of compromised devices, the combined computing power of which can be directed at targets by the botnet controller. See Garrett M. Graff, “The Mirai Botnet Was Part of a College Student Minecraft Scheme,” Wired, December 13, 2017, https://web.archive.org/web/20180210043047/https://www.wired.com/story/mirai-botnet-minecraft-scam-brought-down-the-internet/; Norton, “What Is A Botnet?,” Norton, 2019, https://us.norton.com/internetsecurity-malware-what-is-a-botnet.html.


emails, and not posting compromising personal information in widely accessible public fora, will

become subconscious habit rather than an effortful requirement. Ideally, once a population is

habituated to such practices, security in general is improved as those people become both more

educated and more aware of the dangers that they expose themselves to when mindlessly utilising

cyberspace and cyber-enabled or -assisted technologies. A greater degree of passive CCI, utilised

by a greater percentage of the population, frees up state resources that previously may have been

required for mitigation in the public domain, to be turned to issues with a greater relevance or

danger to national or international security concerns.

One of the drawbacks of passive CCI is, of course, that no matter the degree of public education,

there are those individuals who either refuse to employ such practices, or who, despite willingness and best intentions, are unable to do so. This is an unavoidable pitfall of every foray into mass education on

a particular (security) practice, and one that, to the best of my knowledge, has yet to be successfully

prevented or mitigated. Public outreach can be difficult in the best of circumstances, and in an era

where political divides and clashing ideologies are greater than in recent memory that difficulty

may be exacerbated. Moreover, misinformation and disinformation are constantly being introduced

into the information environment – studies have demonstrated that even when individuals are

shown proof that their beliefs may be based in false information, those beliefs are either more

likely to solidify or will continue to affect their future perceptions even when accepted as false.571

Audience (that is to say, public) education on the identification of misinformation and

disinformation is a necessity for highly networked societies, and will be explored further in chapter

six. It must also be noted, however, that as useful as passive CCI practices are to the individual

and (eventually) to the collective security, unless practiced at scale they will not contribute directly

and specifically to security at the state level, and thus to national and international security in the

cyber domain.

While an overarching international institution dedicated to cyber security is unlikely in the current

political climate, the next stage of CCI practices includes those actions which can be directly linked to a higher degree of security. Active CCI is a step higher than the passive CCI practices discussed in this section, and requires a greater degree of conscious effort than those which are, by definition,

571 This is a concept known as a belief echo - where information proven false still affects perception and filtering of information received in future, or opinions around that subject. Jianing Li, “Toward a Research Agenda on Political Misinformation and Corrective Information,” Political Communication 37, no. 1 (January 2, 2020): 125–35, https://doi.org/10.1080/10584609.2020.1716499; Emily Thorson, “Belief Echoes: The Persistent Effects of Corrected Misinformation,” Political Communication 33, no. 3 (July 2, 2016): 460–80, https://doi.org/10.1080/10584609.2015.1102187.


unconscious or “passive”. The next section will take the conversation one step further, by defining

and discussing the concept of active CCI, and how practices underneath this banner may be

employed by the individual, the organisation and the state in pursuit of a greater degree of security.

Active CCI

Active CCI measures differ from passive measures in one major way: they are undertaken

consciously and in pursuit of a particular goal, against a perceived threat. As noted above, intent is a

crucial element of utility classification, and is a necessary part of active CCI. For the purposes of

this thesis, the particular goal of active CCI is a greater degree of security no matter the entity in

question; the perceived threat is a specific event, actor, or outcome which is preferably avoided.

In this respect, the categorisation of CCI actions mirrors the aggregate categorisation of

intelligence collection in cyberspace as discussed above. If active intelligence collection measures

are those which are undertaken consciously and in pursuit of a specific goal, then active CCI must

be those practices which are consciously and actively pursued in order to counter and deny both

passive and active intelligence collection attempts by other entities. The section above on active

intelligence collection in cyberspace refers also to the fact that the field of information, to which

these categories apply, should not be restricted in nature. It should not be classified or otherwise

actively protected by measures that would require significant (and potentially illegal) actions to

subvert. Following from the cyber hygiene analogy,572 for example, active CCI measures would be

akin to being immunised; seeing the doctor regarding any concerns; washing your hands after

coming into contact with unfamiliar objects or substances. In unrelated analogies, active CCI

measures could also be akin to locking all of your doors and windows at night; setting the parental

controls on Netflix; and hiding your friends’ car keys when they have been drinking. Again, what

sets these apart from the passive actions is that they are consciously chosen, and done so in

anticipation of, or awareness of, a particular threat or threats. As a further note, an active CCI

practice, once habituated, can become unconscious, and then can become passive. In the ideal

scenario, a populace, educated in active CCI, soon internalise the CCI activities, and thus there is

widespread, dynamic passive CCI.

In real terms, active CCI measures include many security practices that you may previously have

heard being encouraged or enforced by employers, friends or family members versed in cyber-

enabled technologies. Practices may include (but are not limited to): changing your device and

account passwords every three months; logging out of your operating account; reducing or

572 Australian Cyber Security Centre, “The Commonwealth Cyber Security Posture in 2019.”


completely avoiding the use of public WiFi; deleting (without opening) emails from unknown or

suspicious senders; utilising firewall software (built-in or purchased). For larger organised groups

or states, active CCI measures will of course be on a larger scale, but many of the practices will be

the same or similar to those of individuals. The important part is that the practices are undertaken

purposefully, with the explicit intent of avoiding intelligence gathering by other parties and

protecting your own information and data, without entering into more proactive measures, which

will be discussed below.
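One of the active CCI habits listed above — changing device and account passwords every three months — can be sketched as a simple age check. The account names, dates, and function name are illustrative assumptions; the point is only that active CCI is deliberate and threat-aware rather than incidental.

```python
# Minimal sketch of one active-CCI habit: flagging accounts whose
# passwords are older than a chosen rotation window (90 days here,
# mirroring the three-month interval in the text). All account names
# and dates are illustrative assumptions.
from datetime import date, timedelta

ROTATION_WINDOW = timedelta(days=90)

def overdue_accounts(last_changed: dict[str, date], today: date) -> list[str]:
    """Return account names whose password age exceeds the rotation window."""
    return sorted(
        name for name, changed in last_changed.items()
        if today - changed > ROTATION_WINDOW
    )

records = {
    "email": date(2021, 1, 4),    # roughly six months old by July
    "banking": date(2021, 6, 1),  # recently rotated
}
print(overdue_accounts(records, today=date(2021, 7, 1)))  # ['email']
```

The conscious choice of a window, and the act of checking against it, is precisely the intent that separates active from passive CCI.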

The state, it must necessarily be argued, pursues active CCI as a matter of course in the

contemporary era. We saw through the analysis of national security documentation in chapter four

that the UK has successfully riskified the cyberspace risk to assets, and in the period 2008-2018

began to engage intelligence and security services in active cyber defence. Because intelligence

operations like espionage can be undertaken at any time, with any goal, by any party, the state must

be constantly on guard against those intelligence operations. This means that everyone in

the employ of the state (and particularly those of the intelligence and security agencies), each

organised group working within the hierarchy of state bureaucracy, as well as the Executive that

represents that state as a single functioning entity, must all constantly and consistently employ

active CCI measures and practices. Unlike the successful riskification of assets, the UK over the

analysed period failed to appropriately elevate the threat to and from audiences in consideration

of their national cyber security policies. I suggest here that as a representative of its constituent

members, the state is responsible for the overall ‘herd immunity’ of the population against

intelligence collection and (dis)information operations by foreign and potentially hostile entities.

As such, the state relies on both organised groups (within and without the state hierarchy) and on

individuals to contribute to herd immunity against malicious cyber-attacks or operations, to reduce

both the overall investment and the overall security costs that the state necessarily bears in assuring

its own security.

When undertaking active CCI, organised groups will also seek measures and practices that will further secure their own information and data, as well as that of their constituent members.

Unlike individuals, whose only security concern is their own, organised groups must undertake to

ensure two levels of security: individual security, as members of the group; and, at the second level,

group security as an aggregated actor. While many of the measures and practices will be the same

as those undertaken by individuals, organised groups must scale up their efforts without verging

on the more proactive measures (to be discussed in the next section) in their active CCI. In addition


to contributing to the collective resilience of both the in-group members (further shoring up the

personal security of those individuals), the organised group, in pursuing active CCI measures,

contributes to the collective resilience and security of the general population and the state, thereby

reducing the weaknesses that that state must cover at the national, regional, and international levels.

For the individual, unlike the (occasionally subconscious or even unconscious) measures that may

be taken without actively being aware of what is being done, active CCI is just that: active. As

mentioned above in general terms, one of the major differences between passive CCI and active

CCI is the “knowingness” of the practice. The individual should knowingly, and meaningfully,

seek to increase their levels of cyber security by undertaking those measures which, to them, are

not unreasonable to secure their information, and which will not deter them from using their

technologies and equipment as they ordinarily would. Individual active CCI will, to follow the

global health analogy, contribute to ‘herd immunity’; that is to say, every individual that actively

seeks to employ active CCI measures in their own lives will contribute to the overall cyber security

of both the general population and the overall state.

The overall degree of security offered by state, group, and individual active CCI measures is

something that is almost impossible to quantitatively measure, due to the multitude of parties

involved and the many practices that could potentially be termed active CCI. Suffice it to say that

the more an individual or group pursues their security by actively and purposefully instituting

measures to deny, repel, or otherwise thwart hostile or malicious activities in cyberspace, the better

the overall state immunity to such activities will become, as those activities will become more costly

to the exploitative entity. The next section will discuss and examine the aforementioned concept

and measures of proactive CCI, at each of the levels of analysis employed in this thesis.

Proactive CCI

Proactive CCI is differentiated from active CCI both in the scale of effort involved and in the variety of practices that constitute it. With active CCI, the goal is to increase the level of security for a given entity by purposefully instituting and practicing measures that will deter, deny, or repel activities targeted to gather information through cyberspace, but it is confined to those measures which do not require significant effort or specialised skills or software. Proactive CCI, however, requires both specialised skill and specialised – occasionally unique, purpose-built – software designed not only to deter, deny, and repel but to actively pursue the would-be exploitative parties, and undertake to fold back their operations, potentially ‘flipping’


them into intelligence-gathering programs of one’s own. Proactive CCI straddles the line between

defensive action and offensive action, including targeting malicious actors. In other instances,

proactive CCI may look like poisoning the well before an attack. A good example of this kind of

proactive CCI is the alleged insertion of false information into their own records by French

authorities in the lead-up to the 2017 election, also known as a ‘cyber-blurring’ strategy.573

Convinced that political actors would likely be the targets of hostile hackers, when President

Macron’s campaign staff were attacked and data exfiltrated and ‘leaked’ in the days preceding the

election, Macron’s head of digital operations cast doubt on the accuracy and utility of any data

leaked because it contained false information planted by the digital operations team.574 This

rendered the information far less valuable as there was no way to ascertain veracity of data, and

deflected attention to the alleged attacker (Russia).575
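The cyber-blurring logic of the Macron-campaign episode can be sketched as follows. The record fields, decoy generator, and ratio are hypothetical illustrations; the principle — seeding an authentic store with plausible false entries, so that any exfiltrated copy is of uncertain veracity — follows the account above.

```python
# Sketch of a 'cyber-blurring' store: authentic records are mixed with
# plausible decoys carrying no marker of authenticity, so an exfiltrated
# copy cannot be taken at face value. Field names, the decoy generator,
# and the ratio are hypothetical illustrations.
import random

def seed_decoys(records: list[dict], decoy_ratio: float, rng: random.Random) -> list[dict]:
    """Return the records interleaved with generated decoys."""
    n_decoys = int(len(records) * decoy_ratio)
    decoys = [
        {"sender": f"staffer{rng.randint(100, 999)}@example.org",
         "subject": rng.choice(["Budget draft", "Rally schedule", "Donor list"]),
         "body": "(fabricated)"}
        for _ in range(n_decoys)
    ]
    mixed = records + decoys
    rng.shuffle(mixed)   # a leaked copy reveals nothing about which entries are real
    return mixed

rng = random.Random(7)
real = [{"sender": "press@example.org", "subject": "Interview", "body": "(authentic)"}] * 10
leaked = seed_decoys(real, decoy_ratio=0.5, rng=rng)
print(len(leaked))   # 15 records, 5 of them false
```

The defender, holding the decoy list, can still separate true from false; the adversary, holding only the leak, cannot, which is exactly the devaluation the text describes.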

Proactive CCI is characterised by the sensitive and often classified nature of both the targets and

the defence mechanisms. To follow a public health analogy, proactive CCI would be akin to a

national, compulsory immunisation scheme in anticipation of or during a pandemic outbreak.

Alternatively, proactive CCI would be akin to targeted chemotherapy and radiation treatments of

cancer patients. The chosen measures are undertaken with knowledge of the specific threat and a

strategic understanding of how each measure will aid in mitigating that threat. Unlike passive and

active CCI, which are solely defensive, however, proactive CCI measures are also offensive in

nature, including the cyberspace equivalent of poisoning wells in hostile territories, or the

planting of false intelligence to see where exactly a leak has sprung. Proactive CCI is the pursuit of

security through any and all means necessary, including offensive measures generally attributed to intelligence rather

than counterintelligence operations.
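The leak-tracing idea just mentioned — planting false intelligence to see where a leak has sprung — can be sketched minimally. Each suspected channel receives a uniquely marked copy of a document, so the variant that later surfaces identifies the leaking channel. The channel names, marker format, and function names are hypothetical illustrations.

```python
# Sketch of planted false intelligence for leak tracing: each suspected
# channel gets a copy bearing a unique, innocuous-looking marker; the
# variant that surfaces later identifies the leaking channel. All names
# and the marker scheme are hypothetical illustrations.

def mark_copies(base_text: str, channels: list[str]) -> dict[str, str]:
    """Give each channel a copy carrying a distinct reference marker."""
    return {ch: f"{base_text} [ref {i:04d}]" for i, ch in enumerate(channels)}

def trace_leak(leaked_text: str, copies: dict[str, str]):
    """Return the channel whose marked copy matches the leaked text, if any."""
    for channel, text in copies.items():
        if text == leaked_text:
            return channel
    return None

copies = mark_copies("Q3 deployment plan", ["liaison-a", "liaison-b", "liaison-c"])
surfaced = copies["liaison-b"]        # simulate one marked copy leaking
print(trace_leak(surfaced, copies))   # liaison-b
```

The offensive character is clear: the defender deliberately feeds an adversary information in order to learn about the adversary's collection channels, which is why such measures sit with proactive rather than passive or active CCI.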

Given the nature of proactive CCI, the wholesale institution of its practice is generally

attributed/attributable to the state, given the financial, technological, and human resources

required to properly pursue it. However, some individuals and organised groups (increasingly,

multinational and transnational corporations such as Google, Apple, Facebook and Microsoft)

573 A cyber-blurring operation distorts the value of hacked data by introducing false information that gets exfiltrated with the authentic information, ‘blurring’ the value of all data because accuracy is uncertain for any of it. Adam Nossiter, David E. Sanger, and Nicole Perlroth, “Hackers Came, but the French Were Prepared,” The New York Times, May 10, 2017, sec. World, https://www.nytimes.com/2017/05/09/world/europe/hackers-came-but-the-french-were-prepared.html.
574 Nossiter, Sanger, and Perlroth; Nicole Perlroth, “Russian Hackers Who Targeted Clinton Appear to Attack France’s Macron,” The New York Times, April 25, 2017, sec. World, https://www.nytimes.com/2017/04/24/world/europe/macron-russian-hacking.html.
575 Boris Toucas, “The Macron Leaks: The Defeat of Informational Warfare,” Center for Strategic & International Studies, May 30, 2017, https://www.csis.org/analysis/macron-leaks-defeat-informational-warfare.


have the technical capacity and resources necessary to apply proactive CCI to their security, and

so I will discuss each level of analysis as I have for both passive and active CCI.

The state is potentially the only actor in cyberspace for whom proactive CCI activities are not only

entirely necessary but considered a native part of holistic security. Moreover, in many jurisdictions,

the state is the only actor that is legally permitted to engage in proactive CCI. The low barrier to

entry for cyber conflict and exploitation, as well as the perception among states that cyberspace is

a natural alternative to kinetic conflict, results in a contested domain that is both offensively

advantaged and within which a variety of actors engage in attack and exploitation.576 Engaging in

proactive CCI activities not only contributes to the resilience and security of the state, but also to

the overall understanding of the national and international threatscape within which states must

operate. Responsible for the general security of the state and its constituent members, the security

agencies of the modern state actively and purposefully create and pursue avenues and measures of

proactive CCI.577 Such measures as have been publicised by whistleblowers and leakers tell us that

not only is the state the constant target of intelligence-gathering activities on the part of foreign

entities, but that the state is actualising proactive measures of CCI in an attempt to negate those

activities, and regain the advantage in cyberspace. While the publicised programs have instigated

debate on such issues as privacy and the legalities concerning the constitutive elements of these

proactive CCI programs, it cannot be overstated that without the CCI measures being instituted

and pursued by governments, the loss of intelligence and of intellectual property would be

astronomically higher than current levels. Proactive CCI is an integral part of national security in

the contemporary international system, particularities and legalities according to national and

international law notwithstanding.

Organised groups face many of the same challenges of proactive CCI as do individuals, and it

should be noted that those groups that have publicly proven capable of proactive CCI have been

in turns vilified and celebrated.578 Corporations, for example, have proved to be the constant target

of intelligence-gathering and hostile cyber operations and have a demonstrated ability to employ

576 Healey and Jervis, “The Escalation Inversion and Other Oddities of Situational Cyber Stability”; Kollars, “Cyber Conflict as an Intelligence Competition in an Era of Open Innovation.”
577 Consider the NSA PRISM program. Greenwald, No Place to Hide: Edward Snowden, the NSA and the Surveillance State; Harding, The Snowden Files: The Inside Story of the World’s Most Wanted Man; Lempert, “PRISM and Boundless Informant”; Sottek and Kopfstein, “Everything You Need to Know about PRISM.”
578 Consider the hacking group Anonymous. See Beran, “The Return of Anonymous”; Zak Doffman, “Anonymous Hackers Target TikTok: ‘Delete This Chinese Spyware Now,’” Forbes, July 1, 2020, https://www.forbes.com/sites/zakdoffman/2020/07/01/anonymous-targets-tiktok-delete-this-chinese-spyware-now/.


proactive CCI measures to either protect their own information or, if unsuccessful in this respect,

to then ‘follow’ the exploitative parties for both attribution and intelligence-gathering operations

of their own.579 Organised groups are inherently more capable than individuals of designing and

deploying proactive CCI, usually being both more financially able and having access to a

greater number of individuals with the skills necessary to the creation and utilisation of proactive

CCI measures. Groups have a greater probability of being targeted for both volume and value of

information possessed; certainly, in the Internet age, corporations have lost significant intellectual

property to intelligence-gathering operations conducted via cyberspace by foreign entities.580

Organised groups, particularly corporations with links to or contracts with government, are highly

lucrative targets for foreign entities. However, it should be noted that given the aforementioned

undecided and problematic nature of proactive CCI, and particularly the practice of so-called

‘hack-backs’, proactive CCI on the part of organised groups can also present the state with an

entirely new set of problems, not least of which being that hacking back is typically illegal.

Individuals that pursue, or intend to pursue, proactive CCI are among the minority. They should

have specific information they wish to protect and/or conceal; they should be the target of foreign

intelligence-seeking activities or operations, or they should be (among those) actively targeting

foreign entities that represent a threat to their own security (individual or national, such as with

patriotic hackers) and thus have a decided interest in proactively pursuing exploitative factions.

They should also be able to, or have access to those individuals who have the technical capacity

to, understand, design and deploy software that will both protect their own systems and

579 Catalin Cimpanu, “FireEye, One of the World’s Largest Security Firms, Discloses Security Breach,” ZDNet, December 8, 2020, https://www.zdnet.com/article/fireeye-one-of-the-worlds-largest-security-firms-discloses-security-breach/; Kieren McCarthy, “Cybersecurity Giant FireEye Says It Was Hacked by Govt-Backed Spies Who Stole Its Crown-Jewels Hacking Tools,” The Register, December 9, 2020, https://www.theregister.com/2020/12/09/fireeye_tools_hacked/; David Neal, “Insider Reveals Details of Google Hacks,” iTnews, April 21, 2010, https://www.itnews.com.au/news/insider-reveals-details-of-google-hacks-172661; David E. Sanger and Nicole Perlroth, “FireEye, a Top Cybersecurity Firm, Says It Was Hacked by a Nation-State,” The New York Times, December 8, 2020, sec. Technology, https://www.nytimes.com/2020/12/08/technology/fireeye-hacked-russians.html; Kim Zetter, “Report: Google Hackers Stole Source Code of Global Password System,” Wired, April 20, 2010, https://www.wired.com/2010/04/google-hackers/.
580 Jeremy Bender, “FBI: A Chinese Hacker Stole Massive Amounts Of Intel On 32 US Military Projects,” Business Insider Australia, July 17, 2014, https://www.businessinsider.com.au/chinese-hackers-stole-f-35-data-2014-7; Committee on Energy and Commerce, “Cyber Espionage and the Theft of U.S. Intellectual Property and Technology,” Pub. L. No. 113–67, § Committee on Energy and Commerce (2013), https://www.govinfo.gov/content/pkg/CHRG-113hhrg86391/html/CHRG-113hhrg86391.htm; Franz-Stefan Gady, “New Snowden Documents Reveal Chinese Behind F-35 Hack,” The Diplomat, January 27, 2015, https://thediplomat.com/2015/01/new-snowden-documents-reveal-chinese-behind-f-35-hack/; Jennifer Runyon, “60 Minutes Investigates Chinese Cyber-Espionage in Wind Industry,” Renewable Energy World (blog), January 18, 2016, https://www.renewableenergyworld.com/wind-power/60-minutes-investigates-chinese-cyber-espionage-in-wind-industry/; Davey Winder, “Lockheed Martin, SpaceX And Tesla Caught In Cyber Attack Crossfire,” Forbes, March 2, 2020, https://www.forbes.com/sites/daveywinder/2020/03/02/lockheed-martin-spacex-and-tesla-caught-in-cyber-attack-crossfire/.

Courteney O’Connor – PhD Thesis 2021

information and proactively deter, deny, repel, and then track and pursue hostile parties, and perform offensive manoeuvres in response. It should be noted that in the current international system and in many sovereign states, ‘hack-backs’, as many of these measures are called, are either illegal or of uncertain legality, and are currently being debated as a potential response.581 Thus, because many governments prohibit proactive CCI, an additional burden falls on those governments to engage in CCI activities to ensure that these private targets of cyber-attacks are protected, and that the overall systems are resilient to such attacks.

The preceding sections have discussed the nature of CCI and how measures of CCI are being deployed. The manner and method according to which CCI is pursued tells us whether the use is tactical or strategic; operational CCI describes how CCI is actually being deployed. The succeeding section of this chapter will examine two categories of operational CCI: tactical and strategic.

Tactical and Strategic CCI

There are two main categories for how CCI operates: tactically and strategically. As in the common understanding of tactics and strategy, tactical CCI is a more micro-level view and use of the technological capacity, whereas strategic CCI is part of the overall (war-fighting) strategy of a given (armed; exploitative; hostile; intelligence-gathering; and/or military) force. There is necessarily overlap between these terms, as certain tools, techniques, and methods can be employed as an operational tactic or to further the overall strategic aims of an operation, but the dichotomy is designed to describe the overall purpose for which CCI measures and practices are deployed.

Tactical and strategic CCI is not undertaken by any single agency or military branch of the modern state, nor by any single group. Jurisdictional ambiguity notwithstanding, operational CCI also needs to be defined according to the intended physical location of the outcome of the operation: domestic or international. That designation will generally reduce the number of parties that could or will be involved in operationalising cyber activities, and reduce the ambiguity over which entity has the right and/or responsibility to conduct operations. Of course, in cyberspace, regardless of agency or branch mandate and the physical location of intended outcomes, infrastructure is such that multiple national jurisdictions may also be involved, and international law then becomes a factor in security calculations.582

581 Rob Lemos, “Why the Hack-Back Is Still the Worst Idea in Cybersecurity,” TechBeacon, accessed April 10, 2021, https://techbeacon.com/security/why-hack-back-still-worst-idea-cybersecurity; Gavin Smith and Valeska Bloch, “The Hack Back: The Legality of Retaliatory Hacking,” Allens: Insight, October 17, 2018, https://www.allens.com.au/insights-news/insights/2018/10/pulse-the-hack-back-the-legality-of-retaliatory-hacking/; Nicholas Winstead, “Hack-Back: Toward A Legal Framework For Cyber Self-Defense,” American University, June 26, 2020, https://www.american.edu/sis/centers/security-technology/hack-back-toward-a-legal-framework-for-cyber-self-defense.cfm.

Chapter 5 – Counterintelligence and Cyberspace

I contend that CCI is deployed when and where necessary, and categorised after the fact as either tactical or strategic in nature. Despite this, an understanding of the use and deployment of CCI tools and techniques is, I argue, a necessary foundation of the field and study of cyberspace intelligence and counterintelligence generally. The sections below, on tactical CCI and strategic CCI respectively, will examine and discuss the definition and nature of each category, and how each is deployed both in conjunction with traditional security practices and on its own, as a separate undertaking.

Tactical CCI

Tactical operations comprise those actions and engagements that, when undertaken, are intended to introduce an advantage into the battle space: they are the “means towards the strategic end.”583 With this in mind, tactical CCI can be understood as those actions, processes, technologies and/or techniques that are designed and utilised in order to acquire, retain, or maintain an advantage in specific, bounded cyber operations, and to contribute to the overall security of the entity in question and/or a specific strategic end. This section will examine where tactical CCI fits into the threat elevation paradigm discussed in chapter three, then explore state usage of CCI before a brief consideration of how organised groups and individuals might use CCI, and finally consider the potential overlap of tactical CCI with strategic CCI.

Tactical CCI employs various techniques in an ad hoc manner in order to mitigate immediate threats. Because tactical CCI is in most cases an operational response rather than a strategic preparation (discussed in the next section), it is assumed in this thesis that tactical CCI operations will occur as a threat mitigation response upon identification of a specific and immediate threat: that is to say, tactical CCI is a de-securitisation technique according to the threat elevation framework.584 Tactical CCI thus seeks to mitigate threats, and to reduce the vulnerability to, and harms arising from, specific cyber-attacks. Supporting this view is the understanding that exploitative cyber-attacks will usually be innovative and novel, and will threaten the interests of the victim actor in a way that is difficult to anticipate or prepare for, and thus beyond the bounds of ordinary political processes. In the context of disinformation as a security threat, the tactical CCI response would be the immediate measures identified by the responding actors that will manage or mitigate immediate dangers. For example, this could consist of flagging the identified false information or removing it from public fora, and potentially identifying the disinformation operation publicly. This carries its own dangers, as the belief echoes of those already affected may result in entrenched faith in false information, but this is the decision-making area that crosses between tactical and strategic CCI.585

582 Allan, “Attribution Issues in Cyberspace”; American Society of International Law, “Beyond National Jurisdiction”; Schmitt and Vihul, “Proxy Wars in Cyberspace.” The Budapest Convention on Cybercrime represents a good foundation upon which to base future agreements or treaties on jurisdiction assignation in cyber-attack cases. See Council of Europe, “Chart of Signatures and Ratifications of Treaty 185.” 583 T.E. Lawrence, “Science of Guerilla Warfare,” in Strategic Studies: A Reader, ed. Thomas G. Mahnken and Joseph A. Maiolo (Oxon: Routledge, 2008), 246. 584 Threat elevation analysis is elaborated in chapter three.

Utilisation of tactical CCI is expected to change according to the aggregate group under examination and the actor’s context within that group, i.e. a small agrarian state with limited widespread connectivity vs. a small, highly connected state, or a six-person organisation/business vs. a 2,000-person organisation/business. That said, some generalised assumptions about the deployment of CCI teams and the utilisation of tactical CCI can be made. I will examine states, organised groups and individuals in my overview of expected usage, though it should be noted that, given the generally active or proactive nature of tactical CCI operations, most observable deployment of tactical CCI is expected to occur at the state level, or on behalf of the state by groups or individuals. States must contend with a variety of actors in cyberspace – by examining individuals and organised groups, we gain a broader understanding of CCI uses by, alongside, and against states.

States

States are expected to be the aggregate group that utilises tactical CCI the most, by the sheer fact that, under the definitions offered by this thesis, tactical CCI is either active or proactive and, as such, should be the responsibility of the sovereign state. The state apparatus is also uniquely constructed for the use of intelligence agencies, and these intelligence agencies are all (to various levels) responsible for their own cyber security. Some of these agencies will also be mandated to defend against intrusions into government networks by hostile actors, and to trace those attacks: this is where tactical CCI should be displayed. “The tactical level of war is where individual battles are executed to achieve military objectives assigned to tactical units or task forces,”586 and these battles contribute to the overall security strategy of the state (or other actor) in question. Because tactical operations are reactionary, and may change ad hoc during engagement, there are few actors outside states or extremely large corporations and specialised research groups that would be technically capable or legally able to undertake such operations.

585 Belief echoes are discussed further in chapter six. 586 Jason Andress and Steve Winterfeld, Cyber Warfare: Techniques, Tactics and Tools for Security Practitioners (Waltham: Elsevier, 2011), 4.

Tactical CCI operations should not be observable in the actions of all aggregate groups identified in this thesis. As has been stated, tactical CCI is an active or proactive form of CCI and, as such, is expected to be seen primarily in the security operations of states, state-affiliated groups or individuals, or hostile actors targeting the state. Tactical CCI is suited to ad hoc, short-notice and/or immediate-concern threats and should therefore be seen as a threat mitigation (de-securitisation) technique rather than as a risk management (de-riskification) technique. In saying that, there may at times be overlap depending on the parties involved in, and actions required for, identified risks and threats. State cyber security agencies responding to ransomware attacks against either government or civilian systems are an example of tactical CCI – identifying the ransomware, reversing the exploit or identifying methods of working around it so that normal operations can recommence, and potentially attributing the attack. The UK NCSC, for example, will provide advice and technical assistance if requested through its incident logging portal, and provides a list of companies certified to provide cyber incident response (CIR) in the case of ransomware.587 The next section will continue the discussion of operational CCI with an examination of strategic CCI.

Organised Groups

The use of tactical CCI by organised groups is also expected to align in large part with their relationship to the state, or with their percentage of market share in their chosen field. Corporate groups, for example, are incentivised to increase their levels of cyber security by customer/client feedback, and by the knowledge that if they are not perceived as secure enough, their market share might decrease. In terms of corporate groups and their relationship to the state, certain levels of security will be contractually obligatory to obtain contracts; without that security, corporations lose out. In an alternate scenario, tactical CCI would be utilised by groups that are acting against the state, or in contravention of state laws and regulations (such as hacker collectives, organised criminal groups or terrorist organisations); such groups would thus require significant investment in, and practice of, tactical CCI.

587 National Cyber Security Centre, “Mitigating Malware and Ransomware Attacks,” National Cyber Security Centre, March 30, 2021, https://www.ncsc.gov.uk/guidance/mitigating-malware-and-ransomware-attacks; National Cyber Security Centre, “Reporting a Cyber Security Incident,” National Cyber Security Centre, 2021, https://report.ncsc.gov.uk/.

One example of an organised group that could potentially employ tactical CCI measures would be the Islamic State, or ISIS.588 ISIS has maintained a considerable online presence and has demonstrated considerable facility in the use of social media platforms for recruitment and propaganda.589 Terror groups generally need to employ CCI measures in order to continue operations both in the physical world and in cyberspace: consider the global War on Terror and the measures taken against terror groups by a variety of states.590 Proactive cyber intelligence and counterintelligence methods and practices in the hands of states that pursue terror groups and their members mean that those groups must make purposeful and extensive use of tactical CCI in order to avoid punitive measures. Not utilising such measures would likely result in the inability of the group to operate, so the use of tactical CCI contributes both to the overall goals of the organisation and to the strategic requirement of operational continuity and capacity to engage.

Individuals

Individual use of tactical CCI will depend on several factors, not least of which is the individual’s position relative to cyberspace and/or national or corporate security. In addition, my contention is that tactical CCI is an active or proactive action and thus outside the operational or actionable reality of the general population. I would expect that tactical CCI would be used mostly by individuals with significant knowledge of, and interest or expertise in, cyber security, such as consultants; contractors; hackers (any variety); or criminals. As such, I would also expect that most individuals using or intending to use CCI would be in some respect connected to state security apparatuses through contracting or consultancy, or would, by use of these practices, seek to avoid state security apparatuses, as is the case with hackers or criminals working against the state, or within the state against other actors.

588 Also known as IS or Daesh. 589 Imran Awan, “Cyber-Extremism: Isis and the Power of Social Media,” Society 54, no. 2 (April 1, 2017): 138–49, https://doi.org/10.1007/s12115-017-0114-0; Kyung-shick Choi, Claire Seungeun Lee, and Robert Cadigan, “Spreading Propaganda in Cyberspace: Comparing Cyber-Resource Usage of Al Qaeda and ISIS,” International Journal of Cybersecurity Intelligence and Cybercrime 1, no. 1 (2018): 21–39; James P. Farwell, “The Media Strategy of ISIS,” Survival 56, no. 6 (November 2, 2014): 49–55, https://doi.org/10.1080/00396338.2014.985436; Christina Schori Liang, “Cyber Jihad: Understanding and Countering Islamic State Propaganda,” GCSP Policy Paper (Geneva: Geneva Centre for Security Policy, 2015), https://www.jugendundmedien.ch/fileadmin/PDFs/anderes/schwerpunkt_Radikalisierung/Cyber_Jihad_-_Understanding_and_Countering_Islamic_State_Propaganda.pdf. 590 Avril McDonald defined the War on Terror as “a multi-faceted counter-terrorism campaign, some aspects of which involve the use of military force, most of it carried out in States [sic] where there is no armed conflict, although aspects of the counter-terrorism campaign assume the characteristics of armed conflict where the US attacks a State [sic] considered to be harbouring or assisting Al Qaeda…” Avril McDonald, “Defining the War on Terror and Status of Detainees: Comments on the Presentation of Judge George Aldrich - ICRC,” Humanitäres Völkerrecht, https://www.icrc.org/en/doc/resources/documents/article/other/5p8avk.htm. See also Council of Europe, “Council of Europe Counter-Terrorism Strategy 2018-2022,” Council of Europe | Committee of Ministers, 2018, https://search.coe.int/cm/Pages/result_details.aspx?ObjectId=09000016808afc96; Council of Europe, “Country Profiles on Counter-Terrorism Capacity,” Council of Europe | Counter-terrorism, 2021, https://www.coe.int/en/web/counter-terrorism/country-profiles; Dirk Haubrich, “September 11, Anti-Terror Laws and Civil Liberties: Britain, France and Germany Compared,” Government and Opposition 38, no. 1 (2003): 3–28, https://doi.org/10.1111/1477-7053.00002; Eva Steiner, “Legislating against Terrorism – The French Approach,” Lecture (London: Chatham House, 2005), https://www.chathamhouse.org/sites/default/files/public/Research/International%20Law/ilp081205.doc; United Nations, “Counter-Terrorism Committee,” United Nations Security Council Counter-Terrorism Committee, 2021, https://www.un.org/sc/ctc/; United Nations Security Council, “20 Years after Adopting Landmark Anti-Terrorism Resolution, Security Council Resolves to Strengthen International Response against Heinous Acts, in Presidential Statement,” United Nations Meetings Coverage and Press Releases, January 12, 2021, https://www.un.org/press/en/2021/sc14408.doc.htm; United States Department of Homeland Security, “Counterterrorism Laws & Regulations,” Department of Homeland Security, June 4, 2009, https://www.dhs.gov/counterterrorism-laws-regulations; United States Department of Justice, “The USA PATRIOT Act: Preserving Life and Liberty.”

Patriotic hackers are one example of the type of individual who may need to engage in tactical CCI practices. Patriotic hackers are those individuals who, while not contracted or employed by the state to whom they declare allegiance, will undertake cyber-attacks on behalf of that state. According to Michael Dahan, patriotic hackers are “more willing to ‘cross the line’ into criminal activity, the purpose being to inflict as much damage as possible upon the enemy. The patriotic hacker seems to drift between ‘defender of the homeland’ to cyber-criminal and back. There is no ‘ethic’ for the patriotic hacker beyond that of nationalism, no manifesto beyond that of patriotism, no code of conduct beyond maximum damage.”591 Upon a perceived insult or threat to the state, and without specific instruction or support, such hackers will undertake to inflict damage upon the identified enemy. In order to avoid security measures or retaliation, they will engage in tactical CCI measures intended to reduce the likelihood of punitive measures against them. One concern is that patriotic hackers will one day cause enough damage to incite an international incident; at the time of writing, this has not yet occurred. A typical example of patriotic hacking is the defacement of government websites,592 as with two hackers indicted in the US in 2020 for a defacement campaign against US government websites in retaliation for the death of Iranian General Qasem Soleimani in January 2020.593

Strategic CCI

Strategic operations are those which are designed and implemented in order to contribute to the overall benefit of the actor, the assumed goal of that actor being victory over the opponent. Whatever the decided strategic end of that actor, multiple actions will have been designated as necessary in order to fulfil what that actor sees as the requirements of victory in a given engagement: taken together, these actions (and the manner in which they will be accomplished) form the strategy of that actor.594 With this general understanding of strategic operations in mind, strategic CCI can be understood as those actions, processes, technologies and/or techniques that are designed and utilised in order to obtain and defend an informational advantage in cyberspace that will contribute to the overall victory of the actor in a specified conflict; the overall approach to strategic CCI will inform how tactical CCI is undertaken. This section will discuss how strategic CCI fits into the threat elevation paradigm; I will examine state use of strategic CCI before briefly discussing how organised groups and individuals might use strategic CCI. I will then analyse the potential overlap of tactical CCI and strategic CCI.

591 Michael Dahan, “Hacking for the Homeland: Patriotic Hackers Versus Hacktivists,” in Proceedings of the 8th International Conference on Information Warfare and Security: ICIW 2013, ed. Doug Hart (Denver: Academic Conferences Limited, 2013), 56. 592 Andress and Winterfeld, Cyber Warfare: Techniques, Tactics and Tools for Security Practitioners, 197. 593 Cimpanu, “FireEye, One of the World’s Largest Security Firms, Discloses Security Breach.”

Unlike tactical CCI, which is considered in this thesis to be a largely reactionary field, strategic CCI looks at the bigger picture. Approaches to strategic CCI may be inferred from national cyber security strategies, as seen in the UK case study in chapter four. Tactical CCI is the short-term operational response to outside risks and threats, undertaken in service to the plan or plans set by the overall strategic CCI outlook.595 In contrast, strategic CCI is part of the strategic preparation for cyber conflict, as well as the strategic response to attack or exploitation by a hostile actor, and represents a more innovative, anticipatory CCI. Strategic CCI, unlike tactical CCI, is expected to be seen in, and used by, all three aggregate groups identified in this thesis: states, organised groups, and individuals. This is in large part because while certain actors may be unable or unwilling to employ tactical CCI, they will still have an overall interest in, and plan for, strategic CCI: that is to say, they have a vested interest in making sure that there is a general plan and willingness for strategic initiatives to prevent hostile actors from easily accessing their systems and information. Given the increasing understanding of the potential dangers of disinformation, for example, both states like the UK596 and larger corporations such as Apple and Twitter are developing their policy responses to disinformation actors and operations – this will be considered further in chapter six.

States and large corporations depend in large part on the trust of their citizens (for the former) and their customers (for the latter). There are serious and negative implications for actors that neglect to put in place strategies and policies for the management and mitigation of disinformation in the current era, not least of which is the erosion of trust in both government and mainstream media.597

594 Lawrence, “Science of Guerilla Warfare,” 246. 595 Lawrence, 246. 596 In the case study time frame, I found that the UK had failed to comprehensively elevate the threat to audiences posed by cyberspace, which includes the risks of disinformation to electoral and democratic integrity. However, some progress was made in the intervening time between 2018 and 2021, with the publication of the Integrated Review. Given that the Integrated Review and the associated Defence Command Paper fall outside the case study parameters, I consider these documents in the thesis conclusion in the context of recent developments.

Utilisation of strategic CCI is expected to change according to the group under examination; in this respect, it is treated no differently to tactical CCI. The practicability, efficiency, and willingness to commit to a strategic CCI outlook are also expected to vary according to the capacities and interests of the actors within the aggregate groups. States will invest in those measures that are considered necessary and practicable – the priorities of states will vary according to several different factors, not least of which are geopolitical context and record of hostilities. That is to say, the assumption above in the tactical CCI section about small state vs. large state is also held to be true in any consideration of strategic CCI. Nevertheless, a strategic outlook for CCI may be more common, as even those groups unable or unwilling to actively and proactively pursue tactical CCI operations may be more inclined to initiate strategic policies for CCI.

States

More than the other aggregate groups, it is expected that in the contemporary age states would be in possession of, and actively enforcing, strategic CCI policies and outlooks.598 While strategic CCI is expected to be used by the other two aggregate groups, states have both more to defend and more strategic vulnerabilities in the case of failure. The intelligence and security apparatus of the state, again, will be mandated with certain security jurisdictions, among which will be CCI. It is expected that there will be explicit norms and regulations for each state, in light of strategic security calculations, as to what the strategic outlook is for cyber security and thus CCI. These can be seen most often in the official national security and cyber security strategies and policies published by governments, as with the variety of UK documents analysed in chapter four. The actions that state actors will be able to undertake in preparation for, and in response to, hostile actions in cyberspace will define the use and operation of tactical CCI. This is therefore expected to be a matter of law and policy (though the explicit details may necessarily be classified and as such unavailable to researchers).

597 House of Commons Digital, Culture, Media and Sport Committee, “Disinformation and ‘Fake News’: Final Report,” 2019, https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/179102.htm; see also Golovchenko, Hartmann, and Adler-Nissen, “State, Media and Civil Society in the Information Warfare over Ukraine: Citizen Curators of Digital Disinformation.” 598 According to the Center for Strategic and International Studies, 78 states maintain national cyber security strategies; 31 have military cyber strategies; 63 have cyber security mitigation and resilience plans for critical infrastructure; and 91 have cybercrime strategies. See Center for Strategic and International Studies, “Global Cyber Strategies Index,” Center for Strategic and International Studies, 2021, https://www.csis.org/programs/strategic-technologies-program/cybersecurity-and-governance/global-cyber-strategies-index. The International Telecommunications Union lists 100 states with current cyber security strategies or decrees, with an additional 12 states having drafts or drafts in progress. See International Telecommunications Union, “National Cybersecurity Strategies Repository,” ITU | International Telecommunications Union, 2021, https://www.itu.int:443/en/ITU-D/Cybersecurity/Pages/National-Strategies-repository.aspx. The UK’s strategies and their relation to CCI are examined in chapter four of this thesis.

Strategic CCI is expected to be observable in all aggregate groups identified in this thesis. While strategic CCI, like tactical CCI, is both an active and occasionally proactive form of CCI, implementation of strategy may not necessarily require the technical capacity or the continued investment that maintenance of tactical CCI (cyber response teams) would. Unlike tactical CCI, strategic CCI is a longer-term form of risk management. There may of course be overlap in terms of the risks or threats in any particular situation, but knowing the difference between tactical and strategic CCI is beneficial to an understanding of both CCI specifically and cyber security generally.

Organised Groups

The use of strategic CCI by organised groups is also expected to align, in large part, with those groups identified as having the potential to use tactical CCI. Again, because strategic CCI does not necessarily require the technical capacity or the continued investment in the maintenance of cyber response teams that tactical CCI would, it may be the case that strategic CCI is also more common among actors such as corporate groups. As with tactical CCI, it is likely that corporate groups would be contractually obliged to ensure certain levels of security, among which would be an overall cyber security strategy in addition to the actions that would be undertaken in the event of a breach (tactical CCI). Organised hacker groups, on the other hand, are incentivised to create and then employ and enforce a strategic outlook in terms of CCI.

In terms of organised groups that employ strategic CCI, defence and government contractors that specialise in cyber-specific or general defence technologies and services must have confidence in their ability to secure their systems and those of their clients. In respect to government contracts, it is often a requirement that contractors identify the measures that will be taken to ensure the integrity of client data – it is in the interest of organised groups like consulting firms to ensure they have measures in place to avoid undue damage to systems in the event of a cyber-attack. If undertaking offensive action in cyberspace, these groups need to have a plan in place that will increase the probability of success and anticipate resistance or retaliation, such that defensive options (on-the-spot, reactionary tactical CCI measures) are prepared. The UK National Cyber Security Centre maintains a list of Cyber Assessment Framework (CAF) certified companies that provide cyber incident response services in the case of cyber-attack (or, in the case of proactive entities, advice and tools for cyber defence).599

Individuals

An individual’s approach to, and use of, strategic CCI will depend on a number of factors. These include (but are not limited to) the individual’s position relative to cyberspace and/or national security; the individual’s personal, economic, or political interest in cyber security; and the relative worth of that individual in terms of data availability. My contention is that strategic CCI covers both risk management and threat mitigation, and can be both active and proactive, but that utilisation of strategic CCI should be more common than that of tactical CCI, as it does not necessarily require the technical knowledge or capacity that tactical CCI does. In saying this, I would expect the same array of individuals that could potentially use tactical CCI to be interested in, and likely to utilise, strategic CCI: experts; consultants; contractors; and hackers (any variety).

Returning to the example of the patriotic hacker, these individuals are also likely to engage in strategic CCI planning and practices. In order to engage in cyber conflict, particularly against a state or against domestic opponents of the state, there must be a level of confidence in the hacker’s ability to keep their identity removed from the conflict – a level of confidence such that the actions taken either will not be traced back to them, or that retaliatory action will be unlikely due to political calculations. Unlike in the traditional (physical) domain, where intelligence and counterintelligence considerations remained the remit of state intelligence agencies, in cyberspace the barrier to entry to conflict is much lower and conflict between different classes of actors more likely. One of the goals of this thesis is to build the body of literature around CCI, to conceptualise what CCI looks like for three classes of actor (states, organised groups, and individuals), and to examine how CCI on the part of all three classes interacts with the modern state.

Employers of CCI

As I have mentioned previously, there are many parties active in cyberspace that utilise CCI

methods, techniques, and practices, whether knowingly or not. Previous sections of this thesis

have discussed the types of CCI an entity may use (passive; active; proactive), and the manner in

599 As of writing, there are nine CAF-certified CIR services on the NSCS web page: BAE, Context, Deloitte LLP, F-Secure, KPMG, Mandiant, NCC Group, PwC, and SecureWorks. National Cyber Security Centre, “All Products & Services,” National Cyber Security Centre, 2021, https://www.ncsc.gov.uk/section/products-services/all-products-services-categories. For further information on the Cyber Assessment Framework, see National Cyber Security Centre, “NCSC CAF Guidance,” National Cyber Security Centre, 2021, https://www.ncsc.gov.uk/section/private-sector-cni/products-services.

Courteney O’Connor – PhD Thesis 2021


which these types may be employed (strategic; tactical). The uptake of CCI technologies and

practices also presents security challenges to various parties in cyberspace, but particularly the state.

Indeed, the use of CCI technologies and practices on the part of individuals and organised groups

must be one of the great frustrations of twenty-first-century intelligence communities,600 as it

creates higher barriers to intelligence collection efforts targeted at legitimate threats, such as

terrorist groups or hostile governments. The more frequent the usage of CCI technologies and

practices, the more time-consuming and expensive these intelligence collection efforts must be.

The use of counterintelligence practices by many of these actors occurs at different levels, and for

different purposes. It is the goal of this thesis to contribute to the overall understanding of CCI as

a discipline, and to begin building the body of literature surrounding the way different classes of

actor utilise CCI. Intelligence has traditionally been the domain of the sovereign state – the

introduction of new conflict actors that can and do utilise CCI affects the pattern of interactions

in cyberspace. Following a brief discussion on actor goals and relative power, this section will

examine how these three classes of actor (states, organised groups, and individuals) utilise

cyberspace and employ practices and technologies designed to improve their security and

resilience.

Goals and Aims

The (suspected or stated) goals or aims of actors will change with the nature of the actor in

question. The generally defensive posture of corporate response groups, for example, is

diametrically opposite to the generally aggressive posture of NCGs like Anonymous. In addition,

the goals of an organisation like Anonymous change regularly, so there is a fluidity of purpose that

one would not see as frequently in corporate response groups, nor generally in the state-affiliated

groups that take direction from state policy. On the other hand, state actors are likely to engage

more in offensive actions in cyberspace, as they seek a position of relative dominance over

competing actors. Alternately, they may engage more in intelligence collection across state

boundaries to aid their own domestic industry, as China has been accused of doing.601 Because the goals

and aims of the various types and sub-types of actor are incredibly context-dependent, it is difficult

to ascertain what kinds of CCI practices they may employ at any given time. However, it is fair to

600 Kahney, “The FBI Wanted a Backdoor to the IPhone. Tim Cook Said No”; Kharpal, “Apple vs FBI.” 601 Nicholas Eftimiades, “The Impact of Chinese Espionage on the United States,” The Diplomat, December 4, 2018, https://thediplomat.com/2018/12/the-impact-of-chinese-espionage-on-the-united-states/; Christopher Wray, “Responding Effectively to the Chinese Economic Espionage Threat,” Speech, Federal Bureau of Investigation, February 6, 2020, https://www.fbi.gov/news/speeches/responding-effectively-to-the-chinese-economic-espionage-threat.

Chapter 5 – Counterintelligence and Cyberspace


argue that certain practices and methods would be generally applicable to any class of actor no

matter the setting or other actors involved. It is expected that states, organised groups and

individuals would employ the basic ‘cyber hygiene’ practices discussed in previous

sections: protecting and changing passwords; restricting access to certain information to those who

have a real need of it; and putting in place processes to deny malicious actors access to

their data. Depending on the actor, the aim, and the competing party, more proactive CCI practices

may also be employed, involving ‘hack-backs’ and overtly publicising attribution of (attempted)

cyber-attacks, while potentially going on to undermine the parties that engaged in the attack

in the first instance in order to collect intelligence.602

Relative Power

The relative power of actors to affect each other will depend on the type or sub-type in question,

and on the relative size of that actor and the resources that they wield. States that have highly

interconnected and cyber-integrated economies and security considerations will be both more

capable of effecting efficient CCI and more likely to be targeted for the value of the information

and resources that are being protected. The larger, heavily industrialised, and cyber-connected

states like the US, UK, China and Russia are extremely active in both intelligence collection and

counterintelligence operations as part of ongoing risk management in terms of national security.

States like Estonia, which are highly connected and dependent on cyberspace technologies and

platforms, also have vested interests in cyber capacity and agility. Geostrategic considerations also

come into play when states design cyber strategies and invest in cyber capacities – Estonia is not a

particularly large country, but, as seen in chapter three in the discussion of the Estonian experience

with cyber-attacks in 2007, its proximity to and historical relationship with Russia make it more

sensitive to, and active in, cyber operations. Beyond state-to-state interactions, resource capacity

as well as domestic and international law make states influential in relation to the ability of

organised groups and individuals to act in cyberspace.

Organised groups such as corporations, particularly mega-corporations like Google or Amazon,

have a greater capacity to invest in cyber capabilities, and therefore a greater power relative to

smaller states (those less developed and less interconnected), smaller groups, and individuals. In terms

of corporate response groups, Apple will be able to employ a greater number of individuals and

invest a higher amount into cyber innovation and defence than, for example, a start-up company

602 This is not a complete listing of the potential proactive CCI practices an organised group may employ; to enumerate them would require a greater scope than this thesis allows.


with an employee roster of twelve people. Anonymous will have a higher probability of affecting

an individual, another group, or indeed a state representative actor in the way it wishes than will

smaller hacker collectives of only a handful of people. State-affiliated groups like those frequently

linked with the Russian Federation or the PRC will have a higher probability of adversely affecting

both other organised groups and states than will corporate response groups, for example. Since

the relative power of an organised group will inform where in the threat elevation hierarchy that

particular organisation will fall, it is also possible that a higher degree of risk management or threat

mitigation will result in the decreased efficiency or relative power of that group. When parties are

known to be hostile and aggressive in cyberspace, defences can more easily be created that are

specific to those actors, and in defence of the data most likely to be of interest to them. That is

not to say that the defences will be perfect or even 100% effective; simply that known actors can

be defended against more readily.

Individuals are more capable of affecting each other than they are of affecting groups or states by

virtue of the latter being generally more capable of marshalling both defensive and offensive

resources. Yet the individual, as part of an aggregate, has great influence over the shaping of cyber-

enabled interactions. In terms of the relative power of individuals to affect organised groups and

states, one must only look to the potential (and demonstrated) consequences of disinformation on

recent elections to see that the psychological state and thus the voting behaviour of individuals in

aggregate can have a massive effect on the outcome of leadership races and the succeeding national

and international actions and policies of the elected administration. The ability of individuals to

ascertain true from false, and engage in security and CCI practices, will have a major effect on

socio-political and strategic outcomes, which is why it is crucial that threats to democratic

audiences are appropriately elevated.

States

States are arguably the most important actors in cyberspace to actively and openly employ

intelligence agencies for both collection and counterintelligence operations. Obviously, intelligence

communities are an integral element of the security apparatus of the modern state, and, as such,

are responsible for obviating or at least providing information on a great deal of the security

concerns of any given government. Intelligence communities vary in size and operational mandates

according to the political and geographic security concerns of different states, but all intelligence

communities will have both intelligence collection and counterintelligence capacities.603 The degree

603 Johnson and Wirtz, Intelligence; Lowenthal, Intelligence: From Secrets to Policy.


of efficacy, funding and capacity also varies greatly, and often (in democracies at least) according

to the political and strategic goals of the incumbent administration and the global threat context.

Whether or not there are specific agencies mandated for particular jurisdictions or tactical/strategic

concerns is also a matter of context and government.

The security priorities of states are manifold, and too numerous to discuss in any true detail within

this thesis. For the purposes of this thesis, rather than discuss the variety of threats facing the

modern state, I will refer only to threats and risks as a class, rather than as specific varieties of

(potential) danger: intelligence and counterintelligence apparatuses, then, focus on securing the

state against internal and external threats and risks, according to mandate and contextual

requirement. As the modern state depends increasingly on networked technology for both

domestic and international interactions and transactions of all kinds, more investment and

development is required in the areas associated with cyber technologies. Because the state is

responsible for its citizens’ security as well as its own continued existence, and because the

international system is increasingly interdependent, the safety of data is of paramount importance.

When one considers the hacks that have stolen terabytes of data and blueprints for current and

developing technologies, it is clear that there is an enormous financial and socio-political toll to be paid for

unsecured and insecure data.604 State institutions also suffer a loss of public trust when significant

events such as large-scale hacks occur, as the public begins to question whether and how the state

can ensure the security of private, secret or otherwise classified information in an increasingly

hostile information environment.

Additionally, states are not equal in all things. Nor are all states operating under a shared set of

assumptions about each other, or the world in general; if one accepts that all states will try to secure

their own interests above the interests of other states, then it is inevitable that states will conduct

intelligence operations against each other.605 As such, it is imperative that there be highly capable

and efficient counterintelligence offices, agencies, or branches upon which the state can depend in

604 For example, the Joint Strike Fighter. Department of Justice Office of Public Affairs, “Two Chinese Hackers Working with the Ministry of State Security Charged with Global Computer Intrusion Campaign Targeting Intellectual Property and Confidential Business Information, Including COVID-19 Research,” The United States Department of Justice, July 21, 2020, https://www.justice.gov/opa/pr/two-chinese-hackers-working-ministry-state-security-charged-global-computer-intrusion; Dorling, “China Stole Plans for a New Fighter Plane, Spy Documents Have Revealed”; Gady, “New Snowden Documents Reveal Chinese Behind F-35 Hack”; Westbrook, “Joint Strike Fighter Plans Stolen in Australia Cyber Attack.” 605 While enshrined in domestic law, there are no international laws against espionage. See Veronika Prochko, “The International Legal View of Espionage,” E-International Relations (blog), March 30, 2018, https://www.e-ir.info/2018/03/30/the-international-legal-view-of-espionage/; A John Radsan, “The Unresolved Equation of Espionage and International Law,” Michigan Journal of International Law 28, no. 3 (2007): 595–623.


order to both secure and defend their own secrets, as well as undertake operations to discover

which entities are conducting operations against the state. In addition to the threat from foreign

and potentially hostile governments, the modern state is also increasingly subject to operations

conducted by organised groups (some of which may be supported or sanctioned by other states)

and individuals that are highly capable operators in the cyber environment.606 The ability of the

state counterintelligence apparatus to deter these actors and operations, and mitigate the harms, is

of the highest importance. In cyberspace, however, it is an impossible task to ensure the

complete security and privacy of the entirety of state secrets, or indeed the entirety of private

or confidential information of any kind. While the state (generally speaking) has access to a greater

array of resources than most other entities,607 it is also true that the state has a much larger stable

of concerns than do most other parties, and they also have a wider array of adversaries attempting

to undermine their security at any given time. Resources must be apportioned to the most valuable

of assets, and, even then, the offensively advantaged nature of cyberspace means that even with

significant investment and development, CCI services are often ‘on the back foot’.

This is not to say that the state does not have functional and effective counterintelligence services

and practices; simply that not everything can be perfectly secured. It should be noted that despite

the cyber context of this particular thesis, there is always a human element to intelligence: the

greatest of firewalls can be designed by the greatest computer engineers alive and yet can still be

defeated by somebody failing to keep their password secret, or failing to change their password

from the factory default, or failing to install and run software patches.608 Part of the Russian

interference against the Clinton campaign in the 2016 US presidential election involved phishing emails sent to former and

current campaign staff; this tactic succeeded against John Podesta, Clinton’s campaign chairman,

who was tricked into entering his password into a false site and became the victim of a hack-and-

leak operation.609 There is also the possibility that cyber security policy may not be followed, or

basic cyber security practices be ignored. In 2018 it was revealed by The Telegraph that sensitive

606 Kollars, “Cyber Conflict as an Intelligence Competition in an Era of Open Innovation.” 607 Excepting the largest of multinational corporations, some of which have higher gross domestic product (GDP) than some States. Fernando Belinchón and Qayyah Moynihan, “25 Giant Companies That Earn More Than Entire Countries,” Business Insider, July 25, 2018, https://www.businessinsider.com/25-giant-companies-that-earn-more-than-entire-countries-2018-7?r=AU&IR=T; Eelke Heemskerk, Jan Fichtner, and Milan Babic, “Who Is More Powerful – States or Corporations?,” The Conversation, July 11, 2018, http://theconversation.com/who-is-more-powerful-states-or-corporations-99616. 608 Bhargava, “Human Error, We Meet Again”; Bisson, “‘123456’ Remains the World’s Most Breached Password”; Miller, “The Weakest Link.” 609 Barrett, “DNC Lawsuit Against Russia Reveals New Details About 2016 Hack”; Graff, “The Mirai Botnet Was Part of a College Student Minecraft Scheme”; Lily Hay Newman, “The Midterm Elections Are Already Under Attack,” Wired, July 20, 2018, https://www.wired.com/story/midterm-elections-vulnerabilities-phishing-ddos/; Krawchenko et al., “The John Podesta Emails Released by WikiLeaks.”


documents originating in the Cabinet Office, Home Office, and National Health Service of the

UK had been “accessible on Google for up to four years” because civil servants used public Trello

pages to store sensitive documents – including those dealing with MI5 and counterterrorism –

despite being instructed not to do exactly that.610 Examples like these illustrate both how central

the human factor is in considerations of cyber security, and why the capacity of the individual to

be both a vulnerability and a threat vector to state security needs to be comprehensively elevated.

That said, it could be argued that states have the most to lose in a cyber-attack, and thus should

be expected to develop considerably higher capacity in CCI than any other aggregate currently

operating – and this is a fair assertion. In the 2013 UNIDIR Cyber Index, more than 90 states

were listed as maintaining computer emergency response teams; even more were tasking their

agencies with cyber security responsibilities.611 Since then, cyber policies have increasingly made

their way into both popular thought and political administration, with many states either

designating cyber agencies and commands, or adjusting existing agency mandates to ensure that

CCI is being considered and implemented by at least one section of government or the military.

As seen in the case study in chapter four, national and cyber-specific security documentation612

has developed significantly over time, and that development in large part aligns with the

increasing concern over cyberspace being a threat vector in the contemporary international

threatscape, and the need for a coordinated approach to CCI. This thesis contributes to a general

understanding of what CCI is; how and by whom it is employed; and how the threat perception

of cyberspace has been elevated.

Organised Groups

Within the aggregate model employed in this thesis, the second type of actor in cyberspace is the

organised group. Further, within this particular type, I have identified three sub-types within which

most, if not all, group actors in cyberspace should fall. These sub-types are:

a) the state-affiliated or -sponsored group (overt or otherwise)
b) the non-affiliated or -sponsored citizen group
c) the corporate group.

610 James Cook and Joseph Archer, “Telegraph Investigation: Google Search Exposes Sensitive Files and Emails from inside the Government and the NHS,” The Telegraph, April 18, 2020, https://web.archive.org/web/20200418181314/https://www.telegraph.co.uk/technology/2018/07/21/telegraph-investigation-google-search-exposes-sensitive-government/. 611 United Nations Institute for Disarmament Research, “The Cyber Index: International Security Trends and Realities.” United Nations Institute for Disarmament Research. 612 For the purposes of this thesis, national security documentation herein refers to national security strategies; cyber security strategies; and security reviews.


Following a brief discussion of these sub-types, this section will consider the security concerns and

vulnerabilities of this actor type, the potential aims of organised groups, their power relative to

other actors in cyberspace, and the intersection of the organised group actors with other actors in

the system. The ways in which organised groups are capable of acting, and their motivations in so

doing, are estimated to roughly coincide with their perception of the relative risk or threat that

cyber-enabled attacks represent to their interests.

State-affiliated and/or State-sponsored Groups

The first of the organised group sub-types I will discuss is that which is linked, however closely or

loosely, to a sovereign state. One difficulty concerning these groups is that of positively ascertaining

state links, since these groups will usually go to great lengths to obscure their (national) affiliation,

and the states suspected of sponsoring them will contend that they are doing no such thing. The use

of state-sponsored or -affiliated groups (SSAGs) is, in many cases, a way for states to undertake

operations in a relatively risk-free way, given that

officially there are no links of support between the government and the groups that may be taking

instruction or direction (however loosely defined) from government representatives. Examples of

groups suspected to be affiliated with their national governments include APT41 (also known as

DoubleDragon) from the PRC,613 and Energetic Bear (or Crouching Yeti) from the Russian

Federation.614 Groups that represent a significant concern are assigned (in the Western system, at

least) an APT numerical designation, for Advanced Persistent Threat.615

While this class of actor is state-sponsored or state-affiliated and so has either instruction or

support from state authorities, the existence of such groups and their capacity to engage with both

state and non-state actors in cyberspace speaks to the importance of CCI – not just to these actors,

but to their targets. The number of capable actors in the cyber domain that have the capacity to

affect state action and policy is growing; SSAGs, with the support (official or nonofficial) of states,

613 Nalani Fraser et al., “APT41: A Dual Espionage and Cyber Crime Operation,” FireEye, August 7, 2019, https://www.fireeye.com/blog/threat-research/2019/08/apt41-dual-espionage-and-cyber-crime-operation.html. 614 Cimpanu, “FireEye, One of the World’s Largest Security Firms, Discloses Security Breach”; Andy Greenberg, “The NSA Confirms It: Russia Hacked French Election ‘Infrastructure,’” Wired, May 9, 2017, https://www.wired.com/2017/05/nsa-director-confirms-russia-hacked-french-election-infrastructure/; Kaspersky ICS CERT, “Energetic Bear / Crouching Yeti: Attacks on Servers,” Kaspersky ICS CERT | Kaspersky Industrial Control Systems Cyber Emergency Response Team (blog), April 23, 2018, https://ics-cert.kaspersky.com/reports/2018/04/23/energetic-bear-crouching-yeti-attacks-on-servers/. 615 FireEye, “Advanced Persistent Threat Groups (APT Groups),” FireEye, 2021, https://www.fireeye.com/current-threats/apt-groups.html.


are increasingly targeting not only states, but businesses within those states.616 Of the 10,000

victims of state-sponsored cyber-attacks identified by Microsoft in 2019, most were businesses,

and many of the attacks were conducted for the purposes of either political influence or intelligence collection.617

It is crucial that states, organised groups (including businesses), and individuals approach CCI in a

coordinated manner, and fully understand the risks of not investing sufficiently in cyber security

and resilience.

Non-affiliated Citizen Groups

The second of the identified sub-types of the organised group aggregate is that which is not

affiliated to or dependent on any national government, but which is capable of acting with unity

of purpose and affecting the international political system through their actions in cyberspace, to

varying degrees. It is important to note that, while these groups are herein referred to as non-

affiliated citizen groups (NCGs), the members of such groups should not be confused with

individual actors, which will be discussed in a subsequent section. These are generally understood

as not acting in aggregate toward a common goal according to the aggregate model utilised in this

thesis. NCGs, by definition, must remain unaffiliated with a national government; have a decision-

making apparatus capable of setting direction for the group; and act with unity of purpose toward

the attainment of a specific goal. Groups such as Anonymous618 are examples of the non-affiliated

citizen groups as they are understood in this thesis.

The relevance of CCI to NCGs is similar in many ways to its relevance to state-affiliated groups,

with the addendum that these groups act without the protection or tacit permission of state actors.

The ability of these groups to engage with other actors in cyberspace, and to continue to exist as

actors themselves, is dependent at least in part on the security practices and principles undertaken

by the members therein. For NCGs like Anonymous, the use of CCI measures decreases the

likelihood of identification and (potentially) prosecution of individual members engaging in cyber

conflict operations. Imperfect CCI practices can lead to unmasking and prosecution, as was the

case with Hector ‘Sabu’ Monsegur in 2011: identified by federal agents as one of the LulzSec

616 Adi Gaskell, “State Sponsored Cyberattacks Are Happening Right Now,” CyberNews, June 23, 2020, https://cybernews.com/news/state-sponsored-cyberattacks-are-happening-right-now/; Gaskell, “The Rise in State-Sponsored Hacking in 2020.” 617 Gaskell, “State Sponsored Cyberattacks Are Happening Right Now”; Gaskell, “The Rise in State-Sponsored Hacking in 2020.” 618 A global hacker collective which, while not necessarily hierarchical, does have a decision-making apparatus through which it designates targets for the collective and goals toward which they work.


hackers, Monsegur was successfully ‘flipped,’ becoming a federal informant.619 His information

led to the 2013 prosecution and incarceration of Anonymous leader Jeremy Hammond, speaking

to the undeniable importance and impact of the human element in any CCI operation, plan, or

practice.620

Corporate Response Groups

The final sub-type of organised groups identified herein is the corporate response group, or CRG.

Specific to corporations (irrespective of size or regional/global reach), CRGs are the branches of

corporate bodies responsible for the cyber security and cyber response of those bodies and their

interests, affiliates, and clients/customers. Because cyber security is a burgeoning field rather than

a traditional and longstanding security concern, and because, unlike more traditional

concerns, cyberspace is less well regulated globally, CRGs face a significant number of

challenges in protecting interests and responding to threats. In many countries, a cyber response

to an attack (usually termed a hack-back) has been deemed illegal, and so CRGs must be

innovative and extremely responsive to risk and threat. CRGs in the financial sector have proven

particularly effective, often driving innovation and policy, and sharing intelligence on risk

assessment and security vulnerabilities.621 For the mega corporations such as Apple, Facebook,

Google and Microsoft, security concerns are requiring increasing research, development and

innovation, as the consumer demands greater security of private data and Internet traffic. In some

respects, the security concerns and vulnerabilities mirror those of the state, albeit usually on a

smaller scale. Malicious software is a particular concern to corporate response groups, for example,

which, in addition to the immediate losses of intellectual property and client data in the case of a

significant data breach, may suffer a loss of reputation or public trust that then has knock-on

economic effects. The latter may result, in extreme cases, in the bankruptcy or closure of the

corporation in question.622 In 2009, Google Inc. suffered a cyber-attack, generally

619 Andy Greenberg, “Anonymous’ Most Notorious Hacker Is Back, and He’s Gone Legit,” Wired, October 21, 2016, https://www.wired.com/2016/10/anonymous-notorious-hacker-back-hes-gone-legit/. 620 Greenberg; Ed Pilkington, “Jeremy Hammond: FBI Directed My Attacks on Foreign Government Sites,” the Guardian, November 15, 2013, http://www.theguardian.com/world/2013/nov/15/jeremy-hammond-fbi-directed-attacks-foreign-government. 621 Deloitte, “Transforming Cybersecurity in the Financial Services Industry: New Approaches for an Evolving Threat Landscape” (Johannesburg: Deloitte, 2014); Adrian Nish, Saher Naumaan, and James Muir, “Enduring Cyber Threats and Emerging Challenges to the Financial Sector,” Carnegie Endowment for International Peace, 2020, https://carnegieendowment.org/2020/11/18/enduring-cyber-threats-and-emerging-challenges-to-financial-sector-pub-83239; World Bank Group, “Financial Sector’s Cybersecurity: Regulations and Supervision” (Washington, D.C.: The World Bank Group, 2018). 622 Following a hack of trust certificate issuers, the Dutch certificate authority DigiNotar was forced into bankruptcy after a hack resulted in the issuance of fake trust certificates that may have exposed around 300,000 Iranians to data interception. They were unable to recover from the loss of trust in their services. Espiner, “Hack Attack Forces DigiNotar Bankruptcy.”


attributed to parties geo-located in the PRC, and eventually discovered that they had lost the

‘crown jewels’ of the Google empire: more specifically, the search algorithms that underpin the

Google search engine.623 Intrusion defence is, therefore, as much a concern for corporate actors

as it is for state governments.

Microsoft, for example, maintains the Microsoft Security Response Center, the stated mission of

which is “to protect customers from being harmed by security vulnerabilities in Microsoft’s

products and services, and to rapidly repulse attacks against the Microsoft Cloud,” and the Center goes on

to affirm that cyber security and defence is “not a static game: it requires continual engagement

and innovation.”624 Given that larger corporations are targets for both state and non-state actors

in cyberspace, and also contract services to both, Microsoft found it necessary to maintain

response teams and engage in both tactical and strategic CCI operations and practices.

In October 2020, Microsoft announced that in concert with telecommunications partners, they

had taken action against, and disrupted, the botnet operation known as Trickbot, first identified in

2016.625 Microsoft and its partners were granted approval by a US Court “to disable the IP

addresses, render the content stored on the command and control servers inaccessible, suspend

all services to the botnet operators, and block any effort by the Trickbot operators to purchase or

lease additional servers.”626 In essence, Microsoft employed tactical CCI measures against the

infrastructure of a botnet comprising more than one million devices, contributing to the overall

strategic CCI goal of operative security for not just Microsoft, but any actor that has been, or

would have been, targeted by the botnet’s (unknown) operators.

Individuals

Individual use of CCI practices and methods is also context-dependent. Employment

status and specialisation, social position, knowledge of cyber-enabled threats, and

general interest will all play a role in how much attention an individual pays to their personal

cyber security, and thus their CCI practices. It should be noted that in this section (and in this

thesis generally), little attention is devoted to those individuals, and indeed organisations, that

either do not use CCI practices (for lack of knowledge) or choose not to utilise them. They are

623 Neal, “Insider Reveals Details of Google Hacks”; Zetter, “Report.” 624 Microsoft Security Response Center, “MSRC | Our Mission,” Microsoft, 2021, https://www.microsoft.com/en-us/msrc/mission. 625 Tom Burt, “New Action to Combat Ransomware Ahead of U.S. Elections,” Microsoft On the Issues, October 12, 2020, https://blogs.microsoft.com/on-the-issues/2020/10/12/trickbot-ransomware-cyberthreat-us-elections/. 626 Burt.

Courteney O’Connor – PhD Thesis 2021


acknowledged as a potential threat vector, but as this thesis concerns CCI and those who employ

CCI practices and processes, the actions or non-actions of those entities that do not employ CCI

are beyond the scope of this research. This section is meant to be an overview of the types of

individuals that are likely to employ CCI practices in the normal course of their lives for a variety

of reasons. Again referring to the aggregate model of analysis employed in this thesis, I have

identified three aggregate groups according to which individuals likely to use CCI can be classified:

hackers (white, grey, or black hat); contractors/consultants; and interested individuals.

White Hat, Grey Hat, Black Hat

Hackers are the first aggregate group of individual users that spring to mind when considering the

practice of CCI. These individuals are highly active in cyberspace, and (depending on whether they

are white hat or black hat) are extremely likely to be interested in, and concerned about, their

individual security in cyberspace generally, but also during cyber operations particularly. White hat

hackers, or so called ‘ethical hackers’, are usually conducting operations with permission from the

owner of a particular site (and are sometimes contracted to do just that) to test that site's security.627

The purpose of these actions is to improve overall security, or at least reduce insecurity

surrounding that particular website. Grey hat hackers will often conduct the same sort of operations as white hat hackers, but without the permission of the system owner, requesting remuneration, restitution, or recompense only after the fact; if this is not offered, the grey hat hacker

may or may not post the exploit online.628 Black hat hackers undertake cyber operations that are

likely to degrade, destroy, exploit, or manipulate online data. They are the producers of malware,

and often have the motivation of personal gain (usually financial). Occasionally motivations can

be political, as with so-called 'patriotic hackers' who act, usually in support of a state government,

against an entity they perceive to have wronged them or the entity they support. Both black and

white hat hackers may potentially be sponsored by state governments, acting alone but according

to the wishes or direction of an Administration.629

627 Mary Manjikian, Cybersecurity Ethics: An Introduction (Oxon: Routledge, 2017), 65, https://www.routledge.com/Cybersecurity-Ethics-An-Introduction/Manjikian/p/book/9781138717527; Norton, “What Is the Difference Between Black, White and Grey Hat Hackers?,” Norton, 2017, https://us.norton.com/internetsecurity-emerging-threats-what-is-the-difference-between-black-white-and-grey-hat-hackers.html. 628 Manjikian, Cybersecurity Ethics, 70; Norton, “What Is the Difference Between Black, White and Grey Hat Hackers?” 629 Manjikian, Cybersecurity Ethics, 66–67; Norton, “What Is the Difference Between Black, White and Grey Hat Hackers?”

Chapter 5 – Counterintelligence and Cyberspace


Contractors/Consultants

The second aggregate group of individuals identified in this thesis are the contractors/consultants.

There is a fine line between this aggregate group and that of hackers, and indeed many hackers

can be, and have been, contracted to undertake tasks or have found themselves consulting. Specifically,

however, this group refers to those individuals who are likely to contract to, or consult for,

corporate groups or state governments without forming part of the official cadre of corporate or

government employees. They work as individuals (forming a company of said individuals would

remove them from this aggregate and place them in a corporate aggregate); they are likely to have

security clearances to work on either corporate or government projects; they must, by necessity,

be extremely familiar with the dangers of cyber insecurity and are (usually by contract) required to

maintain a certain level of cyber security, thus increasing their uptake and utilisation of CCI

practices and processes. Contractors and consultants are also likely to undertake research on

malware found ‘in the wild’, contributing to both knowledge of malware and how to avoid

becoming a victim of particular types. Due to both their expertise and their social positioning, and

in conjunction with (the potential for) high-level security clearances, contractors/consultants are

also likely to actively employ CCI measures as part of the natural course of their work. The

feasibility of their employment and thus their personal economic stability depend on being able to

prove that they are committed to cyber security, of which CCI is an integral element.

Interested Individuals

The final aggregate group in this section is that of interested individuals: persons who are not affiliated with a state or corporation, are not classed as black, grey, or white hat hackers, and are not contractors or consultants in information technology security or the security of IT-integrated systems, but who have an interest in CCI for other reasons. Included in this aggregate would

be academics and civilians of any kind not fitting into another aggregate group, but still interested

in CCI. Interested individuals are not likely to require a level of personal security comparable to that required of individuals affiliated with a state or corporation, or of contractors and

consultants whose financial security depends on their familiarity with cyber security measures. The

motivations of the interested individual usually concern data and personal privacy, areas which

have become increasingly important in public debate. As such, the CCI practices of interested individuals may not be as sophisticated or high-level as those employed by other aggregates, but they are nonetheless more sophisticated than any (potentially unconscious) practices utilised by an individual unaware (or uncaring) of personal cyber security, and of CCI specifically. Increasing public

awareness of these issues and the CCI practices easily enacted to mitigate them would contribute


to the overall security of the cyber domain. The human element is integral to the success of CCI

and the ongoing resilience and security of cyberspace, and, as such, is one of the focal threads of

this thesis.

Conclusion

Employers of counterintelligence cover a range of actors and entities currently active in international cyberspace. Because that domain remains largely ungoverned, there is a significant need for further research on the methods and processes according to which each of these actors operates, and the motivations by which each decides strategy and designs tactics. By providing a short examination of the

varieties of actors that I have identified as operational in cyberspace, I hope to have built a

foundation upon which further research in this area can be developed. My intent is to provide a

frame of reference for the different actor groups and how they may interact with each other in

cyberspace, further providing a focus and drawing out some limits within which future research

can be undertaken. The chapter four analysis of how the UK has elevated the threat of cyberspace

concluded that while the cyber threat to assets has been successfully riskified, the UK has fallen

short of a comprehensive elevation by failing to acknowledge and elevate the risks to and from the

audience. At its core, CCI is about the human factor – the ability of individuals, and increasingly

large aggregates of individuals like organised groups and states, to effectively engage in CCI

practices that will contribute to the overall cyber security and information advantage of the state.

Managing the risks to and from the audience must become part of the national (and international)

approach to risk management and threat mitigation in cyberspace.

This chapter has identified and developed my definition of CCI. I have expanded my basic

typology of CCI introduced in prior research and developed the tactical-strategic understanding of

CCI operations. To properly assess the uptake of CCI by modern states, it is first necessary to

examine the way that cyberspace has been perceived and treated by modern states in terms of

potential risk and threat. As an integral element of contemporary society, cyberspace and cyber-

enabled technologies are woven into every element of life – communications, diplomacy,

international trade, etc. While a boon to efficiency, this has also introduced new vulnerabilities to

the three classes of actor identified in this thesis – states, organised groups, and individuals. The

way in which these vulnerabilities are perceived and acted upon affects attitudes and policies

toward cyberspace, cyber-enabled technologies, and the management and mitigation measures

employed in balance and defence. Chapter four traced the elevation of cyberspace as a risk in the

UK, and the counterintelligence responses this development is inducing. This chapter articulated


a definition and typology of CCI as a contribution to the nascent body of literature in this field,

and to address the information gaps identified in the analysis of the UK response to cyber risks

and threats. Chapter six uses the conclusions reached in chapters four and five, applying them to

a specific national security concern: the rising threat of cyber-enabled disinformation campaigns

as a form of foreign interference. I will discuss the assets vs. audiences divide in the current

application of CCI principles and examine two specific vignettes to identify how disinformation is

being countered (or not), illuminating the integral nature of CCI to national security.


6 - CCI in an Age of Disinformation: Assets vs. Audiences

One of the foci of this thesis so far has been to emphasise the importance of the human factor in

any consideration of cyber counterintelligence (CCI). A key point of this thesis is that an effective and efficient approach to CCI requires counterintelligence actors to draw out how important human factors are for related issues in cyber security. Regardless of the purpose of

intelligence collection, regardless of the means utilised to collect intelligence, regardless of the

methods utilised to prevent other parties from conducting collection operations against you as an

individual, an organised group, or the state – every equation centres on the human variable. One

of the most concerning and timely demonstrations of the ongoing importance of the human factor

in considering the interaction of society and technology is the use of that technology by hostile

actors to influence the conscious or subconscious decisions and opinions of individuals. I refer

here to the practice and use of disinformation by foreign parties against the domestic populace of

a given state. It is my contention that, despite the centrality of the human factor in all calculations

of national security, modern states have failed to a concerning degree to properly elevate the threat

of foreign influence and interference against the general population. Moreover, drawing from the

history and tools of counterintelligence discussed in chapter two, and adapting them to the

evolving cyber threat landscape, CCI as defined in chapter two and developed in chapter five is

uniquely suited to recognise, monitor, mitigate, and respond to the recent evolution of foreign

influence and interference.

The second chapter of this thesis provided the contextual and background information necessary

for the analysis that has so far taken place. The discussions around traditional securitisation

theories, traditional counterintelligence and historical disinformation provided the foundation for

the theoretical development of the threat elevation framework in chapter three. Threat elevation

analysis provided the lens through which the evolving threat perception of cyberspace in the

United Kingdom was traced in chapter four. The examination of changing state perceptions of

cyberspace as a vulnerability and threat vector informed the elements of cyber counterintelligence

in the typology provided in chapter five, highlighting in particular the necessity of focusing on the

human element of counterintelligence practices moving forward. Taken together, these chapters

have provided a perspective on the evolving understanding and perception of the risk profile of

cyberspace in a mature and highly networked democracy and the ongoing design of measures and

policies to secure the cyber domain. The close analysis has identified, however, that there must be

an increased focus on and recognition of the danger to audience psychology of adversary


intelligence operations, such as disinformation, that have the potential to seriously and adversely

affect the democratic integrity of the state.

The evolution of disinformation in the digital age — particularly in terms of the creation and rapid

dissemination of false information, as well as the vastly increased target surface available to

adversaries given skyrocketing connectivity — has resulted in the need to reconsider how we must

approach counterintelligence in cyberspace. As disinformation is both an activity undertaken

through and content held in cyberspace, according to the British definition of cyberspace identified

in chapter four, there is a requirement for the state to secure the population against foreign influence through disinformation operations. Despite this requirement, the analysis of the national

security documentation of the United Kingdom showed an explicit favouring of managing and

mitigating the threats to assets over the recognition of risks and threats to audiences. According

to the definition of CCI offered in chapter two of this thesis, the state is responsible for preventing,

identifying, tracing, penetrating, infiltrating, degrading, exploiting and/or counteracting the

attempts of any entity to utilise cyberspace as the primary means and/or location of intelligence

collection, transfer or exploitation; or to effect immediate, eventual, temporary or long-term

negative change otherwise unlikely to occur. Per both of these definitions, it is the responsibility

of the state to respond to the problems posed by disinformation, given the potential adverse effects

of such on democratic integrity. However, the preceding chapters have also shown that, in addition

to state responsibilities, there is a rising necessity for substate actors to engage in cyber

counterintelligence practices as part of an holistic approach to cyber security and resilience. Threat

elevation analysis provides a method by which to ascertain the evolution of threats to both the

state and the audience posed by disinformation, which will be considered in the subsequent

sections.

We saw in the chapter four analysis of the United Kingdom (UK) that critical national

infrastructure has been a serious concern for states, as has terrorism, both international and, increasingly, domestic, directed against the population and infrastructure alike. We have also seen the

consideration of cyber-attacks and exploits against critical national infrastructure. What we have

not seen, however, in the examination of the national security documentation of the UK, is the

recognition that foreign action against the opinions and perceptions of the populace has received, or will receive, the same investment and protection as the physical infrastructures supporting the modern

state. Dr. Constanze Stelzenmuller, in her testimony before Congress during the investigation of


Russian interference in US elections by the Senate Select Committee on Intelligence, asserted that

what the Russians were trying to hack was not machines; it was people's minds:

The real target of Russian interference…is voters’ heads. They’re trying to hack our political consciousness. For this, they use a broad spectrum of tools, from propaganda, to disinformation, to hacking…630

The diffusion of access to cyberspace and cyber-enabled technologies has succeeded in making

the world smaller. We are more interconnected than ever before. Increasing interconnectedness

and easy access to foreign populations represent inherent vulnerabilities. It is now easier than ever

for hostile parties to inject false information into the ‘information ecology’ of domestic political

systems, seeking to influence public perceptions and decisions from within. Done correctly, these

operations may never become public knowledge. Even when they become evident and are revealed

to the public eye, however, it is incredibly difficult to dissuade the population from beliefs based

on false information they had believed to be true. The ‘Protocols of the Elders of Zion’ is one

extremely well known and studied example of a deliberate act of disinformation that sought to

deepen and perpetuate anti-Jewish sentiment in Europe in the late 19th century.631 Importantly for

this discussion, despite repeated acknowledgment that the document was a fraudulent piece of

disinformation, the conspiracies that it dealt in remain believed by individuals exposed to it.

“Although most leading Nazis realised that The Protocols of the Elders of Zion was a spurious

document, they found it useful in promoting belief in the international Jewish conspiracy of which

they were already convinced. Authorship and other details were irrelevant, they averred, if the

book expressed “inner truth”.”632 The Protocols of the Elders of Zion shows that successful false information, even after being acknowledged as false, is likely to continue to influence the perceptions and behaviour of those exposed to it, a concept known as ‘belief echoes’ that will be examined in subsequent sections.

The success of disinformation operations is difficult to quantify and thus difficult to measure.

However, the instability in political systems and institutions that arises as a result of these

630 Senate Select Committee on Intelligence, “S. Hrg. 115-105 Russian Intervention in European Elections,” U.S. Senate Select Committee on Intelligence, June 28, 2017, https://www.intelligence.senate.gov/hearings/open-hearing-russian-intervention-european-elections#. 631 Esther Webman, The Global Impact of the Protocols of the Elders of Zion: A Century-Old Myth (Florence, United States: Taylor & Francis Group, 2011); Steven T. Katz and Richard Landes, The Paranoid Apocalypse : A Hundred-Year Retrospective on the Protocols of the Elders of Zion, 1st ed., vol. 3, Elie Wiesel Center for Judaic Studies (New York University Press, 2011). 632 Randall L. Bytwerk, “Believing in ‘Inner Truth’: The Protocols of the Elders of Zion in Nazi Propaganda, 1933–1945,” Holocaust and Genocide Studies 29, no. 2 (2015): 212–29.


operations is something with which many states have had to deal over the last several years.633

Whether or not such operations have succeeded in changing election outcomes is arguable – it is

the contention of this thesis that the failure to elevate the education and protection of domestic

audiences with regard to the nature and use of disinformation operations has destabilised, and will continue to destabilise, democratic institutions.634 While assets have been elevated successfully and measures put

in place to manage the risks against them, there has been a failure to elevate the risks to audiences

until the problem is already in need of threat mitigation measures, an example of reactive event-

based elevation and failed risk management.635 The point here is that, first, audiences need to be

treated equivalently to assets. Second, the development of, and reliance on, cyberspace mean that

these human vulnerabilities are an essential aspect of cyber security.

Previous chapters have examined the threat elevation analysis framework which underpins this

thesis and informs my understanding of how states have elevated political concerns to risks and/or

threats; the national security documentation of the UK; and the nature and use of CCI. The UK

chapter served as demonstration of the tenets of threat elevation theory, as well as context for the

discussion of disinformation that this chapter will develop. In the preceding chapter, I identified

and developed the positioning and inherent importance of the human factor to the understanding

and practice of CCI. From the level of the individual, CCI practices and measures contribute to

the overall cyber resilience and security of the state. Organised groups and states are aggregates of

individuals, and individuals represent both a vulnerability and a threat vector in terms of security

threats like cyber-enabled disinformation. Individuals at once have the potential to be targets,

creators, and disseminators of both disinformation and counter-disinformation narratives. By

engaging actors at the individual; organised group; and state level, and comprehensively elevating

the risks inherent in cyberspace in order to develop risk management measures such as CCI, the

potential consequences to democratic integrity from successful disinformation campaigns can be

managed or mitigated.

This chapter will first define the concept of disinformation, differentiating it from misinformation

and discussing the purpose and use of disinformation in the political information ecology as a form

633 Rid, Active Measures: The Secret History of Disinformation and Political Warfare; Darrell M. West, “How to Combat Fake News and Disinformation,” Brookings, December 18, 2017, https://www.brookings.edu/research/how-to-combat-fake-news-and-disinformation/. 634 See also a recent article, that I co-authored with Adam Henschke and Matthew Sussex that looks at how foreign interference campaigns can undermine the integrity of elections: Henschke, Sussex, and O’Connor, “Countering Foreign Interference: Election Integrity Lessons for Liberal Democracies.” 635 Event-based elevation is discussed further in chapter three.


of foreign influence. It will then consider disinformation in the context of what I call the assets vs.

audience dichotomy, according to the threat elevation analysis framework. This will be done in

two ways. First, there will be a short discussion of the relevance of event-based threat elevation in

considering interference and influence operations undertaken in cyberspace, or through cyber-

enabled means and technologies. Following this, I offer two vignettes: the first is an examination

of a failed elevation of audience importance through the prevalence in the contemporary

information ecology of COVID-19 disinformation. The second is a successful elevation of

audience importance through Sweden’s approach to disinformation campaigns. Sweden has

engaged in a strategic approach to CCI that not only evaluates the threats to assets but specifies

the importance of teaching the audience to ascertain the veracity of information. This is done from childhood through education, as the UK has started to do, and through civil service campaigns targeted at older populations to engage as much of the audience as possible. In assessing

each vignette, I look for the target; the intent or purpose of the operation; the outcome or

consequences; and whether CCI measures were used and successful, were not used, or were used

and failed. The chapter will end with a consideration of potential countermeasures against foreign

influence and interference through cyber-enabled disinformation operations.

Disinformation

Disinformation is not a novel concept or practice. Far from being a recent tactic in terms of

political influence or interference, disinformation is a time-honoured and thoroughly tested tool.636

Such psychological operations are “as old as warfare itself.”637 In recent years, the use of

disinformation to affect foreign audiences' behaviour or perceptions has moved more into the

public eye, particularly in terms of interference in Western democratic nations by the Russian

Federation.638 However, the advent of cyberspace and the diffusion of cyber-enabled devices is

lending disinformation a new lease in political life, as hostile actors are more easily able to

communicate directly with citizens of a target state than ever before. Given the risks to audience

psychology and decision-making posed by successful disinformation, these risks need to be

636 James H. Fetzer, “Disinformation: The Use of False Information,” Minds and Machines 14, no. 2 (May 2004): 231–40, https://doi.org/10.1023/B:MIND.0000021683.28604.5b; Julie Posetti and Alice Matthews, “A Short Guide to the History of ’fake News’ and Disinformation” (International Center for Journalists, 2018), https://www.icfj.org/sites/default/files/2018-07/A%20Short%20Guide%20to%20History%20of%20Fake%20News%20and%20Disinformation_ICFJ%20Final.pdf; Rid, Active Measures: The Secret History of Disinformation and Political Warfare; Romerstein, “Disinformation as a KGB Weapon in the Cold War.” 637 Peter J. Smyczek, “Regulating the Battlefield of the Future: The Legal Limitations on the Conduct of Psychological Operations (PSYOP) under Public International Law. - Free Online Library,” Air Force Law Review 57 (2005): 214. 638 For a recent overview of this, see Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare, pp. 329-423.


comprehensively elevated as a democratic integrity and foreign interference threat. The education

of the democratic audience in the use of CCI practices and methods will contribute to the overall

security of the state and of democratic institutions at large.

According to James Fetzer, disinformation “entails the distribution, assertion, or dissemination of

false, mistaken, or misleading information in an intentional, deliberate, or purposeful effort to

mislead, deceive, or confuse.”639 To this definition, I would add the qualifier ‘for political purposes,’

as, for the purposes of security studies (the field in which this thesis is broadly situated),

“disinformation is frequently analysed as a tactic of hostile foreign state actors.”640 It is important

to note that intent plays a significant role in the differentiation between disinformation and

misinformation, two terms that are often conflated but are distinct concepts. Misinformation

(explored in chapter two) is that information which is unintentionally (factually) incorrect or

misunderstood – transmitting misinformation is an honest mistake that could feasibly be corrected by providing the offending party with the correct information. Disinformation is often

considered a tactic employed by hostile actors because incorrect information is transmitted

purposefully, with the intent of modifying a perception or behaviour in some way either inimical to

the target, or beneficial for the attacker. Disinformation can also include factually accurate

information where the source is hidden or disguised, or which is delivered in such a context that

incorrect conclusions are drawn based on that information, such that benefit of some kind is

delivered to the attacker, or negative consequences befall a targeted party.641

The use of disinformation operations to sway public opinion is by no means a novel technique for

foreign actors. As Thomas Rid has recently argued, the Soviet Union used disinformation and

propaganda operations extensively throughout the Cold War,642 even to the point of risking self-

deception. KGB personnel, through self-aggrandisement and the desire to make their

disinformation campaigns seem successful, claimed that the theory of nuclear winter was Soviet

disinformation.643 This claim was later supported by Soviet defectors:

639 Fetzer, “Disinformation.” p. 231. 640 Jennifer Hunt, “The COVID-19 Pandemic vs Post-Truth” (Global Health Security Network, 2020), https://www.ghsn.org/resources/Documents/GHSN%20Policy%20Report%201.pdf p.6. 641 For more on the philosophical basis and ethical implications of these notions of disinformation and misinformation, see Adam Henschke, Ethics in an Age of Surveillance: Personal Information and Virtual Identities (Cambridge: Cambridge University Press, 2017), https://doi.org/10.1017/9781316417249 pp. 131-149, 222-236. 642 Rid, Active Measures: The Secret History of Disinformation and Political Warfare pp. 101-227. 643 Rid, pp. 288-297.


The more an intelligence agency engages in organised and persistent disinformation operations, the more disinformation is likely to have been deposited in official archives and the memories of former officers.644

The United States (US) also used disinformation extensively during the Cold War; the Central

Intelligence Agency (CIA) maintained a bureau in Berlin which ran lengthy and complicated

operations, either conceived and undertaken from within the agency itself, or in support of local

actors engaging in operations against the Soviet Union in East Berlin and elsewhere.645 Given

decades worth of examples to choose from, why are disinformation operations coming to the fore

now? What makes modern disinformation operations more potentially damaging or invasive than

the operations of the past?

While other answers are possible (population density, increase in partisanship, globalisation, etc.),

I argue that cyberspace is a force multiplier of modern disinformation operations. Because modern

disinformation operations increasingly target the audiences of democratic nations rather than the elites (such as policymakers, though they are likely still targeted), disinformation campaigns are increasingly capable of directly influencing not just national policies (through elites, as in earlier times) but the selection of national leadership itself. Vladimir Putin, for example, was known

to have preferred the candidature of Donald Trump over that of Hillary Clinton in the 2016 US elections, and it was at “the direction of the Kremlin [that] the IRA sought to influence the

2016 US presidential election by harming Hillary Clinton’s chances of success and supporting

Donald Trump.”646 It was, therefore, in the interests of the Russian administration that one

candidate be favoured in that election. What was novel was the use of cyber means to enhance the

effectiveness of targeting sectors of the US population.

Despite these sorts of actions being unlawful according to the United Nations Charter, meddling

in the internal affairs of foreign states is not new in international relations. The communications

potential of cyberspace, particularly through the massive social media platforms, multiplies the access opportunities available to foreign nations for influence and interference. The

insecure nature of the Internet as well as cyber-physical technologies647 represents a massive

644 Rid, p. 46. 645 Rid, pp. 61-100. 646 Senate Select Committee on Intelligence, “Volume 2: Russia’s Use of Social Media,” 32; See also Uri Friedman, “Putin Said in Helsinki He Wanted Trump to Win,” The Atlantic, July 19, 2018, https://www.theatlantic.com/international/archive/2018/07/putin-trump-election-translation/565481/; Stephanie Murray, “Putin: I Wanted Trump to Win the Election,” Politico, July 16, 2018, https://www.politico.com/story/2018/07/16/putin-trump-win-election-2016-722486; Office of the Director of National Intelligence, “Assessing Russian Activities and Intentions in Recent US Elections.” 647 The Internet of Things will be discussed in a later section of this chapter.

Chapter 6 – CCI in an Age of Disinformation

230

challenge in terms of CCI as well, in that there are many intelligence collection and exploitation

opportunities inherent in a highly-connected society that operates under a liberal capitalist

economic system; technological innovation is both comparatively cheap and widely adopted. In

combination with systems of surveillance capitalism as examined by Shoshana Zuboff,648 this

uptake creates an abundance of data that is both available and exploitable. Under these conditions, where access to the Internet, and to inordinately large volumes of data that can be disaggregated and used in psychological targeting,649 is widely available, and where foreign access to domestic audiences is barely limited or not limited at all, the opportunities to undertake extensive disinformation campaigns with limited or no restriction or retaliation are considerably greater than in prior eras. As we saw in chapter four, states have largely overlooked the vulnerability of audiences to such efforts, and CCI has not been developed to increase audience resilience against them.

The increased capacity to access domestic audiences remotely (exploitable by domestic factions as well as foreign actors), in combination with the communication potential of the social media platforms and the data those platforms collect as a matter of course, makes disinformation campaigns undertaken through those same platforms difficult to combat. Initially, disinformation campaigns must be identified and, if possible, traced to their origin and attributed to a responsible actor. This is explicitly a matter for state CCI actors, as belief echoes arising from externally imposed strategic narratives have potential consequences for electoral and democratic integrity as well as national security. While individual CCI will help to reduce the instances of successful belief undermining or transformation among democratic audiences, state CCI actors will need to mitigate and then manage that threat going forward. Even once a

campaign has been identified, however, there are several factors which make countering that

campaign difficult for counterintelligence actors. The first issue concerns the platforms

themselves; the second, the strategic narratives being conveyed by the disinformation campaigns

and the local context in which they are received, which can impact the third (related) issue of belief

echoes.650 The next several sections will consider these three issues, identifying the importance of

CCI and why greater recognition of threats to audiences is necessary to the ongoing security and

resilience of the modern state.

648 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (London: Profile Books, 2018). 649 Psychometrics and microtargeting will be discussed in a subsequent section of this chapter. 650 There are a range of ethical and political concerns related to these issues, which are beyond the scope of this thesis to address. For a starting point, see Henschke and Reed, “Toward an Ethical Framework for Countering Extremist Propaganda Online.”

Courteney O’Connor – PhD Thesis 2021

231

Preferencing Assets, Forgetting Audiences: The Rise of Disinformation

One of the key points of this chapter is the differentiation of assets and audiences, or more specifically the difference in how each of those elements has been elevated. The interaction of cyberspace

and cyber-physical technologies with almost every aspect of everyday life (especially considering

the growth rate of the Internet of Things, see below) means an increase in surface area for attack

by hostile parties; it also means a profusion of intelligence collection stations that, if not properly

secured, can be used to great effect by both domestic and foreign intelligence collectors. The

integration of critical infrastructure technologies with cyberspace has been identified as a

significant national security risk by sovereign governments worldwide, including the case study

country of this thesis, the UK. As chapter four showed, cyberspace was elevated as a threat vector, but the focus was on physical and critical infrastructure: the assets. This focus, however, has resulted in the disregard of the audiences of liberal democratic countries like the UK: the public, the individuals whose perceptions can be affected by both domestic and foreign operators. If perceptions can be affected enough, modification of behaviours such as voting decisions becomes possible.

So, why has there been an uneven recognition of the two threat vectors, critical physical infrastructure and audience psychology? I would argue that the profusion and diffusion of both highly networked technologies and access to the Internet, which is itself fundamentally insecure, in combination with the ‘cyber doom-sayers,’651 captured the imagination of policymakers, and with it a significant share of defence and intelligence spending and focus. Assets were thus effectively elevated as the risks to them were made clear. But, as we saw in chapter four, audiences have been largely overlooked.

Over the last several years, despite the elevation of the threats posed by cyberspace, and despite a rise in influence and interference operations which target not just types or classes of voter but specific voters, there has not been a parallel rise in efforts to secure and defend these ‘soft’ psychological targets. Logically, if a foreign actor can avoid the expense of breaking into technologies and systems by directing influence operations at democratic populations, and in this way ‘take part’ in democratic institutions such as elections, leaders whom that foreign actor finds less inimical and more advantageous might be raised to office. Alternatively, the

651 We can look here to former US Special Advisor to the President on Cybersecurity Richard A. Clarke’s concerns about the need to prepare for a ‘cyber Pearl Harbour’ as one such example. See: Richard A. Clarke and Robert Knake, Cyber War: The Next Threat To National Security and What to Do About It (New York: HarperCollins, 2012), https://www.harpercollins.com/products/cyber-war-richard-a-clarkerobert-knake.


degradation of the alliances and reputation of the country targeted by influence and/or interference operations might itself provide the only advantage that hostile actors require to obtain greater power and influence over the international system. While counterintelligence remains primarily the remit of the state, it is also important that domestic audiences be educated not only about what is occurring and what countermeasures are being undertaken, but also about how to ensure that their own actions in cyberspace contribute to their security rather than detract from it. As the earlier examination of the development of cyberspace as a threat vector in the UK showed, it is rare that states identify audiences as a critical vulnerability.

Audiences, then, need to be protected just as critical national infrastructure is protected. This

speaks to the necessity of CCI not only at the state level but also at the level of the individual, and

aggregates of individuals like organised groups. While the state may be able to secure critical

infrastructure from cyber-attack – and this is arguable – the number of actors capable of entering

and operating in cyberspace means that it is no longer possible for the state to be the sole guarantor

of cyber security, whatever that may look like. Because it is now simple for hostile actors to target

domestic populations to engage in foreign interference operations through the use of tools like

disinformation, some responsibility for CCI has necessarily been devolved to the individual actor.

Efficient CCI practices on the part of individuals and organised groups will contribute to the

resilience of the state against cyber-enabled information exploitation and interference and reduce

the attack surface the state must protect. A strategic approach to CCI must acknowledge the risks

to and from the audience and manage that risk appropriately, which will include engaging in

education campaigns targeted at all age levels and sectors of society.

The rise of disinformation and propaganda operations affects audiences not just temporarily but in the long term as well, through the informational residue that has been called belief echoes (see below). The use by both domestic and foreign actors of psychometric targeting based on voter

profiles compiled by consulting firms such as Cambridge Analytica,652 in particular, has enabled

more specific targeting of individual voters; this practice of microtargeting allows for much finer-grained operations than could previously have been imagined.653 The fairly opaque nature of the

652 I discuss Cambridge Analytica later in this chapter in the section concerning psychometrics. 653 According to Borgesius et al., microtargeting is a type of (political) communication for which specific data is gathered about an individual and used to create a profile based on which targeted advertisements are shown. Frederik J. Zuiderveen Borgesius et al., “Online Political Microtargeting: Promises and Threats for Democracy,” Utrecht Law Review 14, no. 1 (February 9, 2018): 82–96, https://doi.org/10.18352/ulr.420. For more on


large digital social media platforms, particularly Facebook, means that the advertising and disinformation used in many of these influence operations is only ever seen by the specific individuals targeted. This makes it extremely difficult not only to trace and attribute the operations, but also to quantify their effects on audiences and thus on the political constitution of democratic administrations.654 This sort of reach and potential impact would have been improbable prior to the rise of social media platforms and cyber-enabled disinformation, and managing it will require a comprehensive elevation of audience vulnerability in combination with efficient and widespread practice of CCI.

Countering Disinformation – Social Media Platforms

One of the primary difficulties associated with countering disinformation campaigns concerns the

social media platforms which convey much of the false information, particularly the massive

platforms of Facebook and Twitter. Facebook has been the subject of considerable scrutiny,

particularly in the US, following the revelations of Russian influence operations being undertaken

via that platform in the run-up to the 2016 presidential elections. The Senate Select Committee on

Intelligence, which undertook an almost three-year investigation on Russian interference, called

Facebook and Twitter upper management to testify several times over the course of the

investigation; representatives of the social media giants were also asked to testify in the UK’s investigation into interference in the Brexit referendum.655

Google as a search engine is as likely to convey disinformation as it is facts; results appear to the individual searcher according to both the search terms used and what Google’s search algorithm decides would be of greatest interest to the person conducting the search. Regardless,

microtargeting, see Kyle Endres and Kristin J. Kelly, “Does Microtargeting Matter? Campaign Contact Strategies and Young Voters,” Journal of Elections, Public Opinion and Parties 28, no. 1 (January 2, 2018): 1–18, https://doi.org/10.1080/17457289.2017.1378222; Young Mie Kim et al., “The Stealth Media? Groups and Targets behind Divisive Issue Campaigns on Facebook,” Political Communication 35, no. 4 (October 2, 2018): 515–41, https://doi.org/10.1080/10584609.2018.1476425; Chris Tenove et al., “Digital Threats to Democratic Elections: How Foreign Actors Use Digital Techniques to Undermine Democracy” (Centre for the Study of Democratic Institutions, 2018), https://www.ssrn.com/abstract=3235819. 654 Amer and Noujaim, The Great Hack; Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. 655 Despite several requests, Mark Zuckerberg never appeared before the UK investigative committee, nor did any Facebook employee. Twitter representatives also failed to appear before that same committee. Facebook, Twitter, and Alphabet (Google) representatives were called to testify multiple times before Congress during American investigations into potential interference. Five volumes were published by the Senate Select Committee on Intelligence (SSCI) concerning the allegations of Russian interference in US elections, which can be found at the Homeland Security Digital Library: U.S. Department of Homeland Security, “Homeland Security Digital Library,” Homeland Security Digital Library, accessed March 20, 2021, https://www.hsdl.org/?search&exact=Senate+Report+on+Russian+Interference+in+2016+Elections&searchfield=series&collection=limited&submitted=Search&advanced=1&release=0&so=date.


given that Google is the primary filter and source of information for many people, the proliferation of disinformation that can be surfaced through it is of great concern. Of more concern in the

current era, however, are the social media giants Facebook and Twitter specifically. Until recently, both platforms had a view toward the publication of information that amounted to a free-for-all policy: individuals had a right to freedom of speech and could publish as they saw fit. With the rise of disinformation, increasingly partisan battles over elections, and the potential of foreign interference in democratic institutions, the social media platforms have had to design increasingly specific policies and justifications for those policies. After years of advocating a right to unfettered public communication,656 Twitter’s then-CEO Jack Dorsey started to take an increasingly active role in identifying and flagging potential breaches of the platform’s accurate information policy. Tweets that could contain false or misleading information are flagged as such; see Figure 6.657

Figure 6 - Twitter flags potential disinformation

However, this is a recent policy; up until March 5, 2020, Twitter performed no regulatory function

in terms of allowable publication of Tweets, unless those Tweets incited violence.658 Prior to the

656 Austin Carr, “When Jack Dorsey’s Fight Against Twitter Trolls Got Personal,” Fast Company, April 9, 2018, https://www.fastcompany.com/40549979/when-jack-dorseys-fight-against-twitter-trolls-got-personal. 657 Donald Trump’s Twitter account was suspended in January 2021 following the insurrection of January 6; however, his tweets can still be found on the Internet Archive. See Donald Trump, “Nevada Is Turning out to Be a Cesspool of Fake Votes. @mschlapp & @AdamLaxalt Are Finding Things That, When Released, Will Be Absolutely Shocking!,” Twitter, November 10, 2020, https://web.archive.org/web/20201109195514/https://twitter.com/realdonaldtrump/status/1325889532840062976. 658 Yoel Roth and Ashita Achuthan, “Building Rules in Public: Our Approach to Synthetic & Manipulated Media,” Twitter, February 4, 2020, https://blog.twitter.com/en_us/topics/company/2020/new-approach-to-synthetic-and-


current practice of flagging potential disinformation to consumers of information on the Twitter platform,659 false information could be introduced and circulated without restriction or regard for potential consequences. This made, and still makes, the Twitter platform an ideal vehicle for disinformation operations regardless of the initiating actor. As of October 2020, Twitter had approximately 340 million users.660 How many of those users are legitimate individuals rather than fake accounts is both debatable and subject to change. Even if 70% of those active accounts are legitimate individuals, however, that is still a considerable number of people likely to come across misinformation or disinformation at some point during their interactions with the platform. The practice of information laundering extends the spread of disinformation orders of magnitude beyond the original post’s audience, and makes it more difficult to track and subsequently manage.661

Facebook has a far larger audience than Twitter, with approximately

2.7 billion monthly active users as of October 2020 according to Omnicore’s most recent update.662

Facebook has also been identified as a significant vehicle for disinformation: in particular, around

Russian disinformation campaigns that took place in relation to the US presidential campaigns and

election in 2016 and the midterm elections of 2018. Facebook CEO Mark Zuckerberg has been strident in his objections to Facebook taking a role in policing information accuracy in the political information ecology, stating that it is not up to Facebook to censor or otherwise restrict what people choose to post or to read.663 While Zuckerberg may be correct in stating that it

manipulated-media.html. I note here that Twitter first started to really limit content with the rise of so-called Islamic State and its use of Twitter to distribute violent content and ideas. See Conway et al., “Disrupting Daesh.” 659 Yoel Roth and Nick Pickles, “Updating Our Approach to Misleading Information,” Twitter, May 11, 2020, https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.html. 660 Omnicore, “Twitter by the Numbers (2021): Stats, Demographics & Fun Facts,” January 6, 2021, http://www.omnicoreagency.com/twitter-statistics/. 661 European Parliament Directorate General for Parliamentary Research Services, “Automated Tackling of Disinformation: Major Challenges Ahead” (European Parliamentary Research Service, 2019), https://data.europa.eu/doi/10.2861/368879; Kirill Meleshevich and Bret Schafer, “Online Information Laundering: The Role of Social Media,” Alliance For Securing Democracy (blog), January 9, 2018, https://securingdemocracy.gmfus.org/online-information-laundering-the-role-of-social-media/; Boris Toucas, “Exploring the Information-Laundering Ecosystem: The Russian Case,” Center for Strategic & International Studies, August 31, 2017, https://www.csis.org/analysis/exploring-information-laundering-ecosystem-russian-case. 662 Omnicore is a digital marketing agency. Omnicore, “Facebook by the Numbers (2021): Stats, Demographics & Fun Facts,” January 6, 2021, http://www.omnicoreagency.com/facebook-statistics/.
663 Cecilia Kang and Mike Isaac, “Defiant Zuckerberg Says Facebook Won’t Police Political Speech,” The New York Times, October 17, 2019, sec. Business, https://www.nytimes.com/2019/10/17/business/zuckerberg-facebook-free-speech.html; Tom McCarthy, “Zuckerberg Says Facebook Won’t Be ‘arbiters of Truth’ after Trump Threat,”


is not Facebook’s responsibility to police what information its users choose to engage with, I contend that

the social media platform does bear a responsibility to at least identify where disinformation may

be proliferating, in order to ensure the resilience of its users against influence or interference.664

Facebook representatives appearing before the Senate Select Committee on Intelligence (SSCI),

and before Congressional hearings about social media and disinformation, have been subject to a

scrutiny that until recent years social media platforms had been able to avoid. As a result of the

identified disinformation campaigns undertaken in part through Facebook relating to the 2016

Presidential election, Facebook now identifies articles and posts that may contain disinformation or otherwise falsely represent whatever is communicated in the post.665 Arguably, this is too little

too late – Facebook and Twitter, and associated social media platforms such as Instagram, have

contributed as vehicles of disinformation to the already-declining public trust in institutions. The

public is now aware that not all information conveyed through these sites is accurate or a correct

representation of whatever content is being communicated.666 Over 2 billion people have been

operating within social media information ecologies wherein the information is not certifiably accurate and, because of the search and association algorithms, are often only exposed to news and information sources that align with views they already hold. Those views are ascertained through the immense volumes of data available for exploitation by these platforms.667

Because audiences were not elevated as a security risk until recently, these individuals have been vulnerable to both domestic and foreign influence operations, which are difficult both to track and to quantify. Facebook, for example, does not keep an archive of political advertising

displayed through its marketing portal, nor a record of the individuals targeted by specific ads. As

Carole Cadwalladr identified in her exhaustive investigation of the Cambridge Analytica episode

(discussed in the microtargeting and psychometric profile sections later in this chapter), Facebook

the Guardian, May 29, 2020, http://www.theguardian.com/technology/2020/may/28/zuckerberg-facebook-police-online-speech-trump. 664 For an examination of ethical frameworks for countering online propaganda, see Henschke and Reed, “Toward an Ethical Framework for Countering Extremist Propaganda Online.” 665 Mark Wilson, “Here Is Facebook’s First Serious Attempt To Fight Fake News,” Fast Company, December 15, 2016, https://www.fastcompany.com/3066630/here-is-facebooks-first-serious-attempt-to-fight-fake-news. 666 Janna Anderson and Lee Rainie, “The Future of Truth and Misinformation Online,” Pew Research Center, October 19, 2017, https://www.pewresearch.org/internet/2017/10/19/the-future-of-truth-and-misinformation-online/; West, “How to Combat Fake News and Disinformation.” 667 This quantity and quality of data is referred to by Shoshana Zuboff as behavioural surplus, and informs much modern advertising today. Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, pp. 63-97.


advertising is a black box – unless you are the individual targeted for specific advertising, you will

not see that content.668

The lack of accurate or available records from Facebook, especially given its volume of monthly

active users and global reach, is particularly concerning when trying to identify or trace foreign

interference operations. Disinformation campaigns can be difficult to trace and attribute without

considering the difficulty of gathering the examples of disinformation that are being accepted by

viewers as accurate news; a recent study indicated that a growing number of people get their news

solely from the large social media platforms.669 This narrows the channel of information they have

access to, but also increases the likelihood that they may not realise they have been deceived by an

information operation, the content of which may then go on to change their perceptions (for

example, on policies or politicians), and could then lead to behaviour modification. Over the last few years, as further information has been released about both cyber-enabled disinformation operations and the role of the social media platforms as vehicles for them, those platforms have come to recognise audiences as a security vulnerability; audiences have consequently been elevated to a security risk, and attempts are being made to manage that risk.

Despite its centrality to counterintelligence, countering disinformation online can be difficult. This is in part because of the focus on assets, and the fact that audiences have not been elevated. The analysis in chapter four of the UK’s elevation of the cyberspace threat identified that while assets have been riskified, there has been a failure to elevate the threat to and from audiences, and this has resulted in an uneven approach to CCI and to the understanding of cyber-enabled threats such as disinformation. Acknowledging and understanding the role of the individual (the human factor) in cyber security, through societal education in and implementation of CCI, will contribute to the overall cyber security of the state, but this can only occur once governments comprehensively elevate the role of the audience in cyber security as both a vulnerability (as a target of disinformation campaigns) and a threat vector (in the case of successful disinformation

668 Cadwalladr first investigated the importance of this practice when tipped off by a Facebook user about advertising appearing on their feed, of which they had taken screenshots (pertaining to the Brexit referendum). Cadwalladr in Amer and Noujaim, The Great Hack. 669 Elisa Shearer, “Social Media Outpaces Print Newspapers in the U.S. as a News Source,” Pew Research Center (blog), December 10, 2018, https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/; Peter Suciu, “More Americans Are Getting Their News From Social Media,” Forbes, October 11, 2019, https://www.forbes.com/sites/petersuciu/2019/10/11/more-americans-are-getting-their-news-from-social-media/; Amy Watson, “Usage of Social Media as a News Source Worldwide 2020,” Statista, June 23, 2020, https://www.statista.com/statistics/718019/social-media-news-source/.


campaigns). Understanding methods of and approaches to CCI, as outlined in chapter five, will aid in the analysis and elevation of cyber-enabled threats and contribute to the design of both tactical and strategic response plans. Further, there are a range of practical issues, not only in terms of the technical capacity to locate and disable bot accounts or illegitimate accounts (foreign operatives masquerading as locals), but also in countering the ‘accepted knowledge’ or ‘accepted truth’ of communities that have been successfully influenced by false information. The next section

considers the role and consequences of strategic narratives.

Countering Disinformation – Strategic Narratives

We have discussed the profusion and diffusion, through the social media platforms, of disinformation that can be part of campaigns designed to influence domestic populations. But what is

the actual purpose of the disinformation? What are these operations trying to convince people of?

By the definition above, false information that is circulated without a specific purpose is just misinformation; with attention, it can be countered and corrected. Disinformation, however, is false information that is circulated with a specific, typically political, intent. There is a specific goal, and to accomplish that goal there must be a strategic narrative that these operations

are working to enforce. As I argue, effective CCI recognises the centrality of strategic narratives

and acknowledges the necessity of managing or mitigating their effects. Strategic narratives must

be identified and countered by state security agencies to effectively engage in CCI against

disinformation. Further, security agencies must occasionally engage in proactive CCI by creating

and propagating counter-narratives of truth as effectively, efficiently, and quickly as possible to

manage the risk of the false information taking root in audience belief frameworks. The threat to

audience psychology here needs to be elevated in order to prevent a future destabilisation of

electoral integrity, wherein voter decisions are based in whole or in part on disinformation and are

neither free nor informed. In short, CCI is an entirely necessary part of the state response

to foreign interference through strategic narratives and disinformation.

A strategic narrative is the overall story within which the disinformation content operates, the goal

toward which influence operatives work with respect to their target audiences. “Strategic narratives

are crafted by political actors with a specific intention in mind: influencing an audience.”670 A

disinformation operation designed to influence specific segments of a population (particularly in a

democratic nation) needs to have a number of elements in order to be successful: highly specific

670 Olivier Schmitt, “When Are Strategic Narratives Effective? The Shaping of Political Discourse through the Interaction between Political Myths and Strategic Narratives,” Contemporary Security Policy 39, no. 4 (October 2, 2018): 488, https://doi.org/10.1080/13523260.2018.1448925.


targets; a particular goal or behavioural modification; and, importantly, a thorough understanding

of the domestic political, economic, and societal context, so that the campaign itself does not stand

out enough to be immediately identifiable to those targets or any observers. “For an ‘external’

strategic narrative to have a maximal effect in the targeted audience, it must contribute to the work on

local myths. In other words, to be effective, a strategic narrative must be able to resonate with local

political myths.”671 If a hostile party attempts to exert covert influence on a foreign population

without a thorough understanding of the context in which this influence is intended to operate,

there is a reasonable chance that it will not succeed because it does not resonate with the pre-

existing local myths or closely-held socio-political beliefs of the target population. The importance

of understanding the belief structures of the targeted populations cannot be overstated in the

design of successful disinformation campaigns, which is why the audience needs to be recognised

as vulnerable and a vulnerability and elevated to a security risk to induce greater investment in

protection and countermeasures.

The strength of a strategic narrative is not the sole guarantor of success in a disinformation

campaign – anything involving the influence of human perception or attempted (covert) behaviour

modification must deal with the vagaries of the human psyche, which are themselves inherently

unpredictable. However, success is more likely if the campaign leans into these pre-existing beliefs

and societal conditions, exacerbating social and partisan divides by feeding disinformation into the

local information ecology. The Protocols of the Elders of Zion, mentioned earlier, had the impact it did because of the anti-Semitic beliefs then prevalent throughout Europe. In the modern information environment, the easiest way of doing this, as discussed in the previous section, is through the large social media platforms: the ‘you might like this’ algorithm suggests groups or pages akin to those toward which you already show a proclivity. The echo chamber that this creates reinforces pre-existing beliefs by surrounding your digital self with groups and pages that hold the same or similar beliefs and convictions.

It is also important to note that disinformation campaigns rarely run along a single track – studies

of the Russian influence operations against the 2016 and 2018 elections in the US showed multiple,

often confounding operations being run simultaneously. For instance, following the Trump victory

in 2016, a series of pro-Trump and anti-Trump rallies were organised by the IRA for similar

671 Schmitt p. 492.

Chapter 6 – CCI in an Age of Disinformation

240

locations and times.672 While it may seem counterintuitive to have two operations with opposite

goals running at the same time, this has the effect of increasing the existing divisions between

groups within a society. The echo chamber adds to the entrenchment of views and beliefs; the lack of countervailing narratives within that echo chamber reduces empathy toward, or even knowledge of, contrary views, except insofar as those alternate views are framed as wrong or held only by a minority.

It is also worth noting that a strategic narrative is never only about the words used – ideas can be communicated just as well on social media through symbols and graphics as through words. In fact,

Russian operatives targeted different groups with diverse forms of communication in their

operations. “If strategic communications are 'the use of words, actions, images, or symbols to

influence the attitudes and opinion of target audiences to shape their behaviour in order to advance

interests or policies, or to achieve objectives', it seems right to claim that...the Kremlin definitely

employs them.”673

While Russian disinformation operations are often the most widely publicised, in part due to media

coverage of the US 2016 and 2018 elections, as well as the Senate Select Committee on Intelligence

investigation into the attempted influence operations, the Russian Federation is by no means the

only government that is familiar with, and routinely uses, disinformation campaigns as a matter of

proactive security. Rid discusses the history of American and German operations in Active

Measures,674 and Cindy Otis uses a number of examples of historical disinformation in her volume

True or False.675 Disinformation campaigns can also just as easily be undertaken by domestic actors,

up to and including the governing administration of a country against its own audience, as seen

with the Trump administration's claims of electoral fraud and victory following the election of Joe

Biden as the 46th President of the US in November 2020. Regardless of the attack vector,

disinformation campaigns with a strong strategic narrative, which tie into local myths and reinforce

existing divisions, are a danger to democratic societies. The targeting of domestic audiences

672 “After the election of Donald Trump in or around November 2016, Defendants and their co-conspirators used false U.S. personas to organise and coordinate U.S. political rallies in support of then president-elect Trump, while simultaneously using other false U.S. personas to organise and coordinate U.S. political rallies protesting the results of the 2016 U.S. presidential election. For example, in or around November 2016, Defendants and their co-conspirators organised a rally in New York through one ORGANIZATION-controlled group designed to “show your support for President-Elect Donald Trump” held on or about November 12, 2016. At the same time, Defendants and their co-conspirators, through another ORGANIZATION-controlled group, organised a rally in New York called “Trump is NOT my President” held on or about November” Robert S. Mueller, United States of America v. Internet Research Agency LLC, No. 1:18-cr-00032-DLF (United States District Court for the District of Columbia 2018) p. 23 §57. 673 Ofer Fridman, “‘Information War’ as the Russian Conceptualisation of Strategic Communications,” The RUSI Journal 165, no. 1 (January 2, 2020): 44, https://doi.org/10.1080/03071847.2020.1740494. 674 Rid, Active Measures: The Secret History of Disinformation and Political Warfare, 17–144. 675 Cindy Otis, True or False: A CIA Analyst’s Guide to Spotting Fake News (New York: Feiwel and Friends, 2020).

Courteney O’Connor – PhD Thesis 2021

241

increases the attack surface available to hostile actors and requires significantly more investment

and attention from state security agencies.

In terms of the strength and efficacy of externally imposed strategic narratives, there are two fronts

with which CCI must engage: identification and management, or identification and mitigation.

Which of these becomes necessary will depend on the timing of the identification and the breadth

and reach of the actors introducing the disinformation into the targeted political information

ecology. Effective CCI by actors at all levels (state; organised group; individual) will enable the

identification of externally imposed narratives early in the operation. The earlier a narrative is

identified, the better the potential consequences can be managed. Efficient use of tactical CCI is

important at this stage, with the focus being on active practices – identifying the narrative in

question, actively removing or negating the false information, and generally engaging in risk and

information integrity management, so that the targeted population does not have time to integrate

the false narrative into their belief structures. If the external narrative is not identified before this

stage, however, it becomes a matter of effective mitigation – identifying which parts of the external

narrative are affecting the local population, transforming the strategic approach to those affected

parties such that the false narrative can be rectified, and preventing the further transmission of the

incorrect or negatively framed information. The greater the percentage of a state's population that uses CCI in passive and active forms, the higher the resilience of that population to the

imposition of external narratives and infiltration of disinformation. In turn, the higher the

resilience, the more likely it is that external narratives will be identified before they can start to

integrate into local belief structures and affect the integrity of democratic institutions.

Countering a strong strategic narrative from a hostile actor, either foreign or domestic, also

requires significant breadth and depth of knowledge by counterintelligence actors; CCI comes into

play in the protection of domestic actors, the education of the same in increasing their own

security, as well as in the suppression of false information in favour of accurate facts and reporting

across cyber-enabled technologies and social media platforms. According to Olivier Schmitt,

[c]ountering the elements of strategic narratives that resonate with local and coherent myths strongly matter since those myths seem to have high resonance within an entire society...Crafting such counter-narratives will require individuals well acquainted with the local myths of specific political communities, and be able to work with the narrative structures and contents of such myths.676

676 Schmitt, “When Are Strategic Narratives Effective?” p. 506.


Counter-narratives, by definition and nature, need to work against the tide of disinformation

campaigns and strategic narratives already in play – they are reactive, and so need to accurately

read the context of the information landscape and identify areas in which ground can be gained,

and false narratives overturned.677

Ongoing CCI and cyber security efforts can prevent future strategic narratives and disinformation

campaigns from gaining significant ground, but if a strategic narrative is already in play and has

been influencing individuals for a considerable time (and is both based on and complements local

or social myths), it is difficult to convince believers that their convictions are wrongly held.

According to Emily Thorson, “...even when misinformation is debunked, the marketplace [of

ideas] can fail in two ways. First, some people may not believe the correction, instead maintaining

their belief in the false information. This failure is called “belief persistence” ...a second failure of

the marketplace of ideas [is] the possibility that exposure to corrected misinformation may

create...“belief echoes,” or effects on attitudes that persist despite an effective correction.”678

Countering Disinformation – Belief Echoes

Belief persistence and belief echoes are vulnerabilities that can continue to affect perception and

action long after the inaccuracies have been identified and corrections made (or at least attempted)

– “...exposure to misinformation creates belief echoes: Lingering effects on attitudes that persist

even after the misinformation is effectively corrected.”679 Essentially, once a piece of information

has been accepted as true, even if that same information is then proved not to be, and is accepted

to be false, it is still possible that the initial impression created by that false information will

continue to affect an individual's perception of events, people, or information, which will then

affect that person's ongoing opinions and actions. This is a particularly dangerous and problematic

consequence of disinformation campaigns with strong strategic narratives; “...even if a correction

is effective in updating beliefs, it may not change individuals' related attitudes...; ‘belief echoes’

677 Part of proactively engaging in CCI in the disinformation space would be to identify possible threat narratives and design awareness campaigns that forewarn audiences of what those threat narratives might try to convey, pre-emptively negating future disinformation. This is in line with the Swedish response to foreign interference (particularly Russian active measures), which will be discussed further in chapter six. There are risks to this approach, however; there is no guarantee that the audience will not misremember the pre-emptively-identified disinformation as true, thus negating the counter-narrative operation entirely. 678 Emily Thorson, “Belief Echoes: The Persistent Effects of Corrected Misinformation,” Political Communication 33, no. 3 (July 2, 2016): p. 461, https://doi.org/10.1080/10584609.2015.1102187. 679 Thorson p. 476.


[refer] to the situation where misinformation continues to influence related attitudes even when

people's belief is reverted back to the pre-misinformation-exposure state.”680

This presents a unique challenge for CCI – how can the tools of counterintelligence be adapted to

reduce and mitigate the effects of these belief echoes? If false information can be corrected, and

individuals convinced to accept the update to their accepted truths, but the false information

continues to affect their long-term behaviour and perception, then how should counterintelligence

actors proceed to ensure the greatest possible degree of security and resilience? This is assuming

that individuals can be persuaded that their accepted truths are, in fact, falsities – what of those

individuals who choose to maintain their convictions, despite being provided proof to the

contrary? What if the echo chambers of modern socio-political information ecologies have proven

too efficient at maintaining false beliefs? Counter-narratives, then, need to battle strategic

narratives, false information, belief persistence and belief echoes. These are significant challenges

for liberal democratic nations to maintain the integrity of their critical democratic infrastructure.

It is difficult to conceive of a counter-narrative wide-ranging and convincing enough that it can be deployed soon enough after the identification of a hostile actor’s disinformation campaign. It is moreover difficult to imagine how it would be able to shift the perception of truth and belief away from the false narrative in time to prevent that false narrative from becoming embedded in the identity and belief structures of that individual. A counter-narrative that could do this would

be a proactive CCI operation that would require the same targeting and access to personal

information used by the hostile actor in the initial campaign, which is difficult for domestic intelligence agencies to both collect and exploit, noting that in democratic nations these agencies are usually constrained in their capacity to operate on home soil. Efficient CCI practices undertaken consistently in both passive and active forms will be most useful in combatting the

potential consequences of belief echoes, as these echoes are based on the continual layering of

information and perspectives built over time, and thus have roots in deeply held beliefs of the way

in which the world works. Educating the audience to engage in passive and active CCI wherever

possible (and, eventually, as a matter of habit) will build the resilience of the audience to introduced

disinformation and strengthen the integrity of a given democracy through verification and

transmission of information that is demonstrably accurate. We see here the need to shift from

active CCI to passive CCI through time, by building resilience in individuals and communities such

680 Jianing Li, “Toward a Research Agenda on Political Misinformation and Corrective Information,” Political Communication 37, no. 1 (January 2, 2020): p. 129, https://doi.org/10.1080/10584609.2020.1716499.


that a greater percentage of false information is either rejected or mitigated before potentially

negative consequences can occur. A comprehensive societal approach to and use of active CCI

will allow the state to focus on disrupting the overarching externally imposed strategic narratives

and the overall (dis)information operation. Communicating accurate information and identifying

external narratives will also reinforce the trustworthiness of democratic institutions, like the

governing administration and the mainstream media, both of which are subject to degradation

when disinformation is accepted into belief structures.

The echo chamber effect reinforces belief echoes. I argue that, short of removing an individual from the social contexts from which they derive most of their information, it is entirely possible that even after false information has been successfully identified and countered, those individuals may still be successfully targeted by subsequent disinformation operations, because they are already susceptible to believing that which aligns with their personal views. This may be in part

attributable to their belief echoes. The cycle of reinforcement, narrative strengthening, and broadening in echo chambers, particularly on and through the social media platforms, is difficult to interrupt. This is especially true if the individuals that comprise these social groupings are

disinclined to believe that they or their information may be factually compromised or lacking in

evidence.

Belief echoes cannot be avoided. They are an intrinsic consequence of the human psyche and

information processing sequences. The CCI measures that need to be taken to avoid, insofar as is

possible, the potential negative consequences to democratic populations of belief echoes initiated

by hostile disinformation campaigns are multi-temporal, ongoing, and cannot be guaranteed to succeed. Tactical CCI will help mitigate the consequences of cyber-attacks and exploitation in the

immediate and short-term response, but an overarching strategic approach is necessary to prevent

a cyclical exploit-mitigate-exploit situation. Because belief echoes are a fundamental part of the

human psyche, it is questionable whether or not recognition of incorrect information can or will

prevent the echoes of that mis- or disinformation, or how those echoes will affect future perception or action. The longer or more strongly a belief is held, the greater the probability that the individual will dismiss counterfactual information, or otherwise refuse to

relinquish the problematic behaviour or assumptions. We also need to recognise the challenge of

cognitive biases like the ‘backfire effect’, which is the “strengthening of an original belief in


misinformation that is the subject of an attempted correction.”681 Individual engagement with

passive and, preferably, active CCI measures such as password security and verification of

information prior to transmission will help with this, but such practices need to be identified and

taught, societally.

Those in positions of authority, or with a significant social platform following, are capable of both

reinforcing those beliefs and introducing counter-narratives to combat them; it may be possible to

override the disinformation by communicating through a trusted figure or platform. Counter-

narratives introduced through cyber-enabled technologies and on the social media platforms need

to invite trust by using figures of authority (in the eyes of the target population, not necessarily the

general population), build on local myths and accepted beliefs, and appear or be shared in

significant quantity such that the disinformation is less likely to be seen or believed than the

accurate information. Unfortunately, recent studies have shown that false information is six times more likely to be shared on social media platforms than accurate information, which makes the

manufacture and spread of factual counter-narratives difficult to both conceive and enact

efficiently.682 Counter-narratives as part of a CCI campaign would need the participation and

support of both state and corporate actors – disinformation campaigns are dangerous to the

democratic health and integrity of modern democracies, and few countermeasures could be

undertaken without the permission and participation of the private platforms through which much

of the disinformation is communicated.

Disinformation campaigns with strong strategic narratives, then, present a very real threat to the

integrity and resilience of modern democracies. In a 2020 interview with The Atlantic, former US

President Barack Obama stated that “[i]f we do not have the capacity to distinguish what’s true

from what’s false, then by definition the marketplace of ideas doesn’t work. And by definition our

democracy doesn’t work.”683 Former US Vice President Dick Cheney opined that disinformation

campaigns of the sort alleged to have been conducted by Russia against US elections constituted

681 Gregory J. Trevors et al., “Identity and Epistemic Emotions During Knowledge Revision: A Potential Account for the Backfire Effect,” Discourse Processes 53, no. 5–6 (July 3, 2016): p. 341, https://doi.org/10.1080/0163853X.2015.1136507. For more on the backfire effect, see Stephan Lewandowsky et al., “Misinformation and Its Correction: Continued Influence and Successful Debiasing,” Psychological Science in the Public Interest 13, no. 3 (n.d.): 106–31. 682 Dizikes, “On Twitter”; Greenemeier, “False News Travels 6 Times Faster”; Soroush Vosoughi, Deb Roy, and Sinan Aral, “The Spread of True and False News Online,” Science 359, no. 6380 (March 9, 2018): 1146, https://doi.org/10.1126/science.aap9559. 683 Barack Obama, The Atlantic Daily: Our Interview With Barack Obama, interview by Caroline Mimbs Nyce, November 17, 2020, https://www.theatlantic.com/newsletters/archive/2020/11/why-obama-fears-for-our-democracy/617121/.


an attempt “to interfere in major ways with… basic, fundamental democratic processes,”

continuing that “in some quarters that would be considered an act of war.”684 If democratic

populations are unable to ascertain which sources are trustworthy and which are not, the social

contract between governments and citizens will start to wear thin as distrust of institutions, already

low, spreads. Why, then, has it taken so long for audiences to be recognised as an acute security

vulnerability, as much at risk of attack as critical infrastructure, if not more so?

Elevating the Audience

According to the elevation theory framework, several elements need to be present in order for an

elevating act to be successful – in other words, for either riskification or securitisation to occur.

There must be an identified concern that can no longer be handled with normal political powers;

there must be an identified opponent; there must be a relationship of historical conflict or distrust

between the attacker and the target; there must be an observable record of increasing hostile action;

and the increase in risk or threat should be justified before, and accepted by, the general population

(being the audience of a democracy). Elevation theory also stipulates that rather than a

performative justification by a legitimate authority, certain events and even images can be

considered elevating acts in and of themselves. Consider the 9/11 attacks and the well-known

image of the ruined twin towers, which immediately securitised international terrorism and kick-started more than two decades of conflict in the Middle East.685 Alternatively, the published

statements or policies of a governing administration can serve as elevating discourse, which is the

approach I undertook in chapter four. The analysis of the UK demonstrated that cyberspace is

being elevated as a threat to national security at the state level, and multiple measures of risk

management have been undertaken. The cyber security strategies analysed illustrate the growing

understanding that disinformation has the potential to negatively affect national security. In the

period analysed, however, the UK did not seem to develop a thorough understanding of the

potential consequences of cyber-enabled disinformation on the British audience. While there was

a continued focus on assets (infrastructure, data, systems), there was not a simultaneous elevation

of the threats to the audience.

684 Dick Cheney, quoted in Petra Cahill, “Cheney Says Russian Meddling in U.S. Election Possibly an ‘Act of War,’” NBC News, March 29, 2017, https://www.nbcnews.com/politics/white-house/dick-cheney-russian-election-interference-could-be-seen-act-war-n739391.


I argue that the semi-covert nature of disinformation operations against foreign publics prevented

an elevation of foreign interference until such time as that interference and its constituent influence

operations (disinformation campaigns being one form of influence and interference) became

apparent and more widely recognised. Had there been a strategic approach to CCI and a society-

wide understanding of CCI measures in relation to personal as well as national security, these

campaigns may not have been as successful as they were. Even so, what we have seen over the

past several years in terms of foreign interference campaigns has contributed to event-based

elevation more than a lengthy political justification process. Following a brief overview of the

ongoing importance of audiences not just to elevating acts but to security, subsequent sections will

examine psychometric data and micro-targeting before considering two examples of elevation: one

successful, and one failed. The chapter will conclude with a discussion of threat elevation in

cyberspace and the related development and importance of CCI specific to political information

ecologies.

Overlooking Audiences

The human factor is as important to cyber security and counterintelligence as any technology. Why, then, have decision makers continued to weigh infrastructure security more heavily than the

security and resilience of national audiences? The strategic vulnerability of domestic populations

in an era of high technologies and instantaneous communication is being recognised, and there

have been moves to elevate audiences in recent years. However, as we saw in the UK case study,

national security leaders have continued to emphasise threats to infrastructure and data over

threats to audiences from influence and interference campaigns. I posit that this is an error in

judgement which has contributed to the increase in hostile partisanship within nations, a decrease

in public trust in institutions, and the success of foreign strategic narratives over factual accuracy.

In democracies in particular, the perception, opinions and behaviour of the domestic population

have considerable influence over the national and international policies of a state, given they elect

their leadership according to a specific and temporally regulated cycle.

Despite this strategic importance and power, however, audiences have only just begun to receive

the recognition and attention required in highly connected democracies. This ease of connection,

the instantaneous flow of information from one side of the planet to the other, and the difficulty

of differentiating true from false in terms of both information and people on the Internet, have

increased the risks of an unprotected and (relatively) cyber-ignorant domestic population.

Accepting the strategic importance of the audience in a democracy and the necessity of protecting


the integrity and resilience of audience opinion, perception, and information, how are people being

targeted? How are domestic and foreign actors choosing which segments of the population to

target to achieve a higher degree of acceptance of their chosen strategic narrative? The answer lies

in the exploitation of the inordinately large volume of information produced each day – massive

quantities of data are being used in the construction of psychometric profiles, which are then used

for micro-targeting.686 The next sections will examine the collection of data across multiple

platforms including the Internet of Things, the concept and use of psychometrics, and the practice

of micro-targeting.

Data Collection and the Internet of Things

The ‘five Vs’ of big data that were identified earlier in this thesis – volume, variety, veracity, velocity, and value – are useful for describing the ‘pile of needles’ problem. There is so much data available, and the volume is increasing so rapidly, that finding the data you need is becoming progressively more difficult – locating a specific needle in a pile of needles, rather than the former problem of finding

a needle in a haystack. As cyber-enabled technologies proliferate and surveillance capitalism

becomes integrated into every facet of daily life, these problems are going to increase

astronomically. Increasingly, the technologies that we surround ourselves with in everyday life –

Alexa, Siri, smart fridges, mobile phones, smart televisions, etc. – collect data on how that

technology is used, as well as on the different people using that technology. All this data is

collected, stored, and exploited, either by the maker of the technology or by third parties to whom

that data is sold – often consented to in the (never read) terms of service agreed to by new users.687

The increasing number of cyber-enabled technologies that depend on cyberspace, at least in part, for full functionality, each emitting a continual stream of data, reflects a continual growth in the global appetite for data, which an article in The Economist has claimed now surpasses oil as the most valuable resource in the world.688 Every part of society is becoming digitised, connected – even

children, unable to consent, are becoming contributing parts of the Internet of Things, as their

686 Borgesius et al., “Online Political Microtargeting”; John Rust and Susan Golombok, Modern Psychometrics: The Science of Psychological Assessment (New York: Routledge, 2009), https://www.routledge.com/Modern-Psychometrics-The-Science-of-Psychological-Assessment/Rust-Golombok/p/book/9780415442152. 687 David Berreby, “Click to Agree with What? No One Reads Terms of Service, Studies Confirm,” The Guardian, March 4, 2017, https://www.theguardian.com/technology/2017/mar/03/terms-of-service-online-contracts-fine-print; Caroline Cakebread, “You’re Not Alone, No One Reads Terms of Service Agreements,” Business Insider Australia, November 15, 2017, https://www.businessinsider.com.au/deloitte-study-91-percent-agree-terms-of-service-without-reading-2017-11. 688 The Economist, “The World’s Most Valuable Resource Is No Longer Oil, but Data,” The Economist, May 6, 2017, https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data. This point was also made by former Cambridge Analytica employee Brittany Kaiser – see Amer and Noujaim, The Great Hack. The Economist, “The World’s Most Valuable Resource Is No Longer Oil, but Data.” This point was also made by former Cambridge Analytica employee Brittany Kaiser – see Amer and Noujaim, The Great Hack.


toys and educational applications and technologies provide information to not only their guardian

adults, but the owners of the devices and platforms they use.689

Many of these cyber-enabled technologies never have their default factory settings changed – these

are the easiest devices to target for malicious actors seeking to build a botnet (examined in chapter

five), because they can be hijacked en masse, with a single script able to locate and access each

device.690 Devices like televisions, speakers, fridges, and air conditioning units that are cyber-enabled emit data which is scooped up by the manufacturer of the device, but can also be

exploited by hostile actors. The botnets used against Estonian systems in 2007 have been estimated

to include anywhere from hundreds of thousands to two million IoT devices alongside actual

computers.691 It is reasonable to assert that there are few spaces in the modern age, even those considered ‘private’ like the home, in which no form of surveillance or data collection is being undertaken – the connected home is itself a source of data about the individuals who live there. This doesn't begin to consider commercial surveillance such as CCTV, security cameras, or traffic monitoring systems that are so public and

accepted that they are often not observed or noticed at all – public spaces can be monitored by

legitimate authorities as a matter of course.692

Consider the mobile phone – a portable, roaming personal surveillance and data production platform. These devices are sources of an incredible volume of psychometric and targeting data.

In addition to geolocation capabilities through modern tracking, individuals increasingly use their

mobile phones as an integral life platform – all things which can be done on the phone increasingly

are done on the phone, concentrating an inordinate volume of personal information on a singular

689 Kate Fazzini, “Toys and Apps Often Track Your Kids and Collect Information about Them — Here’s How to Keep Them Safe,” CNBC, November 24, 2018, https://www.cnbc.com/2018/11/23/connected-toys-privacy-risks.html; Chavie Lieber, “Big Tech Has Your Kid’s Data — and You Probably Gave It to Them,” Vox, December 5, 2018, https://www.vox.com/the-goods/2018/12/5/18128066/children-data-surveillance-amazon-facebook-google-apple; Cristina Miranda, “Buying an Internet-Connected Smart Toy? Read This.,” Consumer Information, December 6, 2018, https://www.consumer.ftc.gov/blog/2018/12/buying-internet-connected-smart-toy-read. 690 This was one of the functions of the Mirai malware, which “spreads to vulnerable devices by continuously scanning the Internet for IoT systems protected by factory default usernames and passwords.” Brian Krebs, “Who Makes the IoT Things Under Attack?,” Krebs on Security (blog), October 3, 2016, https://krebsonsecurity.com/2016/10/who-makes-the-iot-things-under-attack/. See also Josh Fruhlinger, “The Mirai Botnet Explained: How IoT Devices Almost Brought down the Internet,” CSO Online, March 9, 2018, https://www.csoonline.com/article/3258748/the-mirai-botnet-explained-how-teen-scammers-and-cctv-cameras-almost-brought-down-the-internet.html; Anthony Spadafora, “Mirai Botnet Returns to Target IoT Devices,” TechRadar, March 19, 2019, https://www.techradar.com/news/mirai-botnet-returns-to-target-iot-devices. 691 Davis, “Hackers Take Down the Most Wired Country in Europe”; Steve Mansfield-Devine, “Estonia: What Doesn’t Kill You Makes You Stronger | Elsevier Enhanced Reader,” Network Security, 2012, https://doi.org/10.1016/S1353-4858(12)70065-X. 692 Fritz Allhoff and Adam Henschke, “The Internet of Things: Foundational Ethical Issues,” Internet of Things 1–2 (September 2018): 55–66, https://doi.org/10.1016/j.iot.2018.08.005.

Chapter 6 – CCI in an Age of Disinformation


device which emits data not only to the device manufacturer but also to many, if not all, of the

designers of the applications that organise, entertain, and communicate.693 The

individual creates and transmits an immense volume of data every day, and this data is exploitable

by a variety of actors. By engaging in active CCI practices such as the verification of information

before further transmission or concealment of PII, the individual can reduce their data footprint

and so reduce their targetability. Even passive CCI practices like deleting (unopened) emails from

suspicious or unrecognised addresses, or covering your password when entering it into a device in

a public space, will reduce opportunities for malicious actors. That targetability reduction will also

contribute to the aggregate cyber security of the state, because individuals who are less targetable

for information operations represent a lower risk profile.

Psychometrics

Psychometrics is “the science of modern psychological assessment,”694 where characteristics of the

human psyche are measured, including knowledge, abilities, personality traits and attitudes. One

measure of such characteristics is the OCEAN profile – a person’s levels of openness,

conscientiousness, extraversion, agreeableness, and neuroticism. Based on the results of the

OCEAN assessment, one can tailor advertising and information in order to influence specific

individuals, instead of massive segments of the population, as with more traditional advertising

campaigns.695 These traits can now be measured with the data that is collected online and through

IoT devices, in addition to any psychological profiles that individuals fill out themselves, and these

profiles become the foundation for psychological campaigns that are designed to influence

individual choice and action. The intent is to analyse data that will reveal how an individual

thinks, what influences their choices, and how open the individual is to having their mind

changed or their opinion reinforced. If someone is politically neutral and relatively open to

change, for example, it may be easier to sway their political perceptions to the right

693 Nicole Kobie, “The Internet of Things: Convenience at a Price,” the Guardian, March 30, 2015, http://www.theguardian.com/technology/2015/mar/30/internet-of-things-convenience-price-privacy-security; Daniela Popescul and Mircea Georgescu, “Internet of Things - Some Ethical Issues,” The USV Annals of Economics and Public Administration 13, no. 2 (2013): 7; Rolf H. Weber, “Internet of Things: Privacy Issues Revisited,” Computer Law & Security Review 31 (2015): 618–27, https://doi.org/10.1016/j.clsr.2015.07.002. 694 John Rust and Susan Golombok, Modern Psychometrics: The Science of Psychological Assessment (New York: Routledge, 2009), https://www.routledge.com/Modern-Psychometrics-The-Science-of-Psychological-Assessment/Rust-Golombok/p/book/9780415442152 p. 4. 695 J. Isaak and M. J. Hanna, “User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection,” Computer 51, no. 8 (August 2018): 56–59, https://doi.org/10.1109/MC.2018.3191268; Charles Kriel, “Fake News, Fake Wars, Fake Worlds,” Defence Strategic Communications 3 (2017): 171–89; S. C. Matz et al., “Psychological Targeting as an Effective Approach to Digital Mass Persuasion,” Proceedings of the National Academy of Sciences 114, no. 48 (November 28, 2017): 12714, https://doi.org/10.1073/pnas.1710966114; PsyMetrics, “The OCEAN Profile,” PsyMetrics, 2013, https://www.psymetricsworld.com/ocean.html.

Courteney O’Connor – PhD Thesis 2021


with specific, targeted advertising or (dis)information. Someone who has a deep-seated political

opinion is less likely to be swayed, and so may be passed over.696

Psychometric profiles, long in use among advertisers, came to the fore in political campaigning

and profiling with the 2016 US presidential election and the revelation that Cambridge

Analytica, a political consulting company based in the UK, had undertaken microtargeted

campaigns on behalf of the Cruz and then Trump campaign teams.697 Cambridge Analytica worked

with conservative (right-wing) actors and claimed to have built profiles comprising between four

and six thousand data points on each voting-age American.698 Psychometric

profiles, under proper analysis, do not just provide specifics on likes and dislikes – taken together,

they can also provide information about values and beliefs.699 Once an actor has an in-depth

understanding (or even just a surface understanding in combination with sufficient personal

information) of individual values and beliefs, those values and beliefs can be targeted for

manipulation with a strategic application of disinformation. As Dr. Stelzenmüller testified

before Congress, hostile actors are not looking to hack elections; they are looking to ‘hack people’s

minds.’700 Psychometric profiles help actors to do exactly that. They contribute to the targeting of

information operations like disinformation campaigns by helping to identify which sections of the

population will be more open to accepting external strategic narratives, or which sections of the

696 Such psychometric targeting has been the province of advertisers and marketers for a while now, at times being highly invasive of customer’s privacy. “Imagine that you buy the following items: Cocoa-butter lotion, a large purse, vitamin supplements (zinc and magnesium) and a bright blue rug. Now imagine that these purchases tell someone that there is an 87 per cent chance that you are pregnant, and due in six months. The company Target did this – used these and other pieces of data to produce a ‘pregnancy score’ for its customers. This surveillance became public knowledge when Target sent a family a ‘pregnancy pack’, congratulating the shopper on their pregnancy. The shopper was a teenage girl, living at home and her parents were unaware that their child was sexually active, let alone pregnant.” Henschke, Ethics in an Age of Surveillance: Personal Information and Virtual Identities, 3. 697 Cambridge Analytica, “CA Political,” July 2, 2017, https://web.archive.org/web/20170702090555/https://ca-political.com/; Harry Davies, “Ted Cruz Campaign Using Firm That Harvested Data on Millions of Unwitting Facebook Users,” the Guardian, December 11, 2015, http://www.theguardian.com/us-news/2015/dec/11/senator-ted-cruz-president-campaign-facebook-user-data. 698 Working with the Trump campaign, Cambridge Analytica designed and released into the political information ecology a variety of false or misleading information and memes, in order to reduce trust in the Clinton campaign and either reduce the likelihood of Clinton receiving votes, or increase the likelihood of undecided voters swaying toward Donald Trump. Using their psychometric profiles and a microtargeting strategy (the software for which was restricted from export prior to 2015 due to being considered 'weapons-grade' communication software) specific to Cambridge Analytica, they identified those individuals and communities likely to be swayed to Donald Trump's campaign. 
These individuals, the ‘persuadables,’ were those people Cambridge Analytica had identified as being open to persuasion in their voting preferences. Amer and Noujaim, The Great Hack; Cambridge Analytica - The Power of Big Data and Psychographics (Concordia, 2016), https://www.youtube.com/watch?v=n8Dd5aVXLC. 699 CB Insights, “What Are Psychographics?,” CB Insights Research, May 6, 2020, https://www.cbinsights.com/research/what-is-psychographics/. 700 Constanze Stelzenmüller, “The Impact of Russian Interference on Germany’s 2017 Elections,” Brookings (blog), June 28, 2017, https://www.brookings.edu/testimonies/the-impact-of-russian-interference-on-germanys-2017-elections/.


population already have embedded beliefs that will aid in the domestic transmission of

disinformation.

What makes modern targeting both more effective and (potentially) more invasive of privacy, and

thus dangerous to the electoral integrity and resilience of modern democracies, is that it is

increasingly difficult to protect sources of information, or indeed to identify when and where a

disinformation campaign is being carried out. This is in large part because social media

platforms such as Facebook have policies in place which prevent full disclosure of advertising. In

terms of the disinformation used by both domestic and foreign actors in the 2016 US election, for

example, Facebook is a black box, as previously mentioned: unless a user was targeted for influence,

they did not see the advertising or disinformation, short of a targeted individual taking a screenshot

and sharing that further. The use of passive and active CCI measures such as password protection,

enabling device firewalls and privacy settings, concealment of PII and the verification of

information will reduce the individual’s data footprint, and that will reduce the overall pool of

individuals that can be easily targeted for disinformation operations.701

Microtargeting

Microtargeting is not itself a surprising concept – the more information you have on the individuals

within a population, the easier it is to ascertain whether the specific individuals that could make a

difference to your cause are persuadable. Microtargeting allows for a more efficient deployment

of resources relative to the likely gains of an operation – why spend hundreds of thousands of

dollars advertising to a large population when persuading only 10% of that population would

ensure a positive outcome for your campaign? And why risk advertising to a smaller segment of

the population but failing to capture the segment that would have made the most difference?

Microtargeting allows campaign initiators to apportion funds and capabilities as efficiently as

possible, to ensure the best possible outcome. When advertisers do this, it is a liberal capitalist

economy at work. When domestic political actors do this, it is strategic campaigning. When foreign

actors do this for political purposes, given sovereignty and non-interference laws and norms, this

is foreign interference in domestic affairs.

In the 2016 US presidential campaign, microtargeting allowed Cambridge Analytica to identify

specific individuals who could be persuaded to vote for the Trump campaign based on their values

701 A significant proportion of the audience would need to engage in these practices to make an observable difference given the amount of data in existence and being produced, but there must be a starting point.


and beliefs, as well as their fears.702 Earlier work was undertaken by Cambridge Analytica for

Republican Senator Thom Tillis (North Carolina), who spent $345,000 for “’micro-targeting’ from

2014 through 2015, geared at helping Tillis defeat Democratic incumbent Kay Hagan in 2014, the

most expensive US Senate race in history at the time.”703 Similar targeting identified those

individuals in the UK who, fearful of what continued or further integration with the European

Union could mean for British independence, could be persuaded to vote 'Leave' rather than 'Remain'

in the 2016 referendum on EU membership, the result of which led to the activation of Article 50.704 The ongoing digitisation of expression

and embrace of highly networked and connected technologies as part of daily life are providing,

through cyberspace and IoT technologies, a continual and vast stream of data that Shoshana

Zuboff calls ‘behavioural surplus’: data about our use of technology, beyond what is needed to

provide the service itself, which provides companies like Google and Facebook with tremendous amounts of revenue.705 Without

education in cyber security and the strengthening of personal resilience to outside

influence, individuals in modern democracies are proving susceptible to the manipulation of

perceptions and behaviour by outside forces, especially through the introduction and

communication of harmful disinformation into the socio-political information ecology that finds

expression across the massive social media platforms.

In addition to the personal responsibility of the individual in regard to verifying the accuracy of

information found online (which I would argue is neither widely understood, nor recognised as

necessary), there is a responsibility both on the part of the private corporations that run these

platforms, to ensure the integrity of information transmitted directly through their networks (such

as Facebook's newsfeed and Twitter's timeline), and on the part of the state, to ensure that the

domestic population is not being adversely influenced by hostile actors. The difficulty lies in the

vast information attack surface presented by social media, and cyberspace in general. The current

international economic system is designed to take advantage of the massive data surplus that users of

cyberspace technologies (like email and online shopping), IoT technologies and social media

platforms generate every single day.

The CCI challenge is to formulate methods of protecting against and repelling hostile actors'

attempts at intelligence collection and exploitation, while allowing continued and unimpeded economic

702 Amer and Noujaim, The Great Hack. 703 Maegan Vazquez and Paul Murphy, “Trump Isn’t the Only Republican Who Gave Cambridge Analytica Big Bucks,” CNN, March 21, 2018, https://www.cnn.com/2018/03/20/politics/cambridge-analytica-republican-ties/index.html. 704 Amer and Noujaim, The Great Hack. 705 Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, 63–97.


interactions that both support and contribute to the international economic, communications, and

political systems, and ensuring that government overreach is constrained.706 Consideration must

be given to the millions of individuals, both hostile and not, that take part in these interactions

and utilise these systems. Individual and organised group practices are integral to resilience

against interference below the systemic level and need to be considered in any strategic plan or

approach to CCI; active (potentially proactive) and efficient CCI is required at the systemic level

to counter hostile state and non-state actors. The internal CIR response hierarchy needs to be

known and disseminated so that all actors (individuals, organised groups, and states) know how to

respond in the case of an identified cyber incident, exploitation, or operation.

The next sections will consider examples of unsuccessful and successful threat elevation with

recognition of audiences in cases related to cyberspace and counterintelligence. The first example

of unsuccessful elevation is disinformation concerning the COVID-19 pandemic. The subsequent

example, of successful threat elevation and audience recognition, is disinformation and active

measures in Sweden.

Elevation and Audiences – COVID-19 Disinformation

The novel coronavirus pandemic that started in late 2019 and worsened throughout 2020 has

considerably and negatively impacted not only global health but national, regional, and

international stability as states experience a variety of consequences across the economic, political,

and social spectrums. Malicious actors in cyberspace have taken advantage of the unrest and

instability, with cyber-attacks increasing across the board, to the point of being called a cyber

pandemic;707 dark web marketplaces also started advertising COVID-19 discount codes for

706 While the state approach to CCI/avoidance of overreach might involve oversight mechanisms built into the operation and organizational structures of the intelligence community, there is very much an individual interest in engaging in CCI as a method of privacy assurance. If privacy can be considered a political necessity in liberal democracies, as Adam Henschke avers, then there must be a restriction in what information the state can access. Given the reach of the IoT, for example, there is the potential for an extensive and encompassing surveillance network built into people’s homes. To ensure the integrity and resilience of democracy, there needs to be a space for the citizens which the government cannot normally access. Knowledge of surveillance or even knowledge of the potential for state surveillance might have a chilling effect on individual actions and could induce behavioural change; any decision made under these conditions cannot be qualified as a decision freely made, a necessary condition of the democratic system. Henschke, “Privacy, the Internet of Things and State Surveillance.” 707 Interpol, “INTERPOL Report Shows Alarming Rate of Cyberattacks during COVID-19”; Joyce et al., “How to Protect Your Companies from Rising Cyber Attacks and Fraud amid the COVID-19 Outbreak”; Dan Lohrmann, “2020: The Year the COVID-19 Crisis Brought a Cyber Pandemic,” Government Technology, December 12, 2020, https://www.govtech.com/blogs/lohrmann-on-cybersecurity/2020-the-year-the-covid-19-crisis-brought-a-cyber-pandemic.html; Cedric Nabe, “Impact of COVID-19 on Cybersecurity,” Deloitte Switzerland, December 15, 2020, https://www2.deloitte.com/ch/en/pages/risk/articles/impact-covid-cybersecurity.html.


malware.708 Disinformation has proliferated and diffused widely and wildly, tying together a

number of different conspiracy theories and communities, from those opposed to vaccinations

(‘anti-vaxxers’) to QAnon.709 Both state and non-state actors are contributing to the hostile and

polluted information ecology surrounding the novel coronavirus, making counter-narratives and

counterintelligence actions and operations difficult to design and deploy effectively, yet vitally

important – failures in CCI here can literally lead to increased infection and death rates. One of

the serious concerns surrounding COVID-19 disinformation is not only the immediate impact to

the individuals who are susceptible to such false information, but the long-term impacts of the

rejection of accurate information about a global health threat in favour of 'fake news.'

Conspiracy Theories and COVID-19

COVID-19 is far from the first event or series of circumstances to become the subject of

conspiracy theories, but it does serve as a recent illustration of the co-optation of a contemporary

event for information operations and the manipulation of local media ecologies. Alongside

increasing political partisanship and distrust in democratic and international institutions, the novel

coronavirus is producing an in-group/out-group dynamic between those who believe COVID-19 to

be real and those who believe it to be a fabrication of hostile adversaries. Among the latter group,

there are a number of 'reasons' for their disbelief in the existence of the virus – former US

President Donald Trump's claim that the virus was a hoax, Brazilian President Jair Bolsonaro's

dismissal of COVID-19 as ‘the little flu,’ or the belief that it is the creation of a secretive sect

trying to control the world.710 Parallel to this was the early false messaging from former President

Trump that the US chief medical advisor Anthony Fauci did not recommend wearing masks;711 that human rights are being

708 Zak Doffman, “New Coronavirus Warning: These ‘COVID-19 Discounts’ Are Now The Most Dangerous Deals Online,” Forbes, March 19, 2020, https://www.forbes.com/sites/zakdoffman/2020/03/19/new-coronavirus-warning-beware-these-covid-19-discounts-the-most-dangerous-deals-online/; Anthony Spadafora, “Hackers Use Covid-19 ‘special Offers’ to Spread Malware,” TechRadar, March 23, 2020, https://www.techradar.com/au/news/hackers-use-covid-19-special-offers-to-spread-malware. 709 Sophie Aubrey, “The Curious Marriage of QAnon and Wellness,” The Sydney Morning Herald, September 26, 2020, https://www.smh.com.au/lifestyle/health-and-wellness/playing-with-fire-the-curious-marriage-of-qanon-and-wellness-20200924-p55yu7.html; E. J. Dickson, “Wellness Influencers Are Calling Out QAnon Conspiracy Theorists for Spreading Lies,” Rolling Stone (blog), September 15, 2020, https://www.rollingstone.com/culture/culture-news/qanon-wellness-influencers-seane-corn-yoga-1059856/; Michael McGowan, “How the Wellness and Influencer Crowd Serve Conspiracies to the Masses,” the Guardian, February 24, 2021, http://www.theguardian.com/australia-news/2021/feb/25/how-the-wellness-and-influencer-crowd-served-conspiracies-to-the-masses. 710 Nick Paton Walsh et al., “Bolsonaro Calls Coronavirus a ‘little Flu.’ Inside Brazil’s Hospitals, Doctors Know the Horrifying Reality,” CNN, May 25, 2020, https://edition.cnn.com/2020/05/23/americas/brazil-coronavirus-hospitals-intl/index.html. 711 Simon Romero, “Fauci Pushes Back against Trump for Misrepresenting His Stance on Masks.,” The New York Times, October 1, 2020, sec. World, https://www.nytimes.com/2020/10/01/world/fauci-pushes-back-against-trump-for-misrepresenting-his-stance-on-masks.html.


abrogated by lockdowns;712 and that society should simply learn to live with the virus until it

burns itself out.713 Whatever the reasons or beliefs, a number of different conspiracy theories about

the virus have surfaced and picked up momentum, demonstrating the vulnerability of the public to hostile

information operations, and the importance of instituting efficient counter-narrative operations as

essential parts of CCI. The next several sections will examine three of the conspiracy theories

surrounding the novel coronavirus, considering the importance of elevating the threat as

perceived by individuals, and the consequences for the security of the democratic state through

the affected resilience and integrity of democratic institutions.

The 5G Theory

Another COVID-19 conspiracy theory holds either that the novel coronavirus does not exist and it

is 5G towers that are making people sick, or that 5G towers are transmitting the virus and people

are catching COVID-19 that way.714 While the World Health Organisation has affirmed that no

research exists linking negative health effects to exposure to wireless technology,

misinformation and disinformation abound concerning 5G technologies.715 In the UK, "the false

claim that radio waves emitted by 5G towers make people more vulnerable to COVID-19 has

resulted in over 30 acts of arson and vandalism against telecom equipment and facilities, as well as

around 80 incidents of harassment against telecom technicians."716 Four 5G towers have been

subjected to arson in Quebec, Canada.717 Stephanie Carvin, formerly of the Canadian Security

Intelligence Service and currently a professor at Carleton University, assesses Russia as the likely

instigator of the 5G disinformation, its reach amplified by botnets.718 The 5G

conspiracy theory has enjoyed some success among the anti-vaxxer community, creating links

between previously disconnected conspiracy theory communities. Carvin theorises that Russian

712 Reuters, “Australian state Violated Human Rights in COVID Lockdown-Report,” Reuters, December 17, 2020, https://www.reuters.com/article/health-coronavirus-australia-towers-idUSKBN28R0EC; Mirko Bagaric, “High Court Likely to ‘Free’ Covid’s Political Prisoners,” The Weekend Australian, September 22, 2020, https://www.theaustralian.com.au/commentary/high-court-likely-to-free-covids-political-prisoners/news-story/68b63c9c4fe6a7bb78a0cba04ff294e5. 713 Daniel Victor, Lew Serviss, and Azi Paybarah, “In His Own Words, Trump on the Coronavirus and Masks,” The New York Times, October 2, 2020, sec. U.S., https://www.nytimes.com/2020/10/02/us/politics/donald-trump-masks.html; Christian Paz, “All of Trump’s Lies About the Coronavirus - The Atlantic,” The Atlantic, November 2, 2020, https://www.theatlantic.com/politics/archive/2020/11/trumps-lies-about-coronavirus/608647/. 714 Reuters Staff, “False Claim: 5G Networks Are Making People Sick, Not Coronavirus,” Reuters, March 16, 2020, https://www.reuters.com/article/uk-factcheck-coronavirus-5g-idUSKBN2133TI. 715 Reuters Staff. 716 OECD, “Combatting COVID-19 Disinformation on Online Platforms.” 717 Jeremy Cohen, “Cell Tower Vandals and Re-Open Protestors — Why Some People Believe in Coronavirus Conspiracies,” The Conversation, May 22, 2020, http://theconversation.com/cell-tower-vandals-and-re-open-protestors-why-some-people-believe-in-coronavirus-conspiracies-138192. 718 Sam Cooper, “Coronavirus Conspiracies Pushed by Russia, Amplified by Chinese Officials: Experts,” Global News, April 28, 2020, https://globalnews.ca/news/6793733/coronavirus-conspiracy-theories-russia-china/.


disinformation surrounding 5G technologies is intended to delay planned rollout in Western

democracies, given the presumed crucial importance of 5G networks to state security.719 The disinformation

campaign is intended to further damage public trust in scientific authorities and public institutions,

particularly those attempting to curb the pandemic.720 In this instance, the disinformation

campaign and the prevalence of its transmission have resulted in physical damage to both infrastructure

and people in several locations globally. While fact-checks are available and public

institutions such as government health agencies and the WHO issue correct information, there

is not presently a counter-narrative strong enough, and properly targeted at the affected individuals,

to have a significant chance of combatting the external strategic narrative.

The Microchip Theory

The third of the conspiracy theories to be discussed here is the Bill Gates theory, or the microchip

theory. According to this narrative, the coronavirus vaccine is being developed by Bill Gates and

will contain a microchip intended to track all citizens who receive the vaccination.721

This disinformation also contains threads of truth, making it more difficult to counter: the Gates

Foundation has funded COVID-19 vaccine and treatment research, but neither of these has

anything to do with microchips.722 In a Q&A session, Gates referred to the possibility of digital

trust certificates in relation to home or kiosk testing for COVID-19 – this was perceived as an

admission of intent to use something like microchips or ‘quantum-dot tattoos’ in order to track

people.723 Research published in December of 2019 that was funded by the Gates Foundation

related to the potential use of a long-lasting skin dye that could be read by purpose-adapted or –

designed technologies that would identify the subjects vaccination history; certainly not a

technology that could be used to track someone.724 The technology is intended for use in areas of

the world suffering extreme poverty and conflict, where it may be difficult to ascertain whether or

not an individual has been vaccinated and if so, what for.725 This research is not related to COVID-

19, but it is the link that conspiracy theorists have used to explain the COVID-19

719 Stephanie Carvin in Cooper. 720 Cooper. 721 Saranac Hale Spencer, “Conspiracy Theory Misinterprets Goals of Gates Foundation,” FactCheck.Org (blog), April 14, 2020, https://www.factcheck.org/2020/04/conspiracy-theory-misinterprets-goals-of-gates-foundation/; Elise Thomas and Albert Zhang, “ID2020, Bill Gates and the Mark of the Beast: How Covid-19 Catalyses Existing Online Conspiracy Movements” (Australian Strategic Policy Institute International Cyber Policy Centre, 2020). 722 Spencer, “Conspiracy Theory Misinterprets Goals of Gates Foundation.” 723 Spencer. 724 Spencer. See the referenced original research on quantum dots: Kevin J. McHugh et al., “Biocompatible Near-Infrared Quantum Dots Delivered to the Skin by Microneedle Patches Record Vaccination,” Science Translational Medicine 11, no. 523 (December 18, 2019), https://doi.org/10.1126/scitranslmed.aay7162. 725 McHugh et al., “Biocompatible Near-Infrared Quantum Dots Delivered to the Skin by Microneedle Patches Record Vaccination.”


pandemic in a way that makes sense according to the information they are receiving or to which

they have access. Past experience and research have shown that engaging with anti-vaxxers is

difficult and often counterproductive, with counter-narratives and the provision of verifiable and

accurate information actually entrenching the beliefs of the anti-vaccination community.726 Belief

echoes are difficult to combat because they operate on a subconscious level. Even when an

individual is informed of the false nature of information, the backfire effect could result in the

accurate information being rejected in favour of the pre-existing false information that has become

part of the individual’s belief structure, resulting in belief persistence. In these contexts, the

fundamental utility of CCI measures and the elevation of the threat to audience psychology cannot

be overstated. In order to successfully engage in risk management in terms of disinformation as a

threat to democratic integrity, there must be a recognition of the importance and vulnerability of

audience belief structures. The human factor is an irrevocably important element of CCI, and the

risk to the audience needs to be elevated accordingly.

Further, resistance to vaccination has increased in recent years, such that general immunity to

'defeated' diseases such as measles has fallen below the official threshold of 'herd' immunity in

developed nations like the UK. This is in part due to conspiracy theories linking conditions such

as autism to vaccinations – a link that is unfounded, demonstrably false, and repeatedly debunked.727 External strategic

narratives, in addition to domestic opposition to vaccinations and distrust of federal authorities,

constitute an increasing global health risk and could jeopardise the vaccination and response plan

for the COVID-19 pandemic. Counter-messaging may prove inefficient, and vaccination mandates may

further entrench distrust and resistance. It is possible that counter-narratives and

counterintelligence operations will only prove fruitful if used before external narratives can gain a

significant foothold in socio-political information ecologies, which is itself difficult in an era of

instantaneous transmission and multi-actor disinformation campaigns.

726 Erica Weintraub Austin and Porismita Borah, “How to Talk to Vaccine Skeptics so They Might Actually Hear You,” The Conversation, 2020, http://theconversation.com/how-to-talk-to-vaccine-skeptics-so-they-might-actually-hear-you-143794; Amy Corderoy, “Vaccine Expert Julie Leask Says Internet Outrage about Vaccination Risks Make Problem Worse,” The Sydney Morning Herald, March 30, 2015, https://www.smh.com.au/national/nsw/vaccine-expert-julie-leask-says-internet-outrage-about-vaccination-risks-make-problem-worse-20150330-1mb3gp.html; Claire Hooker, “How to Cut through When Talking to Anti- Vaxxers and Anti-Fluoriders,” The Conversation, February 17, 2017, 3. 727 Talha Burki, “Vaccine Misinformation and Social Media,” The Lancet Digital Health 1, no. 6 (October 1, 2019): e258–59, https://doi.org/10.1016/S2589-7500(19)30136-0; Centers for Disease Control and Prevention, “Autism and Vaccines | Vaccine Safety | CDC,” Autism and Vaccines, January 26, 2021, https://www.cdc.gov/vaccinesafety/concerns/autism.html.


The Fatal Vaccine Theory

The last of the enduring COVID disinformation narratives to be briefly examined here is the fatal

vaccine theory. According to this theory, one of the vaccines being trialled is actually more

dangerous than the virus itself and is killing the volunteers injected with it. According to the

Australian Strategic Policy Institute, this conspiracy theory appeared around July 2020 and is centred

upon American scientists testing vaccines on Ukrainian people in the city of Kharkiv, because the

vaccine was too dangerous to test on Americans.728 This is total fiction, published originally on the

website of a separatist region in Eastern Ukraine that is known to be pro-Russia and anti-American.

The narrative was picked up and laundered by Russian media, however, and eventually filtered into

the international information ecology; by August 2020 it had been integrated into the larger

schema of coronavirus disinformation available online to anyone with access to the Internet. This

conspiracy theory has also found a home within the antivax community and is an abiding feature

of the COVID-19 disinformation landscape. We have seen claims that Russia is trying to spread

disinformation about US- and European-developed vaccines in order to promote the Russian

‘Sputnik V’ vaccine, a continuation of its ongoing intent to sow chaos and confusion over vaccines

in order to undermine Russia’s adversaries.729

One of the primary problems with disinformation like the fatal vaccine theory is that, now that

genuine vaccines are ready for distribution and use, it is more difficult to engage the large swathes

of the audience already convinced that vaccination will do more harm than good. To achieve community resilience against

the novel coronavirus such that contemporary society can find a level of post-COVID normal

(with some semblance of pre-pandemic levels of economic and social stability), as many people as

possible will need to be vaccinated globally. When actors engage in the creation and communication

of disinformation such that certain communities refuse vaccination on a variety

of grounds, the ongoing resilience and security of the state become questionable.

The international community will need to invest significantly in COVID-19-related CCI; for this

to happen, there needs to be a widespread acknowledgement of the vulnerability of the audience

to imposed strategic narratives and the selective introduction of false information into socio-political

information ecologies. This is a matter for the counterintelligence communities of the state as

much as it is an education and resilience issue. As of writing, coronavirus-related disinformation

728 Thomas, “Covid-19 Disinformation Campaigns Shift Focus to Vaccines.”
729 Jessica Glenza, “Russian Trolls ‘spreading Discord’ over Vaccine Safety Online,” The Guardian, August 23, 2018, http://www.theguardian.com/society/2018/aug/23/russian-trolls-spread-vaccine-misinformation-on-twitter; Scott, “In Race for Coronavirus Vaccine, Russia Turns to Disinformation.”

Chapter 6 – CCI in an Age of Disinformation

260

was still a prominent feature of the international information landscape, and no significant or

coordinated active countermeasures were in place. This shows that despite developments in the

elevation of cyberspace as a threat, the focus remains on assets rather than audiences, which is

negatively impacting the integrity of democratic societies and causing communities to fracture.

CCI is fundamental, not only to increasing resilience to the adversarial operations of hostile actors

but, in this case, to the ongoing health of the global audience. By engaging in active CCI at the

state level (verifying that accurate information is being disseminated and signal-boosting that

information) and promoting the utilisation of similar, active CCI measures among the population,

governments can restructure the narrative surrounding COVID-19 vaccination efficacy and virus

origins at the same time as restoring audience trust in democratic governance. At the proactive

CCI level, governments could identify and disrupt disinformation narratives in a co-ordinated

fashion. There needs to be a higher degree of communication and coordination between states and

organised groups such as corporations (i.e., the large communications and discovery platforms like

Facebook, Twitter, and Google) so that they can proactively engage in CCI to disrupt the communication and

transmission of disinformation to and between individuals. It is an absolute necessity for the

successful undertaking of CCI that the threats to audience psychology and belief frameworks be

adequately elevated and the consequent risks to democratic integrity managed accordingly. A

failure in risk management in terms of applying CCI to COVID-19 disinformation campaigns

could result in the further elevation of the novel coronavirus through a securitisation by extreme

necessity. Elevation of the threats to audience behaviour and decision-making based on

disinformation and adequate engagement of CCI measures and practices will contribute to the

overall security of the audience, as well as to the international system of states.

Recognition and Resilience: Successful Threat Elevation in Sweden

Where various actors of the international community have failed to elevate the threats to the

general public of disinformation and imposed external strategic narratives in relation to the novel

coronavirus pandemic, Sweden stands out as a state with a recent history of proactive

countermeasures to disinformation operations. Sweden effectively elevated both assets and

audiences, and it engaged in a sustained, rigorous, and nuanced CCI campaign to build resilience in

those audiences.

As early as the 1970s, Sweden had identified the dangers posed to its general public by

disinformation circulated in order to influence public perception, particularly in regard to the


then-Soviet Union.730 As a result, the Swedish government has been attentive to disinformation

campaigns and the introduction of false information into Swedish socio-political information

ecologies generally. For example, both the Swedish Media Council and the Civil Contingencies

Agency have teams and publications dedicated to educating the professional journalism

and public service sectors, as well as the general public, in the identification of, and resilience

to, disinformation and foreign influence campaigns.731 Sweden has successfully recognised and

elevated the disinformation threat to the general public, identifying the threat to democratic

institutions of a disinformed public operating according to imposed external strategic narratives.

The next sections will briefly examine the relationship between the Swedish public, foreign

influence campaigns (with specific reference to Russian active measures), the Swedish concept of

modern total defence, and the countermeasures utilised in order to ensure community resilience

to external influence.

Audiences and Defence in Sweden

The audience is already recognised as an integral element of modern defence in Sweden, both as a

contributing sector and as a strategic vulnerability. It is possible that geographic proximity to

the former Soviet Union and then the Russian Federation has impressed upon successive Swedish

governments the dangers posed to democratic institutions by external influence and imposed

strategic narratives – “…in the eyes of Sweden’s security police Säpo the country [Russia] is one

of the biggest intelligence threats to Sweden. It has accused Russia both of spying and trying to

influence the Swedish public debate.”732 Whatever the causes, the fact of the matter is that there is

an overt acknowledgement within Swedish defence and government bodies that in order to

properly secure Sweden from modern threats, there is a twofold state responsibility.733 First, to

730 Martin Kragh and Sebastian Åsberg, “Russia’s Strategy for Influence through Public Diplomacy and Active Measures: The Swedish Case,” Journal of Strategic Studies 40, no. 6 (September 19, 2017): 780–81, https://doi.org/10.1080/01402390.2016.1273830; Rid, Active Measures: The Secret History of Disinformation and Political Warfare, 252–53. 731 Kristine Berzina, “Sweden — Preparing for the Wolf, Not Crying Wolf: Anticipating and Tracking Influence Operations in Advance of Sweden’s 2018 General Elections,” The German Marshall Fund of the United States, September 7, 2018, https://www.gmfus.org/blog/2018/09/07/sweden-preparing-wolf-not-crying-wolf-anticipating-and-tracking-influence; EU vs Disinfo, “In Sweden, Resilience Is Key to Combatting Disinformation,” EU vs DISINFORMATION, July 16, 2018, https://euvsdisinfo.eu/in-sweden-resilience-is-key-to-combatting-disinformation/; Government Offices of Sweden, “A Practical Approach on How to Cope with Disinformation,” Government Offices of Sweden, October 6, 2017, https://www.government.se/articles/2017/10/a-practical-approach-on-how-to-cope-with-disinformation/. 732 Emma Löfgren, “How Sweden’s Getting Ready for the Election-Year Information War,” The Local, April 11, 2018, https://web.archive.org/web/20180411113218/https://www.thelocal.se/20171107/how-swedens-getting-ready-for-the-election-year-information-war. 733 Government Offices of Sweden, “Kommittédirektiv - En ny myndighet för psykologiskt försvar (Committee Directive: A New Authority for Psychological Defence)” (Stockholm, 2018), https://perma.cc/GCF8-Z6HV - machine translation; Elin Hofverberg, “Government Responses to Disinformation on Social Media Platforms,” Web page, September 2019, https://www.loc.gov/law/help/social-media-disinformation/sweden.php#_ftn44.


protect the audience insofar as possible from the influence and interference attempts made by

external actors with the intent of socio-political manipulation.734 Second, to educate the audience

such that individuals are capable of identifying disinformation and influence attempts themselves,

increasing individual resilience to such efforts and, in so doing, increasing community resilience to

false information and external strategic narratives.735

According to Swedish Prime Minister Stefan Löfven, the free exchange of ideas and information

is a “precondition for democracy and the rule of law,”736 and, as such, both information and the public's

ability to trust in the accuracy and integrity of that information must be protected. This places both

the importance and the vulnerability of the general public, the democratic audience, squarely in

the centre of state considerations of modern national security. This has resulted in the

Scandinavian state being both highly resilient to, and more aware of, the dangers of disinformation

than other Western democracies. Trust in the media is relatively high and steady in Sweden, and

the Swedish mainstream media are utilised by the Swedish public more so than is news sourced

from social media platforms.737 Mainstream media sources in other Western nations like the US

and Australia, for example, have been suffering an ongoing decrease in public trust in favour of

news sourced through social media, along with increasing distrust in other democratic and

governmental institutions.738

Russian Active Measures in Sweden

While not as widely reported in the international media as campaigns against the more prominent

Western democracies like the UK and the US, Sweden has been the target of multiple Soviet and

then Russian active measures campaigns.739 Previous examples of Russian active measures in

Sweden are akin to traditional measures seen in Cold War operations, such as forgeries. Forged

734 Government Offices of Sweden Ministry of Defence, “Main Elements of the Government Bill Totalförsvaret 2021–2025: Total Defence 2021-2025,” October 15, 2020, 141–42, https://www.government.se/government-policy/defence/objectives-for-swedish-total-defence-2021-2025---government-bill-totalforsvaret-20212025/.
735 Government Offices of Sweden, “Kommittédirektiv - En ny myndighet för psykologiskt försvar (Committee Directive: A New Authority for Psychological Defence)”; Hofverberg, “Government Responses to Disinformation on Social Media Platforms.”
736 Stefan Löfven in The Local, “Sweden to Create New Authority Tasked with Countering Disinformation,” The Local Sweden, January 15, 2018, https://www.thelocal.se/20180115/sweden-to-create-new-authority-tasked-with-countering-disinformation/.
737 Pew Research Center, “Facts on News Media and Political Polarization in Sweden,” Pew Research Center’s Global Attitudes Project (blog), May 17, 2018, https://www.pewresearch.org/global/fact-sheet/news-media-and-political-attitudes-in-sweden/.
738 Henschke, Sussex, and O’Connor, “Countering Foreign Interference: Election Integrity Lessons for Liberal Democracies”; Shearer, “Social Media Outpaces Print Newspapers in the U.S. as a News Source.”
739 Jon Henley, “Russia Waging Information War against Sweden, Study Finds,” The Guardian, January 11, 2017, http://www.theguardian.com/world/2017/jan/11/russia-waging-information-war-in-sweden-study-finds.


documents can be inserted into information ecologies to serve as elements of disinformation

campaigns; a modern equivalent might be a spoofed website. A forgery is a concrete form of

disinformation, created specifically to deceive the reader as to the accuracy of its content.740

Forgeries became an art form during the Cold War, with both Eastern and Western intelligence

utilising them extensively.741 While Cold War forgeries reached an impressive level of

sophistication, the ease of use of cyberspace and the facility with which one can introduce

false information to a target society seem to have reduced the requirements for believability and

sophistication. While forgeries have been used in (and against) Sweden as part of influence and

disinformation operations, they have so far been easily identified and debunked, though, as with

all disinformation on the Internet, the documents (or articles about them) do occasionally

resurface, either decontextualised or as 'proof' of a point somebody is trying to make.742

The forgeries identified so far usually involve Swedish defence personnel in some way, be they

military or civilian. Sweden, while not a member of NATO, does cooperate with the defence

alliance, and it remains in the interests of the Russian Federation that Sweden stay outside the

Alliance. Forged documents are usually designed to worsen the relationship between NATO and

Sweden, or NATO's constituent states and Sweden.743 Lacking the attention to detail of their

forebears, however, modern forgeries have not achieved the sophistication or effect of previous

decades' examples. It is also worth noting that even when these forgeries do filter into Swedish

media, they are unlikely to make the intended impact because the Swedish public have been primed

to expect information operations and so are more resilient to attempted active measures than are

populations where governments have not accurately gauged and responded to audience

vulnerability in the information space.

740 For examples of forgeries used as disinformation in deception operations, see Rid, Active Measures: The Secret History of Disinformation and Political Warfare.
741 Rid.
742 Henley, “Russia Waging Information War against Sweden, Study Finds”; Kragh and Åsberg, “Russia’s Strategy for Influence through Public Diplomacy and Active Measures”; The Local, “Russia Spreading Fake News and Forged Docs in Sweden: Report,” The Local Sweden (blog), January 7, 2017, https://www.thelocal.se/20170107/swedish-think-tank-details-russian-disinformation-in-new-study/.
743 Neil MacFarquhar, “A Powerful Russian Weapon: The Spread of False Stories,” The New York Times, August 28, 2016, sec. World, https://www.nytimes.com/2016/08/29/world/europe/russia-sweden-disinformation.html; Ralph Schroeder, “Even in Sweden?: Misinformation and Elections in the New Media Landscape,” Nordic Journal of Media Studies 2, no. 1 (June 7, 2020): 97–108, https://doi.org/10.2478/njms-2020-0009; Margaret L. Taylor, “Combating Disinformation and Foreign Interference in Democracies: Lessons from Europe,” Brookings (blog), July 31, 2019, https://www.brookings.edu/blog/techtank/2019/07/31/combating-disinformation-and-foreign-interference-in-democracies-lessons-from-europe/; The Local, “Russia Spreading Fake News and Forged Docs in Sweden.”


Modern Total Defence: Recognition and Audience Resilience

Sweden currently organises national defence and security according to the concept of ‘modern

total defence,’ an idea that explicitly recognises psychological defence, the defence of the mind.744

Swedish governments have recognised that in the modern era, it is as necessary to protect and

improve the resilience of the mind to foreign interference as it is to protect physical infrastructure

and territory. In order to protect the mind, especially in democracies, it is important to protect the

integrity of information and the ability of the population to access that information; as Löfven

stated, such is the precondition for a functional democracy.745 In short, not only did Sweden

effectively elevate audiences in response to the challenges posed by cyber technologies like social

media, but it also engaged in population-wide CCI campaigns.

In recognising the growing risk to the democratic public as a national security issue, Sweden has

been able to institute risk management measures and thus far avoided any event that may have

necessitated further elevation of audience vulnerability. In managing the risk to the audience,

Sweden has managed to attenuate the risk level – it is now a matter of fact in Sweden that

disinformation is likely to appear, and the Swedish public are both being prepared for that to occur

and educated about how to recognise and confirm disinformation before it has a chance to

significantly affect their perceptions.746 Their education campaigns, as well as their ongoing

commitment to psychological defence, discussed in the next section, are an example to all highly

networked societies of both the measures that should be taken in response to successful threat

elevation and the success of those measures when deployed efficiently and in a timely manner.

Countermeasures – Public Education and Communication as CCI

The Swedish government has a strong contemporary history of recognising the potential risk of

disinformation to democratic integrity. Swedish authorities have recognised the necessity of

744 Ministry of Defence, “Main Elements of the Government Bill Totalförsvaret 2021–2025: Total Defence 2021-2025”; Björn von Sydow, “NATO Review - Resilience: Planning for Sweden’s ‘Total Defence,’” NATO Review, April 4, 2018, https://www.nato.int/docu/review/articles/2018/04/04/resilience-planning-for-swedens-total-defence/index.html.
745 Löfven, “Statsminister Stefan Löfven (S): ”Så ska vi skydda valrörelsen från andra staters påverkan”,” DN. debatt, March 21, 2017, https://web.archive.org/web/20170321054938/https://www.dn.se/debatt/sa-ska-vi-skydda-valrorelsen-fran-andra-staters-paverkan/ - machine translation; Löfven in The Local, “Sweden to Create New Authority Tasked with Countering Disinformation.”
746 EU vs Disinfo, “Building Swedish Resilience,” EU vs DISINFORMATION, March 28, 2017, https://euvsdisinfo.eu/building-swedish-resilience/; Government Offices of Sweden, “Kommittédirektiv - En ny myndighet för psykologiskt försvar (Committee Directive: A New Authority for Psychological Defence)”; Myndigheten för samhällsskydd och beredskap, “MSB i Almedalen: Frukostseminarium Om Informationspåverkan,” Myndigheten för samhällsskydd och beredskap, January 26, 2019, https://web.archive.org/web/20190126040104/https://www.msb.se/sv/Om-MSB/Nyheter-och-press/Nyheter/Nyheter-fran-MSB/MSB-i-Almedalen-Frukostseminarium-om-informationspaverkan/.


teaching the audience, the general public, to recognise, analyse, and dismiss false information when

it appears. The Civil Contingencies Agency (MSB) was mandated with this task and has been

producing materials and broadcasts that educate the individual about false information and how

to identify it.747 The Swedish government has been building the resilience of the Swedish public to

disinformation campaigns and attempted information operations. As a result, the potential

dangers of disinformation are more widely known than in the larger Western democracies, which

have suffered the consequences of ongoing information campaigns in recent years. The MSB is

also responsible for producing material to educate officials, and in 2018 it produced a handbook

for communicators on identifying and countering

disinformation.748

In addition to public education, Sweden has also taken steps to ensure the integrity

of publicly available information by criminalising the production and dissemination of false

information.749 More to the point, it is also illegal to disseminate information that could negatively

affect the national security of Sweden, or to accept remuneration for disseminating foreign

propaganda within the country.750 Swedish journalists are also compelled to maintain certain ethical

standards, and while they would not typically “be prosecuted for misinformation,” they can be investigated

by the Press Ombudsman for violation of good publishing practices, including a failure to provide

“evidence to substantiate the information…” used in a story.751 There are incentives for individuals

and organised groups to engage in passive and active CCI, because to do otherwise is to risk legal

censure and punishment. By instituting ethical and legal standards surrounding the information

that is available in the official information ecology, and in combination with the MSB’s public

education campaigns, the Swedish government has contributed significantly to the integrity of

public information and their audience’s resilience to the recognised potential threat of

disinformation. By instituting a strategic approach to CCI engagement and education, and

requiring certain standards of individuals and organised groups, the Swedish government has

reduced the number of exploit situations in which they will have to intervene. Contributing to the

747 Hofverberg, “Government Responses to Disinformation on Social Media Platforms”; Myndigheten för samhällsskydd och beredskap, “Om Krisen Eller Kriget Kommer (Whether the Crisis or War is Coming)” (Myndigheten för samhällsskydd och beredskap, 2018), 6, https://perma.cc/4PHH-ZCQG.
748 Myndigheten för samhällsskydd och beredskap, “Countering information influence activities: A handbook for communicators,” Myndigheten för samhällsskydd och beredskap, 2018, https://www.msb.se/sv/publikationer/countering-information-influence-activities--a-handbook-for-communicators/.
749 Hofverberg, “Government Responses to Disinformation on Social Media Platforms.”
750 Hofverberg.
751 Hofverberg; Elin Hofverberg, “Initiatives to Counter Fake News,” Web page, Library of Congress, April 2019, https://www.loc.gov/law/help/fake-news/sweden.php.


assurance of accurate information (active CCI) reduces the vulnerability of the population to

disinformation campaigns by virtue of it being less easy for trusted institutions to disseminate

potentially false information.

Countermeasures – Psychological Defence Agency as CCI

The most prescient element of the Swedish modern total defence concept is the incipient creation

of the Psychological Defence Authority, slated to launch officially in 2022.752 The Authority will

consolidate the knowledge and policy of the Swedish government and security services in an

unprecedented agency specific to the defence of the mind. The Authority is mandated to “discover,

counter, and prevent influence campaigns, and disinformation, both national and internationally.

It shall also strengthen the populations resistance [so that people] themselves discover influence campaigns and

disinformation. In addition, the psychological defence must be able to act both in the short term and

in the long term” (emphasis added).753 The Swedish Defence Research Agency (FOI) identified

countering deception, disinformation and propaganda as “one of the three essential components

of psychological defence.”754

There are a couple of notable elements to the mission and purpose of the Psychological Defence

Authority. The first, notable for the purposes of this thesis, is the explicit centering and elevation

of risk to the general audience: one of the very specific reasons for which the Authority was created

was to aid in strengthening the resilience of people to influence operations, recognising and

emphasising the importance of the human element and the individual to modern CCI and overall

security. As of writing, this remains, to this author’s knowledge, the first instance of such an explicit threat elevation relevant

to the audience. In so elevating the risk to the audience, and engaging

in continuous risk management measures and practices, Sweden has become part of the vanguard

of states operating within or constructing CCI frameworks according to a threat elevation

calculation.

752 Peter Hultqvist, “Sweden’s Defense Minister: Additional Resources Are Coming to Bolster National Security, Alliances,” Defense News, January 11, 2021, https://www.defensenews.com/outlook/2021/01/11/swedens-defense-minister-additional-resources-are-coming-to-bolster-national-security-alliances/; The Local, “Sweden to Create New Authority Tasked with Countering Disinformation.”
753 Hofverberg, “Government Responses to Disinformation on Social Media Platforms”; Myndigheten för samhällsskydd och beredskap, “Om Krisen Eller Kriget Kommer (Whether the Crisis or War is Coming),” 5 - machine translation.
754 Hofverberg, “Government Responses to Disinformation on Social Media Platforms”; Government Offices of Sweden, “Kommittédirektiv - En ny myndighet för psykologiskt försvar (Committee Directive: A New Authority for Psychological Defence),” 5 - machine translation.


The second element is the creation of the Authority itself. The creation of a new government

agency is no simple or inexpensive feat. For the Swedish government, or any government, to invest

in a psychological defence agency, that state’s strategic planners must be reasonably certain

that influence operations and disinformation campaigns will be a feature of the national and

international security landscape for a considerable time to come. While there are parallels in the

UK in terms of the agencies that have been stood up to deal with national security threats like

terrorism, and the variety of risks associated with cyberspace, there has been no equivalent agency

created in the UK that performs the explicit function of psychological defence. In this regard,

Sweden has not only more successfully elevated the threat posed to the audience by adversarial

disinformation campaigns, contributing in doing so to the aggregate security of the state,

but has also sought to counter that threat with a holistic and comprehensive CCI effort.

The efforts made by Sweden to counter adversary disinformation operations are, in large part,

generalisable to other states. It is possible — and recommended here — that states engage in

counter-disinformation operations and education. Standing up a new agency may not be necessary,

though it is certainly an option. Designating a specific body within the government that has the

mandate to engage in counter-disinformation operations would enhance the resilience of the state

and its citizens to adversary disinformation operations. This can be achieved by both active and

aggressive cyber counterintelligence on the part of that designated body, as well as the active

education of the polity as to the identification and dismissal of disinformation. Templates for both

the designated body and the education campaigns can be taken from Sweden. The justification

for this risk management measure can be accomplished through a comprehensive elevation of the

dangers posed by disinformation to the audience, their perception and subsequent behaviour, and

the adverse consequences of a democracy in which the integrity of information cannot be assured

and in which public trust in institutions has degraded. Sweden has a longer historical record of

counter-disinformation practices (and the information operations that forced their necessity) that

reduces the burden of proof for a successful elevation, but recent events globally provide an

evidentiary foundation upon which to base an elevation.

Conclusion: Elevating the Threat of Disinformation

Disinformation has the potential to be extremely damaging to the function and fundamental

concept of the modern liberal democracy. While the elevation of risks and threats to assets and

infrastructure has been successful in the UK and elsewhere, the risks and threats to the audience

that are enabled and transmitted through cyberspace remain unresolved and insufficiently elevated.


While there is no doubt that there are and will be ongoing threats to the assets and infrastructure

that underpin and feed the cyber structures upon which modern life has come to depend, it is also

true that greater attention needs to be paid to the democratic audience, to the individuals who both

produce the data and use the systems in question, and who are susceptible to disinformation that can affect

their current and future perceptions and behaviour. Counterintelligence requirements are evolving

in the cyber-enabled age, and states have both a larger remit for countermeasures and a greater

attack surface to protect, including the psychology of their citizens. The case study in chapter four

and the understanding of CCI developed throughout this thesis have identified that there is a state

responsibility to engage in counter-disinformation practices as a form of CCI in order to protect

information integrity and audience perception; identify, trace, and subvert adversary operations

designed to affect the same; and educate the audience on both the dangers of information warfare

and the methods with which they can engage in CCI such that they contribute to the overall cyber

security posture of the state.

Disinformation as a form of information warfare is not new, but it is increasingly easy to deploy

in the contemporary era, especially with the relatively unregulated nature of the massive social

media and communications platforms, though steps are being taken to introduce restrictions on

those enterprises.755 Restricting the freedom of movement and policies of these major

commercial actors is not sufficient on its own, however. Disinformation is being utilised by both domestic and foreign actors

to pollute the information ecology of the voting public through a variety of media, and this is

affecting the perceptions those individuals have of not just actors and policies but of information

and the integrity of information producers. When trust is lost in public institutions such as the

media, and individuals within a democracy no longer have faith in the information that

legitimate authorities produce about topics of national and global concern (such as COVID-19, its

origins, and the vaccines created to counter the virus), then the integrity of the democratic state

itself is called into question. If a voting public is making decisions based on demonstrably false

information, have those individuals in fact made a free and informed decision? If the individual

has cast a vote based on disinformation introduced to them by an adversarial actor with the intent

of manipulating their perception such that they vote for a candidate for whom they may otherwise

755 Daniel Funke and Daniela Flamini, “A Guide to Anti-Misinformation Actions around the World,” Poynter (blog), accessed May 22, 2021, https://www.poynter.org/ifcn/anti-misinformation-actions/; Reuters Staff, “Factbox: ‘Fake News’ Laws around the World,” Reuters, April 2, 2019, https://www.reuters.com/article/us-singapore-politics-fakenews-factbox-idUSKCN1RE0XN; Tomoko Nagasako, “Global Disinformation Campaigns and Legal Challenges,” International Cybersecurity Law Review 1, no. 1 (October 1, 2020): 125–36, https://doi.org/10.1365/s43439-020-00010-7; Scott Shackelford, “The Battle against Disinformation Is Global,” The Conversation, accessed May 22, 2021, http://theconversation.com/the-battle-against-disinformation-is-global-129212.

Courteney O’Connor – PhD Thesis 2021


not have voted, has that person made their own decision? Or, given the (successful) attempt at

manipulation through disinformation, has a foreign actor indirectly or directly intervened in the

political sovereignty of the state in question? How is a state to avoid the crumbling of public faith

and trust in institutions? How can a state shore up the security and resilience of the audience against foreign interference as effectively as it has shored up the security and resilience of critical national

infrastructure?

One answer is CCI. This could begin with identifying the psychological defence of the audience as a critical national interest; that is to say, as an element of critical democratic infrastructure. The UK has failed to recognise the audience as a critical security vulnerability, and so has not developed its concept of CCI in a manner that supports public education campaigns on CCI methods and practices, which would allow individuals to increase their own security and contribute to that of the state. Lessons could here be taken from the Swedish approach: Sweden has an extensive history of responding to Russian active measures campaigns, and with further engagement it would be possible to model public campaigns promoting personal CCI practices on its awareness and critical thinking frameworks,

especially for the identification and dismissal of disinformation as individuals come across it. There

needs to be a significant investment in understanding and managing the risk to audience

psychology and belief systems in order for modern democracies to both be able to trust in the

infrastructure underpinning modern life, and also be able to trust that the decision-making and

behaviour of the audience is not being (adversely or overly) affected by external strategic narratives

imposed, at least in part, through the infiltration of disinformation that the audience is not capable

of detecting or dismissing out of hand. The risks to the audience of disinformation, and the

necessity of employing CCI measures from the individual through to the state level, need to be

elevated and acted upon as a matter of critical national interest and concern.

Chapter 7 - Conclusion


This thesis has evaluated the threat elevation of cyberspace in a modern democracy through an

analysis of the United Kingdom (UK), and assessed whether that perception has influenced the

evolution of cyber counterintelligence (CCI) as a response to cyber-enabled threats such as

disinformation. There were two secondary lines of research that contributed to this discussion.

The first was to identify or develop the theoretical framework through which an evolution in state

threat perception could be identified and analysed. That framework would then affect the

conclusions drawn from the subsequent line of research, being the identification and development

of CCI as a response to cyber-enabled threats. To limit the scope of the thesis I chose to examine a particular security problem, namely disinformation conducted via cyberspace, rather than attempt

a generalised analysis of the CCI response to any or all cyber-enabled threats.

This thesis makes a theoretical contribution to security studies and strategic analysis by articulating

and developing the threat elevation analysis framework, which informed the analysis conducted

throughout this research, and the conclusions reached herein. Threat elevation analysis was then

applied to an examination of the evolution of the threat perception of cyberspace in the UK. This

analysis was facilitated by the examination of national security documentation published by

successive British administrations in the period 2008-2018, contributing to the existing literature

on the national security of the UK. The subsequent chapters developed the concept and typology

of CCI in order to further the academic discourse and contribute to the nascent body of literature

in a relatively novel branch of intelligence and counterintelligence studies, and to illustrate the

assets vs. audiences dilemma that has typified the UK’s approach to the development of CCI,

particularly in relation to cyber-enabled disinformation.

Following an overview of existing literature across the fields of securitisation theories,

counterintelligence, and disinformation in chapter two, the third chapter of this thesis articulated

threat elevation analysis, a theoretical framework for ex post facto analysis that was developed

specifically for this research but was designed to be used broadly and in a variety of sectors beyond

cyberspace and counterintelligence. Threat elevation analysis is the framework through which political concerns are transformed into risks through riskification, and risks in turn into threats through securitisation. Threat elevation analysis is not intended to be prescriptive or proscriptive

but is designed to identify the stressors and developments that contribute to the risk and threat

perception of certain security issues. I found that while there have been several elevating events


that contributed to the overall increase in the threat perception of cyberspace, it is not yet justifiable

to state that cyberspace has been securitised. There has been no articulation of existential threat in

relation to a cyber-attack or exploitation, and no indication that democratic audiences would accept

an argument of that nature. What has occurred, however, is the successful riskification of cyberspace, and the deployment of risk management measures that are designed to reduce the overall security risks associated with cyberspace but that do not justify the extraordinary powers of threat mitigation.

In chapter four, I applied the theoretical lens of threat elevation analysis to the national security

documentation of the UK as a case study for the elevation of the threat perception of cyberspace.

I engaged in process tracing, examining a selection of national security documents from the UK

to ascertain the development of the threat perception of cyberspace, as well as the development

of CCI as a concept. In the period examined, my conclusion was that the UK had succeeded in

elevating the threat to infrastructure posed by cyberspace and cyber-enabled technologies but had

failed to engage in a holistic elevation by insufficiently recognising the threat to the democratic

audience of disinformation enabled by cyberspace technologies and communication platforms. In

addition, and potentially due to the sensitivities surrounding discussion of intelligence and

counterintelligence capabilities in a public forum, there was little overtly articulated information

surrounding the use or development of counterintelligence capacities specific to cyberspace. By

engaging in documentary analysis, I conclude that while the UK is engaged in strengthening CCI

skill and capability, there has been a split between assets and audiences that has resulted in an

unbalanced approach to cyber security and resilience, despite the overall increase in risk perception

of cyberspace. There is significant scope for future research on the British approach to CCI and

cyberspace security generally, particularly with the publication of the Integrated Review and

Defence Command Paper in March 2021.

Accepting as a starting point the definition of CCI examined in chapter two and in the context of

threat elevation analysis and a modern state’s approach to the threat elevation of cyberspace

(chapters three and four respectively), in chapter five I articulated and developed a dual-pronged

typology of CCI. I identified and defined passive, active, and proactive CCI, engaging in conceptual

development and providing case-use examples of each. I then identified and articulated strategic

and tactical understandings of CCI, engaging in further conceptual development, and providing

case-use examples. CCI practices and measures are not mutually exclusive, and some practices

straddle the line between passive and active, or active and proactive. There is also crossover


between tactical CCI and strategic CCI – there is considerable scope for further research in the

development of this typology, as well as an exploration of what CCI looks like beyond the UK (as

an example of a mature and connected democracy), in non-Anglo democracies, and indeed what

CCI looks like in non-democratic political systems. I also examined the groups of actors that are

most likely to engage in CCI, and those who should engage in CCI as a matter of ongoing personal

and national security and resilience. These informed both the subsequent chapter and my

conclusions on the assets vs. audiences dilemma identified in earlier sections.

Engaging with these results, in chapter six, I applied my findings to the rise of disinformation as a

form of foreign interference and a threat to democratic integrity. I articulated more specifically the assets vs. audiences schism, which describes the uneven and inefficient approach to CCI and overall

cyber security of the UK, and which can feasibly be generalised to overall approaches to

disinformation as a counterintelligence problem. I articulated the importance of acknowledging and promoting the human element in CCI, and the dangers of disinformation

to audience beliefs, belief frameworks, and future behaviours in relation to democratic integrity.

Particularly in democracies like the UK and their close allies in the Five Eyes alliance and NATO

as well as the EU, it is crucial to recognise that the threat to domestic audiences of foreign

interference using disinformation as a vehicle of psychological manipulation is increasing.

Effective disinformation campaigns – and ineffective CCI – have the potential to destabilise

democratic institutions through degrading the integrity of information available to domestic

audiences, and reducing the capacity of individuals to access, believe in, and make informed

decisions based on verifiable information from trustworthy sources. Using an example of failed elevation (COVID-19 disinformation) and one of successful elevation (Sweden's approach to Russian active measures), I demonstrated both the analytical integrity of the threat elevation analysis

framework and the importance of the human factor in a holistic approach to CCI, highlighting

some of the Swedish successful countermeasures that could be employed by democratic states

more broadly.

Developments in UK Threat Elevation: the 2021 Integrated Review and Defence Command Paper

It would be remiss not to address the two national security documents released by the UK in the

first quarter of 2021. While these documents fall outside the temporal bounds of the case study

presented in chapter four, a brief analysis of the documents will reveal whether there has been an

elevation of the threat of disinformation to audiences and democratic integrity in the intervening

three years between the end of the case study and the publication of these documents. “Global


Britain in a Competitive Age: The Integrated Review of Security, Defence, Development and

Foreign Policy” (hereafter “Integrated Review”) was released in tandem with “Defence in a

Competitive Age” (hereafter “Defence Command Paper”) in March 2021. At 111 pages and 69

pages respectively, they provide an update on the national security and defence posture of the UK.

It is apparent in the foreword of the Integrated Review that the disinformation campaigns of recent

history have impressed upon British decisionmakers the potential consequences of such

operations, with Prime Minister Boris Johnson articulating the necessity of defending “the integrity

of the nation against state threats, whether in the form of illicit finance or coercive economic

measures, disinformation, cyber-attacks, [or] electoral interference…”756 In perfect alignment with the

terminology and the threat perception elevation identified as necessary throughout this thesis, the

UK has overtly elevated the risks of disinformation. I note that, at least in this initial section, these dangers are explicitly associated with state action specifically. However, this is the

most explicit elevation of the threat of disinformation so far identified in the national security

documentation of the UK, and it aligns with the expectations of this research. Further to this

elevation is the introduction of the National Cyber Force, another indication of the ongoing

elevation of the threat perception of cyberspace in the UK.757 Moreover, there is an explicit

recognition of the necessity, for the integrity of British democracy, to protect “the ability of the

British people to elect their political representatives democratically in line with their constitutional

traditions, and to do so free from coercion and manipulation.”758 This is a significant elevation from

previous national security documentation in relation to the recognition of the dangers of

manipulation of the democratic audience, even without an explicit reference to disinformation.

Further, the paragraph immediately following reiterates the need to secure democratic institutions

as one of the interests identified by the government. While this does not yet demonstrate parity of

riskification with the long-acknowledged risks to assets, this is an overt riskification of dangers

posed to audiences.759

There is also an assertion of the intent to adopt a “comprehensive cyber security strategy…and

make much more integrated, creative and routine use of the UK’s full spectrum of levers –

including the National Cyber Force’s offensive tools – to detect, disrupt, and deter our

adversaries.”760 This is particularly interesting not only in terms of the threat elevation analysis and

756 Cabinet Office of the United Kingdom, “Global Britain in a Competitive Age,” 4, emphasis added.
757 Cabinet Office of the United Kingdom, 4.
758 Cabinet Office of the United Kingdom, 13, emphasis added.
759 There is no recognition of an existential threat to audiences, and so this cannot be considered a securitising act.
760 Cabinet Office of the United Kingdom, 21.


the successful riskification of cyberspace by the UK, but in the relationship identified between the

detection and disruption of adversarial activities and the use of offensive cyber tools. I infer from

this an admission of the willingness to employ proactive CCI practices as deemed necessary for

the cyber resilience and security of the UK. The challenges to democratic governance posed by disinformation, which necessitate the ongoing implementation of CCI at both state and individual levels, are highlighted as a potential risk, as disinformation sown by malign actors has the potential to undermine public trust in government.761

democratic audience posed by disinformation exactly according to the threat elevation analysis

framework offered in chapter three and employed throughout this thesis, verifying its analytical

accuracy and overall utility. Lacking an identification of existential risk, these elevating acts further

entrench the riskification, rather than securitisation, of danger to the audience.

The Defence Command Paper strikes many of the same notes as the Integrated Review in terms

of the recognition that the rapid pace of technological change has resulted in a rapidly changing

security environment. Particularly relevant to this thesis is the admission that

long established techniques of influence and leverage such as economic coercion, propaganda, intellectual property theft and espionage, have been supercharged by ubiquitous information and technological transformation.762

Between the Integrated Review and Defence Command Paper, the UK is making the clearest case

thus far for the riskification of cyberspace and the associated threat potential of disinformation as

a form of interference and coercion. The Defence Command Paper also connects that recognition

of the risk vector of cyberspace with the imposition of strategic narratives as discussed in chapter

six, considering the rise in security challenges and hostile activities that do not reach the threshold of armed conflict, and the importance of the ‘battle of narratives’ to future influence and the attainment of

security objectives.763

Perhaps the most eloquently articulated vindication of this research is contained within paragraph

2.7 of the Defence Command Paper, which states that

cyberspace threats will emanate from state, state-sponsored and criminal groups with personnel and capabilities moving seamlessly between them. As with other domains, cyberspace activity is often leveraged as part of a wider, coordinated and integrated attack. Cyberspace espionage can and will be used as part of

761 Cabinet Office of the United Kingdom, 27.
762 Ministry of Defence, “Defence in a Competitive Age” (The APS Group, 2021), 5, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/971859/_CP_411__-_Defence_in_a_competitive_age.pdf.
763 Ministry of Defence, 5.


wider influence and propaganda campaigns, as well as in support of wider hostile activity up to and including conventional warfare.764

In no uncertain terms, the Defence Command Paper both elevates the risk potential of cyberspace

and identifies the ongoing requirement for cyber counterintelligence as defined in this thesis.765

This also aligns with my view (articulated in chapter one) that while cyberspace activities can be

conducted in isolation from activities in the other domains of land, sea, air, and space, cyberspace

also functions as an overlay that connects and forms networks between and among the traditional

warfare domains. The integration of MOD, GCHQ and Secret Intelligence Service (SIS)

capabilities into the National Cyber Force is clear evidence for the growing importance of

cyberspace and cyber counterintelligence, given that its stated purpose is to “deceive, degrade,

deny, disrupt, or destroy targets in or through cyberspace in pursuit of our national security

objectives.”766 Straddling the line between proactive cyber counterintelligence and cyber warfare

(articulated as a possibility in chapter five), the UK has identified the need for further elevation of

CCI capabilities and produced policy intended to develop those capabilities and thereby enable the resilience and security of the state.

Directions for future research

There are multiple avenues for further research based on the theoretical and conceptual

developments offered in this thesis, and the findings of the case study and problem analysis. I will

offer several immediate options for further research, but this is not an exhaustive list and there

remains significant space for development. In terms of threat elevation analysis, there was not

sufficient scope to engage fully with the threat attenuation process, and this deserves further

development and testing. There is also significant potential to further apply the threat elevation

framework in alternate security sectors (non-cyber sectors), and against problems unrelated to the

field of CCI. Further, I suggest that engaging with CCI in relation to both disinformation and more

traditional security risks and threats would strengthen the concepts laid out in this thesis and

contribute to the nascent body of literature in the field. There was no scope in this thesis to

consider the ethical and moral elements of CCI as utilised by all actors identified in chapter five,

and there is considerable room for application and development here. Finally, there are

opportunities for further analysis on the development of British approaches to audience threat and

vulnerability in light of the Integrated Review and Defence Command Paper.

764 Ministry of Defence, 10.
765 Cyber counterintelligence is examined in depth in chapter five.
766 Ministry of Defence, “Defence in a Competitive Age,” 44.


Closing thoughts

In a rapidly changing and fast-evolving security environment, the resilience and security of the state are increasingly hard to ensure through the power of a representative government alone. While the

responsibility for overall national security falls to the sovereign state, some of the responsibility

for security in cyberspace should fall to the individual, who, in engaging with CCI for their own

purposes, also contributes to the aggregate security of the state. However, in racing to secure assets

– data, infrastructure, and systems – from attack and exploitation in cyberspace, states have failed

to adequately recognise and elevate the threat to their audiences – the democratic public – of

malign disinformation operations that are undertaken through and enabled by cyberspace

technologies. To engage with CCI in an efficient and effective way, states need to bridge the assets

vs. audiences divide and appropriately elevate the threat to both pillars of modern democracy. It

is important to note that while states have not fully engaged in adequate threat elevation of

audience vulnerability, based on the case study of the UK and the brief analysis of recent developments presented above, we are moving in the right direction. The audience as a vulnerability and a threat vector is being elevated and riskified, and management measures are being designed and enacted to increase the resilience and security of democracies in the cyber era. This thesis has

offered an analytical framework and the conceptual development of CCI and applied these to a

state case and a specific national security concern. It is my hope that the contributions made in

this thesis and the conclusions drawn from the resulting analysis will aid in progress toward greater resilience and security in cyberspace, the further development of the field of cyber counterintelligence, and the ongoing democratic integrity project.


Bibliography

350.org. “350.Org: A Global Campaign to Confront the Climate Crisis.” 350.org, 2021. https://350.org.

Abdulla, Hisham. “Locutionary, Illocutionary and Perlocutionary Acts Between Modern Linguistics and Traditional Arabic Linguistics.” Al-Mustansiriya Literary 55 (2012): 59–76.

Alexander, Christian. “A Year After ‘Day Zero,’ Cape Town’s Drought Is Over, But Water Challenges Remain.” Bloomberg.com, April 12, 2019. https://www.bloomberg.com/news/articles/2019-04-12/looking-back-on-cape-town-s-drought-and-day-zero.

Allan, Collin S. “Attribution Issues in Cyberspace.” Chicago-Kent Journal of International and Comparative Law 13, no. 2 (2012–2013): 55–83.

Allhoff, Fritz, and Adam Henschke. “The Internet of Things: Foundational Ethical Issues.” Internet of Things 1–2 (September 2018): 55–66. https://doi.org/10.1016/j.iot.2018.08.005.

Amer, Karim, and Jehane Noujaim. The Great Hack. Documentary, 2019. https://www.netflix.com/au/title/80117542.

American Society of International Law. “Beyond National Jurisdiction: Cyberspace.” American Society of International Law, 2017. /topics/signaturetopics/BNJ/cyberspace.

Anderson, Janna, and Lee Rainie. “The Future of Truth and Misinformation Online.” Pew Research Center, October 19, 2017. https://www.pewresearch.org/internet/2017/10/19/the-future-of-truth-and-misinformation-online/.

Anderson, Nate. “Confirmed: US and Israel Created Stuxnet, Lost Control of It.” Ars Technica, June 1, 2012. https://arstechnica.com/tech-policy/2012/06/confirmed-us-israel-created-stuxnet-lost-control-of-it/.

Andress, Jason, and Steve Winterfeld. Cyber Warfare: Techniques, Tactics and Tools for Security Practitioners. Waltham: Elsevier, 2011.

Andrews, Christopher. The Secret World: A History of Intelligence. New York: Penguin Books, 2019. https://www.penguin.com.au/books/the-secret-world-9780140285321.

Angwin, Julia. Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance. New York: Henry Holt and Company, 2014.

Aral, Sinan, and Dean Eckles. “Protecting Elections from Social Media Manipulation.” Science 365, no. 6456 (August 30, 2019): 858–61. https://doi.org/10.1126/science.aaw8243.

Archetti, Christina. “Terrorism, Communication and New Media: Explaining Radicalization in the Digital Age.” Perspectives on Terrorism 9, no. 1 (2015): 49–59.

Arrigo, Bruce A. The SAGE Encyclopedia of Surveillance, Security, and Privacy. Vol. 3. SAGE Publications, 2016.

Associated Press. “Colonial Pipeline Confirms It Paid $4.4m Ransom to Hacker Gang after Attack.” the Guardian, May 20, 2021. http://www.theguardian.com/technology/2021/may/19/colonial-pipeline-cyber-attack-ransom.

Aubrey, Sophie. “The Curious Marriage of QAnon and Wellness.” The Sydney Morning Herald, September 26, 2020. https://www.smh.com.au/lifestyle/health-and-wellness/playing-with-fire-the-curious-marriage-of-qanon-and-wellness-20200924-p55yu7.html.

Austin, Erica Weintraub, and Porismita Borah. “How to Talk to Vaccine Skeptics so They Might Actually Hear You.” The Conversation, 2020. http://theconversation.com/how-to-talk-to-vaccine-skeptics-so-they-might-actually-hear-you-143794.

Austin, J. L. How to Do Things with Words. 2nd ed. London: Oxford University Press, 1980.

Australian Cyber Security Centre. “Dark Web.” Australian Cyber Security Centre, 2021. https://www.cyber.gov.au/acsc/view-all-content/glossary/dark-web.

———. “In the Wild.” Australian Signals Directorate | Australian Cyber Security Centre, 2021. https://www.cyber.gov.au/acsc/view-all-content/glossary/wild.


———. “Script Kiddie.” Australian Signals Directorate | Australian Cyber Security Centre, 2021. https://www.cyber.gov.au/acsc/view-all-content/glossary/script-kiddie.

———. “The Commonwealth Cyber Security Posture in 2019.” Australian Signals Directorate | Australian Cyber Security Centre, 2019. https://www.cyber.gov.au/acsc/view-all-content/reports-and-statistics/commonwealth-cyber-security-posture-2019.

Awan, Imran. “Cyber-Extremism: Isis and the Power of Social Media.” Society 54, no. 2 (April 1, 2017): 138–49. https://doi.org/10.1007/s12115-017-0114-0.

Bacevich, Andrew. “Endless War in the Middle East.” Cato Institute, June 13, 2016. https://www.cato.org/catos-letter/endless-war-middle-east.

Bagaric, Mirko. “High Court Likely to ‘Free’ Covid’s Political Prisoners.” The Weekend Australian, September 22, 2020. https://www.theaustralian.com.au/commentary/high-court-likely-to-free-covids-political-prisoners/news-story/68b63c9c4fe6a7bb78a0cba04ff294e5.

Balzacq, Thierry. “A Theory of Securitization: Origins, Core Assumptions, and Variants.” In Securitization Theory: How Security Problems Emerge and Dissolve, 1–30. PRIO, New Security Studies. New York: Routledge, 2011.

———. “Legitimacy and the ‘Logic’ of Security.” In Contesting Security: Strategies and Logics, 1–10. PRIO New Security Studies. New York: Routledge, 2015.

———. “Securitization Theory: Past, Present, and Future.” Polity 51, no. 2 (April 2019): 331–48. https://doi.org/10.1086/701884.

———. “The ‘Essence’ of Securitization: Theory, Ideal Type, and a Sociological Science of Security.” International Relations 29, no. 1 (March 1, 2015): 103–13. https://doi.org/10.1177/0047117814526606b.

———. “The Three Faces of Securitization: Political Agency, Audience and Context.” European Journal of International Relations 11, no. 2 (June 1, 2005): 171–201. https://doi.org/10.1177/1354066105052960.

Balzacq, Thierry, Sarah Léonard, and Jan Ruzicka. “‘Securitization’ Revisited: Theory and Cases.” International Relations 30, no. 4 (December 1, 2016): 494–531. https://doi.org/10.1177/0047117815596590.

Barnett, Jon. The Meaning of Environmental Security: Ecological Politics and Policy in the New Security Era. London: Zed Books, 2001.

Barrett, Brian. “DNC Lawsuit Against Russia Reveals New Details About 2016 Hack.” Wired, April 20, 2018. https://www.wired.com/story/dnc-lawsuit-reveals-key-details-2016-hack/.

BBC. “Cyber Attacks Blamed on China.” BBC News, January 31, 2013, sec. China. https://www.bbc.com/news/world-asia-china-21272613.

BBC News. “Manchester Arena Blast: 19 Dead and More than 50 Hurt.” BBC News, May 23, 2017, sec. Manchester. https://www.bbc.com/news/uk-england-manchester-40007886.

———. “Parsons Green: Underground Blast a Terror Incident, Say Police.” BBC News, September 15, 2017, sec. UK. https://www.bbc.com/news/uk-41278545.

———. “Sony Pays up to $8m over Employees’ Hacked Data.” BBC News, October 21, 2015, sec. Business. https://www.bbc.com/news/business-34589710.

Beauchemin, Molly. “17 Celebrities Who Actively Work to Protect the Environment.” Garden Collage Magazine, November 8, 2017. https://gardencollage.com/change/climate-change/celebrities-care-environment-want-know/.

Belinchón, Fernando, and Qayyah Moynihan. “25 Giant Companies That Earn More Than Entire Countries.” Business Insider, July 25, 2018. https://www.businessinsider.com/25-giant-companies-that-earn-more-than-entire-countries-2018-7?r=AU&IR=T.

Bellingcat. “MH17.” Bellingcat, 2021. https://www.bellingcat.com/tag/mh17/.

Bender, Jeremy. “FBI: A Chinese Hacker Stole Massive Amounts Of Intel On 32 US Military Projects.” Business Insider Australia, July 17, 2014. https://www.businessinsider.com.au/chinese-hackers-stole-f-35-data-2014-7.


Benthem van den Bergh, Godfried van. “The Taming of the Great Nuclear Powers.” Policy Outlook. Carnegie Endowment for International Peace, 2009.

Beran, Dale. “The Return of Anonymous.” The Atlantic, August 11, 2020. https://www.theatlantic.com/technology/archive/2020/08/hacker-group-anonymous-returns/615058/.

Bergin, Tom, and Nathan Layne. “Special Report: Cyber Thieves Exploit Banks’ Faith in SWIFT Transfer Network.” Reuters, May 20, 2016. https://www.reuters.com/article/us-cyber-heist-swift-specialreport-idUSKCN0YB0DD.

Berreby, David. “Click to Agree with What? No One Reads Terms of Service, Studies Confirm.” The Guardian, March 4, 2017. https://www.theguardian.com/technology/2017/mar/03/terms-of-service-online-contracts-fine-print.

Berzina, Kristine. “Sweden — Preparing for the Wolf, Not Crying Wolf: Anticipating and Tracking Influence Operations in Advance of Sweden’s 2018 General Elections.” The German Marshall Fund of the United States, September 7, 2018. https://www.gmfus.org/blog/2018/09/07/sweden-preparing-wolf-not-crying-wolf-anticipating-and-tracking-influence.

Bessi, Alessandro, and Emilio Ferrara. “Social Bots Distort the 2016 U.S. Presidential Election Online Discussion.” First Monday 21, no. 11 (November 3, 2016). https://doi.org/10.5210/fm.v21i11.7090.

Bhargava, Rishi. “Human Error, We Meet Again.” Security Magazine, December 6, 2018. https://www.securitymagazine.com/articles/89664-human-error-we-meet-again?v=preview.

Bickert, Monika, and Brian Fishman. “Hard Questions: How Effective Is Technology in Keeping Terrorists off Facebook?” About Facebook (blog), April 23, 2018. https://about.fb.com/news/2018/04/keeping-terrorists-off-facebook/.

Bisson, David. “‘123456’ Remains the World’s Most Breached Password.” Security Boulevard, April 22, 2019. https://securityboulevard.com/2019/04/123456-remains-the-worlds-most-breached-password/.

Biswas, Niloy. “Is the Environment a Security Threat? Environmental Security beyond Securitization.” International Affairs Review 20, no. 1 (2011): 1–22.

Blair, Tony. “Full Text of Tony Blair’s Speech to Parliament.” The Guardian, October 4, 2001. http://www.theguardian.com/world/2001/oct/04/september11.usa3.

Booth, Ken. “Security and Emancipation.” Review of International Studies 17, no. 4 (1991): 313–26.

Borgesius, Frederik J. Zuiderveen, Judith Möller, Sanne Kruikemeier, Ronan Ó Fathaigh, Kristina Irion, Tom Dobber, Balazs Bodo, and Claes de Vreese. “Online Political Microtargeting: Promises and Threats for Democracy.” Utrecht Law Review 14, no. 1 (February 9, 2018): 82–96. https://doi.org/10.18352/ulr.420.

Boudana, Sandrine. “A Definition of Journalistic Objectivity as a Performance.” Media, Culture & Society 33, no. 3 (April 1, 2011): 385–98. https://doi.org/10.1177/0163443710394899.

Boulding, Kenneth. The Image: Knowledge in Life and Society. Ann Arbor: University of Michigan Press, 1956. https://www.press.umich.edu/6607/image.

Bowden, Mark. Worm: The First Digital World War. Atlantic Books Ltd, 2012.

Boyle, Danny, Chris Graham, and David Millward. “Finsbury Park Mosque Attack Latest: Theresa May Vows Hatred and Evil Will Never Succeed as Labour Warns of Rise in Islamophobia.” The Telegraph. Accessed May 15, 2021. https://web.archive.org/web/20170621045319/https://www.telegraph.co.uk/news/2017/06/19/finsbury-park-mosque-latest-terror-attack-london-live/.

Braddock, Kurt, and John Horgan. “Towards a Guide for Constructing and Disseminating Counternarratives to Reduce Support for Terrorism.” Studies in Conflict & Terrorism 39, no. 5 (May 3, 2016): 381–404. https://doi.org/10.1080/1057610X.2015.1116277.

Bibliography

280

Bradshaw, Samantha, and Philip N. Howard. “The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation.” Oxford: University of Oxford Computational Propaganda Research Project, 2019. https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf?utm_source=newsletter&utm_medium=email&utm_campaign=timestop10_daily_newsletter.

Brandom, Russell. “UK Hospitals Hit with Massive Ransomware Attack.” The Verge, May 12, 2017. https://www.theverge.com/2017/5/12/15630354/nhs-hospitals-ransomware-hack-wannacry-bitcoin.

Brechenmacher, Saskia. “Comparing Democratic Distress in the United States and Europe.” Washington, DC: Carnegie Endowment for International Peace, 2018.

Brooks, Arthur C. “Opinion | Conspiracy Theories Are a Dangerous Threat to Our Democracy.” Washington Post, September 3, 2019. https://www.washingtonpost.com/opinions/2019/09/03/conspiracy-theories-are-dangerous-threat-our-democracy/.

Brunnstrom, David, and Jim Finkle. “U.S. Considers ‘Proportional’ Response to Sony Hacking Attack.” Reuters, December 19, 2014. https://www.reuters.com/article/us-sony-cybersecurity-northkorea-idUSKBN0JW24Z20141218.

Buente, Wane, and Chamil Rathnayake. “#WeAreMaunaKea: Celebrity Involvement in a Protest Movement.” In IConference 2016 Proceedings. Philadelphia, USA: iSchools, 2016. https://doi.org/10.9776/16311.

Bumiller, Elisabeth, and Thom Shanker. “Panetta Warns of Dire Threat of Cyberattack on U.S.” The New York Times, October 12, 2012, sec. World. https://www.nytimes.com/2012/10/12/world/panetta-warns-of-dire-threat-of-cyberattack.html.

Burki, Talha. “Vaccine Misinformation and Social Media.” The Lancet Digital Health 1, no. 6 (October 1, 2019): e258–59. https://doi.org/10.1016/S2589-7500(19)30136-0.

Burt, Tom. “New Action to Combat Ransomware Ahead of U.S. Elections.” Microsoft On the Issues, October 12, 2020. https://blogs.microsoft.com/on-the-issues/2020/10/12/trickbot-ransomware-cyberthreat-us-elections/.

Burton, Bob. Inside Spin: The Dark Underbelly of the PR Industry. Crows Nest: Allen & Unwin, 2007. https://research.usc.edu.au/discovery/fulldisplay/alma999899902621/61USC_INST:ResearchRepository.

Bush, George W. “President Bush Addresses the Nation.” Washington Post, September 20, 2001. https://www.washingtonpost.com/wp-srv/nation/specials/attacked/transcripts/bushaddress_092001.html.

Bussolati, Nicolò. “The Rise of Non-State Actors in Cyberwarfare.” In Cyberwar: Law and Ethics for Virtual Conflicts, edited by Jens David Ohlin, Kevin Govern, and Claire Finkelstein, 102–26. New York: Oxford University Press, 2015. https://doi.org/10.1093/acprof:oso/9780198717492.003.0007.

Buzan, Barry. People, States, and Fear: An Agenda for International Security Studies in the Post-Cold War Era. New York: Prentice Hall, 1991.

———. “Regional Security Complex Theory in the Post-Cold War World.” In Theories of New Regionalism: A Palgrave Reader, edited by Fredrik Söderbaum and Timothy M. Shaw, 140–59. International Political Economy Series. London: Palgrave Macmillan UK, 2003. https://doi.org/10.1057/9781403938794_8.

———. “The Logic of Regional Security in the Post-Cold War World.” In The New Regionalism and the Future of Security and Development: Volume 4, edited by Björn Hettne, András Inotai, and Osvaldo Sunkel, 1–25. The New Regionalism. London: Palgrave Macmillan UK, 2000. https://doi.org/10.1007/978-1-137-11498-3_1.

Buzan, Barry, Charles Jones, and Richard Little. The Logic of Anarchy: Neorealism to Structural Realism. New York: Columbia University Press, 1993.

Buzan, Barry, Ole Waever, and Jaap de Wilde. Security: A New Framework for Analysis. Boulder: Lynne Rienner Publishers, Inc, 1998.

Bytwerk, Randall L. “Believing in ‘Inner Truth’: The Protocols of the Elders of Zion in Nazi Propaganda, 1933–1945.” Holocaust and Genocide Studies 29, no. 2 (2015): 212–29.

Cabinet Office of the United Kingdom. “A Strong Britain in an Age of Uncertainty: The National Security Strategy.” The Stationery Office, 2010. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/61936/national-security-strategy.pdf.

———. “Cyber Security Strategy of the United Kingdom: Safety, Security and Resilience in Cyber Space.” The Stationery Office, 2009. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/228841/7642.pdf.

———. “Global Britain in a Competitive Age: The Integrated Review of Security, Defence, Development and Foreign Policy.” The APS Group, 2021. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/975077/Global_Britain_in_a_Competitive_Age-_the_Integrated_Review_of_Security__Defence__Development_and_Foreign_Policy.pdf.

———. “National Cyber Security Strategy 2016-2021.” The Stationery Office, 2016. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/567242/national_cyber_security_strategy_2016.pdf.

———. “National Security Capability Review.” The Stationery Office, 2018. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/705347/6.4391_CO_National-Security-Review_web.pdf.

———. “National Security Strategy and Strategic Defence and Security Review 2015: A Secure and Prosperous United Kingdom,” 2015. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/555607/2015_Strategic_Defence_and_Security_Review.pdf.

———. “Securing Britain in an Age of Uncertainty: The Strategic Defence and Security Review.” The Stationery Office, 2010. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/62482/strategic-defence-security-review.pdf.

———. “The National Security Strategy of the United Kingdom: Security in an Interdependent World.” The Stationery Office, 2008. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/228539/7291.pdf.

———. “The National Security Strategy of the United Kingdom: Update 2009.” The Stationery Office, 2009. http://www.cabinetoffice.gov.uk/media/216734/nss2009v2.pdf.

———. “The UK Cyber Security Strategy: Protecting and Promoting the UK in a Digital World.” The Stationery Office, 2011. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/60961/uk-cyber-security-strategy-final.pdf.

Cahill, Petra. “Cheney Says Russian Meddling in U.S. Election Possibly an ‘Act of War.’” NBC News, March 29, 2017. https://www.nbcnews.com/politics/white-house/dick-cheney-russian-election-interference-could-be-seen-act-war-n739391.

Cakebread, Caroline. “You’re Not Alone, No One Reads Terms of Service Agreements.” Business Insider Australia, November 15, 2017. https://www.businessinsider.com.au/deloitte-study-91-percent-agree-terms-of-service-without-reading-2017-11.

Cambridge Analytica. “CA Political,” July 2, 2017. https://web.archive.org/web/20170702090555/https://ca-political.com/.

Cambridge Analytica - The Power of Big Data and Psychographics. Concordia, 2016. https://www.youtube.com/watch?v=n8Dd5aVXLC.

Corera, Gordon. Intercept: The Secret History of Computers and Spies. London: Weidenfeld & Nicolson, 2015.

Carr, Austin. “When Jack Dorsey’s Fight Against Twitter Trolls Got Personal.” Fast Company, April 9, 2018. https://www.fastcompany.com/40549979/when-jack-dorseys-fight-against-twitter-trolls-got-personal.

CB Insights. “What Are Psychographics?” CB Insights Research, May 6, 2020. https://www.cbinsights.com/research/what-is-psychographics/.

CBS News. “UK Police: 22 Confirmed Dead after Terror Incident at Ariana Grande Concert.” CBS News, May 23, 2017. https://www.cbsnews.com/news/ariana-grande-concert-manchester-arena-explosion/.

Center for Strategic and International Studies. “Global Cyber Strategies Index.” Center for Strategic and International Studies, 2021. https://www.csis.org/programs/strategic-technologies-program/cybersecurity-and-governance/global-cyber-strategies-index.

Centers for Disease Control and Prevention. “Autism and Vaccines | Vaccine Safety | CDC.” Autism and Vaccines, January 26, 2021. https://www.cdc.gov/vaccinesafety/concerns/autism.html.

Centre for the Protection of National Infrastructure. “Centre for the Protection of National Infrastructure | CPNI.” Centre for the Protection of National Infrastructure | CPNI, 2021. https://www.cpni.gov.uk/.

Cerf, Vint. “A Brief History of the Internet & Related Networks.” Internet Society (blog), 2021. https://www.internetsociety.org/internet/history-internet/brief-history-internet-related-networks/.

Cha, Bonnie. “Too Embarrassed to Ask: What Is ‘The Cloud’ and How Does It Work?” Vox, April 30, 2015. https://www.vox.com/2015/4/30/11562024/too-embarrassed-to-ask-what-is-the-cloud-and-how-does-it-work.

Chabinsky, Steven, and F. Paul Pittman. “Data Protection 2020 | Laws and Regulations | USA.” International Comparative Legal Guides, 2020. https://iclg.com/practice-areas/data-protection-laws-and-regulations/usa.

Chatterjee, Partha. “The Classical Balance of Power Theory.” Journal of Peace Research 9, no. 1 (1972): 51–61.

Chirac, Jacques. “Allocution de M. Jacques Chirac, Président de la République, sur les attentats terroristes à New York et à Washington, la solidarité de la France avec les Etats-Unis et la coopération de la France dans la lutte contre le terrorisme, Washington, le 19 septembre 2001.” elysee.fr, September 19, 2001. https://www.elysee.fr/jacques-chirac/2001/09/19/allocution-de-m-jacques-chirac-president-de-la-republique-sur-les-attentats-terroristes-a-new-york-et-a-washington-la-solidarite-de-la-france-avec-les-etats-unis-et-la-cooperation-de-la-france-dans-la-lutte-contre-le-terrorisme-washington-le-19-sep.

———. “Lettre de M. Jacques Chirac, Président de la République, adressée à M. George Walker Bush , Président des Etats-Unis d’Amérique, à la suite des attentats ayant frappé les villes de New York et Washington, Paris le 11 septembre 2001.” elysee.fr, September 11, 2001. https://www.elysee.fr/jacques-chirac/2001/09/11/lettre-de-m-jacques-chirac-president-de-la-republique-adressee-a-m-george-walker-bush-president-des-etats-unis-damerique-a-la-suite-des-attentats-ayant-frappe-les-villes-de-new-york-et-washington-paris-le-11-septembre-2001.

Choi, Kyung-shick, Claire Seungeun Lee, and Robert Cadigan. “Spreading Propaganda in Cyberspace: Comparing Cyber-Resource Usage of Al Qaeda and ISIS.” International Journal of Cybersecurity Intelligence and Cybercrime 1, no. 1 (2018): 21–39.

Cimpanu, Catalin. “FireEye, One of the World’s Largest Security Firms, Discloses Security Breach.” ZDNet, December 8, 2020. https://www.zdnet.com/article/fireeye-one-of-the-worlds-largest-security-firms-discloses-security-breach/.

Ciutǎ, Felix. “Security and the Problem of Context: A Hermeneutical Critique of Securitisation Theory.” Review of International Studies 35, no. 2 (2009): 301–26.

Clark, David D., and Susan Landau. “Untangling Attribution.” In Proceedings of a Workshop on Deterring Cyberattacks: Informing Strategies and Developing Options for U.S. Policy, 25–40. Washington, D.C.: National Research Council of the National Academies, 2010. https://doi.org/10.17226/12997.

Clarke, Richard A., and Robert Knake. Cyber War: The Next Threat To National Security and What to Do About It. New York: HarperCollins, 2012. https://www.harpercollins.com/products/cyber-war-richard-a-clarkerobert-knake.

Cloudflare. “What Is the Cloud? | Cloud Definition.” Cloudflare. Accessed June 19, 2021. https://www.cloudflare.com/learning/cloud/what-is-the-cloud/.

CNN Editorial Research. “2016 Presidential Campaign Hacking Fast Facts.” CNN, October 28, 2020. https://www.cnn.com/2016/12/26/us/2016-presidential-campaign-hacking-fast-facts/index.html.

Cohen, Jeremy. “Cell Tower Vandals and Re-Open Protestors — Why Some People Believe in Coronavirus Conspiracies.” The Conversation, May 22, 2020. http://theconversation.com/cell-tower-vandals-and-re-open-protestors-why-some-people-believe-in-coronavirus-conspiracies-138192.

Colby, Elbridge. “The Role of Nuclear Weapons in the U.S.-Russian Relationship.” Carnegie Endowment for International Peace, 2016. https://carnegieendowment.org/2016/02/26/role-of-nuclear-weapons-in-u.s.-russian-relationship-pub-62901.

Collier, David. “Understanding Process Tracing.” PS: Political Science & Politics 44, no. 4 (October 2011): 823–30. https://doi.org/10.1017/S1049096511001429.

Colombani, Jean-Marie. “Nous sommes tous Américains.” Le Monde.fr, September 13, 2001. https://www.lemonde.fr/idees/article/2011/09/09/nous-sommes-tous-americains_1569503_3232.html.

Committee on Energy and Commerce. Cyber Espionage and the Theft of U.S. Intellectual Property and Technology, Pub. L. No. 113–67, § Committee on Energy and Commerce (2013). https://www.govinfo.gov/content/pkg/CHRG-113hhrg86391/html/CHRG-113hhrg86391.htm.

Conley, Heather A., and Jean-Baptiste Jeangene Vilmer. “Successfully Countering Russian Electoral Interference.” Center for Strategic & International Studies, June 21, 2018. https://www.csis.org/analysis/successfully-countering-russian-electoral-interference.

Conte, Alex. Security in the 21st Century: The United Nations, Afghanistan and Iraq. New York: Routledge, 2006. https://doi.org/10.4324/9781351149563.

Conway, Maura, Moign Khawaja, Suraj Lakhani, Jeremy Reffin, Andrew Robertson, and David Weir. “Disrupting Daesh: Measuring Takedown of Online Terrorist Material and Its Impacts.” Studies in Conflict & Terrorism 42, no. 1–2 (February 1, 2019): 141–60. https://doi.org/10.1080/1057610X.2018.1513984.

Cook, James, and Joseph Archer. “Telegraph Investigation: Google Search Exposes Sensitive Files and Emails from inside the Government and the NHS.” The Telegraph, April 18, 2020. https://web.archive.org/web/20200418181314/https://www.telegraph.co.uk/technology/2018/07/21/telegraph-investigation-google-search-exposes-sensitive-government/.

Cook, John, and Stephan Lewandowsky. “Coronavirus Conspiracy Theories Are Dangerous – Here’s How to Stop Them Spreading.” The Conversation, April 20, 2020. http://theconversation.com/coronavirus-conspiracy-theories-are-dangerous-heres-how-to-stop-them-spreading-136564.

Cooper, Sam. “Coronavirus Conspiracies Pushed by Russia, Amplified by Chinese Officials: Experts.” Global News, April 28, 2020. https://globalnews.ca/news/6793733/coronavirus-conspiracy-theories-russia-china/.

Corderoy, Amy. “Vaccine Expert Julie Leask Says Internet Outrage about Vaccination Risks Make Problem Worse.” The Sydney Morning Herald, March 30, 2015. https://www.smh.com.au/national/nsw/vaccine-expert-julie-leask-says-internet-outrage-about-vaccination-risks-make-problem-worse-20150330-1mb3gp.html.

Cordesman, Anthony H. “America’s Failed Strategy in the Middle East: Losing Iraq and the Gulf.” Center for Strategic and International Studies, 2020. https://www.csis.org/analysis/americas-failed-strategy-middle-east-losing-iraq-and-gulf.

Corporate Finance Institute. “Flash Crashes - Overview, Causes, and Past Examples.” Corporate Finance Institute, 2021. https://corporatefinanceinstitute.com/resources/knowledge/finance/flash-crashes/.

Corry, Olaf. “Securitisation and ‘Riskification’: Second-Order Security and the Politics of Climate Change.” Millennium: Journal of International Studies 40, no. 2 (January 2012): 235–58. https://doi.org/10.1177/0305829811419444.

Cossins, Daniel. “Discriminating Algorithms: 5 Times AI Showed Prejudice.” New Scientist, April 27, 2018. https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/.

Council of Europe. “Chart of Signatures and Ratifications of Treaty 185.” Treaty Office, November 4, 2021. https://www.coe.int/en/web/conventions/full-list.

———. “Council of Europe Counter-Terrorism Strategy 2018-2022.” Council of Europe | Committee of Ministers, 2018. https://search.coe.int/cm/Pages/result_details.aspx?ObjectId=09000016808afc96.

———. “Country Profiles on Counter-Terrorism Capacity.” Council of Europe | Counter-terrorism, 2021. https://www.coe.int/en/web/counter-terrorism/country-profiles.

Council on Foreign Relations. “Greece’s Debt Crisis Timeline.” Council on Foreign Relations, 2021. https://www.cfr.org/timeline/greeces-debt-crisis-timeline.

Craig, Geoffrey. “Celebrities and Environmental Activism.” In Media, Sustainability and Everyday Life, by Geoffrey Craig, 135–63. Palgrave Studies in Media and Environmental Communication. London: Palgrave Macmillan UK, 2019. https://doi.org/10.1057/978-1-137-53469-9_6.

Crowdstrike. “Our Work with the DNC: Setting the Record Straight.” Crowdstrike (blog), June 5, 2020. https://www.crowdstrike.com/blog/bears-midst-intrusion-democratic-national-committee/.

———. “What Is a Zero-Day Attack? | CrowdStrike.” crowdstrike.com, 2021. https://www.crowdstrike.com/cybersecurity-101/zero-day-exploit/.

Cull, Nicholas J., Vasily Gatov, Peter Pomerantsev, Anne Applebaum, and Alistair Shawcross. “Soviet Subversion, Disinformation and Propaganda: How the West Fought Against It - An Analytic History, with Lessons for the Present.” LSE Consulting. London: LSE Institute of Global Affairs, 2017. https://www.lse.ac.uk/iga/assets/documents/arena/2018/Jigsaw-Soviet-Subversion-Disinformation-and-Propaganda-Final-Report.pdf.

Cunliffe, Emma, and Luigi Curini. “ISIS and Heritage Destruction: A Sentiment Analysis.” Antiquity 92, no. 364 (August 2018): 1094–1111. https://doi.org/10.15184/aqy.2018.134.

Curry, Andrew. “Here Are the Ancient Sites ISIS Has Damaged and Destroyed.” National Geographic, September 1, 2015. https://www.nationalgeographic.com/history/article/150901-isis-destruction-looting-ancient-sites-iraq-syria-archaeology.

Cushman Jr, John H., and Thom Shanker. “A Nation at War: Combat Technology - A War Like No Other Uses New 21st-Century Methods To Disable Enemy Forces.” The New York Times, April 10, 2003, sec. U.S. https://www.nytimes.com/2003/04/10/us/nation-war-combat-technology-war-like-no-other-uses-new-21st-century-methods.html.

Czosseck, C., Rain Ottis, and Anna-Maria Talihärm. “Estonia after the 2007 Cyber Attacks: Legal, Strategic and Organisational Changes in Cyber Security.” International Journal of Cyber Warfare and Terrorism 1 (2011): 24–34. https://doi.org/10.4018/ijcwt.2011010103.

Dahan, Michael. “Hacking for the Homeland: Patriotic Hackers Versus Hacktivists.” In Proceedings of the 8th International Conference on Information Warfare and Security: ICIW 2013, edited by Doug Hart. Denver: Academic Conferences Limited, 2013.

Dalaqua, Renata H. “‘Securing Our Survival (SOS)’: Non-State Actors and the Campaign for a Nuclear Weapons Convention through the Prism of Securitisation Theory.” Brazilian Political Science Review 7, no. 3 (2013): 90–117, 167–168.

Davies, Harry. “Ted Cruz Campaign Using Firm That Harvested Data on Millions of Unwitting Facebook Users.” The Guardian, December 11, 2015. http://www.theguardian.com/us-news/2015/dec/11/senator-ted-cruz-president-campaign-facebook-user-data.

Davis, Joshua. “Hackers Take Down the Most Wired Country in Europe.” Wired, August 21, 2007. https://www.wired.com/2007/08/ff-estonia/.

Dearden, Lizzie, and May Bulman. “Isis Claims Responsibility for London Terror Attack.” The Independent, June 4, 2017. https://www.independent.co.uk/news/uk/home-news/london-terror-attack-isis-claims-responsibility-borough-market-bridge-a7772776.html.

Defense Advanced Research Projects Agency. “Paving the Way to the Modern Internet.” DARPA | Defense Advanced Research Projects Agency, 2021. https://www.darpa.mil/about-us/timeline/modern-internet.

Deibert, Ronald J. “Black Code Redux: Censorship, Surveillance, and the Militarization of Cyberspace.” In Digital Media and Democracy: Tactics in Hard Times, edited by Megan Boler, 137–64. London: The MIT Press, 2008.

Deibert, Ronald J., Rafal Rohozinski, and Masashi Crete-Nishihata. “Cyclones in Cyberspace: Information Shaping and Denial in the 2008 Russia–Georgia War.” Security Dialogue 43, no. 1 (February 1, 2012): 3–24. https://doi.org/10.1177/0967010611431079.

Deloitte. “Transforming Cybersecurity in the Financial Services Industry: New Approaches for an Evolving Threat Landscape.” Johannesburg: Deloitte, 2014.

Department of Foreign Affairs and Trade. “Australia’s International Cyber Engagement Strategy.” Australian Government, 2017. https://web.archive.org/web/20180216140850/https://www.dfat.gov.au/international-relations/themes/cyber-affairs/aices/preliminary_information/introduction.html.

Department of Justice Office of Public Affairs. “Two Chinese Hackers Working with the Ministry of State Security Charged with Global Computer Intrusion Campaign Targeting Intellectual Property and Confidential Business Information, Including COVID-19 Research.” The United States Department of Justice, July 21, 2020. https://www.justice.gov/opa/pr/two-chinese-hackers-working-ministry-state-security-charged-global-computer-intrusion.

Devlin, Hannah. “AI Programs Exhibit Racial and Gender Biases, Research Reveals.” The Guardian, April 14, 2017. https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals.

Dickson, E. J. “Wellness Influencers Are Calling Out QAnon Conspiracy Theorists for Spreading Lies.” Rolling Stone (blog), September 15, 2020. https://www.rollingstone.com/culture/culture-news/qanon-wellness-influencers-seane-corn-yoga-1059856/.

Dizikes, Peter. “Study: On Twitter, False News Travels Faster than True Stories.” MIT News | Massachusetts Institute of Technology, March 8, 2018. https://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308.

Dodd, Vikram. “Palace Terror Suspect Was Uber Driver Who Had Tried to Get to Windsor Castle.” The Guardian, August 31, 2017. http://www.theguardian.com/uk-news/2017/sep/01/buckingham-palace-terror-suspect-had-tried-to-get-to-windsor-castle.

Dodd, Vikram, and Matthew Taylor. “London Attack: ‘Aggressive’ and ‘Strange’ Suspect Vowed to ‘Do Some Damage.’” The Guardian, June 20, 2017. http://www.theguardian.com/uk-news/2017/jun/19/several-casualties-reported-after-van-hits-pedestrians-in-north-london.

Doffman, Zak. “Anonymous Hackers Target TikTok: ‘Delete This Chinese Spyware Now.’” Forbes, July 1, 2020. https://www.forbes.com/sites/zakdoffman/2020/07/01/anonymous-targets-tiktok-delete-this-chinese-spyware-now/.

———. “New Coronavirus Warning: These ‘COVID-19 Discounts’ Are Now The Most Dangerous Deals Online.” Forbes, March 19, 2020. https://www.forbes.com/sites/zakdoffman/2020/03/19/new-coronavirus-warning-beware-these-covid-19-discounts-the-most-dangerous-deals-online/.

Donnelly, Jack. “Realism.” In Theories of International Relations, by Scott Burchill, A. Linklater, R. Devetak, M. Paterson, J. Donnelly, C. Reus-Smit, and J. True, 3rd ed. Palgrave Macmillan, 2005. http://dro.deakin.edu.au/view/DU:30010384.

Dorling, Philip. “China Stole Plans for a New Fighter Plane, Spy Documents Have Revealed.” The Sydney Morning Herald, January 18, 2015. https://www.smh.com.au/national/china-stole-plans-for-a-new-fighter-plane-spy-documents-have-revealed-20150118-12sp1o.html.

Douglas, Karen M., and Robbie M. Sutton. “Climate Change: Why the Conspiracy Theories Are Dangerous.” Bulletin of the Atomic Scientists 71, no. 2 (2015): 98–106. https://doi.org/10.1177/0096340215571908.

Doyle, Julie, Nathan Farrell, and Michael K. Goodman. “Celebrities and Climate Change.” Oxford Research Encyclopedia of Climate Science, September 26, 2017. https://doi.org/10.1093/acrefore/9780190228620.013.596.

Dunlap, Jr., Charles J. “The Police-Ization of the Military.” Journal of Political and Military Sociology 27 (1999): 217–32.

Dunn Cavelty, Myriam. “From Cyber-Bombs to Political Fallout: Threat Representations with an Impact in the Cyber-Security Discourse.” International Studies Review 15, no. 1 (March 2013): 105–22. https://doi.org/10.1111/misr.12023.

Duvenage, P., and S. von Solms. “The Case for Cyber Counterintelligence.” In 2013 International Conference on Adaptive Science and Technology, 1–8, 2013. https://doi.org/10.1109/ICASTech.2013.6707493.

Duvenage, PC, and Sebastian von Solms. “Cyber Counterintelligence: Back to the Future.” Journal of Information Warfare 13, no. 4 (2015): 42–56.

Duvenage, Petrus, Thenjiwe Sithole, and Basie von Solms. “Cyber Counterintelligence: An Exploratory Proposition on a Conceptual Framework.” International Journal of Cyber Warfare and Terrorism 9, no. 4 (October 2019): 44–62. https://doi.org/10.4018/IJCWT.2019100103.

Duvenage, Petrus, Thenjiwe Sithole, and Sebastian von Solms. “A Conceptual Framework for Cyber Counterintelligence: Theory That Really Matters.” In European Conference on Cyber Warfare and Security, 109–19. Reading, United Kingdom: Academic Conferences International Limited, 2017. https://search.proquest.com/docview/1966799163/abstract/9550E79F8887474EPQ/1.

Duvenage, Petrus, and Sebastian von Solms. “Putting Counterintelligence in Cyber Counterintelligence: Back to the Future.” In Proceedings of the 13th European Conference on Cyber Warfare and Security, 70–79. Piraeus, Greece: Academic Conferences International Limited, 2014.

Duvenage, Petrus, Sebastian von Solms, and Manuel Corregedor. “The Cyber Counterintelligence Process - a Conceptual Overview and Theoretical Proposition.” In Published Proceedings of the 14th European Conference on Cyber Warfare and Security, 42–51. Hatfield, UK, 2015.

Eckstein, Harry. “Case Study and Theory in Political Science.” In Case Study Method, edited by Roger Gomm, Martyn Hammersley, and Peter Foster, 118–64. London: SAGE Publications Ltd, 2009. https://dx.doi.org/10.4135/9780857024367.d11.

Edmunds, Timothy. “What Are Armed Forces For? The Changing Nature of Military Roles in Europe.” International Affairs 82, no. 6 (2006): 1059–75.

Eftimiades, Nicholas. “The Impact of Chinese Espionage on the United States.” The Diplomat, December 4, 2018. https://thediplomat.com/2018/12/the-impact-of-chinese-espionage-on-the-united-states/.

Ehrman, John. “What Are We Talking About When We Talk about Counterintelligence?” Studies in Intelligence 53, no. 2 (2009). https://www.cia.gov/resources/csi/studies-in-intelligence/volume-53-no-2/what-are-we-talking-about-when-we-talk-about-counterintelligence-pdf-172-0kb/.

Ellul, Jacques. Propaganda: The Formation of Men’s Attitudes. New York: Vintage Books, 1973.

Endres, Kyle, and Kristin J. Kelly. “Does Microtargeting Matter? Campaign Contact Strategies and Young Voters.” Journal of Elections, Public Opinion and Parties 28, no. 1 (January 2, 2018): 1–18. https://doi.org/10.1080/17457289.2017.1378222.

Espiner, Tom. “Hack Attack Forces DigiNotar Bankruptcy.” ZDNet, September 20, 2011. https://www.zdnet.com/article/hack-attack-forces-diginotar-bankruptcy/.

EU vs Disinfo. “Building Swedish Resilience.” EU vs DISINFORMATION, March 28, 2017. https://euvsdisinfo.eu/building-swedish-resilience/.

———. “In Sweden, Resilience Is Key to Combatting Disinformation.” EU vs DISINFORMATION, July 16, 2018. https://euvsdisinfo.eu/in-sweden-resilience-is-key-to-combatting-disinformation/.

European Parliament. Directorate General for Parliamentary Research Services. “Automated Tackling of Disinformation: Major Challenges Ahead.” European Parliamentary Research Service, 2019. https://data.europa.eu/doi/10.2861/368879.

European Union Agency for Cybersecurity. “CSIRT Inventory.” Topic. ENISA | European Union Agency for Cybersecurity, 2021. https://www.enisa.europa.eu/topics/csirts-in-europe/csirt-inventory.

Fabre, Cecile. “Cosmopolitanism, Just War Theory and Legitimate Authority.” International Affairs 84, no. 5 (September 1, 2008): 963–76. https://doi.org/10.1111/j.1468-2346.2008.00749.x.

Fallis, Don. “A Conceptual Analysis of Disinformation.” In IConference 2009 Proceedings, 8. University of Illinois, 2009. http://hdl.handle.net/2142/15205.

Farwell, James P. “The Media Strategy of ISIS.” Survival 56, no. 6 (November 2, 2014): 49–55. https://doi.org/10.1080/00396338.2014.985436.

Farwell, James P., and Rafal Rohozinski. “Stuxnet and the Future of Cyber War.” Survival 53, no. 1 (February 2011): 23–40. https://doi.org/10.1080/00396338.2011.555586.

Fazzini, Kate. “Toys and Apps Often Track Your Kids and Collect Information about Them — Here’s How to Keep Them Safe.” CNBC, November 24, 2018. https://www.cnbc.com/2018/11/23/connected-toys-privacy-risks.html.

Federal Bureau of Investigation. “Aldrich Ames.” Federal Bureau of Investigation, 2012. https://www.fbi.gov/history/famous-cases/aldrich-ames.

———. “Robert Hanssen.” Federal Bureau of Investigation, 2001. https://www.fbi.gov/history/famous-cases/robert-hanssen.

Fenwick, Mark, Wulf A. Kaal, and Erik P. M. Vermeulen. “Regulation Tomorrow: What Happens When Technology Is Faster Than the Law?” American University Business Law Review 6, no. 3 (2016): 561–94. https://doi.org/10.2139/ssrn.2834531.

Fernandes, Rossi. “Flame Malware Controllers Send Uninstall Command.” Tech2. https://www.firstpost.com/tech/news-analysis/flame-malware-controllers-send-uninstall-command-3601111.html.

Ferrara, Emilio, Herbert Chang, Emily Chen, Goran Muric, and Jaimin Patel. “Characterizing Social Media Manipulation in the 2020 U.S. Presidential Election.” First Monday, October 19, 2020. https://doi.org/10.5210/fm.v25i11.11431.

Fetzer, James H. “Disinformation: The Use of False Information.” Minds and Machines 14, no. 2 (May 2004): 231–40. https://doi.org/10.1023/B:MIND.0000021683.28604.5b.

FireEye. “Advanced Persistent Threat Groups (APT Groups).” FireEye, 2021. https://www.fireeye.com/current-threats/apt-groups.html.

Floyd, Rita. “Can Securitization Theory Be Used in Normative Analysis? Towards a Just Securitization Theory.” Security Dialogue 42, no. 4–5 (2011): 427–39. https://doi.org/10.1177/0967010611418712.

———. “Towards a Consequentialist Evaluation of Security: Bringing Together the Copenhagen and the Welsh Schools of Security Studies.” Review of International Studies 33, no. 2 (2007): 327–50.

Forum of Incident Response and Security Teams. “FIRST Teams.” FIRST — Forum of Incident Response and Security Teams, 2021. https://www.first.org/members/teams.

Fraser, Nalani, Fred Plan, Jacqueline O’Leary, Vincent Cannon, Raymond Leong, Dan Perez, and Chi-en Shen. “APT41: A Dual Espionage and Cyber Crime Operation.” FireEye, August 7, 2019. https://www.fireeye.com/blog/threat-research/2019/08/apt41-dual-espionage-and-cyber-crime-operation.html.

French, Geoffrey S., and Jin Kim. “Acknowledging the Revolution: The Urgent Need for Cyber Counterintelligence.” National Intelligence Journal 1, no. 1 (2009): 71–90.

Fridman, Ofer. “‘Information War’ as the Russian Conceptualisation of Strategic Communications.” The RUSI Journal 165, no. 1 (January 2, 2020): 44–53. https://doi.org/10.1080/03071847.2020.1740494.

Friedman, Uri. “Putin Said in Helsinki He Wanted Trump to Win.” The Atlantic, July 19, 2018. https://www.theatlantic.com/international/archive/2018/07/putin-trump-election-translation/565481/.

Fruhlinger, Josh. “Equifax Data Breach FAQ: What Happened, Who Was Affected, What Was the Impact?” CSO Online, February 12, 2020. https://www.csoonline.com/article/3444488/equifax-data-breach-faq-what-happened-who-was-affected-what-was-the-impact.html.

———. “Marriott Data Breach FAQ: How Did It Happen and What Was the Impact?” CSO Online, February 12, 2020. https://www.csoonline.com/article/3441220/marriott-data-breach-faq-how-did-it-happen-and-what-was-the-impact.html.

———. “The Mirai Botnet Explained: How IoT Devices Almost Brought down the Internet.” CSO Online, March 9, 2018. https://www.csoonline.com/article/3258748/the-mirai-botnet-explained-how-teen-scammers-and-cctv-cameras-almost-brought-down-the-internet.html.

Fryxell, Alma. “Psywar By Forgery.” Studies in Intelligence, 1961. https://doi.org/10.1037/e740372011-001.

Courteney O’Connor – PhD Thesis 2021

Funke, Daniel, and Daniela Flamini. “A Guide to Anti-Misinformation Actions around the World.” Poynter (blog). Accessed May 22, 2021. https://www.poynter.org/ifcn/anti-misinformation-actions/.

Gady, Franz-Stefan. “New Snowden Documents Reveal Chinese Behind F-35 Hack.” The Diplomat, January 27, 2015. https://thediplomat.com/2015/01/new-snowden-documents-reveal-chinese-behind-f-35-hack/.

———. “What the Gulf War Teaches About the Future of War.” The Diplomat, March 2, 2018. https://thediplomat.com/2018/03/what-the-gulf-war-teaches-about-the-future-of-war/.

Galison, Peter. “The Journalist, the Scientist, and Objectivity.” In Objectivity in Science, edited by Flavia Padovani, Alan Richardson, and Jonathan Y. Tsou, Vol. 310. Boston Studies in the Philosophy and History of Science. Cham: Springer International Publishing, 2015. https://doi.org/10.1007/978-3-319-14349-1.

Garrett, R. Kelly. “Social Media’s Contribution to Political Misperceptions in U.S. Presidential Elections.” PLOS ONE 14, no. 3 (March 27, 2019): e0213500. https://doi.org/10.1371/journal.pone.0213500.

Gaskell, Adi. “State Sponsored Cyberattacks Are Happening Right Now.” CyberNews, June 23, 2020. https://cybernews.com/news/state-sponsored-cyberattacks-are-happening-right-now/.

———. “The Rise in State-Sponsored Hacking in 2020.” CyberNews, February 5, 2020. https://cybernews.com/security/the-rise-in-state-sponsored-hacking/.

George, Alexander L., and Andrew Bennett. Case Studies and Theory Development in the Social Sciences. Cambridge: The MIT Press, 2005.

Gerstein, Josh. “U.S. Brings First Charge for Meddling in 2018 Midterm Elections.” POLITICO, October 19, 2018. https://politi.co/2Ajbubq.

Glenza, Jessica. “Russian Trolls ‘Spreading Discord’ over Vaccine Safety Online.” The Guardian, August 23, 2018. http://www.theguardian.com/society/2018/aug/23/russian-trolls-spread-vaccine-misinformation-on-twitter.

Gold, Matea, and Maggie Farley. “World Trade Center and Pentagon Attacked on Sept. 11, 2001.” Los Angeles Times, September 12, 2001. https://www.latimes.com/travel/la-xpm-2001-sep-12-na-sept-11-attack-201105-01-story.html.

Goldberg, David. “Lessons from Standing Rock - Of Water, Racism, and Solidarity.” The New England Journal of Medicine 384, no. 14 (2017): 1403–5.

Goldman, Adam. “Justice Dept. Accuses Russians of Interfering in Midterm Elections.” The New York Times, October 19, 2018, sec. U.S. https://www.nytimes.com/2018/10/19/us/politics/russia-interference-midterm-elections.html.

Golovchenko, Yevgeniy, Mareike Hartmann, and Rebecca Adler-Nissen. “State, Media and Civil Society in the Information Warfare over Ukraine: Citizen Curators of Digital Disinformation.” International Affairs 94, no. 5 (2018): 975–94.

Goodin, Dan. “Discovery of New ‘Zero-Day’ Exploit Links Developers of Stuxnet, Flame.” Ars Technica, June 12, 2012. https://arstechnica.com/information-technology/2012/06/zero-day-exploit-links-stuxnet-flame/.

Gordon, Michael R., and Dustin Volz. “Russian Disinformation Campaign Aims to Undermine Confidence in Pfizer, Other Covid-19 Vaccines, U.S. Officials Say.” Wall Street Journal, March 7, 2021, sec. Politics. https://www.wsj.com/articles/russian-disinformation-campaign-aims-to-undermine-confidence-in-pfizer-other-covid-19-vaccines-u-s-officials-say-11615129200.

Gottlieb, Benjamin. “Flame FAQ: All You Need to Know about the Virus.” Washington Post, June 20, 2012. https://web.archive.org/web/20170225103056/https://www.washingtonpost.com/blogs/blogpost/post/flame-faq-all-you-need-to-know-about-the-virus/2012/06/20/gJQAAlrTqV_blog.html.

Government Offices of Sweden. “A Practical Approach on How to Cope with Disinformation.” Government Offices of Sweden, October 6, 2017. https://www.government.se/articles/2017/10/a-practical-approach-on-how-to-cope-with-disinformation/.

———. “Kommittédirektiv - En ny myndighet för psykologiskt försvar (Committee Directive: A New Authority for Psychological Defence).” Stockholm, 2018. https://perma.cc/GCF8-Z6HV.

Goyal, Ravish, Suren Sharma, Savitri Bevinakoppa, and Paul Watters. “Obfuscation of Stuxnet and Flame Malware.” Latest Trends in Applied Informatics and Computing, 2012, 5.

Graff, Garrett M. “The Mirai Botnet Was Part of a College Student Minecraft Scheme.” Wired, December 13, 2017. https://web.archive.org/web/20180210043047/https://www.wired.com/story/mirai-botnet-minecraft-scam-brought-down-the-internet/.

Grassi, Paul A., Michael E. Garcia, and James L. Fenton. “Digital Identity Guidelines.” Gaithersburg, MD: National Institute of Standards and Technology, June 22, 2017. https://doi.org/10.6028/NIST.SP.800-63-3.

Grattan, Michelle. “The Politics of Spin.” Australian Studies in Journalism, 1998, 32–45.

Gray, Colin S. The Implications of Preemptive and Preventive War Doctrines: A Reconsideration. Carlisle Barracks, PA: Strategic Studies Institute, U.S. Army War College, 2007.

Greenberg, Andy. “Anonymous’ Most Notorious Hacker Is Back, and He’s Gone Legit.” Wired, October 21, 2016. https://www.wired.com/2016/10/anonymous-notorious-hacker-back-hes-gone-legit/.

———. “Hack Brief: As FBI Warns Election Sites Got Hacked, All Eyes Are on Russia.” Wired, August 29, 2016. https://www.wired.com/2016/08/hack-brief-fbi-warns-election-sites-got-hacked-eyes-russia/.

———. “How 30 Lines of Code Blew Up a 27-Ton Generator.” Wired, October 23, 2020. https://www.wired.com/story/how-30-lines-of-code-blew-up-27-ton-generator/.

———. “The Colonial Pipeline Hack Is a New Extreme for Ransomware.” Wired. Accessed July 3, 2021. https://www.wired.com/story/colonial-pipeline-ransomware-attack/.

———. “The NSA Confirms It: Russia Hacked French Election ‘Infrastructure.’” Wired, May 9, 2017. https://www.wired.com/2017/05/nsa-director-confirms-russia-hacked-french-election-infrastructure/.

———. “The Strange Journey of an NSA Zero-Day—Into Multiple Enemies’ Hands.” Wired, May 7, 2019. https://www.wired.com/story/nsa-zero-day-symantec-buckeye-china/.

Greenemeier, Larry. “False News Travels 6 Times Faster on Twitter than Truthful News.” PBS NewsHour, March 9, 2018. https://www.pbs.org/newshour/science/false-news-travels-6-times-faster-on-twitter-than-truthful-news.

Greenpeace International. “Greenpeace International.” Greenpeace International, 2021. https://www.greenpeace.org/international.

Greenwald, Glenn. No Place to Hide: Edward Snowden, the NSA and the Surveillance State. Melbourne: Hamish Hamilton, 2014.

Griffiths, Sarah. “Artefacts Destroyed by ISIS Restored in 3D Models by ‘Cyber Archaeology.’” Daily Mail, May 20, 2015. https://www.dailymail.co.uk/sciencetech/article-3087731/Cyber-archaeology-rebuilds-lost-treasures-Public-project-uses-photos-create-3D-models-artefacts-destroyed-ISIS.html.

Guccione, Darren. “What Is the Dark Web? How to Access It and What You’ll Find.” CSO Online, November 18, 2020. https://www.csoonline.com/article/3249765/what-is-the-dark-web-how-to-access-it-and-what-youll-find.html.

Guerrero-Saade, Juan Andres, Costin Raiu, Daniel Moore, and Thomas Rid. “Penquin’s Moonlit Maze: The Dawn of Nation-State Digital Espionage.” Kaspersky Lab, 2018. https://media.kasperskycontenthub.com/wp-content/uploads/sites/43/2018/03/07180251/Penquins_Moonlit_Maze_PDF_eng.pdf.

Haataja, Samuli. “The 2007 Cyber Attacks against Estonia and International Law on the Use of Force: An Informational Approach.” Law, Innovation and Technology 9, no. 2 (July 3, 2017): 159–89. https://doi.org/10.1080/17579961.2017.1377914.

Haemmerli, Bernard, and Andrea Renda. “Protecting Critical Infrastructure in the EU.” Centre for European Policy Studies (blog), December 16, 2010. https://www.ceps.eu/ceps-publications/protecting-critical-infrastructure-eu/.

Hammersley, Ben. Approaching the Future: 64 Things You Need to Know Now for Then. Soft Skull Press, 2013.

Harding, Luke. The Snowden Files: The Inside Story of the World’s Most Wanted Man. New York: Vintage Books, 2014.

Haskell-Dowland, Paul. “Facebook Data Breach: What Happened and Why It’s Hard to Know If Your Data Was Leaked.” The Conversation, April 6, 2021. http://theconversation.com/facebook-data-breach-what-happened-and-why-its-hard-to-know-if-your-data-was-leaked-158417.

Hass, Rabea. “The Role of Media in Conflict and Their Influence on Securitisation.” The International Spectator 44, no. 4 (December 2009): 77–91. https://doi.org/10.1080/03932720903351187.

Haubrich, Dirk. “September 11, Anti-Terror Laws and Civil Liberties: Britain, France and Germany Compared 1.” Government and Opposition 38, no. 1 (ed 2003): 3–28. https://doi.org/10.1111/1477-7053.00002.

Hay Newman, Lily. “The Midterm Elections Are Already Under Attack.” Wired, July 20, 2018. https://www.wired.com/story/midterm-elections-vulnerabilities-phishing-ddos/.

Healey, Jason, and Robert Jervis. “The Escalation Inversion and Other Oddities of Situational Cyber Stability.” Texas National Security Review, September 28, 2020. http://tnsr.org/2020/09/the-escalation-inversion-and-other-oddities-of-situational-cyber-stability/.

Heemskerk, Eelke, Jan Fichtner, and Milan Babic. “Who Is More Powerful – States or Corporations?” The Conversation, July 11, 2018. http://theconversation.com/who-is-more-powerful-states-or-corporations-99616.

Heinegg, Wolff Heintschel von. “Legal Implications of Territorial Sovereignty in Cyberspace.” In 4th International Conference on Cyber Conflict, edited by C. Czossek, K. Ziolkowski, and R. Ottis. NATO CCDCOE Publications, 2012.

Henley, Jon. “Russia Waging Information War against Sweden, Study Finds.” The Guardian, January 11, 2017. http://www.theguardian.com/world/2017/jan/11/russia-waging-information-war-in-sweden-study-finds.

Henschke, Adam. Ethics in an Age of Surveillance: Personal Information and Virtual Identities. Cambridge: Cambridge University Press, 2017. https://doi.org/10.1017/9781316417249.

———. “Privacy, the Internet of Things and State Surveillance: Handling Personal Information within an Inhuman System.” Moral Philosophy and Politics 7, no. 1 (May 26, 2020): 123–49. https://doi.org/10.1515/mopp-2019-0056.

Henschke, Adam, and Alastair Reed. “Toward an Ethical Framework for Countering Extremist Propaganda Online.” Studies in Conflict & Terrorism, March 8, 2021, 1–18. https://doi.org/10.1080/1057610X.2020.1866744.

Henschke, Adam, Matthew Sussex, and Courteney O’Connor. “Countering Foreign Interference: Election Integrity Lessons for Liberal Democracies.” Journal of Cyber Policy 5, no. 2 (2020): 180–98.

Hensel, Howard M. “Christian Belief and Western Just War Thought.” In The Prism of Just War: Asian and Western Perspectives on the Legitimate Use of Military Force, edited by Howard M. Hensel, 29–86. Justice, International Law and Global Security. Farnham: Ashgate Publishing Limited, 2010.

Herzog, Stephen. “Revisiting the Estonian Cyber Attacks: Digital Threats and Multinational Responses.” Journal of Strategic Security 4, no. 2 (2011): 49–60.

Hoffman, Bruce. Inside Terrorism. 3rd ed. Columbia Studies in Terrorism and Irregular Warfare. New York: Columbia University Press, 2017.

Hofverberg, Elin. “Government Responses to Disinformation on Social Media Platforms.” Web page. Library of Congress, September 2019. https://www.loc.gov/law/help/social-media-disinformation/sweden.php#_ftn44.

———. “Initiatives to Counter Fake News.” Web page. Library of Congress, April 2019. https://www.loc.gov/law/help/fake-news/sweden.php.

Holmes, Aaron. “533 Million Facebook Users’ Phone Numbers and Personal Data Have Been Leaked Online.” Business Insider Australia, April 3, 2021. https://www.businessinsider.com.au/stolen-data-of-533-million-facebook-users-leaked-online-2021-4.

Hooker, Claire. “How to Cut through When Talking to Anti- Vaxxers and Anti-Fluoriders.” The Conversation, February 17, 2017, 3.

House of Commons Digital, Culture, Media and Sport Committee. “Disinformation and ‘Fake News’: Final Report,” 2019. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/179102.htm.

Howard, John. “Application of the ANZUS Treaty to Terrorist Attacks on the United States.” Prime Minister of Australia | John Howard, September 14, 2001. https://web.archive.org/web/20051022123644/http://www.pm.gov.au/news/media_releases/2001/media_release1241.htm.

Hsu, Jeremy. “Strava Data Heat Maps Expose Military Base Locations Around the World.” Wired, January 29, 2018. https://www.wired.com/story/strava-heat-map-military-bases-fitness-trackers-privacy/.

Joint Chiefs of Staff. “DOD Dictionary of Military and Associated Terms (June 2018).” United States Joint Chiefs of Staff, June 1, 2018. https://www.hsdl.org/?abstract&did=.

Hultqvist, Peter. “Sweden’s Defense Minister: Additional Resources Are Coming to Bolster National Security, Alliances.” Defense News, January 11, 2021. https://www.defensenews.com/outlook/2021/01/11/swedens-defense-minister-additional-resources-are-coming-to-bolster-national-security-alliances/.

Human Rights Watch. “Transfer of Military Hardware to Police Could Lead to Abuses.” Human Rights Watch, August 30, 2017. https://www.hrw.org/news/2017/08/30/transfer-military-hardware-police-could-lead-abuses.

Hunt, Jennifer. “The COVID-19 Pandemic vs Post-Truth.” Global Health Security Network, 2020. https://www.ghsn.org/resources/Documents/GHSN%20Policy%20Report%201.pdf.

Information Commissioner’s Office. “Guide to the UK General Data Protection Regulation (UK GDPR).” Information Commissioner’s Office. ICO, 2021. https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/.

Information Warfare Monitor. “Tracking GhostNet: Investigating a Cyber Espionage Network.” The Citizen lab & The SecDev Group, 2009. https://citizenlab.ca/wp-content/uploads/2017/05/ghostnet.pdf.

Ingram, Haroro J. A Brief History of Propaganda During Conflict: Lessons for Counter-Terrorism Strategic Communications. ICCT Research Paper. The Hague: The International Centre for Counter-terrorism, 2016. https://books.google.com.au/books?id=XRWTAQAACAAJ.

Inkster, Nigel. “China in Cyberspace.” Survival 52, no. 4 (September 2010): 55–66. https://doi.org/10.1080/00396338.2010.506820.

International Cyber Center. “C.E.R.T in Rest of the World.” International Cyber Center | George Mason University, 2021. http://www.internationalcybercenter.org/certicc/certworld.

International Telecommunications Union. “National Cybersecurity Strategies Repository.” ITU | International Telecommunications Union, 2021. https://www.itu.int/en/ITU-D/Cybersecurity/Pages/National-Strategies-repository.aspx.

INTERPOL. “INTERPOL Member Countries.” INTERPOL, 2020. https://www.interpol.int/en/Who-we-are/Member-countries.

INTERPOL. “INTERPOL Report Shows Alarming Rate of Cyberattacks during COVID-19.” INTERPOL, August 4, 2020. https://www.interpol.int/en/News-and-Events/News/2020/INTERPOL-report-shows-alarming-rate-of-cyberattacks-during-COVID-19.

Intersoft Consulting. “General Data Protection Regulation (GDPR) – Official Legal Text.” General Data Protection Regulation (GDPR), September 2, 2019. https://gdpr-info.eu/.

Isaak, J., and M. J. Hanna. “User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection.” Computer 51, no. 8 (August 2018): 56–59. https://doi.org/10.1109/MC.2018.3191268.

Jacobsen, Katja Lindskov. The Politics of Humanitarian Technology: Good Intentions, Unintended Consequences and Insecurity. Routledge, 2015.

Jain, Riddhi. “A Complete Facebook Data Breach & Privacy Leak Timeline (2005 to 2021).” iTMunch, April 14, 2021. https://itmunch.com/facebook-data-breach-timeline-2005-2021/.

Jenkins, Simon. “The US and Britain Face No Existential Threat. So Why Do Their Wars Go On?” The Guardian, November 15, 2019. http://www.theguardian.com/commentisfree/2019/nov/15/britain-and-us-wars-conflicts-middle-east.

Johnson, Loch K., and James J. Wirtz. Intelligence: The Secret World of Spies: An Anthology. New York: Oxford University Press, 2011.

Jones, Christopher W. “Understanding ISIS’s Destruction of Antiquities as a Rejection of Nationalism.” Journal of Eastern Mediterranean Archaeology & Heritage Studies 6, no. 1–2 (2018): 31–58. https://doi.org/10.5325/jeasmedarcherstu.6.1-2.0031.

Jowett, Garth S., and Victoria O’Donnell. Propaganda & Persuasion. Newbury Park: SAGE Publications, 2018.

Joyce, Sean, Kristin Rivera, Brian Castelli, Joseph Nocera, and Emily Stapf. “How to Protect Your Companies from Rising Cyber Attacks and Fraud amid the COVID-19 Outbreak.” PwC, March 2, 2021. https://www.pwc.com/us/en/library/covid-19/cyber-attacks.html.

Kahney, Leander. “The FBI Wanted a Backdoor to the IPhone. Tim Cook Said No.” Wired, April 16, 2019. https://www.wired.com/story/the-time-tim-cook-stood-his-ground-against-fbi/.

Kalsnes, Bente. “Fake News.” In Oxford Research Encyclopedia of Communication. Oxford: Oxford University Press, 2018. https://doi.org/10.1093/acrefore/9780190228613.013.809.

Kalyvas, Andreas. Democracy and the Politics of the Extraordinary: Max Weber, Carl Schmitt, and Hannah Arendt. Cambridge: Cambridge University Press, 2008. https://doi.org/10.1017/CBO9780511755842.

Kang, Cecilia, and Mike Isaac. “Defiant Zuckerberg Says Facebook Won’t Police Political Speech.” The New York Times, October 17, 2019, sec. Business. https://www.nytimes.com/2019/10/17/business/zuckerberg-facebook-free-speech.html.

Kaspersky. “What Is a Whaling Attack?” Kaspersky, 2021. https://www.kaspersky.com/resource-center/definitions/what-is-a-whaling-attack.

———. “What Is the Deep and Dark Web?” Kaspersky, January 13, 2021. https://www.kaspersky.com/resource-center/threats/deep-web.

Kaspersky ICS CERT. “Energetic Bear / Crouching Yeti: Attacks on Servers.” Kaspersky ICS CERT | Kaspersky Industrial Control Systems Cyber Emergency Response Team (blog), April 23, 2018. https://ics-cert.kaspersky.com/reports/2018/04/23/energetic-bear-crouching-yeti-attacks-on-servers/.

Katz, Rita. “To Curb Terrorist Propaganda Online, Look to YouTube. No, Really.” Wired, October 20, 2018. https://www.wired.com/story/to-curb-terrorist-propaganda-online-look-to-youtube-no-really/.

Katz, Steven T., and Richard Landes. The Paranoid Apocalypse: A Hundred-Year Retrospective on the Protocols of the Elders of Zion. 1st ed. Vol. 3. Elie Wiesel Center for Judaic Studies. New York: New York University Press, 2011.

Kavanagh, Camino. “New Tech, New Threats, and New Governance Challenges: An Opportunity to Craft Smarter Responses?” Carnegie Endowment for International Peace, August 28, 2019. https://carnegieendowment.org/2019/08/28/new-tech-new-threats-and-new-governance-challenges-opportunity-to-craft-smarter-responses-pub-79736.

Kaysen, Carl, Robert S. McNamara, and George W. Rathjens. “Nuclear Weapons After the Cold War,” June 21, 2018. https://www.foreignaffairs.com/articles/1991-09-01/nuclear-weapons-after-cold-war.

Kelly, Ross. “Almost 90% of Cyber Attacks Are Caused by Human Error or Behavior.” ChiefExecutive.Net (blog), March 3, 2017. https://chiefexecutive.net/almost-90-cyber-attacks-caused-human-error-behavior/.

Kharpal, Arjun. “Apple vs FBI: All You Need to Know.” CNBC, March 29, 2016. https://www.cnbc.com/2016/03/29/apple-vs-fbi-all-you-need-to-know.html.

Kim, Young Mie, Jordan Hsu, David Neiman, Colin Kou, Levi Bankston, Soo Yun Kim, Richard Heinrich, Robyn Baragwanath, and Garvesh Raskutti. “The Stealth Media? Groups and Targets behind Divisive Issue Campaigns on Facebook.” Political Communication 35, no. 4 (October 2, 2018): 515–41. https://doi.org/10.1080/10584609.2018.1476425.

Kimsey, John. “‘The Ends of a State’: James Angleton, Counterintelligence and the New Criticism.” The Space Between: Literature and Culture 1914-1945 13 (2017). https://scalar.usc.edu/works/the-space-between-literature-and-culture-1914-1945/vol13_2017_kimsey.

King, Gary, Robert O. Keohane, and Sidney Verba. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton: Princeton University Press, 1994.

Knake, Robert K. Untangling Attribution: Moving to Accountability in Cyberspace, § Subcommittee on Technology and Innovation of the United States House of Representatives (2010).

Knudsen, Olav F. “Post-Copenhagen Security Studies: Desecuritizing Securitization.” Security Dialogue 32, no. 3 (2001): 355–68.

Kobie, Nicole. “The Internet of Things: Convenience at a Price.” The Guardian, March 30, 2015. http://www.theguardian.com/technology/2015/mar/30/internet-of-things-convenience-price-privacy-security.

Kollars, Nina. “Cyber Conflict as an Intelligence Competition in an Era of Open Innovation.” Texas National Security Review Special Issue: Cyber Conflict as an Intelligence Contest (2020). https://tnsr.org/roundtable/policy-roundtable-cyber-conflict-as-an-intelligence-contest/.

Kostadinov, Dimitar. “GhostNet - Part I.” InfoSec, 2013. https://resources.infosecinstitute.com/topic/ghostnet-part-i/.

Kragh, Martin, and Sebastian Åsberg. “Russia’s Strategy for Influence through Public Diplomacy and Active Measures: The Swedish Case.” Journal of Strategic Studies 40, no. 6 (September 19, 2017): 773–816. https://doi.org/10.1080/01402390.2016.1273830.

Krawchenko, Katiana, Donald Judd, Nancy Cordes, Julianna Goldman, Reena Flores, Rebecca Shabad, Emily Schultheis, Alexander Romano, Steve Chaggaris, and Associated Press. “The John Podesta Emails Released by WikiLeaks.” CBS News, November 3, 2016. https://www.cbsnews.com/news/the-john-podesta-emails-released-by-wikileaks/.

Krebs, Brian. “Adobe Breach Impacted At Least 38 Million Users – Krebs on Security.” Krebs On Security (blog), 2013. https://krebsonsecurity.com/2013/10/adobe-breach-impacted-at-least-38-million-users/.

———. “Who Makes the IoT Things Under Attack?” Krebs on Security (blog), October 3, 2016. https://krebsonsecurity.com/2016/10/who-makes-the-iot-things-under-attack/.

Kriel, Charles. “Fake News, Fake Wars, Fake Worlds.” Defence Strategic Communications 3 (2017): 171–89.

Lake, David A., and Patrick M. Morgan. Regional Orders: Building Security in a New World. University Park: Penn State Press, 2010.

Lane, Temryss MacLean. “The Frontline of Refusal: Indigenous Women Warriors of Standing Rock.” International Journal of Qualitative Studies in Education 31, no. 3 (March 16, 2018): 197–214. https://doi.org/10.1080/09518398.2017.1401151.

Lapowsky, Issie. “NATO Group Catfished Soldiers to Prove a Point About Privacy.” Wired, February 18, 2019. https://www.wired.com/story/nato-stratcom-catfished-soldiers-social-media/.

Lawrence, T.E. “Science of Guerilla Warfare.” In Strategic Studies: A Reader, edited by Thomas G. Mahnken and Joseph A. Maiolo. Oxon: Routledge, 2008.

Lawton, Graham. “Conspiracy Theories.” New Scientist, 2021. https://www.newscientist.com/definition/conspiracy-theories/.

Le Masurier, Megan. “Slow Journalism in an Age of Forgetting.” Opinion. ABC Religion & Ethics. Australian Broadcasting Corporation, June 18, 2019. https://www.abc.net.au/religion/slow-journalism-in-an-age-of-forgetting/11221092.

Leahy, Stephen. “Climate Change Driving Entire Planet to Dangerous ‘Global Tipping Point.’” National Geographic, November 27, 2019. https://www.nationalgeographic.com/science/article/earth-tipping-point.

Lee, Nathaniel. “How Police Militarization Became an over $5 Billion Business Coveted by the Defense Industry.” CNBC, July 9, 2020. https://www.cnbc.com/2020/07/09/why-police-pay-nothing-for-military-equipment.html.

Lee, Sangwon, and Michael Xenos. “Social Distraction? Social Media Use and Political Knowledge in Two U.S. Presidential Elections.” Computers in Human Behavior 90 (January 1, 2019): 18–25. https://doi.org/10.1016/j.chb.2018.08.006.

Lemos, Rob. “Why the Hack-Back Is Still the Worst Idea in Cybersecurity.” TechBeacon. Accessed April 10, 2021. https://techbeacon.com/security/why-hack-back-still-worst-idea-cybersecurity.

Lempert, Richard. “PRISM and Boundless Informant: Is NSA Surveillance a Threat?” Brookings (blog), June 13, 2013. https://www.brookings.edu/blog/up-front/2013/06/13/prism-and-boundless-informant-is-nsa-surveillance-a-threat/.

Lenton, Timothy M., Johan Rockström, Owen Gaffney, Stefan Rahmstorf, Katherine Richardson, Will Steffen, and Hans Joachim Schellnhuber. “Climate Tipping Points — Too Risky to Bet Against.” Nature 575, no. 7784 (November 2019): 592–95. https://doi.org/10.1038/d41586-019-03595-0.

Lesk, M. “The New Front Line: Estonia under Cyberassault.” IEEE Security Privacy 5, no. 4 (July 2007): 76–79. https://doi.org/10.1109/MSP.2007.98.

Lewandowsky, Stephan, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz, and John Cook. “Misinformation and Its Correction: Continued Influence and Successful Debiasing.” Psychological Science in the Public Interest 13, no. 3 (2012): 106–31.

Li, Jianing. “Toward a Research Agenda on Political Misinformation and Corrective Information.” Political Communication 37, no. 1 (January 2, 2020): 125–35. https://doi.org/10.1080/10584609.2020.1716499.

Liang, Christina Schori. “Cyber Jihad: Understanding and Countering Islamic State Propaganda.” GCSP Policy Paper. Geneva: Geneva Centre for Security Policy, 2015. https://www.jugendundmedien.ch/fileadmin/PDFs/anderes/schwerpunkt_Radikalisierung/Cyber_Jihad_-_Understanding_and_Countering_Islamic_State_Propaganda.pdf.

Lieber, Chavie. “Big Tech Has Your Kid’s Data — and You Probably Gave It to Them.” Vox, December 5, 2018. https://www.vox.com/the-goods/2018/12/5/18128066/children-data-surveillance-amazon-facebook-google-apple.

Lieber, Keir. “The Offense-Defense Balance and Cyber Warfare.” In Cyber Analogies, edited by Emily O. Goldman and John Arquilla, 96–107. United States Cyber Command, 2014.

Locatelli, Andrea. “The Offense/Defense Balance in Cyberspace.” Working Paper. Milan, 2013. https://publires.unicatt.it/en/publications/the-offensedefense-balance-in-cyberspace-10.

Löfgren, Emma. “How Sweden’s Getting Ready for the Election-Year Information War.” The Local, April 11, 2018. https://web.archive.org/web/20180411113218/https://www.thelocal.se/20171107/how-swedens-getting-ready-for-the-election-year-information-war.

Löfven, Stefan. “Statsminister Stefan Löfven (S): ‘Så ska vi skydda valrörelsen från andra staters påverkan’ (Prime Minister Stefan Löfven (S): ‘This Is How We Will Protect the Election Campaign from Other States’ Influence’).” DN Debatt, March 21, 2017. https://web.archive.org/web/20170321054938/https://www.dn.se/debatt/sa-ska-vi-skydda-valrorelsen-fran-andra-staters-paverkan/.

Lohrmann, Dan. “2020: The Year the COVID-19 Crisis Brought a Cyber Pandemic.” Government Technology, December 12, 2020. https://www.govtech.com/blogs/lohrmann-on-cybersecurity/2020-the-year-the-covid-19-crisis-brought-a-cyber-pandemic.html.

Lowenthal, Mark. Intelligence: From Secrets to Policy. 8th ed. Thousand Oaks: SAGE Publications, Inc, 2020.

Lynn III, William J. “Cybersecurity - Defending a New Domain.” U.S. Department of Defense, 2010. https://archive.defense.gov/home/features/2010/0410_cybersec/lynn-article1.aspx.

Ma, Alexandra. “Parsons Green: Details on London Underground Bomb.” Insider, September 18, 2017. https://www.businessinsider.com/parsons-green-london-underground-attack-bomb-not-worked-properly-experts-2017-9?r=AU&IR=T.

MacFarquhar, Neil. “A Powerful Russian Weapon: The Spread of False Stories.” The New York Times, August 28, 2016, sec. World. https://www.nytimes.com/2016/08/29/world/europe/russia-sweden-disinformation.html.

MacIntyre, Ben. Double Cross: The True Story of the D-Day Spies. London: Bloomsbury Publishing, 2012. https://www.bloomsbury.com/uk/double-cross-9781408819906/.

———. Operation Mincemeat. London: Bloomsbury Publishing, 2010. https://www.bloomsbury.com/uk/operation-mincemeat-9781408809211/.

Mahr, Krista. “How Cape Town Was Saved from Running out of Water.” The Guardian, May 4, 2018. http://www.theguardian.com/world/2018/may/04/back-from-the-brink-how-cape-town-cracked-its-water-crisis.

Manjikian, Mary. Cybersecurity Ethics: An Introduction. Oxon: Routledge, 2017. https://www.routledge.com/Cybersecurity-Ethics-An-Introduction/Manjikian/p/book/9781138717527.

Mansfield-Devine, Steve. “Estonia: What Doesn’t Kill You Makes You Stronger.” Network Security, 2012. https://doi.org/10.1016/S1353-4858(12)70065-X.

Mareš, Miroslav, and Veronika Netolická. “Georgia 2008: Conflict Dynamics in the Cyber Domain.” Strategic Analysis 44, no. 3 (May 3, 2020): 224–40. https://doi.org/10.1080/09700161.2020.1778278.

Marshall, Michael. “The Tipping Points at the Heart of the Climate Crisis.” The Guardian, September 19, 2020. http://www.theguardian.com/science/2020/sep/19/the-tipping-points-at-the-heart-of-the-climate-crisis.

Masters, Jonathan. “Russia, Trump, and the 2016 U.S. Election.” Council on Foreign Relations, February 26, 2018. https://www.cfr.org/backgrounder/russia-trump-and-2016-us-election.

Matz, S. C., M. Kosinski, G. Nave, and D. J. Stillwell. “Psychological Targeting as an Effective Approach to Digital Mass Persuasion.” Proceedings of the National Academy of Sciences 114, no. 48 (November 28, 2017): 12714. https://doi.org/10.1073/pnas.1710966114.

Mazzetti, Mark. The Way of the Knife: The CIA, A Secret Army, and a War at the Ends of the Earth. New York: Penguin Random House, 2014. https://www.penguinrandomhouse.com/books/311164/the-way-of-the-knife-by-mark-mazzetti/.

McCarthy, Kieren. “Cybersecurity Giant FireEye Says It Was Hacked by Govt-Backed Spies Who Stole Its Crown-Jewels Hacking Tools.” The Register, December 9, 2020. https://www.theregister.com/2020/12/09/fireeye_tools_hacked/.

McCarthy, Tom. “Zuckerberg Says Facebook Won’t Be ‘Arbiters of Truth’ after Trump Threat.” The Guardian, May 29, 2020. http://www.theguardian.com/technology/2020/may/28/zuckerberg-facebook-police-online-speech-trump.

McDonald, Avril. “Defining the War on Terror and Status of Detainees: Comments on the Presentation of Judge George Aldrich.” Humanitäres Völkerrecht, no. 1. ICRC. https://www.icrc.org/en/doc/resources/documents/article/other/5p8avk.htm.

McDonald, Matt. “Securitization and the Construction of Security.” European Journal of International Relations 14, no. 4 (December 2008): 563–87. https://doi.org/10.1177/1354066108097553.

McGinnis, Shelley E. “Dallas, Roswell, Area 51: A Social History of American ‘Conspiracy Tourism.’” University of North Carolina, 2010.

McGowan, Michael. “How the Wellness and Influencer Crowd Serve Conspiracies to the Masses.” The Guardian, February 24, 2021. http://www.theguardian.com/australia-news/2021/feb/25/how-the-wellness-and-influencer-crowd-served-conspiracies-to-the-masses.

McGuinness, Damien. “How a Cyber Attack Transformed Estonia.” BBC News, April 27, 2017, sec. Europe. https://www.bbc.com/news/39655415.

McHugh, Kevin J., Lihong Jing, Sean Y. Severt, Mache Cruz, Morteza Sarmadi, Hapuarachchige Surangi N. Jayawardena, Collin F. Perkinson, et al. “Biocompatible Near-Infrared Quantum Dots Delivered to the Skin by Microneedle Patches Record Vaccination.” Science Translational Medicine 11, no. 523 (December 18, 2019). https://doi.org/10.1126/scitranslmed.aay7162.

McInnes, Colin, and Simon Rushton. “HIV/AIDS and Securitization Theory.” European Journal of International Relations 19, no. 1 (March 1, 2013): 115–38. https://doi.org/10.1177/1354066111425258.

Bibliography

298

Médecins Sans Frontières. “Médecins Sans Frontières (MSF) International.” Médecins Sans Frontières (MSF) International, 2021. https://www.msf.org/.

Meleshevich, Kirill, and Bret Schafer. “Online Information Laundering: The Role of Social Media.” Alliance For Securing Democracy (blog), January 9, 2018. https://securingdemocracy.gmfus.org/online-information-laundering-the-role-of-social-media/.

Merutka, Craig E. “Use of the Armed Forces for Domestic Law Enforcement.” Fort Belvoir, VA: Defense Technical Information Center, March 1, 2013. https://doi.org/10.21236/ADA589451.

Meyer, David S. “Framing National Security: Elite Public Discourse on Nuclear Weapons during the Cold War.” Political Communication 12, no. 2 (April 1, 1995): 173–92. https://doi.org/10.1080/10584609.1995.9963064.

MI5. “Klaus Fuchs.” Security Service, 2020. https://www.mi5.gov.uk/klaus-fuchs.

Microsoft Security Response Center. “MSRC | Our Mission.” Microsoft, 2021. https://www.microsoft.com/en-us/msrc/mission.

Miller, Daniel. “The Weakest Link: The Role of Human Error in Cybersecurity.” Secure World Expo, January 14, 2018. https://www.secureworldexpo.com/industry-news/weakest-link-human-error-in-cybersecurity.

Mills, Elinor. “Shared Code Indicates Flame, Stuxnet Creators Worked Together.” CNET, June 11, 2012. https://www.cnet.com/news/shared-code-indicates-flame-stuxnet-creators-worked-together/.

Milman, Oliver. “James Hansen, Father of Climate Change Awareness, Calls Paris Talks ‘a Fraud.’” The Guardian, December 12, 2015. http://www.theguardian.com/environment/2015/dec/12/james-hansen-climate-change-paris-talks-fraud.

———. “Paris Climate Deal a ‘Turning Point’ in Global Warming Fight, Obama Says.” The Guardian, October 5, 2016. http://www.theguardian.com/environment/2016/oct/05/obama-paris-climate-deal-ratification.

Ministry of Defence. “Defence in a Competitive Age.” The APS Group, 2021. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/971859/_CP_411__-_Defence_in_a_competitive_age.pdf.

Ministry of Defence, Government Offices of Sweden. “Main Elements of the Government Bill Totalförsvaret 2021–2025: Total Defence 2021–2025,” October 15, 2020. https://www.government.se/government-policy/defence/objectives-for-swedish-total-defence-2021-2025---government-bill-totalforsvaret-20212025/.

Miranda, Cristina. “Buying an Internet-Connected Smart Toy? Read This.” Consumer Information, December 6, 2018. https://www.consumer.ftc.gov/blog/2018/12/buying-internet-connected-smart-toy-read.

Mudrinich, Erik M. “Cyber 3.0: The Department of Defense Strategy for Operating in Cyberspace and the Attribution Problem.” Air Force Law Review 68 (2012): 167–206.

Mueller, Robert S. United States of America v. Internet Research Agency LLC, No. 1:18-cr-00032-DLF (United States District Court for the District of Columbia 2018).

Munro, Kate. “Deconstructing Flame: The Limitations of Traditional Defences.” Computer Fraud & Security 2012 (October 1, 2012): 8–11. https://doi.org/10.1016/S1361-3723(12)70102-1.

Murray, Stephanie. “Putin: I Wanted Trump to Win the Election.” Politico, July 16, 2018. https://www.politico.com/story/2018/07/16/putin-trump-win-election-2016-722486.

Myers, Norman. “Environment and Security.” Foreign Policy, no. 74 (1989): 23–41. https://doi.org/10.2307/1148850.

Myndigheten för samhällsskydd och beredskap. “Countering Information Influence Activities: A Handbook for Communicators.” Myndigheten för samhällsskydd och beredskap, 2018. https://www.msb.se/sv/publikationer/countering-information-influence-activities--a-handbook-for-communicators/.

———. “MSB i Almedalen: Frukostseminarium Om Informationspåverkan (MSB in Almedalen: Breakfast Seminar on Information Influence).” Myndigheten för samhällsskydd och beredskap, January 26, 2019. https://web.archive.org/web/20190126040104/https://www.msb.se/sv/Om-MSB/Nyheter-och-press/Nyheter/Nyheter-fran-MSB/MSB-i-Almedalen-Frukostseminarium-om-informationspaverkan/.

———. “Om Krisen Eller Kriget Kommer (If Crisis or War Comes).” Myndigheten för samhällsskydd och beredskap, 2018. https://perma.cc/4PHH-ZCQG.

Nabe, Cedric. “Impact of COVID-19 on Cybersecurity.” Deloitte Switzerland, December 15, 2020. https://www2.deloitte.com/ch/en/pages/risk/articles/impact-covid-cybersecurity.html.

Nagaraja, Shishir, and Ross Anderson. “The Snooping Dragon: Social-Malware Surveillance of the Tibetan Movement.” Technical Report. Cambridge: University of Cambridge, 2009. https://web.archive.org/web/20130122081850/http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-746.pdf.

Nagasako, Tomoko. “Global Disinformation Campaigns and Legal Challenges.” International Cybersecurity Law Review 1, no. 1 (October 1, 2020): 125–36. https://doi.org/10.1365/s43439-020-00010-7.

Nakashima, Ellen. “US and Israel Credited with Shooting Iran down in Flame.” The Sydney Morning Herald, June 20, 2012. https://www.smh.com.au/technology/us-and-israel-credited-with-shooting-iran-down-in-flame-20120620-20ok3.html.

———. “Why the Sony Hack Drew an Unprecedented U.S. Response against North Korea.” Washington Post, January 15, 2015, sec. National Security. https://www.washingtonpost.com/world/national-security/why-the-sony-hack-drew-an-unprecedented-us-response-against-north-korea/2015/01/14/679185d4-9a63-11e4-96cc-e858eba91ced_story.html.

Nakashima, Ellen, and Joby Warrick. “Stuxnet Was Work of U.S. and Israeli Experts, Officials Say.” Washington Post, June 2, 2012, sec. National Security. https://www.washingtonpost.com/world/national-security/stuxnet-was-work-of-us-and-israeli-experts-officials-say/2012/06/01/gJQAlnEy6U_story.html.

National Cyber Security Centre. “All Products & Services.” National Cyber Security Centre, 2021. https://www.ncsc.gov.uk/section/products-services/all-products-services-categories.

———. “Cyber Aware.” Cyber Aware, 2021. https://www.ncsc.gov.uk/cyberaware/home.

———. “Mitigating Malware and Ransomware Attacks.” National Cyber Security Centre, March 30, 2021. https://www.ncsc.gov.uk/guidance/mitigating-malware-and-ransomware-attacks.

———. “Most Hacked Passwords Revealed as UK Cyber Survey Exposes Gaps in Online Security.” National Cyber Security Centre, April 21, 2019. https://www.ncsc.gov.uk/news/most-hacked-passwords-revealed-as-uk-cyber-survey-exposes-gaps-in-online-security.

———. “NCSC CAF Guidance.” National Cyber Security Centre, 2021. https://www.ncsc.gov.uk/section/private-sector-cni/products-services.

———. “Reporting a Cyber Security Incident.” National Cyber Security Centre, 2021. https://report.ncsc.gov.uk/.

———. “The National Cyber Security Centre.” The National Cyber Security Centre, 2021. https://www.ncsc.gov.uk/.

NATO CCDCOE. “CCDCOE to Host the Tallinn Manual 3.0 Process.” CCDCOE, 2020. https://ccdcoe.org/news/2020/ccdcoe-to-host-the-tallinn-manual-3-0-process/.

———. “Strategic Importance of, and Dependence on, Undersea Cables.” Tallinn: NATO Cooperative Cyber Defence Centre of Excellence, 2019.

NATO News. “NATO Opens New Centre of Excellence on Cyber Defence.” NATO News, 2008. https://www.nato.int/docu/update/2008/05-may/e0514a.html.

N.C. “Conspiracy Theories Are Dangerous—Here’s How to Crush Them.” The Economist, August 12, 2019. https://www.economist.com/open-future/2019/08/12/conspiracy-theories-are-dangerous-heres-how-to-crush-them.

Neal, David. “Insider Reveals Details of Google Hacks.” iTnews, April 21, 2010. https://www.itnews.com.au/news/insider-reveals-details-of-google-hacks-172661.

Newman, Lily Hay. “DarkSide Ransomware Hit Colonial Pipeline—and Created an Unholy Mess.” Wired. Accessed July 3, 2021. https://www.wired.com/story/darkside-ransomware-colonial-pipeline-response/.

———. “How Leaked NSA Spy Tool ‘EternalBlue’ Became a Hacker Favorite.” Wired, March 7, 2018. https://www.wired.com/story/eternalblue-leaked-nsa-spy-tool-hacked-world/.

———. “What Really Caused Facebook’s 500M-User Data Leak?” Wired, April 6, 2021. https://www.wired.com/story/facebook-data-leak-500-million-users-phone-numbers/.

Nicas, Jack, and Katie Benner. “F.B.I. Asks Apple to Help Unlock Two IPhones.” The New York Times, January 7, 2020, sec. Technology. https://www.nytimes.com/2020/01/07/technology/apple-fbi-iphone-encryption.html.

Nish, Adrian, Saher Naumaan, and James Muir. “Enduring Cyber Threats and Emerging Challenges to the Financial Sector.” Carnegie Endowment for International Peace, 2020. https://carnegieendowment.org/2020/11/18/enduring-cyber-threats-and-emerging-challenges-to-financial-sector-pub-83239.

Nolan, Tom. “Militarization Has Fostered a Policing Culture That Sets up Protesters as ‘the Enemy.’” The Conversation, June 2, 2020. http://theconversation.com/militarization-has-fostered-a-policing-culture-that-sets-up-protesters-as-the-enemy-139727.

Norton. “What Is A Botnet?” Norton, 2019. https://us.norton.com/internetsecurity-malware-what-is-a-botnet.html.

———. “What Is the Difference Between Black, White and Grey Hat Hackers?” Norton, 2017. https://us.norton.com/internetsecurity-emerging-threats-what-is-the-difference-between-black-white-and-grey-hat-hackers.html.

Nossiter, Adam, David E. Sanger, and Nicole Perlroth. “Hackers Came, but the French Were Prepared.” The New York Times, May 10, 2017, sec. World. https://www.nytimes.com/2017/05/09/world/europe/hackers-came-but-the-french-were-prepared.html.

Nowakowski, Lauren. “The Moon Landing: Fake Movie Set or the Real Deal?” The Psychology of Extraordinary Beliefs, March 29, 2019. https://u.osu.edu/vanzandt/2019/03/.

NPR Staff. “Cyber Archaeologists Rebuild Destroyed Artifacts.” NPR.org, June 1, 2015. https://www.npr.org/sections/alltechconsidered/2015/06/01/411138497/cyber-archaeologists-rebuild-destroyed-artifacts.

Nye Jr., Joseph S. “Deterrence and Dissuasion in Cyberspace.” International Security 41, no. 3 (January 2017): 44–71. https://doi.org/10.1162/ISEC_a_00266.

Nyman, Joanna. “Securitization Theory.” In Critical Approaches to Security: An Introduction to Theories and Methods, edited by Laura J. Shepherd, 51–62. Oxon: Routledge, 2013.

Obama, Barack. The Atlantic Daily: Our Interview With Barack Obama. Interview by Caroline Mimbs Nyce, November 17, 2020. https://www.theatlantic.com/newsletters/archive/2020/11/why-obama-fears-for-our-democracy/617121/.

O’Connor, Courteney J. “Cyber Counterintelligence: Concept, Actors, and Implications for Security.” In Cyber Security and Policy: A Substantive Dialogue, edited by Andrew Colarik, Julian Jang-Jaccard, and Anuradha Mathrani, 109–28. Palmerston North: Massey University Press, 2017.

OECD. “Combatting COVID-19 Disinformation on Online Platforms,” July 3, 2020. https://www.oecd.org/coronavirus/policy-responses/combatting-covid-19-disinformation-on-online-platforms-d854ec48/.

———. “Trust in Government.” OECD, 2021. https://www.oecd.org/gov/trust-in-government.htm.

Office of the Director of National Intelligence. “Assessing Russian Activities and Intentions in Recent US Elections.” National Intelligence Council, January 6, 2017. https://www.dni.gov/files/documents/ICA_2017_01.pdf.

Omnicore. “Facebook by the Numbers (2021): Stats, Demographics & Fun Facts,” January 6, 2021. http://www.omnicoreagency.com/facebook-statistics/.

———. “Twitter by the Numbers (2021): Stats, Demographics & Fun Facts,” January 6, 2021. http://www.omnicoreagency.com/twitter-statistics/.

Osborn, Kris. “Shock and Awe: How the First Gulf War Shook the World.” The National Interest. The Center for the National Interest, October 31, 2020. https://nationalinterest.org/blog/reboot/shock-and-awe-how-first-gulf-war-shook-world-171659.

Osborne, Charlie. “Ancient Moonlight Maze Backdoor Remerges as Modern APT.” ZDNet, April 3, 2017. https://www.zdnet.com/article/ancient-moonlight-maze-backdoor-remerges-as-modern-apt/.

O’Sullivan, Donie. “Facebook: Russian Trolls Are Back. And They’re Here to Meddle with 2020.” CNN, October 22, 2019. https://www.cnn.com/2019/10/21/tech/russia-instagram-accounts-2020-election/index.html.

Otis, Cindy. True or False: A CIA Analyst’s Guide to Spotting Fake News. New York: Feiwel and Friends, 2020.

Ottis, Rain. “Analysis of the 2007 Cyber Attacks against Estonia from the Information Warfare Perspective.” Tallinn: NATO Cooperative Cyber Defence Centre of Excellence, 2008. https://ccdcoe.org/library/publications/analysis-of-the-2007-cyber-attacks-against-estonia-from-the-information-warfare-perspective/.

Oxford Dictionary. “Discourse.” Lexico Dictionaries, 2021. https://www.lexico.com/definition/discourse.

Pangalos, Philip. “Russia Denies Claims of Involvement in Malaysia Airlines MH17 Crash.” euronews, June 19, 2019. https://www.euronews.com/2019/06/19/russia-denies-claims-of-involvement-in-malaysia-airlines-mh17-crash.

Pankov, Nikolay. “Moonlight Maze: Lessons from History.” Kaspersky Daily, April 3, 2017. https://www.kaspersky.com.au/blog/moonlight-maze-the-lessons/6713/.

Parker, Martin. “Human Science as Conspiracy Theory.” The Sociological Review 48, no. 2 (2000): 191–207.

Patomäki, Heikki. “Absenting the Absence of Future Dangers and Structural Transformations in Securitization Theory.” International Relations 29, no. 1 (2015): 128–36.

Paul, T. V., James J. Wirtz, and Michel Fortmann. Balance of Power: Theory and Practice in the 21st Century. Stanford University Press, 2004.

Pawlyk, Oriana, and Phillip Swarts. “25 Years Later: What We Learned from Desert Storm.” Air Force Times, August 7, 2017. https://www.airforcetimes.com/news/your-air-force/2016/01/21/25-years-later-what-we-learned-from-desert-storm/.

Paz, Christian. “All of Trump’s Lies About the Coronavirus.” The Atlantic, November 2, 2020. https://www.theatlantic.com/politics/archive/2020/11/trumps-lies-about-coronavirus/608647/.

Pearce, Fred. “As Climate Change Worsens, A Cascade of Tipping Points Looms.” Yale E360, December 5, 2019. https://e360.yale.edu/features/as-climate-changes-worsens-a-cascade-of-tipping-points-looms.

Peoples, Columba, and Nick Vaughan-Williams. “Introduction.” In Critical Security Studies: An Introduction, 2nd ed., 1–12. New York: Routledge, 2015.

Perley, Sara. “Arcana Imperii: Roman Political Intelligence, Counterintelligence, and Covert Action in the Mid-Republic.” The Australian National University, 2016.

Perlroth, Nicole. “Russian Hackers Who Targeted Clinton Appear to Attack France’s Macron.” The New York Times, April 25, 2017, sec. World. https://www.nytimes.com/2017/04/24/world/europe/macron-russian-hacking.html.

Pew Research Center. “Facts on News Media and Political Polarization in Sweden.” Pew Research Center’s Global Attitudes Project (blog), May 17, 2018. https://www.pewresearch.org/global/fact-sheet/news-media-and-political-attitudes-in-sweden/.

———. “Public Trust in Government: 1958-2021.” Pew Research Center - U.S. Politics & Policy (blog), May 17, 2021. https://www.pewresearch.org/politics/2021/05/17/public-trust-in-government-1958-2021/.

Pilkington, Ed. “Jeremy Hammond: FBI Directed My Attacks on Foreign Government Sites.” The Guardian, November 15, 2013. http://www.theguardian.com/world/2013/nov/15/jeremy-hammond-fbi-directed-attacks-foreign-government.

Polyakova, Alina. “The Kremlin’s Plot Against Democracy.” Foreign Affairs, 2020. https://www.foreignaffairs.com/articles/russian-federation/2020-08-11/putin-kremlins-plot-against-democracy.

Polyakova, Alina, and Daniel Fried. “Democratic Defense against Disinformation 2.0.” Report. Atlantic Council, June 9, 2019. https://apo.org.au/node/242041.

Popescul, Daniela, and Mircea Georgescu. “Internet of Things - Some Ethical Issues.” The USV Annals of Economics and Public Administration 13, no. 2 (2013): 7.

Posetti, Julie, and Alice Matthews. “A Short Guide to the History of ‘Fake News’ and Disinformation.” International Center for Journalists, 2018. https://www.icfj.org/sites/default/files/2018-07/A%20Short%20Guide%20to%20History%20of%20Fake%20News%20and%20Disinformation_ICFJ%20Final.pdf.

Prochko, Veronika. “The International Legal View of Espionage.” E-International Relations (blog), March 30, 2018. https://www.e-ir.info/2018/03/30/the-international-legal-view-of-espionage/.

Prunckun, Hank. Counterintelligence Theory and Practice. 2nd ed. Lanham: Rowman & Littlefield Publishers, 2019. https://rowman.com/ISBN/9781786606884/Counterintelligence-Theory-and-Practice-Second-Edition.

PsyMetrics. “The OCEAN Profile.” PsyMetrics, 2013. https://www.psymetricsworld.com/ocean.html.

Putnam, Patrick. “Script Kiddie: Unskilled Amateur or Dangerous Hackers?” United States Cybersecurity Magazine, September 14, 2018. https://www.uscybersecurity.net/script-kiddie/.

Radsan, A. John. “The Unresolved Equation of Espionage and International Law.” Michigan Journal of International Law 28, no. 3 (2007): 595–623.

Ragan, Steve. “Raising Awareness Quickly: The EBay Data Breach.” CSO Online, May 21, 2014. https://www.csoonline.com/article/2157782/security-awareness-raising-awareness-quickly-the-ebay-database-compromise.html.

Ranker. “22 Celebs Who Are Saving the Environment.” Ranker, May 31, 2019. https://www.ranker.com/list/celebrity-environmentalists/celebrity-lists.

Redmond, Paul J. “The Challenges of Counterintelligence.” In Intelligence: The Secret World of Spies, edited by Loch K. Johnson and James J. Wirtz, 3rd ed., 295–306. New York: Oxford University Press, 2011.

Reed, Alastair. “Counter-Terrorism Strategic Communications: Back to the Future, Lessons from Past and Present.” The International Centre for Counter-Terrorism, July 4, 2017. https://icct.nl/publication/counter-terrorism-strategic-communications-back-to-the-future-lessons-from-past-and-present/.

———. “IS Propaganda: Should We Counter the Narrative?” The International Centre for Counter-Terrorism, March 17, 2017. https://icct.nl/publication/is-propaganda-should-we-counter-the-narrative/.

Reserve Bank of Australia. “The Global Financial Crisis | Explainer | Education.” Reserve Bank of Australia, April 17, 2019. https://www.rba.gov.au/education/resources/explainers/the-global-financial-crisis.html.

Reus-Smit, Christian. “Constructivism.” In Theories of International Relations, 5th ed., 217–40. New York: Palgrave Macmillan, 2013.

Reuters. “Australian State Violated Human Rights in COVID Lockdown-Report.” Reuters, December 17, 2020. https://www.reuters.com/article/health-coronavirus-australia-towers-idUSKBN28R0EC.

Reuters in Berlin. “NSA Tapped German Chancellery for Decades, WikiLeaks Claims.” The Guardian, July 8, 2015. http://www.theguardian.com/us-news/2015/jul/08/nsa-tapped-german-chancellery-decades-wikileaks-claims-merkel.

Reuters Staff. “Factbox: ‘Fake News’ Laws around the World.” Reuters, April 2, 2019. https://www.reuters.com/article/us-singapore-politics-fakenews-factbox-idUSKCN1RE0XN.

———. “False Claim: 5G Networks Are Making People Sick, Not Coronavirus.” Reuters, March 16, 2020. https://www.reuters.com/article/uk-factcheck-coronavirus-5g-idUSKBN2133TI.

———. “JPMorgan Hack Exposed Data of 83 Million, among Biggest Breaches in History.” Reuters, October 2, 2014. https://www.reuters.com/article/us-jpmorgan-cybersecurity-idUSKCN0HR23T20141002.

Rid, Thomas. Active Measures: The Secret History of Disinformation and Political Warfare. London: Profile Books, 2020.

———. Cyber War Will Not Take Place. London: Hurst, 2013.

Rid, Thomas, and Ben Buchanan. “Hacking Democracy.” SAIS Review of International Affairs 38, no. 1 (2018): 3–16. https://doi.org/10.1353/sais.2018.0001.

Riedel, Bruce. “30 Years after Our ‘Endless Wars’ in the Middle East Began, Still No End in Sight.” Brookings (blog), July 27, 2020. https://www.brookings.edu/blog/order-from-chaos/2020/07/27/30-years-after-our-endless-wars-in-the-middle-east-began-still-no-end-in-sight/.

Rietjens, Sebastiaan. “Unraveling Disinformation: The Case of Malaysia Airlines Flight MH17.” The International Journal of Intelligence, Security, and Public Affairs 21, no. 3 (September 2, 2019): 195–218. https://doi.org/10.1080/23800992.2019.1695666.

Riley, K. Jack, and Aaron C. Davenport. “How to Reform Military Gear Transfers to Police.” RAND Corporation, July 13, 2020. https://www.rand.org/blog/2020/07/how-to-reform-military-gear-transfers-to-police.html.

Robarge, David. “‘Cunning Passages, Contrived Corridors’: Wandering in the Angletonian Wilderness.” Studies in Intelligence 53, no. 4 (2009): 49–61.

———. “Moles, Defectors, and Deceptions: James Angleton and CIA Counterintelligence.” Journal of Intelligence History 3, no. 2 (December 2003): 21–49. https://doi.org/10.1080/16161262.2003.10555085.

Romero, Simon. “Fauci Pushes Back against Trump for Misrepresenting His Stance on Masks.” The New York Times, October 1, 2020, sec. World. https://www.nytimes.com/2020/10/01/world/fauci-pushes-back-against-trump-for-misrepresenting-his-stance-on-masks.html.

Romerstein, Herbert. “Disinformation as a KGB Weapon in the Cold War.” Journal of Intelligence History 1, no. 1 (June 2001): 54–67. https://doi.org/10.1080/16161262.2001.10555046.

Roth, Yoel, and Ashita Achuthan. “Building Rules in Public: Our Approach to Synthetic & Manipulated Media.” Twitter, February 4, 2020. https://blog.twitter.com/en_us/topics/company/2020/new-approach-to-synthetic-and-manipulated-media.html.

Roth, Yoel, and Nick Pickles. “Updating Our Approach to Misleading Information.” Twitter, May 11, 2020. https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.html.

Rumsfeld, Donald. “Defense.Gov Transcript: DoD News Briefing - Secretary Rumsfeld and Gen. Myers.” U.S. Department of Defense, February 12, 2002. https://web.archive.org/web/20190428122842/https://archive.defense.gov/Transcripts/Transcript.aspx?TranscriptID=2636.

Runyon, Jennifer. “60 Minutes Investigates Chinese Cyber-Espionage in Wind Industry.” Renewable Energy World (blog), January 18, 2016. https://www.renewableenergyworld.com/wind-power/60-minutes-investigates-chinese-cyber-espionage-in-wind-industry/.

Ruser, Nathan. “Strava Released Their Global Heatmap.” Twitter, January 28, 2018. https://twitter.com/Nrg8000/status/957318498102865920.

Rushe, Dominic. “JP Morgan Chase Reveals Massive Data Breach Affecting 76m Households.” The Guardian, October 3, 2014. http://www.theguardian.com/business/2014/oct/02/jp-morgan-76m-households-affected-data-breach.

Russell, Jon. “Fitness App Strava Exposes the Location of Military Bases.” TechCrunch (blog), January 29, 2018. https://social.techcrunch.com/2018/01/28/strava-exposes-military-bases/.

Rust, John, and Susan Golombok. Modern Psychometrics: The Science of Psychological Assessment. New York: Routledge, 2009. https://www.routledge.com/Modern-Psychometrics-The-Science-of-Psychological-Assessment/Rust-Golombok/p/book/9780415442152.

Salter, Mark B., and Can E. Mutlu. “Securitisation and Diego Garcia.” Review of International Studies 39, no. 4 (October 2013): 815–34. http://dx.doi.org/10.1017/S0260210512000587.

Sanger, David E. “Obama Order Sped Up Wave of Cyberattacks Against Iran.” The New York Times, June 1, 2012, sec. World. https://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html.

Sanger, David E., and Nicole Perlroth. “FireEye, a Top Cybersecurity Firm, Says It Was Hacked by a Nation-State.” The New York Times, December 8, 2020, sec. Technology. https://www.nytimes.com/2020/12/08/technology/fireeye-hacked-russians.html.

Satter, Raphael, Jeff Donn, and Chad Day. “Inside Story: How Russians Hacked the Democrats’ Emails.” AP NEWS, November 4, 2017. https://apnews.com/article/hillary-clinton-phishing-moscow-russia-only-on-ap-dea73efc01594839957c3c9a6c962b8a.

Schadlow, Nadia, and Brayden Helwig. “Protecting Undersea Cables Must Be Made a National Security Priority.” Defense News, July 1, 2020. https://www.defensenews.com/opinion/commentary/2020/07/01/protecting-undersea-cables-must-be-made-a-national-security-priority/.

Schmitt, Michael N., ed. Tallinn Manual on the International Law Applicable to Cyber Warfare. Tallinn Manual 1. Cambridge: Cambridge University Press, 2013. https://www.cambridge.org/au/academic/subjects/law/humanitarian-law/tallinn-manual-international-law-applicable-cyber-warfare.

Schmitt, Michael N., and Liis Vihul. “Proxy Wars in Cyberspace: The Evolving International Law of Attribution Policy.” Fletcher Security Review 1, no. 2 (2014): 53–72.

———, eds. Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations. Tallinn Manual 2. Cambridge: Cambridge University Press, 2017. https://www.cambridge.org/au/academic/subjects/law/humanitarian-law/tallinn-manual-20-international-law-applicable-cyber-operations-2nd-edition.

Schmitt, Olivier. “When Are Strategic Narratives Effective? The Shaping of Political Discourse through the Interaction between Political Myths and Strategic Narratives.” Contemporary Security Policy 39, no. 4 (October 2, 2018): 487–511. https://doi.org/10.1080/13523260.2018.1448925.

Schmitz, Aldo Antonio, and Francisco Jose Castilhos Karam. “The Spin Doctors of News Sources.” Brazilian Journalism Research 9, no. 1 (2013): 96–113.

Schroeder, Ralph. “Even in Sweden?: Misinformation and Elections in the New Media Landscape.” Nordic Journal of Media Studies 2, no. 1 (June 7, 2020): 97–108. https://doi.org/10.2478/njms-2020-0009.

Scott. “In Race for Coronavirus Vaccine, Russia Turns to Disinformation.” Politico, November 19, 2020. https://www.politico.eu/article/covid-vaccine-disinformation-russia/.

Scott, Cory. “Protecting Our Members.” LinkedIn, May 18, 2016. https://blog.linkedin.com/2016/05/18/protecting-our-members.

Scott, Mark, and Carlo Martuscelli. “Meet Sputnik V, Russia’s Trash-Talking Coronavirus Vaccine.” Politico, April 1, 2021. https://www.politico.eu/article/russia-coronavirus-vaccine-disinformation-sputnik/.

Searle, John R. “A Taxonomy of Illocutionary Acts.” Minnesota Studies in the Philosophy of Science 7 (1975): 344–69.

Seebeck, Lesley. “Why the Fifth Domain Is Different.” The Strategist, September 4, 2019. https://www.aspistrategist.org.au/why-the-fifth-domain-is-different/.

SelfKey. “Facebook’s Data Breaches - A Timeline.” SelfKey (blog), March 11, 2020. https://selfkey.org/facebooks-data-breaches-a-timeline/.

Selgelid, Michael J., and Christian Enemark. “Infectious Diseases, Security and Ethics: The Case of HIV/AIDS.” Bioethics 22, no. 9 (2008): 457–65. https://doi.org/10.1111/j.1467-8519.2008.00696.x.

Senate Select Committee on Intelligence. “Report of the Select Committee on Intelligence United States Senate on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election Volume 2: Russia’s Use of Social Media With Additional Views.” Russian Active Measures Campaigns and Interference in the 2016 U.S. Elections. Washington, D.C.: United States Senate, 2019. https://www.intelligence.senate.gov/sites/default/files/documents/Report_Volume2.pdf.

———. “S. Hrg. 115-105 Russian Intervention in European Elections.” U.S. Senate Select Committee on Intelligence, June 28, 2017. https://www.intelligence.senate.gov/hearings/open-hearing-russian-intervention-european-elections#.

Sengupta, Kim. “The Final WhatsApp Message Sent by Westminster Attacker Khalid Masood Has Been Released by Security Agencies.” The Independent, April 28, 2017. https://www.independent.co.uk/news/uk/crime/last-message-left-westminster-attacker-khalid-masood-uncovered-security-agencies-a7706561.html.

Shackelford, Scott. “The Battle against Disinformation Is Global.” The Conversation. Accessed May 22, 2021. http://theconversation.com/the-battle-against-disinformation-is-global-129212.

Shearer, Elisa. “Social Media Outpaces Print Newspapers in the U.S. as a News Source.” Pew Research Center (blog), December 10, 2018. https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/.

Sheldon, Rose Mary. “A Guide to Intelligence from Antiquity to Rome.” The Intelligencer 18, no. 3 (2011): 49–51.

———. “Caesar, Intelligence, and Ancient Britain.” International Journal of Intelligence and CounterIntelligence 15, no. 1 (January 2002): 77–100. https://doi.org/10.1080/088506002753412892.

Sienkiewicz, Matt. “Open BUK: Digital Labor, Media Investigation and the Downing of MH17.” Critical Studies in Media Communication 32, no. 3 (May 27, 2015): 208–23. https://doi.org/10.1080/15295036.2015.1050427.

Sigholm, J., and M. Bang. “Towards Offensive Cyber Counterintelligence: Adopting a Target-Centric View on Advanced Persistent Threats.” In 2013 European Intelligence and Security Informatics Conference, 166–71. IEEE Computer Society, 2013. https://doi.org/10.1109/EISIC.2013.37.

Sjöstedt, Roxanna. “Exploring the Construction of Threats: The Securitization of HIV/AIDS in Russia.” Security Dialogue 39, no. 1 (March 1, 2008): 7–29. https://doi.org/10.1177/0967010607086821.

Smith, Gavin, and Valeska Bloch. “The Hack Back: The Legality of Retaliatory Hacking.” Allens: Insight, October 17, 2018. https://www.allens.com.au/insights-news/insights/2018/10/pulse-the-hack-back-the-legality-of-retaliatory-hacking/.

Smith, Hugh. “The Use of Armed Forces in Law Enforcement: Legal, Constitutional and Political Issues in Australia.” Australian Journal of Political Science 33, no. 2 (July 1998): 219–33. https://doi.org/10.1080/10361149850624.

Smith, Michael. The Secrets of Station X: How the Bletchley Park Codebreakers Helped Win the War. Hull: Biteback Publishing, 2011.

Smyczek, Peter J. “Regulating the Battlefield of the Future: The Legal Limitations on the Conduct of Psychological Operations (PSYOP) under Public International Law.” Air Force Law Review 57 (2005): 209–40.

Sottek, T. C., and Janus Kopfstein. “Everything You Need to Know about PRISM.” The Verge, July 17, 2013. https://www.theverge.com/2013/7/17/4517480/nsa-spying-prism-surveillance-cheat-sheet.

Spadafora, Anthony. “Hackers Use Covid-19 ‘special Offers’ to Spread Malware.” TechRadar, March 23, 2020. https://www.techradar.com/au/news/hackers-use-covid-19-special-offers-to-spread-malware.

———. “Mirai Botnet Returns to Target IoT Devices.” TechRadar, March 19, 2019. https://www.techradar.com/news/mirai-botnet-returns-to-target-iot-devices.

Spencer, Saranac Hale. “Conspiracy Theory Misinterprets Goals of Gates Foundation.” FactCheck.Org (blog), April 14, 2020. https://www.factcheck.org/2020/04/conspiracy-theory-misinterprets-goals-of-gates-foundation/.

Stanley, Jason. How Propaganda Works. Princeton: Princeton University Press, 2015.

Steffek, Jens. “Discursive Legitimation in Environmental Governance.” Forest Policy and Economics 11 (2009): 313–18. https://doi.org/10.1016/j.forpol.2009.04.003.

Steiner, Eva. “Legislating against Terrorism: The French Approach.” Lecture. London: Chatham House, 2005. https://www.chathamhouse.org/sites/default/files/public/Research/International%20Law/ilp081205.doc.

Courteney O’Connor – PhD Thesis 2021

Stelzenmüller, Constanze. “The Impact of Russian Interference on Germany’s 2017 Elections.” Brookings (blog), June 28, 2017. https://www.brookings.edu/testimonies/the-impact-of-russian-interference-on-germanys-2017-elections/.

Stiennon, Richard. “A Short History of Cyber Warfare.” In Cyber Warfare: A Multidisciplinary Analysis, edited by James A. Green, 7–32. Oxon: Routledge, 2015.

Stritzel, Holger. Security in Translation: Securitization Theory and the Localization of Threat. New Security Challenges. New York: Palgrave Macmillan, 2014.

Stritzel, Holger, and Sean C Chang. “Securitization and Counter-Securitization in Afghanistan.” Security Dialogue 46, no. 6 (December 1, 2015): 548–67. https://doi.org/10.1177/0967010615588725.

Strömbäck, Jesper, Yariv Tsfati, Hajo Boomgaarden, Alyt Damstra, Elina Lindgren, Rens Vliegenthart, and Torun Lindholm. “News Media Trust and Its Impact on Media Use: Toward a Framework for Future Research.” Annals of the International Communication Association 44, no. 2 (April 2, 2020): 139–56. https://doi.org/10.1080/23808985.2020.1755338.

Suciu, Peter. “More Americans Are Getting Their News From Social Media.” Forbes, October 11, 2019. https://www.forbes.com/sites/petersuciu/2019/10/11/more-americans-are-getting-their-news-from-social-media/.

Sunawar, Lubna. “Regional Security Complex Theory: A Case Study of Afghanistan-Testing the Alliance.” Journal of Security and Strategic Analyses 4, no. 2 (Winter 2018): 53–78.

Swami, Viren, Jakob Pietschnig, Ulrich S. Tran, Ingo W. Nader, Stefan Stieger, and Martin Voracek. “Lunar Lies: The Impact of Informational Framing and Individual Differences in Shaping Conspiracist Beliefs About the Moon Landings.” Applied Cognitive Psychology 27, no. 1 (2013): 71–80. https://doi.org/10.1002/acp.2873.

Swift, John. “The Soviet-American Arms Race.” History Today, 2009. https://www.historytoday.com/archive/soviet-american-arms-race.

Sydow, Björn von. “Resilience: Planning for Sweden’s ‘Total Defence.’” NATO Review, April 4, 2018. https://www.nato.int/docu/review/articles/2018/04/04/resilience-planning-for-swedens-total-defence/index.html.

Talbot, David. “How Technology Failed in Iraq.” MIT Technology Review, November 1, 2004. https://www.technologyreview.com/2004/11/01/232152/how-technology-failed-in-iraq/.

Taureck, Rita. “Securitization Theory and Securitization Studies.” Journal of International Relations and Development 9, no. 1 (March 2006): 53–61. https://doi.org/10.1057/palgrave.jird.1800072.

Taylor, Margaret L. “Combating Disinformation and Foreign Interference in Democracies: Lessons from Europe.” Brookings (blog), July 31, 2019. https://www.brookings.edu/blog/techtank/2019/07/31/combating-disinformation-and-foreign-interference-in-democracies-lessons-from-europe/.

Taylor, Stan A. “Definitions and Theories of Counterintelligence.” In Essentials of Strategic Intelligence, edited by Loch K. Johnson, 285–302. Praeger Security International Textbook. Santa Barbara: Praeger, 2015.

Telford, Taylor. “1,464 Western Australian Government Officials Used ‘Password123’ as Their Password. Cool, Cool.” Washington Post, August 23, 2018. https://www.washingtonpost.com/technology/2018/08/22/western-australian-government-officials-used-password-their-password-cool-cool/.

Tenove, Chris, Jordan Buffie, Spencer McKay, and David Moscrop. “Digital Threats to Democratic Elections: How Foreign Actors Use Digital Techniques to Undermine Democracy.” Centre for the Study of Democratic Institutions, 2018. https://www.ssrn.com/abstract=3235819.

Bibliography

The Economist. “The World’s Most Valuable Resource Is No Longer Oil, but Data.” The Economist, May 6, 2017. https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data.

The Guardian Staff. “World Leaders Express Outrage.” The Guardian, September 11, 2001. http://www.theguardian.com/world/2001/sep/11/september11.usa10.

The Irish Times. “London Terror Attack: Who Was Khuram Shazad Butt?” Accessed May 15, 2021. https://www.irishtimes.com/news/world/uk/london-terror-attack-who-was-khuram-shazad-butt-1.3109174.

The Kurdish Project. “Where Is Kurdistan?” The Kurdish Project (blog), 2021. https://thekurdishproject.org/kurdistan-map/.

The Local. “Russia Spreading Fake News and Forged Docs in Sweden: Report.” The Local Sweden (blog), January 7, 2017. https://www.thelocal.se/20170107/swedish-think-tank-details-russian-disinformation-in-new-study/.

———. “Sweden to Create New Authority Tasked with Countering Disinformation.” The Local Sweden, January 15, 2018. https://www.thelocal.se/20180115/sweden-to-create-new-authority-tasked-with-countering-disinformation/.

Thomas, Elise. “Covid-19 Disinformation Campaigns Shift Focus to Vaccines.” The Strategist, August 23, 2020. https://www.aspistrategist.org.au/covid-19-disinformation-campaigns-shift-focus-to-vaccines/.

Thomas, Elise, and Albert Zhang. “ID2020, Bill Gates and the Mark of the Beast: How Covid-19 Catalyses Existing Online Conspiracy Movements.” Australian Strategic Policy Institute International Cyber Policy Centre, 2020.

Thorsen, Einar. “Journalistic Objectivity Redefined? Wikinews and the Neutral Point of View.” New Media & Society 10, no. 6 (December 1, 2008): 935–54. https://doi.org/10.1177/1461444808096252.

Thorson, Emily. “Belief Echoes: The Persistent Effects of Corrected Misinformation.” Political Communication 33, no. 3 (July 2, 2016): 460–80. https://doi.org/10.1080/10584609.2015.1102187.

Timberg, Craig. “The Real Story of How the Internet Became so Vulnerable.” Washington Post (blog), 2015. http://www.washingtonpost.com/sf/business/2015/05/30/net-of-insecurity-part-1/.

TOR. “The Tor Project | Privacy & Freedom Online.” TOR, 2021. https://torproject.org.

Toucas, Boris. “Exploring the Information-Laundering Ecosystem: The Russian Case.” Center for Strategic & International Studies, August 31, 2017. https://www.csis.org/analysis/exploring-information-laundering-ecosystem-russian-case.

———. “The Macron Leaks: The Defeat of Informational Warfare.” Center for Strategic & International Studies, May 30, 2017. https://www.csis.org/analysis/macron-leaks-defeat-informational-warfare.

Traynor, Ian. “Angela Merkel: NSA Spying on Allies Is Not On.” The Guardian, October 24, 2013. http://www.theguardian.com/world/2013/oct/24/angela-merkel-nsa-spying-allies-not-on.

Trevors, Gregory J., Krista R. Muis, Reinhard Pekrun, Gale M. Sinatra, and Philip H. Winne. “Identity and Epistemic Emotions During Knowledge Revision: A Potential Account for the Backfire Effect.” Discourse Processes 53, no. 5–6 (July 3, 2016): 339–70. https://doi.org/10.1080/0163853X.2015.1136507.

Trump, Donald. “Nevada Is Turning out to Be a Cesspool of Fake Votes. @mschlapp & @AdamLaxalt Are Finding Things That, When Released, Will Be Absolutely Shocking!” Twitter, November 10, 2020. https://web.archive.org/web/20201109195514/https://twitter.com/realdonaldtrump/status/1325889532840062976.

Turner Johnson, James. “Thinking Morally about War in the Middle Ages and Today.” In Ethics, Nationalism, and Just War: Medieval and Contemporary Perspectives, edited by Henrik Syse and Gregory M. Reichberg, 3–10. Washington, D.C.: The Catholic University of America Press, 2007.

UK Public General Acts. “Data Protection Act 2018.” Legislation.gov.uk. Queen’s Printer of Acts of Parliament, 2018. https://www.legislation.gov.uk/ukpga/2018/12/contents/enacted.

UNAIDS. “UNAIDS | United Nations.” United Nations AIDS, 2021. https://www.unaids.org/en/Homepage.

UNFPA. “UNFPA - United Nations Population Fund.” United Nations Population Fund, 2021. https://www.unfpa.org/.

UNICEF. “UNICEF | United Nations.” United Nations Children’s Fund, 2021. https://www.unicef.org/.

United Nations. “Counter-Terrorism Committee.” United Nations Security Council Counter-Terrorism Committee, 2021. https://www.un.org/sc/ctc/.

———. “History of the Question of Palestine.” Question of Palestine (blog), March 19, 2020. https://www.un.org/unispal/history/.

———. “Paris Agreement - Status of Ratification.” United Nations Climate Change, 2016. https://unfccc.int/process/the-paris-agreement/status-of-ratification.

United Nations Institute for Disarmament Research. “The Cyber Index: International Security Trends and Realities.” Geneva: United Nations Institute for Disarmament Research, 2013. https://www.unidir.org/files/publications/pdfs/cyber-index-2013-en-463.pdf.

———. “UNIDIR Cyber Policy Portal.” UNIDIR Cyber Policy Portal, 2021. https://unidir.org/cpp/en/.

United Nations Office for Disarmament Affairs. “Group of Governmental Experts - Cyberspace.” United Nations Office for Disarmament Affairs, 2021. https://www.un.org/disarmament/group-of-governmental-experts/.

United Nations Security Council. “20 Years after Adopting Landmark Anti-Terrorism Resolution, Security Council Resolves to Strengthen International Response against Heinous Acts, in Presidential Statement.” United Nations Meetings Coverage and Press Releases, January 12, 2021. https://www.un.org/press/en/2021/sc14408.doc.htm.

United States Department of Defense. “Department of Defense Strategy for Operating in Cyberspace.” United States Department of Defense, 2011. https://csrc.nist.gov/CSRC/media/Projects/ISPAB/documents/DOD-Strategy-for-Operating-in-Cyberspace.pdf.

United States Department of Homeland Security. “Counterterrorism Laws & Regulations.” Department of Homeland Security, June 4, 2009. https://www.dhs.gov/counterterrorism-laws-regulations.

United States Department of Justice. “The USA PATRIOT Act: Preserving Life and Liberty.” United States Department of Justice, 2001. https://www.justice.gov/archive/ll/highlights.htm.

United States Government. “National Infrastructure Protection Plan.” Cybersecurity and Infrastructure Security Agency, 2021. https://www.cisa.gov/national-infrastructure-protection-plan.

U.S. Department of Homeland Security. “Homeland Security Digital Library.” Homeland Security Digital Library. Accessed March 20, 2021. https://www.hsdl.org/?search&exact=Senate+Report+on+Russian+Interference+in+2016+Elections&searchfield=series&collection=limited&submitted=Search&advanced=1&release=0&so=date.


Uscinski, Joseph E., Karen Douglas, and Stephan Lewandowsky. “Climate Change Conspiracy Theories.” In Oxford Research Encyclopedia of Climate Science. Oxford: Oxford University Press, 2017. https://doi.org/10.1093/acrefore/9780190228620.013.328.

Vaara, Eero. “Struggles over Legitimacy in the Eurozone Crisis: Discursive Legitimation Strategies and Their Ideological Underpinnings.” Discourse & Society 25, no. 4 (July 1, 2014): 500–518. https://doi.org/10.1177/0957926514536962.

Van Wynsberghe, Rob, and Samia Khan. “Redefining Case Study.” International Journal of Qualitative Methods 6, no. 2 (2007): 80–94.

Vardangalos, George. “Cyber-Intelligence and Cyber Counterintelligence (CCI): General Definitions and Principles.” Center for International Strategic Analyses, 2016. https://kedisa.gr/wp-content/uploads/2016/07/Cyber-intelligence-and-Cyber-Counterintelligence-CCI-General-definitions-and-principles-2.pdf.

Varouhakis, Miron. “An Institution-Level Theoretical Approach for Counterintelligence.” International Journal of Intelligence and CounterIntelligence 24, no. 3 (September 2011): 494–509. https://doi.org/10.1080/08850607.2011.568293.

Vazquez, Maegan, and Paul Murphy. “Trump Isn’t the Only Republican Who Gave Cambridge Analytica Big Bucks.” CNN, March 21, 2018. https://www.cnn.com/2018/03/20/politics/cambridge-analytica-republican-ties/index.html.

Van der Pijl, Kees. “La Disciplina Del Miedo. La Securitización de Las Relaciones Internacionales Tras El 11-S Desde Una Perspectiva Histórica” [The Discipline of Fear: The Securitisation of International Relations after 9/11 from a Historical Perspective]. Relaciones Internacionales 31 (2016): 153–87.

Victor, Daniel, Lew Serviss, and Azi Paybarah. “In His Own Words, Trump on the Coronavirus and Masks.” The New York Times, October 2, 2020, sec. U.S. https://www.nytimes.com/2020/10/02/us/politics/donald-trump-masks.html.

Vieira, Marco Antonio. “The Securitization of the HIV/AIDS Epidemic as a Norm: A Contribution to Constructivist Scholarship on the Emergence and Diffusion of International Norms.” Brazilian Political Science Review 2, no. SE (December 2007).

Violino, Bob. “Want to Help Stop Cyber Security Breaches? Focus on Human Error.” ZDNet, January 23, 2019. https://www.zdnet.com/article/want-to-help-stop-cyber-security-breaches-focus-on-human-error/.

Visit Estonia. “E-Estonia | Why Estonia?” Visitestonia.com, 2021. https://www.visitestonia.com/en/why-estonia/estonia-is-a-digital-society.

Voice of America. “China Denies Any Role in ‘GhostNet’ Computer Hacking.” Voice of America, November 2, 2009. https://www.voanews.com/archive/china-denies-any-role-ghostnet-computer-hacking.

Vosoughi, Soroush, Deb Roy, and Sinan Aral. “The Spread of True and False News Online.” Science 359, no. 6380 (March 9, 2018): 1146. https://doi.org/10.1126/science.aap9559.

Vreese, Claes H. de, and Matthijs Elenbaas. “Spin and Political Publicity: Effects on News Coverage and Public Opinion.” In Political Communication in Postmodern Democracy: Challenging the Primacy of Politics, edited by Kees Brants and Katrin Voltmer, 75–91. London: Palgrave Macmillan UK, 2011. https://doi.org/10.1057/9780230294783_5.

Waever, Ole. “Securitisation: Taking Stock of a Research Programme in Security Studies,” 2003. https://docplayer.net/62037981-Securitisation-taking-stock-of-a-research-programme-in-security-studies.html.

Walker, Kent. “Four Steps We’re Taking Today to Fight Terrorism Online.” Google, June 18, 2017. https://blog.google/around-the-globe/google-europe/four-steps-were-taking-today-fight-online-terror/.

Walkowicz, Lucianne. “The Source of UFO Fascination.” Issues in Science and Technology 35, no. 4 (2019): 12–14.


Walsh, Nick Paton, Jo Shelley, Eduardo Duwe, and William Bonnett. “Bolsonaro Calls Coronavirus a ‘little Flu.’ Inside Brazil’s Hospitals, Doctors Know the Horrifying Reality.” CNN, May 25, 2020. https://edition.cnn.com/2020/05/23/americas/brazil-coronavirus-hospitals-intl/index.html.

Walton, D.N. “Poisoning the Well.” Argumentation 20, no. 3 (2006): 273–307. https://doi.org/10.1007/s10503-006-9013-z.

Waltz, Kenneth N. Man, the State, and War: A Theoretical Analysis. 2nd ed. New York: Columbia University Press, 2001.

Watkin, Amy-Louise, and Joe Whittaker. “Evolution of Terrorists’ Use of the Internet.” Counter Terror Business, October 20, 2017. https://counterterrorbusiness.com/features/evolution-terrorists%E2%80%99-use-internet.

Watson, Amy. “Usage of Social Media as a News Source Worldwide 2020.” Statista, June 23, 2020. https://www.statista.com/statistics/718019/social-media-news-source/.

Watson, Scott D. “‘Framing’ the Copenhagen School: Integrating the Literature on Threat Construction.” Millennium: Journal of International Studies 40, no. 2 (January 2012): 279–301. https://doi.org/10.1177/0305829811425889.

Weber, Rolf H. “Internet of Things: Privacy Issues Revisited.” Computer Law & Security Review 31 (2015): 618–27. https://doi.org/10.1016/j.clsr.2015.07.002.

Webman, Esther. The Global Impact of the Protocols of the Elders of Zion: A Century-Old Myth. Florence, United States: Taylor & Francis Group, 2011.

Welch, Craig. “Why Cape Town Is Running Out of Water, and the Cities That Are Next.” National Geographic, March 5, 2018. https://www.nationalgeographic.com/science/article/cape-town-running-out-of-water-drought-taps-shutoff-other-cities.

West, Darrell M. “How to Combat Fake News and Disinformation.” Brookings, December 18, 2017. https://www.brookings.edu/research/how-to-combat-fake-news-and-disinformation/.

Westbrook, Tom. “Joint Strike Fighter Plans Stolen in Australia Cyber Attack.” Reuters, November 10, 2017. https://www.reuters.com/article/us-australia-defence-cyber-idUSKBN1CH00F.

Wetsman, Nicole. “Woman Dies during a Ransomware Attack on a German Hospital.” The Verge, September 17, 2020. https://www.theverge.com/2020/9/17/21443851/death-ransomware-attack-hospital-germany-cybersecurity.

Wettering, Frederick L. “Counterintelligence: The Broken Triad.” International Journal of Intelligence and CounterIntelligence 13, no. 3 (October 2000): 265–300. https://doi.org/10.1080/08850600050140607.

Williams, Martyn. “Inside the Russian Hack of Yahoo: How They Did It.” CSO Online, October 4, 2017. https://www.csoonline.com/article/3180762/inside-the-russian-hack-of-yahoo-how-they-did-it.html.

Williams, Michael C. “Securitization as Political Theory: The Politics of the Extraordinary.” International Relations 29, no. 1 (2015): 114–20.

———. “Words, Images, Enemies: Securitization and International Politics.” International Studies Quarterly 47, no. 4 (December 2003): 511–31. https://doi.org/10.1046/j.0020-8833.2003.00277.x.

Wilson, Mark. “Here Is Facebook’s First Serious Attempt To Fight Fake News.” Fast Company, December 15, 2016. https://www.fastcompany.com/3066630/here-is-facebooks-first-serious-attempt-to-fight-fake-news.

Winder, Davey. “Lockheed Martin, SpaceX And Tesla Caught In Cyber Attack Crossfire.” Forbes, March 2, 2020. https://www.forbes.com/sites/daveywinder/2020/03/02/lockheed-martin-spacex-and-tesla-caught-in-cyber-attack-crossfire/.


Windrem, Robert. “Pentagon and Hackers in ‘Cyberwar.’” ZDNet, March 5, 1999. https://www.zdnet.com/article/pentagon-and-hackers-in-cyberwar/.

Winstead, Nicholas. “Hack-Back: Toward A Legal Framework For Cyber Self-Defense.” American University, June 26, 2020. https://www.american.edu/sis/centers/security-technology/hack-back-toward-a-legal-framework-for-cyber-self-defense.cfm.

Wiseman, Geoffrey, and Paul Sharp. “Diplomacy.” In An Introduction to International Relations, edited by Anthony Burke, Jim George, and Richard Devetak, 2nd ed., 256–67. Cambridge: Cambridge University Press, 2011. https://doi.org/10.1017/CBO9781139196598.022.

World Bank Group. “Financial Sector’s Cybersecurity: Regulations and Supervision.” Washington, D.C.: The World Bank Group, 2018.

World Health Organization. “World Health Organization | United Nations.” United Nations World Health Organization, 2021. https://www.who.int.

Wray, Christopher. “Responding Effectively to the Chinese Economic Espionage Threat.” Speech. Federal Bureau of Investigation, February 6, 2020. https://www.fbi.gov/news/speeches/responding-effectively-to-the-chinese-economic-espionage-threat.

Wyn Jones, Richard. Security, Strategy, and Critical Theory. Boulder: Lynne Rienner Publishers, 1999.

Yourish, Karen, Larry Buchanan, and Derek Watkins. “A Timeline Showing the Full Scale of Russia’s Unprecedented Interference in the 2016 Election, and Its Aftermath.” The New York Times, September 20, 2018. https://www.nytimes.com/interactive/2018/09/20/us/politics/russia-trump-election-timeline.html.

Zetter, Kim. “An Unprecedented Look at Stuxnet, the World’s First Digital Weapon.” Wired, November 3, 2014. https://web.archive.org/web/20151207190234/https://www.wired.com/2014/11/countdown-to-zero-day-stuxnet/.

———. “Four Indicted in Massive JP Morgan Chase Hack.” Wired, October 10, 2015. https://www.wired.com/2015/11/four-indicted-in-massive-jp-morgan-chase-hack/.

———. “Meet ‘Flame,’ The Massive Spy Malware Infiltrating Iranian Computers.” Wired, May 28, 2012. https://web.archive.org/web/20141014203532/http://www.wired.com/2012/05/flame/all.

———. “Report: Google Hackers Stole Source Code of Global Password System.” Wired, April 20, 2010. https://www.wired.com/2010/04/google-hackers/.

Zimmermann, Kim Ann, and Jesse Emspak. “Internet History Timeline: ARPANET to the World Wide Web.” Live Science, June 27, 2017. https://www.livescience.com/20727-internet-history.html.

Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books, 2018.

Zurcher, Anthony. “Hillary Clinton Emails - What’s It All About?” BBC News, November 6, 2016, sec. US & Canada. https://www.bbc.com/news/world-us-canada-31806907.