


Identity crisis: Global challenges of identity protection in a networked world1

Alison Knight*, Steve Saxby

Institute for the Law and the Web, University of Southampton, UK

Keywords:

Digital identity

Identity crime

Identity management

Big data

Profiling

Mobile identity

Automated identification

Identity surveillance

Biometrics

1 An abbreviated version of this paper was presented at the 2013 IAITL Legal Conference Series (8th International Conference on Legal, Security and Privacy Issues in IT Law) held in Bangkok, Thailand and published in the conference proceedings as: Saxby, Steve and Knight, Alison (2013) 'Identity crisis: global challenges of identity protection in a networked world'. In: Kierkegaard, Sylvia (ed.) Law & Practice: Critical Analysis and Legal Reasoning. Copenhagen, DK: International Association of IT Lawyers, p. 13. This paper follows in the footsteps of 'FIDIS' (The Future of Identity in the Information Society), an EU-funded network of excellence from 2004 to 2009 that provided an integrated inter-disciplinary approach to an identity research agenda, by offering a broad overview for possible reflection on how future legal research around digital identity might progress.

* Corresponding author. Institute for the Law and the Web, School of Law, Faculty of Business and Law, The University, Highfield, Southampton, SO17 1BJ, UK.

E-mail addresses: [email protected] (A. Knight), [email protected] (S. Saxby).

http://dx.doi.org/10.1016/j.clsr.2014.09.001
0267-3649/© 2014 Steve Saxby & Alison Knight. Published by Elsevier Ltd. All rights reserved.

Abstract

Modern identity is valuable, multi-functional and complex. Today we typically manage multiple versions of self, made visible in digital trails distributed widely across offline and online spaces. Yet, technology-mediated identity leads us into crisis. Enduring accessibility to greater and growing personal details online, alongside increases in both computing power and data linkage techniques, fuel fears of identity exploitation. Will it be stolen? Who controls it? Are others aggregating or analysing our identities to infer new data about us without our knowledge or consent? New challenges present themselves globally around these fears, as manifested by concerns over massive online data breaches and automated identification technologies, which also highlight the conundrum faced by governments about how to safeguard individuals' interests on the Web while striking a fair balance with wider public interests. This paper reflects upon some of these problems as part of the inter-disciplinary, transatlantic 'SuperIdentity' project investigating links between cyber and real-world identifiers. To meet the crisis, we explore the relationship between identity and digitisation from the perspective of policy and law. We conclude that traditional models of identity protection need supplementing with new ways of thinking, including pioneering 'technical-legal' initiatives that are sensitive to the different risks that threaten our digital identity integrity. Only by re-conceiving identity dynamically, to appreciate the increasing capabilities for connectivity between different aspects of our identity across the cyber and the physical domains, will policy and law be able to keep up with and address the challenges that lie ahead in our progressively networked world.

© 2014 Steve Saxby & Alison Knight. Published by Elsevier Ltd. All rights reserved.



1. What does ‘identity’ mean today?

Identity refers to 'who I am'. Extrapolating beyond this basic definition presents us with a choice of intellectual routes. At a descriptive level, identity is a physical fact inseparable from and often unique to us (such as our fingerprints and DNA profile). Surpassing our biological self, it incorporates biographical characteristics that others attribute to us or that we acquire during our lives (such as our birth name, date of birth, nationality or employment history). Combinations of identity attributes may also include personality traits, our interests, our habits and other behavioural characteristics.

Identity becomes even more intricate when we drill down further. It is a concept shaped by context and circumstance. Who I am now depends upon a host of factors. For example, how we present ourselves in our daily lives depends on where we are, who we are with and what roles we are fulfilling. Indeed, we have wider scope nowadays than previous generations for constructing alternate identities for different purposes. Consider someone who is a parent, a lawyer, a diabetic, a keen gardener and a Manchester United football supporter. While the projection of each aspect of 'self' may correctly come to the fore under particular circumstances, a person can be all of these personae simultaneously and much more. Each one displays only a part of what makes up an individual. Our self-presentation ('impression management') may also change over time as we respond to the norms and expectations of others, and as we mature in 'our-selves'. The advance of the online world enhances this trend.

Identity conception is also pragmatic: it is that which represents and makes us identifiable within a set of people. Social identity, for example, typically refers to identification in a group or belonging to a class. In this context, your race, gender and sexual orientation are constituents of identity, albeit that they may not be obvious to a bystander or necessarily clear-cut. Closely associated is the phrase 'cultural identity', suggesting association with a specific ethnicity or other tradition-based social grouping, such as religious affiliation, which may be manifest through a particular choice of lifestyle.

Law, by contrast, customarily conceives of citizens as having only one, panoptic identity to which legal rights and responsibilities affix.2 Nevertheless, even fairly stable civic 'identifiers' (those proxies by which we are recognisable), such as full name, nationality and gender, can alter over a lifetime, with changes typically subject to legal recognition and ratification. In combination with the fact that many of the formal identifiers used by a State may be associated with more than one individual within it, the conception of a fixed, single identity becomes a legal fiction, together with the notion of a fool-proof guarantee of its assurance. For example, although possession of physical tokens (such as birth certificates) may suggest that a recognised authority has provided them to us, they do not connect intrinsically to us. The same is true for knowledge chosen by us (such as passwords). Identifiers connect to individuals with very high certainty only in the case of certain biometrics (such as DNA). Such obstacles are surmountable at an everyday practical level because the means of formal identification required by law vary depending upon the perspective of the one doing the identifying, alongside the degree of certainty required in the ultimate identity decision.

2 Based on the locus of interests of the authors, this paper provides examples primarily from European law and regulation, in particular from the UK. However, it is hoped that the reader will appreciate that there are important commonalities to the challenges faced by countries all around the globe that allow generalisation to a certain degree on these issues. For an international perspective, we recommend Sullivan, C. (2010). Digital Identity, an Emergent Legal Concept: The Role and Legal Nature of Digital Identity in Commercial Transactions. Adelaide, Australia: University of Adelaide Press.

Given that identity reveals itself in digital as well as physical contexts, an additional consideration in determining what identity means today is the impact of modern technology. Technology influences how we present ourselves and how others identify us. The importance of identification grew as electronic technologies replaced paper documents, generating personal data for storage and automatic processing into usable information. Copying, searching, storing and sharing personal details take even less time and expense since the arrival of the Internet and 'datification' (the ability to quantify all sorts of information into machine-readable data format).3

Therefore, the notion of identity is multi-faceted and evokes complex arrays of sub-themes. It distinguishes one individual from another, while also connoting qualities of sameness over time. Identity is a sum of different attributes by which we, and others, recognise our individuality and shared commonalities within sets of people. Furthermore, it combines multi-disciplinary perspectives from psychology, physiology and philosophy, amongst other academic areas. While theories of identity from different disciplines persevere, technological developments, hand in hand with economic and societal changes, affect perceptions of modern identity, its construction and presentation.

The purpose of this paper is to explore the changing nature of identity online and reflect upon some current issues of misuse giving rise to challenges worldwide of a legal and policy nature. This work is a component of an interdisciplinary project called 'SuperIdentity', which seeks to develop a model for evaluating and investigating elements of identity across both online and offline spaces, together with the direct and indirect linkages between them.4

The analysis develops in stages. The next section reflects upon the distributed nature of digital identity. Sections three and four explore identity misuse in its multiple forms, including the rising challenges posed by big data and profiling (where vast amounts of data are analysed extensively using complex algorithms). Bridging links between online and offline identities has potentially serious implications for individuals and raises concerns that need addressing as important matters of policy. These are discussed in sections five and six. The penultimate two sections reflect upon the shortcomings of existing legal models for protecting individuals against identity misuse and suggest principles on which to base an evolved legal and policy paradigm for tackling the identity crisis. We conclude with a discussion of the challenges ahead.

3 Cukier, K., & Mayer-Schoenberger, V. (2013). Big Data: A Revolution That Will Transform How We Live, Work, and Think. London, UK: John Murray, p. 15.

4 The Engineering and Physical Science Research Council (EPSRC) funds the SuperIdentity project under grant number EP/J004995/1. See 'SID: An Exploration of Super-Identity'. Retrieved 16 July 2014 from http://gow.epsrc.ac.uk/NGBOViewGrant.aspx?GrantRef=EP/J004995/1. See also the Website of the SuperIdentity Project. University of Southampton. Retrieved 21 September 2013 from http://www.southampton.ac.uk/superidentity/.

2. Digital identity and its fragmentation

At its most basic conceptualisation, digital identity suggests the sum of all data available about us. It also has wider connotations. We can express ourselves in more ways and to more people online than offline, including through social media, chat rooms, personal websites, discussion boards, blogs and virtual worlds. Alongside extensive opportunities to adopt group identities through common interests, cyberspace creates ideal conditions for generating an additional set of identities as individuals. Social categorisation (such as by age, gender, race and body shape) is much more difficult online.5 Our digital representation can additionally take many forms: text, photos, videos, avatars and our 'social map' (the data visualisation of our network of connections).

Thus, we can customise or construct anew multiple presentations of self online for multiple purposes and interact with people and organisations in many different ways via the exchange of identity data.6 To this end, a growing area of psychological research is concerned with how people manage their identities online and why they express their digital personae differently in certain online spaces.7 Of course, we can also express and leak personal data online without real awareness of who can see what about us. Even if one chooses not to create user profiles online, for example, others may still discuss us, 'tag' our image (link it to our name), or merely collect information about us, to which they may add their own personal interpretation and embellish as suits their wishes.

With increased sharing and accessibility of identity-related data, boundaries between the public and the private are more likely to break down online than offline. Moreover, partial or total control over our projected identities may be lost and false personae imposed upon us. Consequently, our 'digitally-extensible' selves emerge in fragments distributed across cyberspace. Taken together, isolated pieces of data moving across multiple contexts can give incomplete and disjointed informational perspectives of the people they partially represent and lead to ill-founded assumptions that our 'cyber-doubles' are comprehensive representations of our offline selves.

5 International Telecommunication Union. (2006). ITU Internet Report 2006: digital.life. Geneva, Switzerland. Retrieved 21 September 2013 from http://www.itu.int/osg/spu/publications/digitalife/, p. 93.

6 Floridi argues that we are informational entities and information communication technologies are "technologies of the self" (Floridi, L. (2013). The Ethics of Information. Oxford, UK: Oxford University Press, p. 221).

7 See, for example, Whitty, M., Doodson, J., Creese, S., & Hodges, D. (2014). 'Image Choice to Represent the Self in Different Online Environments'. In: Social Computing and Social Media. Cham, Switzerland: Springer International Publishing, p. 528.

Despite heightened concerns about privacy today, how securely we protect our digital identities depends not just upon active identity management techniques, but also on our levels of understanding around the potential risks on the Web. A willingness to divulge personal details is rarely matched by knowledge of what happens once such information is uploaded online, viz. a recognition of the impact that subsequent processing may have and how to action objections. The sheer volume of personal data accessible makes effective oversight impossible, and few of us have the ability and desire to adjust technical settings to minimise data disclosure, or the will to exercise the option to withdraw from the online world completely. The increasing pace of technological change (smart phones, tablets, apps, cloud storage, automatic image recognition, faster connectivity and new platforms) enhances the potential for confusion.

Distinctions between, on the one hand, our self-projection online and, on the other, the means of our online identification, also blur increasingly. Technological advances have made 'cyber-identifiers' (digital proxies for physical presence used to represent and establish our identities online) increasingly accessible. As well as being chosen by us, they can also be imposed by others, or derived and related to us through analysis. Examples include usernames, user-specified question responses, email addresses, PINs and online browsing patterns. Yet, validating the identity of users through the presentation of identity credentials is inherently more difficult online than offline. As the Web-overlaid Internet lacks an embedded identification system at the network layer to warrant the identity of those we interact with, we are forced to rely on application layer cyber-identifiers8 that can be manipulated. Murray describes increasing reliance on proxy data to identify ourselves as a "divorce of identity from the person", placing our disembodied identity "at unique risk", to which we now turn.9

3. Online threats to identity

Identity threats of the type that exist in the physical domain (the so-called 'real' world) increase in type and severity online due to the reach and availability of the Internet and the Web. Moreover, the sheer volume of personal data now available online, coupled with new technologies that make it easier to access and link together previously discrete elements of identity in new ways, has facilitated an unprecedented level of growth in personal knowledge capture. The consequences for individuals can be severe. It is critical, therefore, to consider the main types of identity misusers, as well as their methods and motivations.10

8 'Application layer identifiers' applied to the Web relate to software applications for end-users that implement communications methods across an Internet protocol (IP) computer network. An example is an email address.

9 Murray, A. (2011). Information Technology Law: The Law and Society. Oxford, UK: Oxford University Press, p. 463.

3.1. Criminals and the individual

Online identity fraud can take many forms. Often motivated by the prospect of financial gain, a typical means is the appropriation of one or more cyber-identifiers. This can be by deception, in an attempt to impersonate a victim, take over their accounts (including their existing privileges, together with any implied authority to create new privileges, drained from their identifier-associated reputation) and commit crimes under their persona. In extreme cases of identity fraud, complete exposure of the physical person to the criminal may follow, as "the bleed between the [cyberspace and the natural world] can offer opportunity for intelligence gathered in one to be used in attacks on the other".11

Loss attributable to identity crimes has risen exponentially in total value since the Millennium and is now a significant problem worldwide.12 Cyberspace facilitates identity crime by making it easier, quicker and less costly to identify and defraud victims than in the offline world. Several factors may explain why. Minimising physical interaction reduces criminals' operational risks. Online fraudsters can more easily persuade victims to disclose personal details directly or indirectly in exchange for perceived benefits.13 In addition, poorly chosen passwords remain widespread due to the convenience factor.14

10 These are not exhaustive categories. For example, identity misuse online can be carried out by other individuals founded upon a desire to manipulate and/or publicly humiliate an individual. Examples include the non-consensual revealing of a person's physical identity online, such as the unauthorised publication of material containing their private sexual activity. Reputational damage and other types of loss may follow, the magnitude of which should not be underappreciated by makers of policy and law.

11 Hodges, D., & Creese, S. (2013, October). 'Breaking the Arc: Risk control for Big Data'. In: Big Data (2013) IEEE International Conference. IEEE, p. 613.

12 For example in 2010, based on a review of several studies carried out in the preceding decade regarding the extent of identity crime, Smith estimated that losses for the UK, Australia, the US and Canada exceeded the equivalent of £1 billion annually per country (Smith, R. (2010). 'Identity theft and fraud'. In: The Handbook of Internet Crime. Abingdon, UK: Willan Publishing, p. 293). In a 2014 report, McAfee estimated that the total cost of cybercrime to the global economy was $445 billion per annum: McAfee Center for Strategic and International Studies. (2014). Net Losses: Estimating the Global Cost of Cybercrime. Santa Clara, US. Retrieved 16 July 2014 from http://csis.org/files/attachments/140609_rp_economic_impact_cybercrime_report.pdf, p. 6.

13 Whitty, M. (2013). The Scammers Persuasive Techniques Model: Development of a Stage Model to Explain the Online Dating Romance Scam. British Journal of Criminology, 53:4, p. 665.

14 Creese, S., Hodges, D., Jamison-Powell, S., & Whitty, M. (2013). 'Relationships between Password Choices, Perceptions of Risk and Security Expertise'. In: Marinos, L., & Askoxylakis, I. (Eds.). (2013). Human Aspects of Information Security, Privacy, and Trust. Berlin, Germany: Springer Berlin Heidelberg, p. 80.

3.2. Business and the individual

We are witnessing the growing monetisation of digital identities in a transactional sense, to the point that identity is touted as "the new money".15 While information has always been highly valued by commerce, data are now valuable commodities. Data about us have become critical assets for many businesses due to new streams of revenue unlocked from within through data analysis. Many of these uses can be beneficial or neutral in their effects upon individuals. E-commerce sites, like Amazon and Netflix, collect personal details about their customers to offer automated, personalised services. Cyberspace has also facilitated highly targeted advertising via third party agreements. Google, for example, sells data gathered about us so that user-suitable adverts may be displayed on its website.

Nonetheless, the effect on individuals can be prejudicial. While individuals can barter for knowledge or access to services via disclosure of aspects of their identity, thereby forsaking a degree of privacy, temptations lurk in the online world to reveal personal data, in exchange for immediate gratification, that we may be disinclined under normal circumstances to share with a stranger or organisation in a physical setting.16 Personal details and preferences can also form the basis of negative decisions taken about individuals. These include potential exclusion from access to services and offers, as well as other significant discrimination and privacy implications. Worse, the overall 'picture' upon which such decisions are based can be biased, dependent upon the commercial perspective at play. As Rosen points out, "In cyberspace… the version of you being constructed out there - from bits and pieces of stray data - is probably not who you think you are".17 Of note, while Facebook and Google brand themselves as identity service providers, they and other online service providers are primarily interested in individuating their customers so they can establish and refine a commercial relationship with them by creating personalised and customer-oriented services, rather than in strongly authenticating their customer records as factually accurate.

To understand the extent of these concerns, we need to consider how data gathering and processing have changed as the cost of data storage has dropped. Not only do we interact more digitally now, but we also do so in recordable ways that are more easily convertible into analysable form, attributable in real-time. From the trivial to the major, data records link us to the content we download from and upload to the Web, enabling others to follow our 'digital identity trail' and trade information related to it. This trail, in turn, is translatable into knowledge about who we know, what we are interested in, how we are feeling and what we are doing. Even our metadata (data about our data, automatically computer-generated as logs by the use of certain technology, such as device hardware details, operating system and browser version) can have economic value as revealing personal information. For example, from communications metadata it may be possible to work out the identity of the persons with whom a service user has communicated and by what means, the times of communication, as well as the place from which that communication took place and the frequency during a given period.

15 Crosby, S. HM Treasury. (2008). Challenges and opportunities in identity assurance. London, UK. Retrieved 16 July 2014 from http://www.statewatch.org/news/2008/mar/uk-nat-identity-crosby-report.pdf, p. 3.

16 The terms 'identity data' and 'personal data' are hereafter used interchangeably to mean data that can be related (directly or indirectly) to an individual, the revelation of which individually or cumulatively may be sufficient to enable real-world disclosure (and traceability) of that person.

17 Rosen, J. (2000). The Eroded Self. The New York Times Magazine. Retrieved 21 September 2013 from http://partners.nytimes.com/library/magazine/home/20000430mag-internetprivacy.html.
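Returning to communications metadata, its inferential power can be made concrete with a short sketch. What follows is our own illustration, not drawn from any provider's systems: every field name and record is invented, and a real analysis would run over millions of rows. Even so, a simple tally over a call log, touching no message content at all, surfaces a user's closest contact, probable home cell and daily rhythm:

from collections import Counter
from datetime import datetime

# Hypothetical communications metadata: who was contacted, when,
# and from which cell tower. No message content is involved.
call_log = [
    {"to": "alice",  "time": "2014-03-01T08:05", "cell": "HOME-CELL-17"},
    {"to": "bob",    "time": "2014-03-01T13:10", "cell": "OFFICE-CELL-03"},
    {"to": "alice",  "time": "2014-03-02T08:12", "cell": "HOME-CELL-17"},
    {"to": "clinic", "time": "2014-03-03T08:20", "cell": "HOME-CELL-17"},
    {"to": "alice",  "time": "2014-03-04T22:40", "cell": "HOME-CELL-17"},
]

contacts = Counter(r["to"] for r in call_log)    # strength of relationships
cells = Counter(r["cell"] for r in call_log)     # habitual locations
hours = Counter(datetime.fromisoformat(r["time"]).hour for r in call_log)

print("Closest contact:", contacts.most_common(1))   # [('alice', 3)]
print("Probable home cell:", cells.most_common(1))   # [('HOME-CELL-17', 4)]
print("Active hours:", sorted(hours))                # daily routine

Even this crude tally hints at sensitive facts: a morning call to a clinic, placed from the same tower as most other calls, says something about both health and home address.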

While many companies are interested in such data in their own right, data brokers act as intermediaries, collecting vast amounts to sell on to interested parties. Yet we are mostly oblivious to the data generated, observed and stored around our online activities. One example is an IP address (the network address of an internet-connecting device), which is easily detectable. Another is the installation by online providers (website operators or online advertising networks) of cookies upon a user's computer, with the ability to track browsing behaviour.

The capacity to re-use data in new ways is another key driver in data's monetisation. Indeed, the value generated by data re-use is often for reasons not envisioned at first collection. Economic value may similarly derive from aggregating personal data to infer and extract additional knowledge about individuals, with potentially severe privacy-related consequences. It may prove particularly damaging to individuals by connecting compartmentalised contexts of their lives and compromising their ability to separate these or to counteract innuendo. We are similarly vulnerable to being defined prejudicially when information is taken out of context, such that "a part of our identities will come to be mistaken for who we are".18 Conclusions drawn about us by companies may also be false. Hence, identities revealed in and through cyberspace can become distorted, outdated and increasingly detached from factual truth, but may nonetheless endure in online search results as if they are current and correct.

4. The identity challenges of profiling

Increases in computing power and new technologies are major factors in making identity exploitation profitable. The capacity for aggregating fragmented aspects of our identities into much more complete identity composites is beyond that previously imaginable. In particular, the identity risks for individuals flowing from the creation and retention of vast repositories of personal data, and the associated capabilities for automated profiling, exceed those risks outlined so far.

18 Rosen, J. (2000). The Eroded Self. The New York Times Magazine. Retrieved 21 September 2013 from http://partners.nytimes.com/library/magazine/home/20000430mag-internetprivacy.html.

4.1. Profiling techniques

In everyday language, a ‘profile’ refers to a collection of

characteristics applied to individuals. Thus, a typical com-

mercial purpose for creating a personal profile is to identify an

individual's purchasing interests. The term also finds use in

relation to criminal profiling, to draw up a list of characteris-

tics that might be associated with a targeted individual under

investigation.

Automated profiling, by comparison, is a technique that starts from the premise of a group profile, created from data collected from numerous individuals, that may be applicable to one or more of its members. A single source or cross-referencing from multiple sources can be used to glean the aggregated data, with the results either anonymised or relatable to individuals. The crucial commonality is using analytics to create a group profile through the 'mining' of data to discover correlated data patterns. Increasingly, the analytical combination of extremely large and disparate data sets (so-called 'big data') enables the identification of relational trends between individuals that distinguish them as a set from others. The final stage of profiling is the application of profiles to known individuals because they share one or more of a group profile's characteristics. Profilers then draw statistical inferences to attribute new information to such individuals, resulting in the creation of predictions about previously unknown elements of their identities upon which decisions may be based.
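To make the stages just described concrete, the sketch below (our own deliberately simplified illustration, with invented data, attribute names and cluster count; real profiling systems are far more elaborate) uses clustering for the 'mining' stage, cluster assignment for the application of a group profile, and a group average for the statistical inference step:

import numpy as np
from sklearn.cluster import KMeans

# Synthetic behavioural data: [sports-site visits/week, finance-site
# visits/week], plus a purchase attribute known for existing users only.
rng = np.random.default_rng(0)
group_a = rng.normal([20.0, 2.0], 1.5, size=(50, 2))   # sport-heavy browsing
group_b = rng.normal([3.0, 15.0], 1.5, size=(50, 2))   # finance-heavy browsing
X = np.vstack([group_a, group_b])
buys_kit = np.array([1] * 50 + [0] * 50)               # known purchases

# Stage 1: 'mine' the data for group profiles (correlated behaviour patterns).
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Stage 2: apply a group profile to an individual who shares its traits.
new_user = np.array([[18.0, 3.0]])        # browsing pattern is all we know
group = model.predict(new_user)[0]

# Stage 3: statistical inference - attribute the group tendency to the person.
rate = buys_kit[model.labels_ == group].mean()
print(f"Predicted P(buys sports equipment) = {rate:.2f}")

The final number is a statistical inference, not a fact about the individual, which is precisely the source of the accuracy risks discussed below.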

4.2. Consequences for the individual

While inferential predictions about individuals based on data correlations lie at the heart of the autonomic profiling method, the discovery of new information that profilers may infer through deductive reasoning over huge datasets can be highly privacy-intrusive. Profiling can create highly accurate, detailed and intimate insights into the multifaceted lives of people. Due to the widespread availability of personal data online, and the increasing possibilities of interlinking such data, profiles can potentially reveal preferences, interests, habits, or other sensitive attributes or associated facts of which even the individual concerned may lack awareness. For example, a study published in 2013 analysed what people 'like' on Facebook and found that "highly sensitive personal attributes" (including sexual orientation, substance addiction and even parental separation) are highly predictable.19

Moreover, the purposes to which knowledge acquired by profilers (based upon inferential data-led conclusions) may be put can give cause for concern. Profiles can form the basis of negative and unjustified decisions taken about individuals, such as price discrimination or denial of access to a service or product. Discriminatory decisions taken based upon conclusions drawn, such as by potential employers, insurers and health providers, may also perpetuate existing prejudices and stereotypes, in turn aggravating problems of social stratification and exclusion. Automated profiling can also lead to inferential predictions about an individual's future behaviour. In 2012, a study reported the ability to predict a person's approximate location up to 80 weeks into the future, at an accuracy level of above 80%.20 This possibility gives serious cause for concern when persons are subject to decisions and actions "in the name of the behaviour they are expected to have taken".21

19 Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110:15, p. 5802.

Furthermore, there are risks associated with the application of inaccurate profiles to individuals. Errors and bad data aside, since autonomic profiling uses statistical extrapolation, its output will de facto not always be factually correct. The possibility of profiling leading to erroneous conclusions regarding a person's identity can have serious ramifications for the profiled, including falsifying their identities and possibly leading to false accusations.

Perhaps most disturbingly from an Orwellian perspective, profiling is far less transparent than other personal data processing. The profiled may not be aware that it is taking place, nor of its extent. Even if aware, they may still not understand the source of the data that led to the creation of profiles about them or the logic of follow-on decisions, or be able to access their profiles to correct them. As Hildebrandt observes, we have "no effective means to know whether and when profiles are used or abused".22 Major risks associated with profiling activities are conceivable, therefore, in terms of shifting informational power structures that favour profilers heavily over the profiled. Hence, the creation of profiles and reliance upon them may have significant effects on the interests and rights of individuals.

5. Linking online and offline identities

So far, we have considered profiling processes directly linked to specific individuals. Individuals do not have to be identified (in a real-world sense) to be subject to profiling, however, as long as they are distinguishable from other individuals. An example is an individual who remains non-traceable by the profiler but is recognisable from online browsing patterns collected about them via cookies.

20 Sadilek, A., & Krumm, J. (2012). Far Out: Predicting Long-Term Human Mobility. Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012. Retrieved 16 July 2014 from http://www.cs.rochester.edu/~sadilek/publications/Sadilek-Krumm_Far-Out_AAAI-12.pdf. According to the abstract of the paper describing the findings of the study, the authors use "an efficient nonparametric method that extracts significant and robust patterns in location data, learns their associations with contextual features (such as day of week), and subsequently leverages this information to predict the most likely location at any given time in the future. The entire process is formulated in a principled way as an eigendecomposition problem. Evaluation on a massive dataset with more than 32,000 days worth of GPS data across 703 diverse subjects shows that our model predicts the correct location with high accuracy, even years into the future".

21 Bellanova, R., & Friedewald, M. (2011). Deliverable 1.1: Smart Surveillance: State of the Art. Retrieved 21 September 2013 from http://www.sapientproject.eu/docs/D1.1-State-of-the-Art-submitted-21-January-2012.pdf, p. 15.

22 Hildebrandt, M. (2007). 'Profiling and the Identity of the European citizen'. In: Hildebrandt, M., & Gutwirth, S., (Eds.). (2008). Profiling the European Citizen: Cross-disciplinary Perspectives. Dordrecht, Holland: Springer, p. 318.

Where profiles relate to unknown individuals, we might assume that the potential consequences for them are less severe. However, where large-scale aggregation and analysis of data is involved, profiles not directly linked to a physical identity now may become linkable later on. In other words, individuals are more easily identifiable offline because of large-scale aggregation and analysis of data from an array of digitised sources that span online and offline contexts. An example is cross-referencing a user's bankcard transactions with metadata related to their mobile phone and social networking sites (SNS) to deduce where a person has been, at what time, what they were doing and with whom.
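A toy sketch of that cross-referencing follows. The records, field names and matching window are all invented, but the join logic is the essence of the technique: two data sets that are individually unremarkable, once linked on timestamps, place a person at a known location performing a known activity:

from datetime import datetime, timedelta

# Two independently collected sources (invented records).
card_txns = [
    {"time": "2014-05-10T12:31", "merchant": "Cafe Roma"},
    {"time": "2014-05-10T19:02", "merchant": "Odeon Cinema"},
]
phone_pings = [
    {"time": "2014-05-10T12:30", "cell": "CITY-CENTRE-08"},
    {"time": "2014-05-10T19:05", "cell": "WESTQUAY-02"},
]

def parse(ts):
    return datetime.fromisoformat(ts)

# Join the sources on timestamp proximity: each match places the
# cardholder on a known cell, at a known time, doing a known thing.
for txn in card_txns:
    for ping in phone_pings:
        if abs(parse(txn["time"]) - parse(ping["time"])) <= timedelta(minutes=10):
            print(txn["time"], "- paid at", txn["merchant"],
                  "while phone was on cell", ping["cell"])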

Attempts to link aspects of online and offline identities may increase in the future, not just because of the "increasingly networked state of many people's lives",23 but also because of growing commercial demand for more accurate personal and behavioural profiles. For example, last year Facebook announced that it had struck a deal with data brokers to merge their data, linking real-world activities to those online.24 SNS designed specifically for smart phones ('MOSNs') may also play a large role in this trend. Through access to geo-location metadata about the phones of users and those of their connections, "[t]he combination of location information, unique identifiers of devices, and traditional leakage of other personally identifiable information now give third-party aggregation sites the capacity to build a comprehensive and dynamic portrait of MOSN users".25 The implication here is that the boundaries between online and offline activities will blur further. As offline activities are increasingly reflected in information about us online, "the quantity, fidelity, sensitivity, resulting value of this information is increasing".26

23 The Government Office for Science. (2013). Foresight Future Identities: Final Report. London, UK. Retrieved 21 September 2013 from http://www.bis.gov.uk/assets/foresight/docs/identity/13-523-future-identities-changing-identities-report.pdf, p. 24.

24 Sullivan, B. (2013). Facebook to team up with real-world data brokers to pick ads for you. NBC News. Retrieved 21 September 2013 from http://www.nbcnews.com/technology/facebook-team-real-world-data-brokers-pick-ads-you-1C8552879.

25 Castelluccia, C. (2012). 'Behavioural Tracking on the Internet: A Technical Perspective'. In: Gutwirth, S., Leenes, R., De Hert, P., & Poullet, Y., (Eds.). (2012). European Data Protection: In Good Health? London, UK: Springer, p. 26.

26 Van Kleek, M., Smith, D. A., Tinati, R., O'Hara, K., Hall, W., & Shadbolt, N. (2014). '7 billion home telescopes: observing social machines through personal data stores'. In: Proceedings of the companion publication of the 23rd International Conference on World Wide Web Companion. International World Wide Web Conferences Steering Committee, p. 915.

Of course, the linking of offline and online identities, via automated profiling or other mechanisms, to create the potential for more robust means of identification may also be intended to benefit the average citizen. To combat the risks associated with identity misuse, for example, the SuperIdentity project proposes that a complete sense of modern identity requires acknowledgement and understanding of both offline and online spaces in unison, and appreciation of how identification reliability can vary with context, as a means to help detect online deception. Specifically, the project analyses general correlations found in datasets between identity measures from the following four domains, together making a 'superidentity':27

� ‘Biometric’ encompassing physical (including behavioural)

traits, e.g. iris patterns or fingerprints;

� ‘Biographical’ related to facts about people, e.g. age or

name;

� ‘Personality’ related to those beliefs, values or traits that

direct and determine behaviour, e.g. howwe choose online

avatars; and,

� ‘Cybermetric’ characteristics related to our online identi-

ties, e.g. social media profiles or swipe patterns on smart

phones.

Mapping linkages between different types of identifiers in the domains so classified (in particular, those linking aspects of identity in the physical world to those in the digital world) feeds into a complex mathematical model created and visualised within the project.28 In turn, this model can facilitate clearer understanding of how to improve the accuracy of digital identification and authentication decisions. Upon populating the model with person-specific data during an investigation, for example, it can help spot potential contradictions in those data that might denote identity-related fraud. The model also aims to enable known measures of identity in particular cases to predict unknown measures, generating new insights about suspect individuals during a law enforcement or intelligence investigation to enrich their identity profiles.

For example, a prediction about an individual's gender (and the confidence that can be associated with that prediction) can be enhanced not only by knowledge about their anatomical features (say, their height), but also by knowledge about the relationship between certain anatomical features, enabling inference from one feature to another (say, gait stride to height). Similarly, personality traits and other biographical and/or biological identifiers may be inferred, with certain levels of confidence, from choices revealed in the cyber domain, for example, by an online user's default SNS privacy settings, their choice of virtual avatar and the pattern of their onscreen touch gestures.29
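The style of chained, confidence-weighted inference described above can be illustrated with a toy Bayesian calculation. This is our own sketch, not the SuperIdentity project's actual model: the population height distributions and the stride-to-height ratio are assumed purely for illustration:

from math import exp, pi, sqrt

def gaussian(x, mean, sd):
    # Likelihood of observing x under a normal distribution.
    return exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * sqrt(2 * pi))

# Assumed population height distributions in cm (illustrative only).
MALE, FEMALE = (175.0, 7.0), (162.0, 7.0)

def p_male_given_height(height_cm, prior_male=0.5):
    # Bayes' rule: posterior confidence in 'male' after observing height.
    lm = gaussian(height_cm, *MALE) * prior_male
    lf = gaussian(height_cm, *FEMALE) * (1 - prior_male)
    return lm / (lm + lf)

# Height itself can be inferred from another measure, e.g. gait stride,
# via an assumed linear relationship (stride of roughly 0.415 x height).
stride_cm = 74.0
inferred_height = stride_cm / 0.415

print(f"Inferred height: {inferred_height:.0f} cm")          # ~178 cm
print(f"P(male | height): {p_male_given_height(inferred_height):.2f}")

Each inferential hop (stride to height, height to gender) carries its own uncertainty, which is why a principled model must propagate confidence levels rather than treat each inferred attribute as a fact.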

The advanced capabilities of automated identification technologies in both private and public contexts, bringing both individual risks and public benefits, justify contemplation of policy and legal considerations.

27 Black, S., Creese, S., Guest, R., Pike, B., Saxby, S., Stanton Fraser, D., Stevenage, S., & Whitty, M. T. (2012). Superidentity: Fusion of identity across real and cyber domains. ID360: The Global Forum on Identity.

28 Creese, S., Gibson-Robinson, T., Goldsmith, M., Hodges, D., Kim, D., Love, O., Nurse, J., Pike, B., & Scholtz, J. (2013). 'Tools for understanding identity'. In: Technologies for Homeland Security (HST) (2013) IEEE International Conference. IEEE, p. 558.

29 Emanuel, L., Bevan, C., & Hodges, D. (2013). 'What does your profile really say about you?: Privacy warning systems and self-disclosure in online social network spaces'. In: ACM SIGCHI Conference on Human Factors in Computing Systems (CHI2013): Extended Abstracts, April 27-May 2, 2013, Paris, France.

6. Challenges for identity policy

6.1. States and the individual

Identity issues challenge governments around the world. States focus significant efforts on reliable means of identifying individuals residing in their jurisdictions. Yet the methods they use may vary widely. In particular, States have different cultural attitudes affecting their choice of identity verification and authentication schemes, from reliance upon possession of physical tokens to usage of high-tech identification technologies and data analytics. For example, in the last decade the UK saw a sharply contested debate around the proposal to introduce compulsory, state-issued identity cards, later dropped in the face of a public outcry over concerns that it would infringe civil liberties.30

The Web has also caused many governments to re-examine their identity policies. With the rise in online identity fraud, States grapple with the unique challenges associated with identification policies in cyberspace, including the setting of appropriate levels of trustworthy identity management and assurance standards in relation to the provision of e-government services.

The reasons that States may require reliable identification techniques can vary in different contexts but typically relate to security concerns. We look briefly at three fields of application from a policy perspective.

6.1.1. Immigration

Immigration issues have dominated news headlines worldwide over the last decade. Physical tokens that countries use to authenticate identity, with varying degrees of success, include birth certificates, passports and identity cards. As the sophistication of identity authentication technologies increases, some countries are also developing and implementing state-wide initiatives involving electronic identity ('eID') cards designed around digital proxies, such as electronic signatures and unique identification numbers, supported by encryption techniques. The combination of multiple metrics of identification has clear benefits over single proxies of identity in terms of increasing the probability of making a correct identification.31 However, despite notable efforts to introduce cross-border interoperability,32 a lack of international harmonisation is currently a significant limitation upon the utility provided by eID cards.

30 The UK Identity Cards Act 2006 was repealed following criticism around the feasibility of its national identity card scheme proposals, the public interest benefits of which (if achieved) were considered likely to be disproportionately small compared to the security and privacy risks involved. Central to this debate was the publication of a report setting out the view that "the proposals are too complex, technically unsafe, overly prescriptive and lack a foundation of public trust and confidence" (Department of Information Systems, London School of Economics and Political Science. (2005). The identity project: an assessment of the UK Identity Cards Bill and its implications. London, UK. Retrieved 11 August 2014 from http://eprints.lse.ac.uk/684/).

31 In India, for example, there is a huge national programme of identity registration underway designed to give every citizen an identity that relies on three biometric modalities: face, fingerprint, and iris scans.
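The benefit of combining metrics noted above can be shown with a back-of-the-envelope calculation. If the modalities err independently (a strong assumption in practice, since spoofing attacks can be correlated) and an impostor must pass all of them, the false match rates multiply. The rates below are invented for illustration only:

# Illustrative (invented) false match rates for three modalities.
false_match = {"face": 1e-3, "fingerprint": 1e-4, "iris": 1e-5}

# Requiring an impostor to fool every modality, with independent errors,
# multiplies the individual false match rates together.
combined = 1.0
for modality, rate in false_match.items():
    combined *= rate
    print(f"after {modality}: combined false match rate = {combined:.0e}")

# 1e-03 -> 1e-07 -> 1e-12: each added modality sharply raises the
# probability that a claimed identification is in fact correct.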

The possibility of globally recognised and operable digital identifiers seems unlikely without incorporating biometric recognition, an increasingly common feature in civilian applications. Biometric traits are a preferred security standard as they are inherent to the individual and cannot be lost or transferred. Some show strong anti-spoofing capabilities and others require evidence of viability of life, making them difficult to use fraudulently. The value of a biometric is enhanced when it carries unique or near-unique characteristics that are stable and resistant to change. eID cards may contain one or more biometrics of the individual concerned, such as facial and fingerprint images, often accompanied by radio frequency identification (RFID) chips. There has also been uptake in the use of automated biometric recognition technologies, predominantly iris and fingerprint scans, in border control procedures.

The security and privacy risks associated with the use of biometrics online include the fact that digital biometric representations are still liable to misappropriation, at potentially greater risk to individuals than other methods of identity assurance.33 Moreover, identity authentication is only possible when there is a pre-existing database to check against, which hinders mass scaling.

In addition, border controls use profiling techniques. Passenger surveillance systems analyse and data mine personal data to identify high-risk travellers. Resultant profiles may constitute "the basis for decisions on fly/no-fly, arrest, detain for questioning and so on".34 Profiling can also generate inferences about matters not relevant to security issues: "[b]y collecting and correlating information like 'special meal requirements' or 'seating particularities' or even by 'efficient' use (profiling) of the same individual's name, inferences may be made about such sensitive issues as the religion or health condition of the passengers".35

32 An interoperability agenda in eHealth and eGovernment is moving forwards within the EU as part of the single market objective to support the mobility of its citizens across borders. For example, the EU launched its 'STORK 1.0' (Secure idenTity AcrOss BoRders LinKed) project to promote mutual recognition of national electronic identities, enabling EU-wide citizens to access electronic public services securely in EU Member State countries other than their home country.

33 See, for example, Senator Franken, A. (Chairman of the US Senate Judiciary Subcommittee on Privacy, Technology and the Law). (2013). Letter to the CEO of Apple, Inc. about the inclusion of a fingerprint reader on the iPhone 5S. Retrieved 21 September 2013 from http://www.franken.senate.gov/files/documents/130919AppleTouchID.pdf: "[I]f hackers get hold of your thumbprint, they could use it to identify and impersonate you for the rest of your life".

34 Geradts, Z., & Sommer, P., (Eds.). (2008). Forensic Profiling (FIDIS Deliverable D6.7c). Retrieved 21 September 2013 from http://www.fidis.net/resources/fidis-deliverables/forensic-implications/d67c-forensic-profiling/.

35 Papakostantinou, V., & De Hert, P. (2009). The PNR Agreement and Transatlantic anti-terrorism Cooperation: No firm human rights framework on either side of the Atlantic. Common Market Law Review, 46:3, p. 887.

6.1.2. Law enforcement

One of the most important uses for identification mechanisms is in law enforcement. The importance of correctly identifying persons responsible for the commission of criminal behaviour remains at the heart of justice systems across the globe, because of the dire consequences of wrongful identification for the innocent and guilty alike. Law-enforcers therefore require both the tools and skills necessary to enable them to attribute behaviours to individuals as accurately as possible.

In developing ways to make the means of identifying criminals more robust, countries have moved generally towards implementing national biometric databases (typically of fingerprints and DNA samples), as well as researching models for enhancing abilities to identify individuals within cyberspace, both raising privacy and security concerns.

Intelligence-led policing aimed at identifying potential criminals, to manage (and if possible pre-empt) risks of future offences taking place (so-called 'predictive policing'), is also generating increasing interest at a governmental level, of which the inferential capacities of automated profiling are a vital component. However, to cast judgement on people for actions they have not carried out, reliant upon machine agency predictions of what they might do, has obvious implications for notions of fairness and justice based upon a presumption of innocence until guilt is proven.36

6.1.3. Counter terrorism

Identity-related intelligence gathering is now at the forefront of national security priorities, in particular to detect and prevent terrorist attacks. In order to respond to terrorist threats, investigating agencies aim either to identify a known target or to single out possible enemies of the State based on certain characteristics or behavioural patterns.

The Web increases the potential for covert surveillance activities ('dataveillance') by States on individuals. Governments are also interested in automated profiling technologies. What sets governments apart from the private sector is the amount of data potentially available to them about the activities and habits of their citizens from which to uncover knowledge and generate profiles. Many governments have sought legislative powers to tap into personal data held by companies. Publicly available information and information input on government records are other potential sources of personal data that governments can mine.37 This potential impact on individuals will be exacerbated in countries where private data is shared between different public agencies.38

36 Cukier, K., & Mayer-Schoenberger, V. (2013). Big Data: A Revolution That Will Transform How We Live, Work, and Think. London, UK: John Murray, p. 163.

37 Obvious examples include health records, tax records and law enforcement records.

38 Plans to share more personal data between public bodies are a point of current contention and scrutiny in the UK (see, e.g. The Telegraph. (2014). Revealed: Whitehall plans to share your private data. Retrieved 8 August 2014 from http://www.telegraph.co.uk/news/11009405/Revealed-Ministers-blueprint-to-share-private-data.html).


6.2. Balancing private interests against public interests

Thus, there are many legitimate reasons why governments need to investigate and enrich their knowledge of an individual's identity. At times, it may be justified for a State to use surveillance technologies without an individual's knowledge. Nevertheless, the intrusion accompanying such measures raises important questions for policy-makers about how to strike a fair balance between private and public interests.

The major concern is one of ensuring necessity for a legitimate purpose, and proportionality. Media reports about the broad remit of the data-gathering powers of the National Security Agency (NSA) in the US have spurred global debate on this issue. For example, the NSA has admitted to coming into contact with nearly 2% of the 1,826 petabytes of online data processed every day.39 Human analysts employed by the State may review only a fraction of the mass of online data under surveillance at the time of interception. However, there are obvious temptations for States to try to legitimate the indiscriminate retention of data (the content as well as associated metadata linked to online communications) for long durations in case some of it proves 'useful' later on.

Of course, the US government is one of many amassing

mountains of personal data. However, different States take

different approaches to striking a fair balance between public

and private interests. The decision-making process is highly

politicised and the types of safeguards taken to ensure fair

balance diverge. Indeed, the need for safeguards may be

under-appreciated where technology (“neither good nor bad,

nor […] neutral”)40 plays a critical role rather than human

judgement.

A recent UK government-commissioned report on future

identities makes predictions as to how Britons may perceive

and use identities in the coming decade in light of techno-

logical impacts and their possible consequences for UK so-

ciety. In this context, the report describes trust levels

between citizens and authorities as “critical to issues such as

holding, protecting and regulating personal data, and the use

of ‘big data’ sets”.41 This statement holds true worldwide.

Nevertheless, achieving sufficient levels of civic trust based

solely upon the premise that public policy is capable of

establishing fair trade-offs between public and private

39 National Security Agency. (2012). The National Security Agency: Missions, Authorities, Oversight and Partnerships. Retrieved 21 September 2013 from http://www.nsa.gov/public_info/_files/speeches_testimonies/2013_08_09_the_nsa_story.pdf.
40 Kranzberg, M. (1986). Technology and History: Kranzberg's Laws. Technology and Culture, 27:3, p. 544.
41 The Government Office for Science. (2013). Foresight Future Identities – Final Report. London, UK. Retrieved 21 September 2013 from http://www.bis.gov.uk/assets/foresight/docs/identity/13-523-future-identities-changing-identities-report.pdf, p. 51. More generally, in summarising the findings of a project that aimed to come to a broad and independent scientific view of changing identities in the UK by bringing together multi-disciplinary evidence, the report is an interesting contribution to the worldwide dialogue ensuing about governmental roles in orienting policy around identity issues in hyper-connected societies.

interests is challenging. How policy-makers establish trade-off decisions (such as the processes by which they identify and assign values to different types of interests, e.g. in the context of national security and privacy concerns) is rarely transparent.

We now turn to law to see if legal models for protecting

identity are more effective.

7. Legal models for identity protection

Along with concerns relating to State public policy about

identity, there is increased awareness of the importance of its

protection in law in its own right. To ensure legal safeguards

protect individuals against the risks outlined above, we look

first to existing legal models. Two models are chosen, as they

form part of the legal frameworks of many countries world-

wide. These are the right to privacy and rules on data pro-

tection. Both recognise that, while States have primary

obligations to ensure the security of their citizens, it is also

necessary to preserve individual freedoms enshrined in law.

7.1. A right to privacy – a redress model

A right to privacy is often part of human rights legislation.

Examples include Article 8 (“the right to respect for an in-

dividual's private and family life”) of the European Convention

on Human Rights and Fundamental Freedoms (‘ECHR’),42 and

Article 7 (“the right to respect for private life”) of the EU

Charter of Fundamental Rights ('the Charter'),43 to which

the vast bulk of European countries subscribe. In general, a

privacy right of this type codifies the notion of entitlement to

some level of protection from the scrutiny and interference of

public powers.

As a human right, the right to privacy so conceived is also

an example of law that applies after the appearance of dam-

age flowing from a wrong, which aims to compensate victims

for harm caused by the actions of another and/or to penalise

the perpetrator of that harm. A fundamental feature of this

legal model is redress: protecting individuals from the effects

of an unjustified interference by States into their private af-

fairs after it has happened.

A right to privacy is not absolute, however, but subject to

countervailing human rights and public interests (such as

crime prevention and national security) on the facts of each

case. In other words, for it to be justified as legitimate in

law, any interference with privacy rights must usually be

proportionate (‘no greater than that required’) to achieve

other legally recognised rights or interests in the public

interest.

42 Council of Europe, European Convention for the Protection of Human Rights and Fundamental Freedoms, as amended by Protocols Nos. 11 and 14, 4 November 1950, ETS 5.
43 Charter of Fundamental Rights of the European Union, OJ C 364, 18.12.2000, p. 18. The EU Lisbon Treaty made the EU Charter of Fundamental Rights legally binding on EU Member States, explicitly recognising both the right to respect for private life and the right to protection of personal data.


A recent European Court of Justice (‘ECJ’) judgement is

instructive in this regard. In the Digital Rights Ireland case,44 the ECJ

considered the compatibility of the EU Data Retention Direc-

tive45 with the Charter, in particular its Article 52(1), which

provides that, “any limitation on the exercise of the rights and

freedoms recognised by this Charter must be provided for by

law and respect the essence of those rights and freedoms”.

The Directive mandates that EU Member States adopt laws

requiring communications service providers to retain for be-

tween six and 24 months certain types of traffic, subscriber

and location data generated by their service users, which can

then be made available for the purposes of the investigation,

detection and prosecution of “serious crime”. The ECJ held

that the collection and retention of the large quantities of data

generated in connection with the pervasive, everyday elec-

tronic communications of EU citizens constituted a particu-

larly serious interference with Article 7 of the Charter.

Specifically, the Court stated that, taken as a whole, such

metadata can supply very precise and intimate information

revelatory of the private lives of persons.46 While the ECJ

accepted that the provisions included in the Directive were

nevertheless suitable to achieve a material public interest

objective, it found that the duration of the retention period

constituted a disproportionate interference with the right to a

private life, and one moreover without any temporal,

geographical or other restrictions based on individuals' behaviour. For those reasons, the Directive was declared

invalid. In other words, the legislative instrument was

adjudged to overvalue the achievement of public interests to

the detriment of the protection of private ones.

As with public policy, however, the judicial trade-off

thought process between private and public interests is

rarely so transparent as to be found debated in a publicly-

available court judgement of the highest jurisdictional level.

Indeed, appropriate levels of transparency, in terms of what a

citizen can expect to find out about public interests such as

national security, is worthy of separate and lengthy legal

discussion. Moreover, while legislation usually contains its

own privacy safeguards embedded into its statutory

framework (for example, surveillance legislation setting out

the legal conditions necessary for authorisation of surveil-

lance activities), it is unclear how such due process would

44 Digital Rights Ireland and Seitlinger and others, Joined Cases C-293/12 and C-594/12, 8 April 2014.
45 Directive 2006/24/EC of the European Parliament and of the Council of 15 March 2006 on the retention of data generated or processed in connection with the provision of publicly available electronic communications services or of public communications networks and amending Directive 2002/58/EC, OJ L 105, 13.4.2006, p. 54.
46 Indeed, in his (non-binding but influential) Opinion preceding the ruling in this case, Advocate General Cruz Villalón described this possibility as extending at its fullest to providing "a complete and accurate picture of his private identity" (paragraph 74). Thus, the Advocate General appears to recognise the importance of an identity thematic in law where an aggregation of data pertaining to the activities of individuals is being considered. The implication is that Web-based activities should be considered integral to who we are as evidencing a core component of the multi-faceted lives we now lead.

apply to automated data analytics, especially in the context of

enabling State-commissioned, pervasive online surveillance.

A redress right to privacy against a State, therefore, ap-

pears unfit for purpose on its own to address the identity

challenges outlined above. Overall, it is adrift from the tech-

nological reality of digital identity misuse as it currently exists.

Moreover, Andrade warns that any form of privacy fixation in

legal terms obscures identity's significance in its own right.47

For example, the seclusion of parts of us from the public

view only protects against certain types of identity misuse.

Rather than wishing to hide from public view, victims of

identity misuse may seek to create or maintain a truthful

expression of their identity in public uncontradicted (what we

might call ‘identity integrity’ concerns).48 There is also a view

that societal fascination with privacy over the last three de-

cades has in fact facilitated an increase in identity crime.

According to LoPucki, the decline in ‘public identities’ since

the 1970s ("in a world where home addresses were listed in

numerous public places and names were plastered on the

sides of mailboxes”), together with greater secrecy generally

in society around the identification process in private trans-

actions, has enabled impersonation in ways that are now

often invisible to a victim.49

7.2. Data protection – a control model

With the introduction of the data protection model, the law

evolved from focussing on privacy in relationships to focus-

sing on informational privacy. The principles it espouses aim

to ensure fair automatic processing (including collection,

storage, dissemination and use) of personal data by those who

have control over such activities.

An example is the EU Data Protection Directive (‘the

Directive’).50 According to the Directive, “personal data”51

should be obtained only for specified, explicit and lawful

purposes and not further processed in any manner incompatible with those purposes; kept for no longer than is

necessary for those purposes; shall be adequate, relevant and

not excessive; and where necessary, kept up-to-date. Data

controllers of personal data should notify the data subjects of

47 Andrade, N. (2011). 'The Right to Privacy and the Right to Identity in the Age of the Ubiquitous Computing: Friends or Foes? A Proposal towards a Legal Articulation'. In: Akrivopoulou, C., & Athanasios, P. (Eds.). (2011). Personal Data Privacy and Protection in a Surveillance Era: Technologies and Practices. Hershey, US: Information Science Publishing, p. 19.
48 Burrell, W. (2011). I Am He as You Are He as You Are Me: Being Able to Be Yourself, Protecting the Integrity of Identity Online. Loyola L.A. Law Review, 44, p. 705.
49 LoPucki, L. (2003). Did Privacy Cause Identity Theft? Hastings Law Journal, 54:4, p. 1278.
50 Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ L281/31 (1995).
51 Personal data are defined in the Directive as "any information relating to an identified or identifiable natural person ("data subject"); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity" (Article 2(a)).


the purposes for which they intend to process the data and

provide access to them in certain circumstances. These and

other Directive provisions are distillable, broadly, into princi-

ples espousing proportionality, accuracy and transparency.

Data protection rules predominantly aim to prevent unfair

data processing before it occurs, with the intention of safe-

guarding individuals' rights to control whether, how, for what

purpose and who may process their personal data. The model

does this primarily through reliance upon the notion of an

entitlement to give or withhold ‘informed’ consent at the time

the data is collected. Notwithstanding, the rules are subject to

some exemptions, such as in cases of personal data held for

law enforcement purposes where the upholding of data pro-

tection principles is considered likely by the data controller to

prejudice the prevention or detection of crime, or the appre-

hension or prosecution of offenders.

As such, the model aims to remedy a number of limitations

suffered by the right to privacy as applied to information technologies as, according to Andrade, "scholars began realizing that the underlying interest for isolation and seclusion was not sufficient to protect one's privacy".52 Andrade goes on to say that, in this way, the concept of privacy "took on an activist dimension, entailing … a person's right to control the world in order to maintain her secret 'hiding place'", so changing "from stances of passivity and seclusion to activity and control".53

Nonetheless, the data protection model also falls short in

its ability to protect identity. One criticism relates to its

reliance upon the efficacy of the informed consent mecha-

nism. The Directive requires that consent be “unambigu-

ous”54 and a “freely given, specific and informed indication of

the wishes by which the data subject signifies his agreement

to personal data relating to him being processed”.55 In prac-

tice, however, the consent regime is rarely adequate as con-

sent generally turns out to be "formal, rarely free and often

unavoidable”56.

A connected criticism relates to how the model approaches

proportionality. In exhorting the limitation of personal data

processing to the specific purposes for which it was gathered

(the purported ‘purpose limitation’ rule), data protection rules

aim to prevent further uses of personal data in different

contexts that data subjects may find objectionable. This

approach to addressing the problem of creeping data func-

tionality is flawed in practice when viewed in combination

with the confidence placed upon the notion of informed

52 Andrade, N. (2011). 'The Right to Privacy and the Right to Identity in the Age of the Ubiquitous Computing: Friends or Foes? A Proposal towards a Legal Articulation'. In: Akrivopoulou, C., & Athanasios, P. (Eds.). (2011). Personal Data Privacy and Protection in a Surveillance Era: Technologies and Practices. Hershey, US: Information Science Publishing, p. 19.
53 Andrade, N. (2011). 'The Right to Privacy and the Right to Identity in the Age of the Ubiquitous Computing: Friends or Foes? A Proposal towards a Legal Articulation'. In: Akrivopoulou, C., & Athanasios, P. (Eds.). (2011). Personal Data Privacy and Protection in a Surveillance Era: Technologies and Practices. Hershey, US: Information Science Publishing, p. 22.
54 Article 7, the Directive.
55 Article 2(h), the Directive.
56 Gutwirth, S., Poullet, Y., De Hert, P., de Terwangne, C., & Nouwt, S. (Eds.). (2009). Reinventing Data Protection? Dordrecht, Holland: Springer, Preface xiii.

consent at the time of data collection. How can organisations

provide notice, and individuals give informed consent, to

exploitation of their personal data for unknown future pur-

poses (including data aggregation as part of profiling activ-

ities)? Without consent, however, any re-use of personal data

would require re-asking permission from every data subject.

Huge datasets would make this process a logistical nightmare.

Such tensions were visible from the European-wide regu-

latory action faced by Google concerning its 2012 adoption of a

single, sweeping privacy policy.57 The revision of its privacy

policy allowed Google to aggregate and evaluate extensively all

of its users' personal data from their different Google service

accounts, thereby enhancing significantly its ability to create

enriched customer profiles. Google was found to be in breach of data protection rules, for broadly similar reasons, by the EU Member States' data protection agencies carrying out investigation procedures. In a letter to Google, the UK's Information Commissioner's Office, for example, states, "[We] believe that

the updated policy does not provide sufficient information to

enable UK users of Google's services to understand how their

data will be used across all of the company's products”.58 Ac-

cording to the Hamburg Commissioner for Data Privacy and

Freedom of Information, therefore, "it is completely impossible

for the user to foresee the scope and content of his consent to

the processing of his data".59 Similar conclusions were reached

in Spain in December 2013, with the Spanish data protection

agency declaring in a statement that Google had failed to

obtain the consent of its users as the “highly ambiguous” lan-

guage in its privacy policy created difficulties for individuals

attempting to find out what would happen to their data.60 The

Frenchdataprotectionagencyalso found thatGoogle “doesnot

sufficiently inform its users of the conditions in which their

personal data are processed, nor of the purposes of this pro-

cessing. They may therefore neither understand the purposes

for which their data are collected, which are not specific as the

law requires, nor the ambit of the data collected through the

different services concerned. Consequently, they are not able

57 In 2013, the national data protection authorities of six EU Member States (France, the Netherlands, the UK, Spain, Germany and Italy) announced that they would set up an international administrative cooperative procedure between them to examine whether Google was complying with EU data protection rules in respect of its revised privacy policy. Retrieved 11 August 2014 from http://www.cnil.fr/english/news-and-events/news/article/google-privacy-policy-six-european-data-protection-authorities-to-launch-coordinated-and-simultaneo/.
58 Information Commissioner's Office. (2013). ICO update on Google privacy policy. Retrieved 11 August 2014 from http://ico.org.uk/news/latest_news/2013/ico-update-on-google-privacy-policy-04072013.
59 Der Hamburgische Beauftragte für Datenschutz und Informationsfreiheit. (2013). Google's Privacy Policy under Scrutiny. Retrieved 16 July 2014 from https://www.datenschutz-hamburg.de/fileadmin/user_upload/documents/PressRelease_2013-04-02_Google_PrivacyPolicy.pdf.
60 Agencia Española de Protección de Datos. (2013). La AEPD sanciona a Google por vulnerar gravemente los derechos de los ciudadanos. Retrieved 11 August 2014 from http://www.agpd.es/portalwebAGPD/revista_prensa/revista_prensa/2013/notas_prensa/common/diciembre/131219_NP_AEPD_POL_PRIV_GOOGLE.pdf.


to exercise their rights, in particular their right of access, ob-

jection or deletion”.61 Google has so far been ordered to pay

fines ranging from EUR150,000 (in France) to EUR900,000 (in

Spain), although notably, for a corporation the size of Google, the quantum of the fines is unlikely to have a significant deterrence effect, and the fines are dwarfed by the size of fines that can be imposed

for breaches of EU competition law.

More generally, data protection law as traditionally

conceived appears wholly inadequate conceptually from the

perspective of the individual to deal with this and similar

challenges related to data amalgamation concerns, including

those emergent from big data analytics and progressively

automated decision-making.62 For example, data protection

law stipulates that data subjects should not be subject to

automated processing decisions that produce legal effects

concerning or significantly affecting them. However, unless

their legitimate interests are protected legally,63 the invisi-

bility of profiling creates nothing more than a pretence of

adequate data protection. As mentioned above, individuals

are unlikely to be aware when they are the subjects of profiling

activities and therefore to suspect possible damage resulting

to their interests. Moreover, it remains highly contentious as

to the exact points at which a person can be said to be affected

‘significantly’ by automated-generated effects (as opposed to

because of the actions of humans), or when their legitimate

interests might be deemed sufficiently protected legally.


7.3. Legal model evolution

Seeking redress from those who impose unreasonable con-

straints upon the intimate expression of identity, and control

over informational aspects of identity one shares with others,

form the heart of existing legal identity protection in many

countries. The evolution between the two illustrates how pop-

ular legal models have expanded in their identity-protective

capacities: from a court-enforceable model against States, to a

61 Commission nationale de l'informatique et des libertés. (2014). The CNIL's Sanctions Committee issues a €150,000 monetary penalty to GOOGLE Inc. Retrieved 11 August 2014 from http://www.cnil.fr/english/news-and-events/news/article/the-cnils-sanctions-committee-issues-a-150-000-EUR-monetary-penalty-to-google-inc/.
62 In 2013, the EU Data Protection Article 29 Working Party (a group formed of the EU Member States' Data Protection Commissioners) published an Opinion on the principle of purpose limitation (Opinion 03/2013), singling out mention of the circumstances under which it believes that personal data can be used legitimately in the context of big data analytics projects. This Opinion makes two broad recommendations. First, requests for consent to use personal data must not include vague or broad-brush statements describing the purposes for which they are collecting personal data. Second, data controllers must assess the compatibility of any new purposes for which they want to process personal data with the purposes for which the data was collected initially, and for which consent was obtained, before they proceed. The Working Party suggests that data controllers may only process personal data for 'incompatible' purposes based on a domestic exemption adopted by an EU Member State. While opinions of the Working Party are likely to be taken into account by national data protection regulators when interpreting national laws, they are not binding legally under EU law.
63 Article 15, the Directive.

model leaning heavily upon the notion of advanced informed

consent, regulatory oversight and self-regulation by the private

sector.

Nonetheless, these existing legal models were both

designed for the offline world and, while very important pil-

lars of any legal system, they do not go far enough in their

protective coverage of additional private interests threatened

by identity misuse. For example, data subjects rarely have

default rights to use or rectify their personal data (indeed, law

is often agnostic on the issue of who owns personal data,

preferring the position that it is not possible to have pro-

prietary rights in data). As discussed above, when considering

recent technological developments affecting identity, these

problems aggravate its potential for falsification and the

consequences that may flow from loss of control over its

integrity. Identity integrity, in turn, relates closely to impor-

tant human values such as reputation, dignity and autonomy,

as a form of informational self-determination. In this context,

identity, as a concept, aligns more closely to the notion of a

development and sense of self that people evolve over the

course of their lives. It is clear that greater identity protection

is required to cover such unaddressed areas of vulnerability.

While many see data deletion and anonymisation as the

only tools left to individuals to combat misuse, they are often

very difficult to achieve and impossible to guarantee.64

We must look elsewhere.

64 Discussion around data deletion and anonymisation models related to identity protection involves complex issues deserving a separate paper and so is only mentioned in passing, save for reference to the so-called 'right to be forgotten' online. A variant of this right was recently adjudicated upon by the European Court of Justice in the context of obligations placed upon search engines (Google Spain and Google, Case C-131/12, Judgment of the Court (Grand Chamber), 13 May 2014). A similar concept (a 'right to erasure') was independently proposed in January 2012 by the European Commission as part of a legislative proposal (Article 17 of a proposal for a Regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (2012/0011, COD) (the General Data Protection Regulation)), in the context of wider plans to reform EU data protection rules. Like the right to privacy, the right to be forgotten/erasure concept addresses harm after it has occurred. Like the data protection model, however, its procedural nature is an attempt to reinstate user control over the portrayal of aspects of their digital identities. By providing users with a mechanism to request the removal of information they no longer wish to be publicly available online, it aims to make up for some of the deficiencies of the consent regime and implicitly recognises that personal identity is dynamic and interaction-dependent. Andrade calls it "the right to be different, not from others but from oneself, i.e. from the one(s) we were before" (Andrade, N. (2012). Oblivion: The Right to Be Different from Oneself – Reproposing the Right to Be Forgotten. VII International Conference on Internet, Law & Politics. Net Neutrality and other challenges for the future of the Internet, IDP No. 13, p. 122). Yet there are many open questions surrounding the implementation and enforcement of a right to be forgotten/erasure, such as its geographical circumscription to Europe. Such difficulties illustrate how the notion that any of us can have complete control over our digital identities through law or technology alone is likely to be unrealistic fantasy. Furthermore, the right to be forgotten/erasure seems inadequate in protecting individuals against the information asymmetries arising from profiling technologies.


8. New ways of thinking

As technology changes our expression and understanding of

the identity concept, we need new ways of thinking about

how to tackle its digital challenges. These include reconsi-

dering how we can safeguard those freedoms that make us

who we are in law. We propose seven complementary prin-

ciples to start a new identity dialogue that might lay the

foundation for improved models of protection designed

around the needs of the individual, by cohering and devel-

oping existing legal thinking in this area.

67 Van Kleek, M., & O'Hara, K. (2014). The Future of Social is Personal: The Potential of the Personal Data Store. Retrieved 16 July 2014 from http://eprints.soton.ac.uk/363518/1/pds.pdf.
68 European Commission Communication. COM (2014) 442 final. Towards a data-driven economy. Retrieved 16 July 2014 from http://ec.europa.eu/information_society/newsroom/cf/dae/document.cfm?doc_id=6210.
69 Van Zoonen, L., & Turner, G. (2014). Exercising identity: agency and narrative in identity management. Kybernetes, 43:6, p. 8.

8.1. Techno-regulation

It is clear that law-makers must gain deeper understanding of

the ways in which technologies work. In addition, and in light

of the regulatory lag that persists as regulators try reactively to

catch up with technological innovation,65 identity-protective

values need to be inscribed into technology in ways that

work with existing or developing legal frameworks. Notable

examples include online visualisation tools that allow users to

see the personal data that companies hold about them, tools

for tagging data with software code (including individual

preferences for data usage in different contexts) to facilitate

compliance with fair data processing principles by data con-

trollers, as well as obfuscation-based techniques for location

privacy protection.
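By way of illustration only, the following minimal sketch shows how one such technique, a usage-preference tag attached to a personal data record and checked before each processing operation, might work. All names here (DataTag, TaggedRecord, is_use_permitted) are hypothetical and correspond to no existing system.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataTag:
    # Hypothetical 'sticky' tag recording the data subject's usage preferences.
    subject_id: str
    permitted_purposes: set = field(default_factory=set)  # purposes consented to
    expires: Optional[str] = None  # optional retention limit (ISO date)

@dataclass
class TaggedRecord:
    value: str
    tag: DataTag

def is_use_permitted(record: TaggedRecord, purpose: str) -> bool:
    # A data controller would call this before each processing operation,
    # refusing any purpose the data subject has not consented to.
    return purpose in record.tag.permitted_purposes

record = TaggedRecord(
    value="[email protected]",
    tag=DataTag(subject_id="u123", permitted_purposes={"billing", "support"}),
)
assert is_use_permitted(record, "billing")        # consented purpose: allowed
assert not is_use_permitted(record, "profiling")  # not consented: refused

For such a tag to be meaningful it would have to travel with the data and be honoured by every system handling it, which is precisely why enforceability and penalties for breach, discussed next, matter as much as technical feasibility.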

Such techno-regulation opportunities must not only be

technically feasible, but also enforceable in practice, with

penalties for breach. Indeed, often one will not work without

the other. For example, the 'Do-Not-Track' Bill introduced to the US Senate in 2011 would, if enacted, have allowed users to declare their preference to block companies from tracking them online and prohibited companies from acting in breach of these preferences. However, barring mass adoption of

browsers to implement such preferences (which in turn would require market agreement on standard setting and sufficient levels of endorsing market demand), change seems unlikely in

this area. Developing societal norms is clearly important, such

as the acceptance of notions of personal data ownership and

possibilities for its mass marketisation (such as intermediary

organisations helping individuals pool and licence out their

personal data to companies at a cost). A pioneering example in

the UK is the ‘Midata’ project, which aims to improve access

by consumers to the electronic personal data that businesses

hold about them via its supply in a re-usable, portable

format.66

Another aspect of shifting cultural attitudes relates to

greater recognition of the roles of technology designers and

their ability to give users more control over their digital

65 Ball, K., Lyon, D., Murakami-Wood, D., Norris, C., & Raab, C. (2006). A Report on the Surveillance Society: For the Information Commissioner's Office by the Surveillance Studies Network. London, UK, p. 30.
66 Department for Business, Innovation & Skills. (2014). Providing better information and protection for consumers. London, UK. Retrieved 16 July 2014 from https://www.gov.uk/government/policies/providing-better-information-and-protection-for-consumers/supporting-pages/personal-data.

identities, together with doing more to encourage self-

governance and standard setting, possibly with formal regu-

latory oversight. An analogous process is ‘privacy by design’,

where privacy-protective processes embed into systems ar-

chitecture. The European Commission, for example, views

privacy enhancing technologies as going hand-in-hand with

complementary legal principles and advocates privacy impact

assessment upon the adoption of new technologies. Similar

thinking could resonate around the notion of ‘identity pro-

tection by design’. Current research into the potential for so-

called ‘personal data stores’ - decentralised, often cloud-

based technologies that aim to bolster the capabilities of its

users “for managing, curating, sharing and using data them-

selves and for their own benefit” - is promising in this regard.67

The individual-centric nature of personal data stores, facili-

tating interactive identity management (and thus dynamic

control over isolated elements of identity enabled through

technology) by the user, is apt to set the right tone for framing

a dialogue across technical and legal audiences in this

context. The European Commission has declared that it will

launch a consultation in this area imminently.68
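To make the individual-centric idea concrete, the sketch below outlines, under assumed requirements only, the kind of grant-and-revoke control a personal data store might give its user. The PersonalDataStore class and its methods are invented for illustration and describe no particular product.

class PersonalDataStore:
    # Hypothetical user-controlled store: the user curates their own records
    # and grants or revokes access per requester.
    def __init__(self):
        self._data = {}    # attribute name -> value, held by the user
        self._grants = {}  # requester -> set of attribute names shared

    def put(self, attribute, value):
        self._data[attribute] = value

    def grant(self, requester, attribute):
        self._grants.setdefault(requester, set()).add(attribute)

    def revoke(self, requester, attribute):
        self._grants.get(requester, set()).discard(attribute)

    def read(self, requester, attribute):
        # Requesters see only what the user has explicitly shared.
        if attribute not in self._grants.get(requester, set()):
            raise PermissionError(f"{requester} has no grant for {attribute}")
        return self._data[attribute]

store = PersonalDataStore()
store.put("email", "[email protected]")
store.grant("retailer", "email")
store.read("retailer", "email")   # succeeds: explicitly shared
store.revoke("retailer", "email")
# store.read("retailer", "email") would now raise PermissionError

The design point is that the user's store, not the requesting organisation, is the locus of control: every disclosure is an explicit, revocable grant.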

8.2. Adjusting to context and risk

Legal rules need to be more flexible to encompass modern

conceptions of identity. As Van Zoonen points out, “a different

concept of identity is needed” that goes beyond the conven-

tionally static notion of identity as “the sum of different in-

formation about a human being”; a more dynamic paradigm

encompasses identity as “doing in addition to being”.69 In a

similar fashion, law must evolve beyond the constraints of

traditional discourses around privacy and data protection

with its emphasis on the point of data collection, towards an

identity-centric one that is capable of taking into account

varying levels of risk associated with usage and outcomes

arising from data processing activities in different contexts.70

Factors that could affect assessments of risk include not just the type of data at issue and the wishes of the data subject, but also the likely permanence and severity of po-

tential effects upon individuals from personal data misuse,

such as might be the case if the security of biometric data is

jeopardised. Legal rules must also be flexible enough to

70 The recommendation that policy-makers should focus more on the actual uses of data and less on its collection is a major theme of a report concluding a 90-day study requested by President Obama to examine the effects of big data technologies in the US (President's Council of Advisors on Science and Technology (PCAST) Big Data and Privacy Working Group. (2014). Big Data: Seizing Opportunities, Preserving Value. Retrieved 16 July 2014 from http://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf).


recognise an array of different types of possible detrimental

effects to individuals that can flow from identity-intrusive

behaviours online. The fact that profiling techniques can

have quite different results when introduced into different

contexts and under different circumstances suggests that

profilers should be mandated legally to carry out upfront pri-

vacy impact assessments and (if negative effects on in-

dividuals are reasonably foreseeable) that they also be

required to formulate and implement bespoke safeguards to

mitigate such effects.

Visual models of identity like the SuperIdentity model

could potentially play some role in assisting such assessments

by helping individuals to understand better the nature of the

risks they might face by disclosing different types of personal

data online. For example, given a particular context and the

assumption that certain pieces of information are already in

the public domain, a person may be able to assess more

accurately the likelihood of new inferences being made about

them and what these might be, as well as the type of personal

data most likely valued highest in a targeted attack.71
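The following toy sketch (emphatically not the SuperIdentity model itself, whose structure and weights are not reproduced here) illustrates the general idea of such a tool: given the attributes a person has already made public, estimate which further attributes become inferable. The attribute names and link strengths are entirely hypothetical.

# A toy illustration, not the SuperIdentity model: given attributes already
# public, estimate which further attributes become inferable. Every name and
# weight below is hypothetical.
INFERENCE_LINKS = {
    # (known attribute, inferable attribute): assumed link strength, 0..1
    ("home_town", "home_address"): 0.4,
    ("employer", "income_band"): 0.6,
    ("full_name", "social_profiles"): 0.7,
    ("social_profiles", "daily_routine"): 0.5,
}

def inference_risks(public_attributes):
    # Return attributes inferable from what is already public, with a crude
    # combined likelihood for each (independent routes are combined as
    # 1 - product of (1 - strength)).
    risks = {}
    for (known, inferable), strength in INFERENCE_LINKS.items():
        if known in public_attributes:
            combined = 1 - (1 - risks.get(inferable, 0.0)) * (1 - strength)
            risks[inferable] = round(combined, 3)
    return risks

print(inference_risks({"full_name", "employer"}))
# {'income_band': 0.6, 'social_profiles': 0.7} -- disclosing less lowers the score

Even a crude score of this kind makes the trade-off visible to the individual: withholding one identifier can remove a whole chain of downstream inferences.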

In line with the adoption of digital risk assessment, law

must take into account how the very nature of identification

and re-identification risk has changed. With ever-wider

technological capabilities related to de-anonymisation

emerging, alongside the advance of technology and tran-

sitioning from the offline to the digital world, the ability to

identify someone from data is much more context-specific,

depending upon who the identifier is and what methods are

used. For that reason, another area of focus ripe for re-

evaluation from an identity-centric and big data viewpoint is

the legal definition of ‘personal data’ (more commonly

referred to as ‘personally identifiable information’ from a US

perspective) and, with it, the outer boundaries of legal and

regulatory rules applying to its protection.

73 In English law, for example, it is not a criminal offence in itself to appropriate another's identity.

8.3. Greater data accountability

Those who process personal data should bear greater legal

accountability and responsibility for its misuse (even going so

far as to attribute to them what we have coined 'data controller stewardship', to denote a very high level of entrustment of care). In line with the thinking of Weitzner, information accountability "must become a primary means through which society addresses appropriate use"72 and, hence, also its misuse.

Thus, greater accountability and responsibility may be required

for some data controllers by establishing specific safeguards to

protect subjects' rights, such as by creating an obligation to

anonymise personal data they process on their behalf.

Accountability should be proportionate to the level of

misuse. Severe cases of identity misuse should expose mis-

users to robust sanctions, including regulatory actions such as

significant fines (so that organisations will take data protection

71 Creese, S., Gibson-Robinson, T., Goldsmith, M., Hodges, D., Kim, D., Love, O., Nurse, J., Pike, B., & Scholtz, J. (2013). 'Tools for understanding identity'. In: Technologies for Homeland Security (HST) (2013) IEEE International Conference. IEEE, p. 558.
72 Weitzner, D., Abelson, H., Berners-Lee, T., Feigenbaum, J., Hendler, J., & Sussman, G. (2008). Information accountability. Communications of the ACM, 51(6), p. 82.

compliance more seriously than at present) and/or criminal

prosecutions. Achieving this outcome may involve giving

regulators stronger powers, with quick response times to

create deterrence effects.

Also required is the creation of bespoke laws to deter and

address identity misuse directly, including specific criminal

offences for identity-centric crimes.73 In the future, punish-

ment for data offences related to illegally gathering and

aggregating personal data, or de-anonymising individuals

without their consent, may become the norm where such acts

are shown to have caused significant harm.

8.4. Greater transparency

Transparency complements accountability. More information

should be made available to individuals about who controls

their personal data and to what ends. For instance, individuals

can better appreciate the underlying value and the possible

implications of sharing their personal data by understanding

how its processing takes place, by whom, and what safeguards

exist to reduce the likelihood of risks of harm occurring.

There is a need to focus in particular on disclosure around

the potential for data re-usage in order to help individuals un-

derstand how and when their data may be recycled for new

purposes and the potential implications for them before this

happens. For example, the EU's Article 29 Working Party has

published an advice paper on profiling recommending that

more be done to explain and mitigate the various risks that

profiling can pose, while advocating the introduction of specific requirements in order for profiling activities to be legitimated.74 These include suggestions that profilers be obliged to make data

subjects aware when their personal data are used in profiling

and requiring information to be provided about the context,

purpose and underlying logic used for automatic processing.

The practices involved in covert government surveillance

of individuals, and accompanying legal checks and balances,

would especially benefit from increased transparency. While

the rationale of such surveillance and risk profiling requires a

degree of opacity around case-specific details, more openness

would be beneficial in terms of providing clearer and more

detailed information about the application of data analytics,

as well as how exactly law and any built-in safeguards apply

to protect individual interests in respect of automated iden-

tification mechanisms.

A sector-specific example around the merits of greater

transparency relates to organisations active in health and

social care that provide explanations to patients around the

74 Article 29 Data Protection Working Party. (2013). Advice paper on essential elements of a definition and a provision on profiling within the EU General Data Protection Regulation. Retrieved 16 July 2014 from http://ec.europa.eu/justice/data-protection/article-29/documentation/other-document/files/2014/annex_one-stop_shop_20130513_advice-paper-on-profiling_en.pdf. In seeking to limit the extent to which data subjects may become subject to measures based on the automated personal profiling of individuals, the Working Party recommends that profiling should be subject to the data subject's explicit consent, to guarantee additional information rights and a higher level of control for individuals.


anonymised uses of personal data collected about them, their

rights to withhold or withdraw consent to its sharing and use,

as well as the consequences of not providing consent. A

number of recommendations to this effect were set out, for

example, in a 2013 report commissioned by the UK Govern-

ment into information governance in these sectors.75 Rec-

ommendations include that health and social care

organisations should keep and record an audit trail of patient

consent decisions, including revocations, and the steps taken

to implement them when they apply to data in their control.

Furthermore, such organisations should be required to pub-

lish details of all cases where there has been unlawful pro-

cessing or sharing of patients' personal data (accurate or

inaccurate).
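A minimal sketch of how such an audit trail might be structured follows, assuming only the requirements described above (append-only entries, revocations recorded rather than overwritten). The ConsentAuditTrail class is hypothetical.

import datetime

class ConsentAuditTrail:
    # Hypothetical append-only record of patient consent decisions: entries
    # are added with a timestamp and never edited, so revocations remain
    # visible alongside the grants they override.
    def __init__(self):
        self._entries = []

    def record(self, patient_id, purpose, decision):
        # decision is "granted" or "revoked"
        self._entries.append({
            "when": datetime.datetime.utcnow().isoformat(),
            "patient": patient_id,
            "purpose": purpose,
            "decision": decision,
        })

    def current_consent(self, patient_id, purpose):
        # The most recent recorded decision governs; no record means no consent.
        for entry in reversed(self._entries):
            if entry["patient"] == patient_id and entry["purpose"] == purpose:
                return entry["decision"] == "granted"
        return False

trail = ConsentAuditTrail()
trail.record("p42", "research_sharing", "granted")
trail.record("p42", "research_sharing", "revoked")
assert trail.current_consent("p42", "research_sharing") is False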

8.5. More empowered users

Greater transparency also engenders the ability for greater

involvement by individuals. As the value of data becomes more

tangible and their uses more dynamic, people may want to

exercise control and choice in ways not relevant to offline

identity-constitutive information. Again, we make reference to

the possibility of personal data stores that can help individuals

to specify exactly what data about themselves they are willing

to share with others. Ease of usability, as well as security and

portability, must lie at the heart of such initiatives.

From a purely legalistic viewpoint, the EU Data Protection

Article 29 Working Party Opinion 03/2013 on the purpose

limitation principle singles out mention of big data analytics

that lead to the formation of conclusions about individuals

and states that opt-in consent would usually be required for

these activities.76 Positively empowering individuals using

legal mechanisms such as consent opt-ins, or the ability to

contest any measure or decision based on profiling, should be

enforceable in court.

Greater user empowerment might also involve giving in-

dividuals more rights over the processing of their personal

data of a type that go beyond those currently available under

European data protection rules. The European Commission

started in this direction of travel by proposing a data protection legislative reform package in 2012, which heralds the

laying of an important, new foundation for the potential

development of European law. Included within the package

are proposals to introduce practical measures and tools to

protect privacy rights, which would strengthen online users' control over their digital identities by enabling them to

participate more fully in the data choices they make. For

example, the draft General Data Protection Regulation in-

cludes a specific provision mandating data controllers to build

in data protection ‘by design and default’, which it is hoped

75 Caldicott, F., Department of Health. (2013). Information: To share or not to share. The Information Governance Review. London, UK. Retrieved 16 July 2014 from https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/192572/2900774_InfoGovernance_accv2.pdf.
76 Article 29 Data Protection Working Party. (2013). Opinion 03/2013 on the purpose limitation principle. Retrieved 16 July 2014 from http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf.

will facilitate creative technical solutions for protecting legal

rights without losing the benefits of a big data project. More-

over, it is proposed to introduce a new data portability right that would enable individuals to obtain a copy of the data held

about them by organisations in a reusable, electronic format.

While the full implications of recognising individuals legally as 'data controllers' of their personal data are still being considered, shifting to a new kind of legal

thinking is essential to help promote and develop the idea that

individuals can be endowed, not only with legal rights that are

enforceable practically, but also responsibilities to manage

their own data.

When identity misuses occur, in general there should also be

well-publicised, appropriate and easy ways for individuals to

report them. Moreover, tasking agencies to review and pursue

the most serious cases to enforcement proceedings should

reinforce the desired deterrence effect related to the commis-

sion of identity crime currently missing in many countries.

8.6. Identity-centric government programmes

Policy-led government initiatives specific to identity concerns

could help individuals by placing their interests as consumers,

citizens and users of governmental digital services at the

heart of governmental reforms. For example, government-

commissioned programmes (such as the UK Midata project

and its vision of greater consumer empowerment mentioned

above) can be designed to help individuals to have greater

control over both the amount of personal data they reveal

online and the awareness of who now holds their data.

Of course, there is no single best design for governments to

adopt and there are obvious risks involved in reliance upon a

central government register of identity data. It is important

that the design capabilities of identification systems be so-

cially and ethically acceptable to be usable.77 One promising

approach is a legislation-implemented identity assurance

scheme set up by a government advisory group and delivered

by certified private sector online service providers. Identity

service providers under such a scheme could contract to meet

certain criteria, and help promote greater public trust in the

safety of online interactions, similar to the Identity Assurance

Programme and its Principles currently under proposal in the

UK.78 If individuals trust in certified organisations to provide

an identity service (such as identity authentication), operating

according to transparent trust principles, they will have

greater reassurance that their cyber-identifiers are secure

when accessing government services and transacting online.

8.7. Internationalism

Legal jurisdiction based on national borders can create false

limitations where the online world is concerned. Taken in

77 Hodges, D., & Creese, S. (2013, October). 'Breaking the Arc: Risk control for Big Data'. In: Big Data (2013) IEEE International Conference. IEEE, p. 613.
78 Privacy and Consumer Advisory Group, Cabinet Office. (2013). Draft identity assurance principles. London, UK. Retrieved 16 July 2014 from https://www.gov.uk/government/consultations/draft-identity-assurance-principles/privacy-and-consumer-advisory-group-draft-identity-assurance-principles.


conjunction with the global nature of the Internet, and despite

the fact that identity policies are often culturally sensitive, the

internationalisation of legal responses to online identity

misuse is likely to become increasingly necessary.

Reliance upon international laws and consortia favours

non-binding guidelines and informal rules that have practical

effect through shared understandings of mutual benefits,

rather than as direct challenges to national sovereignty. By

being more adaptive to changing circumstances, such ‘soft’

laws might also facilitate greater collaboration between pri-

vate groups and public actors in the development and

administration of rules regarding identitymisuse. In turn, this

would help promote wider participation in identity-centric

policy and law formulation internationally (although ulti-

mately these effects would be normative only).

An example of an international legal instrument specif-

ically focused on protection from harms arising from profiling

technologies is a European Recommendation on the use of

profiling in the public and private sector introduced in 2010.79

The Recommendation aims to ensure more transparency and

access in profiling. Although non-binding and therefore non-

enforceable legally, the Recommendation signifies a step in

the right direction towards providing essential protection

against profiling risks to identity.

9. Concluding thoughts

Across many countries around the globe, it is now abundantly

clear that technological and cultural change has led to an

identity crisis. The detail and the traceability of the ‘portraits’

that our digital identities provide (the exhaustive mapping and assemblage of our everyday lives in data) grow steadily and, with them, their value and potential for usurpation. Govern-

ments and law-makers increasingly find themselves

embroiled in the consequences of digital identity misuse

and mechanisms for policing it.

We have argued that, without effective safeguards to protect identity, there are substantial risks to individuals. The time

is ripe, therefore, to reconceptualise its protection in keeping

with our richer understanding of modern identities mediated

through digital technology. We need now to reflect upon how

to ensure appropriate balances between private and public

interests, and robust oversight of automated identification

techniques (particularly those that rely on big data analytics

and geo-location data).

Against this backdrop, law must continue to “seek to bal-

ance the rights of individuals to liberty and freedom of

expression (both online and offline), while protecting their

privacy and their data”.80 At the same time, existing models

need to adapt so as to embrace opportunities to evolve a

protective capacity against emerging threats to identity, while

79 Council of Europe. Recommendation CM/Rec (2010)13. On the protection of individuals with regard to automatic processing of personal data in the context of profiling. Retrieved 16 July 2014 from https://wcd.coe.int/ViewDoc.jsp?id=1710949.
80 The Government Office for Science. (2013). Foresight Future Identities – Final Report. London, UK. Retrieved 21 September 2013 from http://www.bis.gov.uk/assets/foresight/docs/identity/13-523-future-identities-changing-identities-report.pdf, p. 59.

recognising that there are no complete legal or technical fixes.

An enhanced legal framework fit to meet new identity chal-

lenges must accommodate a risk-management approach to

accountability that appreciates the multi-contextual nature of

people's identities. It must also empower individuals to

recognise and take advantage of new opportunities to benefit

from their identity data, including through greater data

handling transparency. Furthermore, policy intended to un-

derpin identity management initiatives needs to reflect what

is socially acceptable, as well as technologically feasible, in its

country of implementation.

As indicated, some steps are already underway in Europe

and elsewhere around the world to meet such challenges.

However, there is a long way to go, and the wise path will direct

us towards an inter-disciplinary approach to identity in which

our relationship with data, machines and each other is inex-

tricably linked to who we are today.

As society's appetite for digital technology, hyper-

connectivity and big data shows no sign of abating, the

identity crisis is destined to deepen further. In the next few de-

cades, even the most mundane and unlikely records of day-

to-day behaviour may become useable for unanticipated

and ever-wider purposes. Increased use of mobile technology

and remote biometric recognition further blurs the distinc-

tion between offline and online spaces. Consequently, the

capacity for automated identification technologies to engage

in real-time monitoring to deduce whether patterns of

human behaviour diverge from a pre-defined normality (e.g.

to monitor crowds or verify transactional identity automati-

cally for digital payments) becomes an ever more stark

reality.

In the more distant future, artificial intelligence advances

could result in profiles of unnerving precision and persis-

tence, even of absolute falsehoods, and pervasive automated

decision-making. The way we interact with each other and

our society is likely to change radically. The ‘internet of

things’ (everyday objects with sensors linkable online)

recording and tracking our movements, as well as ground-

breaking developments in genetic research and mass bio-

banks, may enable the federation of our identities81 into

single digital identifiers running across multiple systems

that follow us continuously throughout our lives. We must

face the implications now because one day our virtual representations may become more important than who we truly are.

Acknowledgement

This paper is produced under the auspices of the Engineering

and Physical Science Research Council (EPSRC) SuperIdentity

project under grant number EP/J004995/1 within the umbrella

remit of the Global Uncertainties Programme, together with

the Department of Homeland Security in the US.

81 A ‘federated identity management’ solution refers to a singledomain that enables interoperability between different identitymanagement systems, such that different user identitiesmanaged by different systems could be recognised as belongingto the same individual and tracked as such.