
NeXt generation Techno-social Legal Encryption Access and Privacy nextleap.eu
Grant No. 688722. Project Started 2016-01-01. Duration 36 months

DELIVERABLE D4.6

Policy Recommendations for Decentralized Systems

Harry Halpin (INRIA), Francesca Musiani (CNRS)

Beneficiaries: CNRS, INRIA
Internal Reviewers:
Workpackage: WP4, D4.6 Policy Recommendations for Decentralized Systems
Description: This deliverable describes, in easier-to-understand language and at a high level, the results of the previous deliverables on formal methods and simulations, and provides policy-level recommendations for those wishing to support decentralization within an institutional framework in order to achieve data sovereignty.

Version: 1.0
Nature: Report (R)
Dissemination level: Public (P)
Pages:
Date: 2018-12-31

Project co-funded by the European Commission within the Horizon 2020 Programme


Table of Contents

Executive Summary
1. Introduction: On Socio-Technical Sovereignty and Data
2. Can the General Data Protection Regulation be Enforced?
3. Enforcing fundamental rights through policy and technology: Three policy recommendations
    Policy Recommendation n°1: No Backdoors in Encryption
    Policy Recommendation n°2: Use Open Standards and Free Software
    Policy Recommendation n°3: Deploy Decentralization
4. Conclusions: Protocols from NEXTLEAP

Executive Summary

This deliverable describes at a high level the concepts of socio-technical sovereignty and why technological protocols are necessary in an era where data extractivism has become increasingly widespread and the General Data Protection Regulation does not appear to be enough. The debate between human rights and technology, including the new concept of “net rights,” is explicated. We provide recommendations on a policy level for those wishing to support data sovereignty within an institutional framework. The three recommendations are 1) No Backdoors in Encryption, 2) Use Open Standards and Free Software, and 3) Deploy Decentralization. These policy recommendations also express support for formal verification and privacy simulations. Finally, the protocols developed by NEXTLEAP are discussed within this framework.


1. Introduction: On Socio-Technical Sovereignty and Data

Socio-technical systems consist of two intertwined levels: One layer is the social level, which consists of the habits, values, and administration of society in service of its continued existence, and these social structures are encoded as a legal system. The other layer is the technical level, which consists of the programs, data, and infrastructure that form a technological system. Both of these systems coexist to maintain the stability of larger structures while achieving fundamental goals: For example, the legal system put into place by the EU Charter of Human Rights guarantees various freedoms and protections to European citizens and people living inside Europe, while at the same time stabilizing the structure of society so it does not fall into cataclysmic war, as happened in the Second World War. In order to guarantee this social order, the desired social order of law-makers is codified into the law of various individual member states. In cases where the greater harmony of Europe (as given by regulations from the European Commission that are ratified by the European Parliament) conflicts with the local legal codes of individual nations, nation-states such as France are forced to modify their local legal systems in order to guarantee those freedoms in a consistent manner. The same can be said for the United States Constitution, whose goal is to guarantee individual liberties while maintaining the stability of the United States government, and so the Supreme Court may override the rulings of lower courts and the individual states. However, as social order is dissolved and fractured by digital technologies, what is the role of policy in this digital era in preserving fundamental rights? Data protection, originally laws passed in the 1970s in countries like Germany and Sweden that know all too well the potential for abuse of mass data collection from the Holocaust onwards, may point the way forward.

Ever since 2009, when European Consumer Commissioner Meglena Kuneva declared that “personal data is the new oil of the internet,” this fact has been widely recognized. One decade hence, the inevitable resource wars for the control of personal data en masse have begun, with private companies like Google and Facebook on one side profiting from data extractivism, the strategy of capturing and refining personal data to sell on the world market, and on the other side public policy that has only just recently started to understand the value of personal data and to systematize this incommensurable value into law. If we are to consider privacy and the rest of the rights systematized in D6.5 on net rights to be fundamental public goods that are worthy of being preserved against their privatization, then we must find a way to defend the data sovereignty of both nations and citizens. Personal data should belong to the citizen, including data that they wish to be private, while much data, ranging from maps to data from sensor networks in urban spaces, is ultimately social data and so belongs to public administration. The General Data Protection Regulation was the ultimate attempt by the European Commission to use purely legal means to ensure the “right to a private life” in the sphere of digital data. Is this regulation an effective answer to the erosion of data sovereignty by technological platforms such as Google and Facebook? The answer seems to be that while it is a step in the right direction, data protection laws are not enough. Data protection, and human rights in general that are intertwined with the Internet, must be defended technologically.


Yet politics has abandoned technology. Over the last few decades, politicians and policy-makers have been able to focus on shaping the legal system, while the shaping and care of the technological system has effectively been outsourced to technocratic experts from the private sector. Perhaps the most egregious example of this was the first director of White House Information Technology, David Recordon, who previously worked at Facebook, where he designed the “Like” button: Was it assumed that he would create a “Vote” button for Obama? This outsourcing of policy around technology to the private sector, with thinly-disguised private sector technologists working in the public sector, has led to a chasm between the actual state of technologies and the law. Rather than co-evolve via feedback, digital technologies were left to grow unregulated, and so developed into a number of transnational platforms such as Facebook and Google that, although they may have been incorporated in a particular nation-state, owe no allegiance to a particular juridical order. In a so far successful attempt to escape regulation, these platforms have investigated everything from establishing off-shore island havens to putting their data-centers on boats.1

These platforms provide services that the majority of the population has become dependent on for everyday communication: For the first time in human history, the public sphere of democratic debate, as now incarnated on Twitter and Facebook, has become effectively run by private entities. This kind of technical sovereignty is an effective competitor to traditional forms of government, as established by the formation of national sovereignty given by the Treaty of Westphalia. A subterranean process that was already well under way by 2013, the Snowden revelations of the subversion of these supposedly “neutral” platforms by the NSA to enable the foreign policy of the United States were proof beyond doubt that the terrain of warfare and sovereignty (per Carl Schmitt, the nomos) had moved from traditional geopolitics to the “smooth space” of the Internet, even between nation-states.2 This loss of privacy leads to the concrete loss of national sovereignty, such as the revelation that the NSA was gathering information from Angela Merkel’s mobile phone and using this information to subvert trade negotiations, while e-mail via providers such as Gmail, as well as the transit of e-mail, has been revealed to be insecure due to the NSA’s PRISM programme. Furthermore, the NSA had a budget to subvert the traditional “neutral” governance standards bodies of the Internet such as the ISO and IETF. The kinds of mass surveillance and subversion of technology by national governments are today no longer the exclusive domain of the NSA and GCHQ, but have become commonplace, with authoritarian regimes in Russia and China developing massive collections of offensive hacking technologies and demonstrating the capacity to either create their own platforms or “hack” Silicon Valley platforms to work for their ends.3

1 See Kramer, J.D., 2010. Seafaring Data Havens: Google's Patented Pirate Ship. University of Illinois Journal of Law, Technology, and Policy.
2 Schmitt, C., 2003. The Nomos of the Earth. Trans. G.L. Ulmen. New York: Telos Press.
3 Although there is as yet no definitive proof of Russian hacking of the 2016 US election or the French election, there is definitive proof of Russia using Facebook for state-sponsored propaganda in an attempt to interfere with elections, and thus a sort of “psychological hacking” may be more common than traditional cyberespionage: https://www.wired.com/story/facebook-may-have-more-russian-troll-farms-to-worry-about/


In the 21st century, it is no longer tenable to regard technical and social systems as distinct, as the ubiquity of the Internet has caused the world to be enveloped in a single data-driven techno-social system. In this conjuncture, any question of sovereignty has to be addressed simultaneously in both the technical and the social spheres. This requires a new approach, where rather than attempting to mandate laws that merely apply to the underlying technology, such as the General Data Protection Regulation, policy-makers should encourage the adoption of technology that maintains the society that they want. The mistake made in policy circles has been to assume that technologies are always hostile, and that decentralized and encryption technologies will somehow enable crime and terrorism, while in fact the reverse may be true: Putting rights-preserving technology in the hands of local administrations and citizens may be the last and best bastion against data extractivism, especially if done in concert with policy around data protection. In this sense, organizations and bodies such as the Conseil national du numérique in France have produced detailed advice on encryption technologies and why they should not be banned, but encouraged.4 These documents set out a sort of 'philosophy' that may or may not be followed in actual regulation, but is interesting to assess as it takes shape.

There can be negative examples of technology and policy, as authoritarian regimes may encourage the deployment of socio-technical systems like the “Chinese social credit system,” where a single government-run platform is used to make decisions regarding nearly all aspects of citizens' lives, from education to mobility, in a privacy-invasive manner that disregards the fundamental rights of individuals, placing the data sovereignty of the nation over that of the individual.5 On the other hand, the inability of the United States to effectively regulate Silicon Valley platforms is exactly what led to the construction of massive surveillance by private technical companies in the first place. What is needed desperately, more than ever, is a rights-preserving alternative in terms of technology, and laws that are both aware of the state of the art of technology and co-evolve with the technology, in order to build a new techno-social system that preserves sovereignty both inside traditional national borders and across these borders. Given the resurgence of interest in privacy triggered in part by the passage of the General Data Protection Regulation and the global trust still held by Europe, the time is right for Europe to make a move.

If there is no purely legal regime to enforce these rights, there has to be a new kind of material constitution where the rights are embedded in our technology. While previous constitutions have been purely verbal agreements to enforce rights by law, the ability to guarantee fundamental human rights via technology has gained momentum. Currently, the closest phenomenon to a rights review in this regard is the work of the IRTF Human Rights Protocol Considerations Research Group.6 This Research Group, basing its work explicitly on the U.N. Human Rights Charter, attempts to do technical reviews of protocols so that it can be determined whether or not they are compliant with human rights. Still, as this review happens after the protocol has itself been designed, at best minor modifications can be made to existing

4 https://www.cnil.fr/fr/lacces-des-autorites-publiques-aux-donnees-chiffrees
5 https://www.forbes.com/sites/audreymurrell/2018/07/31/pushing-the-ethical-boundaries-of-big-data-a-look-at-chinas-social-credit-scoring-system
6 https://irtf.org/hrpc


protocols, such as voicing support for Encrypted SNI (Server Name Indication, the one or more domain names one is connecting to via an IP address) in TLS 1.3 in order to prevent the “breaking” of encryption by telephony companies so as to enable mass surveillance and censorship. Given that standards bodies like the IETF and W3C have no legally-binding regulatory power at the nation-state level (unlike, say, the ISO), these standards bodies at best make recommendations with the help of self-regulation by the industry itself, and it is these self-regulating standards bodies, the IETF and W3C, that control the Internet and the Web respectively, unlike the ISO and ITU, whose standards have much less endorsement from the engineers that actually build the protocols that compose the Internet.

However, these open “multi-stakeholder” standards bodies, the IETF and W3C, are being taken over by Silicon Valley, as exemplified by the standardization of Encrypted Media Extensions for DRM against the protests of civil society groups such as the Electronic Frontier Foundation and movements against this standard by Members of the European Parliament like Julia Reda, who were concerned over how DRM violated “fair use” rights in Europe as well as fundamental human rights.7 Even Tim Berners-Lee himself, the inventor of the Web, was beaten into submission by the powerful lobbying of Silicon Valley and ended up supporting the DRM standard being pushed by Netflix and Google, so sacrificing the rights of European consumers in order to maintain the relevance of the W3C in the face of the slow monopolization of the Web.8 This is not to say that European standards are a solution, as European standards bodies have also failed to defend fundamental rights: ETSI (the European Telecommunications Standards Institute) specifically “forked” the IETF's TLS 1.3 in order to create a “backdoor” explicitly rejected by the IETF, removing the encrypted SNI supported by human rights activists and enforcing cryptography that would make it possible for a “middlebox” between a user and a website to monitor encrypted traffic.9 This is perhaps useful for quality-of-service monitoring in the enterprise, but for most it seems useful mainly for mass surveillance. With no one representing everyday citizens in these standards bodies, the governance of these technological platforms threatens to become increasingly unfettered from human rights and to override traditional forms of sovereignty in a permanent state of exception.10

7 Halpin, H., 2017, December. The Crisis of Standardizing DRM: The Case of W3C Encrypted Media Extensions. In International Conference on Security, Privacy, and Applied Cryptography Engineering (pp. 10-29). Springer, Cham. https://hal.inria.fr/hal-01673296/document
8 https://www.w3.org/blog/2017/02/on-eme-in-html5/
9 http://www.metzdowd.com/pipermail/cryptography//2018-December/034681.html
10 Agamben, G., 2005. State of Exception. University of Chicago Press.


2. Can the General Data Protection Regulation be Enforced?

Supra-national legislation that specifically targets digital platforms seems to be the last, best chance for preserving the fundamental rights of ordinary citizens and data sovereignty for the general public. The General Data Protection Regulation, the first global privacy framework that extends the rights of European citizens even into technological platforms in other national jurisdictions, is a historic and massive achievement of legislation. The General Data Protection Regulation is based on the core concept of informed consent, where in order to maintain users’ autonomy over their data, users must be informed both of what personal data is being collected by a particular web service, called the data controller, and of how it is being processed by possible third parties, called data processors. For example, when using a service such as accessing an online newspaper like Le Monde via their website lemonde.fr, the user must be informed of their rights and then “opt in” to the usage of their data and its distribution to ad networks such as Google’s advertisements, which are data processors. In theory, a citizen would then have rights and control over their data in order to maintain their digital autonomy. However, in practice what the General Data Protection Regulation has led to is endless clicking of email and web forms that claim to give a data controller consent, but the sheer number of forms and the inscrutable way in which they are phrased, as well as an often “take it or leave it” approach, has overwhelmed users. Citizens are cognitively overwhelmed at the scope and scale of the processing of their personal data, and while the goals of the General Data Protection Regulation were noble, the implementation leaves citizens forced to consent to the stripping of their fundamental rights in order to use modern Web platforms, and unable to keep track of their data.

One of the fundamental problems of the General Data Protection Regulation is that, as regulation, it is fundamentally based on the paradigm of the data controller being large centralized servers under the control of external entities in the “cloud.” Any processing of data is assumed to be done by a discrete number of data processors, themselves assumed to be other external third-party “cloud” servers, and these data processors are assumed to be actually known in advance to the data controller so that they can be communicated to the user before the personal data is transferred to the data controller. These assumptions are manifestly false. First, GDPR Article 24(1) requires controllers to implement systems that can demonstrate that the processing of personal data is performed in accordance with the GDPR. For server-side “cloud” based infrastructure, this seems manifestly impossible, as the user, or indeed any legal body, has no insight into the data processing going on in the server-side cloud, which is opaque by design. The flows between data processors are thus also unknown to the user, and often processing is done in new and unforeseen ways via shadowy networks of data processors whose complexity is difficult to grasp, much less enumerate, as it is nearly impossible to prevent the copying of data. Likewise, although concrete measures such as data minimisation and pseudonymization are discussed by Article 25, it is unclear how they can be assessed in a centralized architecture. Lastly, the General Data Protection Regulation hinges both on the promise that the server-side data controller is transparent and on the promise that these increasingly opaque dataflows can be meaningfully consented to by the user. It appears that


attempting to put these constraints on current centralized cloud-based services is quixotic, as data protection relies purely on the good faith of the centralized platform, which in many cases is built on top of a business model that depends on data extractivism. For this very reason, centralized platforms like Facebook and Google would rather simply take fines from the European Commission or European countries than actually comply with the substance of data protection, as complying with the substance would require an entire re-architecture of their technology.

A decentralized fix via using blockchains as the new technological architecture seems simple, but it is not. One possibility is to re-architect the fundamental technologies to make a new generation of platforms that is in line with user rights by virtue of using P2P technologies: This alternative has recently been associated with the move away from centralized platforms to decentralized alternatives based on blockchain technologies. However, if it seems too good to be true, it probably is: Blockchain technology is not a silver bullet and is, in its current state, fundamentally incompatible with data protection regulations. In particular, a report published by the European Commission’s Blockchain Observatory has remarked that the EU’s courts and data protection authorities have not conclusively settled these issues, but it highlights three main tensions between distributed ledger technologies and the EU’s new data protection rules, namely: the difficulties in identifying the obligations of data controllers and processors; disagreements on when personal data should be anonymised; and the difficulty in exercising new data subject rights, like the right to be forgotten and the possibility to erase certain data, on blockchain technologies, given that personal data shared on a blockchain prevents censorship (including changing or erasing the data by legal request of a data subject), is cryptographically intertwined into the entire blockchain by design, and is public by default.11 As a solution, the authors propose four rule-of-thumb principles: Technologists designing systems for public usage should “start with the big picture” and decide whether blockchain is needed and really meets their data needs; avoid storing personal data on a blockchain and use data obfuscation and encryption techniques instead; if blockchain can’t be avoided, favour private, permissioned blockchain networks; and be transparent with users.

3. Enforcing fundamental rights through policy and technology: Three policy recommendations12

The question is what kind of technological architecture and what kinds of policy would be appropriate, if any, to enforce regulations like the General Data Protection Regulation and fundamental rights

11 https://www.eublockchainforum.eu/sites/default/files/reports/20181016_report_gdpr.pdf
12 While the earlier sections are authored by Harry Halpin, parts of this section are derived from joint work between Francesca Musiani and Stéphane Bortzmeyer (AFNIC): https://www.afnic.fr/medias/documents/JCSA/2018/tutoriel-JCSA18.pdf


more broadly? In order to get a handle on this question, there needs to be some clarity on what composes rights in general. The issue of the interlinking between human rights and Internet protocols is taking hold in a number of arenas, both political and technical (see the already-mentioned IRTF and its Human Rights Protocol Considerations research group). The idea is that the Internet is not just an object of consumption, of which the customer would only want it to be fast, cheap, and reliable, as with a car or the electrical grid. We do business and politics, we talk, we work, we get distracted, we date: The Internet is not a tool that we use, it is a space where our activities unfold, the nomos of the 21st century, where the fundamental rights provided by the platforms and protocols of the Internet are just as important for people as the rights guaranteed by governments. It is this perspective that we call net rights in NEXTLEAP, and we explore the various formulations of this proposition in D6.5.

Human rights are claimed to be universal, indivisible, and inalienable rights, formalized in texts such as the 1948 Universal Declaration of Human Rights (UDHR). Of course, these are not sacred texts, but the result of a social and legal process. Like any human work, they can be improved and may change over time; however, those who criticize them do not generally seek to improve them, but to weaken or even destroy them. It must be noted that human rights are not absolute, as they may be in conflict with one another. For example, the right to freedom of expression may conflict with the right not to be insulted or harassed. Likewise, freedom of expression may conflict with the right to privacy, if we want to prevent the publication of personal data. Historically, it has been the task of the legal system to determine the balance of rights. The question is whether the technical space of the Internet, including its rules, its limits, and the capabilities it affords, has an influence on human rights or whether it transforms them, and what concrete policy measures are needed.

The co-inventor of the Internet and Google evangelist Vint Cerf put forward the proposition in 2012 that “internet access is not a human right,” arguing that “technology is an enabler of rights, not a right itself,” as a human right “must be among the things we as humans need in order to lead healthy, meaningful lives, like freedom from torture or freedom of conscience,” and so “it is a mistake to place any particular technology in this exalted category, since over time we will end up valuing the wrong things.” Nonetheless, countries in Europe such as Finland have made internet access a basic right,13 and this sentiment has been echoed by the Constitutional Council in France.14 Indeed, Berners-Lee and Halpin argued that it is access not to the Internet per se but to the capabilities that the Internet engenders that should be a right: “The real argument is not whether access via certain protocols to a technical infrastructure should be accorded the status of a human right, but whether the social capabilities that the Internet engenders in general should be considered a new kind of right,” as people “have the right to build their own innovations when they imagine something new...that’s a far cry from just receiving and imparting information as prescribed by Article 19.” This is precisely the terrain of net rights, including technological proposals around network neutrality and encryption, as categorized in D6.5.

13 http://edition.cnn.com/2009/TECH/10/15/finland.internet.rights/index.html
14 https://www.theguardian.com/technology/2013/jul/09/france-hadopi-law-anti-piracy


These kinds of net rights may not be implausible, and Data Protection itself is a kind of net right based on privacy that extends the notion of privacy to the digital age, and so fundamentally reshapes it, while also putting it into tension with new, networked forms of the right to free expression that have been similarly transformed. This transformation is non-trivial: What was the use of freedom of expression when information circulated only through media that the ordinary person did not have access to? On the contrary, the Internet allows expression by all and thus makes freedom of expression concrete and effective, where it was mostly theoretical before the advent of the Internet, and this freedom of expression relies on network neutrality, down to the technical details of having a public IP address. Theoretically, it can even be considered that the Internet is an extension of our own mind, as given by Andy Clark’s extended mind thesis.15 The mind has historically been considered to be a private, individual sphere of freedom, where one of the fundamental values that seemed inalienable even by the most repressive of governments was the ability to think freely. While a person could be enslaved via control of their body, their mind was still free: Fundamental rights existed to prevent the coercion into slavery of the body and to encourage the free use of the mind. Today, the external scaffolding (or tertiary retentions, as put by Bernard Stiegler) that supports our cognitive processes is digital and composed of platforms that seek to control behavior, creating a new slavery of the mind that threatens the pre-Internet fundamentals of human rights. In order to protect the as yet unbounded number of net rights produced by the new capabilities of the Internet, a number of clear policy recommendations make sense in order to defend the future. These policy recommendations that support data sovereignty are:

• Use End-to-End Encryption

• Support Open Source/Open Standards

• Decentralize Data

Policy Recommendation n°1: No Backdoors in Encryption

Encryption can be a controversial issue, as many believe that encryption allows criminals to conceal the content of their communications from the judicial system and police. However, technologists have long held that without encryption the right to privacy remains purely theoretical given the ease of spying on digital communications. To pose debates over encryption policy as pitting security against civil liberties, with new technological freedoms framed as security risks, is wrong-headed. First, cryptography is not only used to conceal information, i.e. to make data confidential, but also to provide checks on the integrity and authentication of data, even data that is public. For example, in order to check that data has not been modified either by accident or with ill intent, integrity checks that use cryptographic hash functions, functions that shrink data to a small code that can be checked independently, are normally used. This common technique is used both in downloading files from the internet and in the “blocks” of blockchains. Cryptography can also be used with private secrets, called private keys, that work with public information, given as public keys, to both authenticate as well as encrypt data. With public keys and hash

15 http://www.consc.net/papers/extended.html


functions, we can create digital signatures to make sure that we know the identity of the entity from which some data originates. Again, this is useful not just in hiding data, but in everything from e-signature schemes that can help reduce bureaucracy to helping prevent the spread of false information. If a government has a policy to reduce the scope of encryption so that its police or intelligence forces can “read” digital messages, there is a clear danger that the government would accidentally prevent other incredibly important uses of encryption, damaging the ability of society to place any trust in data whatsoever. This would have immense negative economic consequences on everything from financial services to creating trust in the public sector itself.
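To make the distinction concrete, the following minimal sketch (in Python, using the standard hashlib module and the third-party cryptography package, both illustrative choices rather than tools prescribed by this deliverable) shows how a hash function provides an integrity check and how a key pair provides a digital signature:

```python
# Minimal sketch: integrity via a cryptographic hash, authenticity via a digital
# signature. Library choices (hashlib, the 'cryptography' package) are
# illustrative assumptions, not tools mandated by this deliverable.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

document = b"Policy Recommendations for Decentralized Systems"

# Integrity: anyone can recompute the hash and compare it with a published value.
digest = hashlib.sha256(document).hexdigest()
print("SHA-256:", digest)

# Authenticity: the private key signs; anyone holding the public key can verify.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(document)

public_key.verify(signature, document)  # succeeds: the data is untampered
try:
    public_key.verify(signature, document + b"!")  # fails: the data was modified
except InvalidSignature:
    print("Signature check failed: the data was modified.")
```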

Our policy recommendation is also that there should not be a push to make the encryption of data illegal for any reason. Cryptography needs to be legal, as digital technologies considerably increase the possibilities of surveillance in a fundamentally asymmetric manner that is incompatible with democracy. Vint Cerf has tried to justify the surveillance powers of his employer, Google, by comparing the Internet to the small village of yesteryear, where everyone knew everything about everyone and where there was no real privacy. But this comparison implies two inaccuracies: The first is asymmetry. In a village, everyone was watching everyone, such that the country policeman knew what the villagers were doing, but it was reciprocal. Today, Google and other Silicon Valley platforms gather a lot of information about us, but these platforms are opaque to citizens. The same issue holds true of various state intelligence agencies. The second mistake in the comparison is one of scale. While Cerf cites the example of the mailman who knew everyone, and so knew that a particular person had received a postcard from a faraway country, he conveniently forgets that the mailman in question knew only this village, and would have no idea what was happening in the next city, and so this postman could not find at a glance the complete list, with exact dates, of all those who had received a postcard from such a country. Yet this data processing now happens en masse. To present encryption as a mechanism that will prevent a state and a police force from conducting investigations is intellectually dishonest. On the contrary, it must be remembered that the advent of digital technology has considerably reduced the privacy that was previously available to citizens. Cryptography in fact partially restores the ideal kinds of social norms around communication that were expected prior to the advent of digital communication, so that messages are clearly given to be from a particular sender and can only be read by a particular recipient, with interference with the message being detectable. Encryption should be end-to-end, where the endpoints are the users, without the ability of any entity in the middle to decrypt the message.
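As a concrete illustration of end-to-end encryption in the sense just described, here is a minimal sketch using the PyNaCl library (an illustrative choice, not a tool endorsed by this deliverable): only the intended recipient, holding the matching private key, can decrypt what the sender encrypts, and no intermediary server along the path can read or silently alter the message.

```python
# Minimal sketch of end-to-end encryption between two endpoints, using PyNaCl.
# The library choice is illustrative, not prescribed by this deliverable.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; only public keys are exchanged.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob's public key; no server in the middle can read it.
alice_box = Box(alice_private, bob_private.public_key)
ciphertext = alice_box.encrypt(b"A message only Bob can read")

# Bob decrypts with his private key and Alice's public key; any tampering
# with the ciphertext by an intermediary makes decryption fail.
bob_box = Box(bob_private, alice_private.public_key)
print(bob_box.decrypt(ciphertext).decode())
```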

One policy option that would allow the use of encryption but claim to also allow mass surveillance, for example in the investigation of a case of terrorism, is the idea of a “backdoor” to allow decryption of encrypted messages. Technically, the proposal to allow selective weakening of encryption is absurd. The mathematical algorithms that form the core of encryption cannot work only in some cases but not in others. Either they make it possible for anyone that has the key to access the data, or they have a flaw, allowing access to anyone that knows the flaw. Both propositions have issues. In proposals for “backdoors,” one key to decrypt would be the key of the legitimate receiver, but there would be another key in the hands of the police, or the key would control a “middlebox” that decrypts the message and re-encrypts it to the


intended recipient. This key is considered to be under some sort of key escrow, i.e. stored by a third party such as the government, and perhaps only available in special circumstances. However, the problem is that such a solution obviously does not work with open source and free software, where review of the code would reveal the backdoor. It would be more realistic in a context where the user does not have control over their software, and where they are given software with the backdoor. Obviously, any real threat to the state will not use the latter tools, but will search for software without backdoors. Yet this method can work on the honest citizen who, unlike the terrorist, trusts proprietary software and communicates through Silicon Valley platforms. Therefore, the purpose of these anti-encryption campaigns for “backdoors” is revealed: it is not so much to find a miracle solution allowing the police to decrypt at will in order to find terrorists, but to put pressure on Silicon Valley to include a backdoor in their communication software so that there can be mass surveillance of ordinary citizens by the police. Given its history, Europe should be wary of mass surveillance by states, not only by Silicon Valley corporations. Encryption allows fundamental freedoms to be exerted; as such, it should be recognized as a tool for countering the discretion and arbitrariness of state governments and dominant socio-economic actors such as Silicon Valley.

Indeed, the problem with any of these backdoors, as explained in the Keys Under Doormats report,16 is that the backdoor can be used not only by the “legitimate” entity who is supposed to use it, but by any adversary with knowledge of it. This holds not just for explicit keys put in escrow by governments, but also for weaknesses that are deliberately introduced into encryption. Any of these weaknesses can be a “backdoor.” In an infamous case of subversion of a standards body, the Dual_EC pseudorandom number generator (used to generate keys), ratified by the US standards agency NIST, had a backdoor placed in it by the NSA.17 This Dual_EC backdoor was placed in Juniper routers, and then another, unknown actor, likely another nation-state such as China or Russia, used this backdoor to compromise these routers and install its own backdoor to decrypt network traffic. Backdoors do not have to be intentional to be dangerous. A zero-day is an attack that is not yet publicly known, and there is suspicion that Europol is collecting zero-days.18 While there is no explicit backdoor in the encryption and, as per the ePrivacy Directive, there is still “end-to-end” encryption, the fact of the matter is that the encryption has a known weakness that functions as a backdoor. By keeping this information secret, the police hope to be able to use it to selectively decrypt data when needed. Yet this forces an arms race of zero-day collection between competing public entities, such as intelligence and police forces, as well as private entities such as private hacking firms. These zero-day collections never end well, with the exploits often being leaked on the Internet, as happened when WikiLeaks published the CIA's zero-day collection. Therefore, for all nation-states and for Europe, given that the security of the Internet is a common good, it makes sense to reveal known attacks as soon as possible and to fund the securing of Internet infrastructure. The remaining question is: What should the police or legal system do if they cannot use mass surveillance or targeted attacks via backdoors? In general,

16 https://dspace.mit.edu/bitstream/handle/1721.1/97690/MIT-CSAIL-TR-2015-026.pdf
17 https://eprint.iacr.org/2015/767.pdf
18 See the blog post by Bart Preneel at https://www.esat.kuleuven.be/cosic/magic-toolbox-new-front-crypto-war/


just as cryptography restores the right to communicate in a private manner similar to the pre-digital era, the police and the legal system should do what they have always done when needing to seize the content of communication: Simply get a court order and then seize the suspect's actual device, which holds the key material needed to decrypt communications.

In terms of preventing security problems like backdoors or simple mistakes with cryptography, modern cryptography should be used, ideally with formally verified protocols. Cryptography is a detail-oriented mathematical discipline built on top of security assumptions that should not have to be understood in order to be deployed: No policy-maker should have to know the difference between the Decisional Diffie-Hellman and Computational Diffie-Hellman assumptions. Privacy is even harder to get right, as there is not yet an exact science of the irreducibly social, and so contextual, notion of privacy. Privacy requires carefully defining the threat model and running simulations so that various empirical ways to measure unlinkability can be determined. Security properties should therefore not be outsourced to private companies whose business models are based on data extractivism and whose closed-source software, or opaque Web services, is difficult, if not impossible, to judge in terms of preserving security and privacy. How can we tell if there is a backdoor in software? Open source software is unlikely to have a backdoor, but in order to be sure one needs not just to inspect, but to formally verify the algorithms and protocols using cryptography. Today, academic work on formal verification shows that secure protocols with public proofs that they contain no backdoors can be created. With advanced toolsets such as those created by Inria, like F* as used for TLS, fast-running secure code can even be generated that is guaranteed to carry those properties.19 As privacy is harder to verify and does not fit within these formal verification frameworks as well, special care is necessary in order to determine exactly what kinds of privacy are being discussed and whether a given system can support them. This is often done via simulations that show that privacy is maintained.

Although the NSA has shown that national standards bodies like NIST can be corrupted, and the eTLS scandal shows that even ETSI has issues, the use of public cryptographic algorithms that have been verified to be best of breed via open competition is still the best known way to generate cryptographic standards (including for new realms such as post-quantum cryptography). It is important to support these competitions, such as the CAESAR competition in Europe, and also to promote best practices. This was until recently done by ENISA in Europe,20 but now this function is being devolved down to the nation-states, who are unlikely to have as much cryptographic knowledge and may be tempted to put in backdoors. Therefore, Europe should support the IETF and its groups such as the Crypto Forum Research Group in order to judge whether certain key sizes and cryptographic algorithms are safe. Only via cryptography and cryptographic protocols from open standards bodies, where neutral academics verify their privacy and security properties, can such trust be established. As an added bonus, in terms of Data Protection, using the latest standards is “state of the art” and should be an approved way to provide meaningful privacy and security protection for personal data.

19 https://www.fstar-lang.org/
20 The last “Key Sizes and Parameters” recommendation was put forward by ENISA in 2014. https://www.enisa.europa.eu/publications/algorithms-key-size-and-parameters-report-2014


Policy Recommendation n°2: Use Open Standards and Free Software

Open standards are produced by standards bodies that allow anyone to participate in them, and thus maximize the collective intelligence of the “wisdom of the crowds” and maximize transparency, which is particularly useful in standards around security and cryptography. Standards are different from code insofar as they describe the specifications for code, and this code may then be independently implemented in conformance to the specification with varying licensing, ranging from open source to proprietary options. These bodies produce technical standards like TCP/IP, HTML, and TLS that form the core of the Internet. While national standards bodies and closed standards bodies are easy to manipulate, as shown by Microsoft's abuse in forcing some of its formats to become an “open” standard via the ISO (a standards body composed entirely of countries), open standards bodies have more rigorous processes. Note that some companies-only standards bodies, like ECMA, are open only to companies. Open standards bodies are open to public bodies, private companies, universities, and individuals. Open standards bodies include the IETF for the Internet and the W3C for the Web. One important aspect of standards bodies is patent disclosures. Open standards, particularly those with a royalty-free licensing policy like those produced by the W3C, are also vital as they prohibit patent licensing fees. Other standards bodies like the IETF force patent disclosures, and try to prevent patents from entering into open standards. So while far from perfect, standards bodies should be supported.

In terms of concrete code, open source should be supported as a matter of policy, where open source describes a license that allows one to read and possibly modify the code that one uses. For example, procurement should require the use of open source and open standards. We would recommend policy-makers go beyond open source and look at free software. Free software inscribes four fundamental freedoms in the code itself: 1) the freedom to run the program as you wish, for any purpose; 2) the freedom to study how the program works, and change it so it does your computing as you wish; 3) the freedom to redistribute copies so you can help your neighbor; and 4) the freedom to distribute copies of your modified versions to others. These freedoms mean that people can control the software for their own purposes by virtue of having access to the source code; as put by the Free Software Foundation, “free software is a matter of liberty, not price.” Free software is a political programme that goes beyond just “open source” and “open access” to code, although it provides open access to code as that is necessary for freedom.

Free software was invented as a hack in American copyright law by the hacker Richard Stallman at MIT, who saw that the culture of sharing software developed by hackers was being enclosed by commercial ventures such as Microsoft. In order to create a legally-binding resistance to these new cognitive enclosures, Stallman created the GPL (General Public License). As the copyright of software is given by default to the developer, the developer can license their software to an unlimited number of people, thus preserving the four fundamental freedoms for posterity. The GPL license requires that all derivative works also use the GPL, so copyright's traditionally required restrictions are transformed into a copyleft that instead requires that the four freedoms be granted. Other “open source” licenses such as the MIT license or most


Creative Commons licenses (which directly assign copyright to the public domain) do not prevent derivative works from being enclosed in a proprietary manner. With the GPL license, not only can a particular piece of software be guaranteed to preserve human capabilities, but the software commons can also grow virally. The GPL has been a remarkably successful license and software methodology, as GNU/Linux runs most of the Internet architecture today, and even Google's Android is based on a free software kernel, although Google outsources vital components to its proprietary cloud.

The “virality” of the GPL can pose problems when there is a need to integrate with non-free software, as is often the case in public administration. However, this does not mean that the GPL should be abandoned in favour of a weaker open source license like MIT or BSD. If publicly funded software uses an ordinary open source license as opposed to a free software license, nothing prevents a company such as Google from copying the software and then making their own proprietary version of it for sale, or putting the software behind a centralized server in the cloud; under some open source licenses, the work of the original authors does not even have to be acknowledged. This kind of licensing is dangerous insofar as it allows the enclosure of open source software by private companies, and it is precisely due to this kind of open licensing that many companies fund programmers to work on open source projects. In fact, the best policy for software in the public interest is to maintain a “dual license” policy, with server-side software being published using the Affero GPL (AGPL) and client software using the GPL v3.0. The AGPL prevents free software from being made available as a service over a network, such as a web service, without the code being released. If someone wants to use this software in integrating against non-free software, then the creators of the software retain the right to grant a non-exclusive and non-transferable “dual license” of the software. This license can be granted for free to public administrations, while for-profit companies can be charged for licensing. This is the business model originally used by Signal, and it maintains the best of both worlds: Allowing integration with non-free software on a case-by-case basis while maintaining the code itself as a free commons.

Free software solves previously insurmountable problems for those seeking technological sovereignty on both an individual and a collective scale. First, it allows programmers and society to form new kinds of social solidarity via the collective programming of code, as opposed to proprietary software development that is kept in the silo of a single corporation. Second, free software users are enabled by design to become programmers themselves, as they have the capability to learn how to program and make modifications to the code. Third, open source is the only guarantee of security, as it allows experts to audit the code. There are no licensing fees and security updates are free, preventing many cyber-attacks. Lastly, although code may be kept “in the cloud” (i.e., hosted on other people's computers), versions of the GPL such as the AGPL can guarantee that the source code of software that runs on servers is available as part of the commons. The GPL is a prerequisite for decentralizing the Internet and challenging the centralizing platforms of Silicon Valley.

The deployment of open source and open standards should be considered a requirement in order to maintain sovereignty by public administration. Some cities such as Barcelona, and even


entire countries like Bolivia, have now legally mandated the use of open-source software for all public services.21 This is a remarkable legislative success, and should be copied. Even if open source cannot be mandated due to a lack of availability of a particular piece of software, at least open standards should be mandated in terms of security and privacy. Furthermore, by mandating open standards, if a particular open source code-base being used by public administration stops being maintained and so falls into disrepair, the administration can easily move to another open-source codebase that implements the open standard. There are also pragmatic reasons why organizations of any type should move to open-source and open-standard software: Although open source code may require costs for maintenance, these are vastly lower than the cost of purchase and, more importantly, the yearly maintenance fees charged by organizations like Microsoft and Oracle that essentially trap an organization in proprietary software. Using open standards and open source may require what appears to be a difficult transition, but the long-term cost-benefit analysis is in favor of the public, rather than the company extracting a profit. Lastly, open source, in terms of algorithms and the processing of personal data, is an excellent way to mandate the transparency required by the Data Protection Regulation.

Policy Recommendation n°3: Deploy Decentralization

Decentralized architectures are a good solution in particular situations; but just as centralized architectures can have useful and rights-preserving aspects in some cases and be highly problematic in others, decentralized ones are not currently the ideal solution in all cases. It is important to underline that, when it comes to discussing the degree of (de-)centralization, protocols are not intrinsically good or bad. The idea of the case studies we have examined throughout past deliverables is to illustrate, in concrete cases, that technical decisions have political consequences.

Many Internet protocols have a client/server architecture. This means that the machines that communicate are not equivalent. One is a server, permanently on and waiting for connections; the other is a client, which connects when it has something to ask. This is a logical mode of operation when the two communicating parties are "naturally" distinct, which is the case of the Web: when you visit a website, you are a reader, and the entity that manages the website produces content that many people want to read. Yet not all uses of the Internet fit into this model. Sometimes one wants to send a message to an acquaintance who will answer. Your communication, in this case, is not one-way, reader-to-writer, but peer-to-peer. In this case, in theory, no server should be needed: the machines of the two humans should be able to communicate directly, something the Internet allows via the peer-to-peer architecture.

So why go through an intermediary when it is not strictly necessary? Usually it is because the intermediary is indeed useful for something. An example is the storage of messages in the case where the correspondent is absent and their machine is off. This is why the Simple Mail Transfer Protocol (SMTP), which is the basis of the e-mail service, does not provide for direct sending from Alice's machine to Bob's. The software that Alice uses (Thunderbird, Outlook, etc.) sends the message to an SMTP server (managed by her ISP or IT department), which then transmits it to the SMTP server that Bob uses, from which Bob will retrieve it via yet another protocol, probably the Internet Message Access Protocol (IMAP). What are the consequences of this architecture? Alice and Bob now depend on third parties, the managers of the SMTP servers. These managers can stop the service, limit it, block some messages (the fight against spam always causes collateral damage), and read what passes through their servers if Alice and Bob do not use encrypted e-mail protocols like PGP. That does not mean they actually do it, but the possibility exists, and it is technically very simple to archive all the messages being transmitted.

21 https://joinup.ec.europa.eu/news/testimonial-barcelona
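As an illustration only, the following minimal Python sketch traces the store-and-forward path described above using the standard smtplib and imaplib modules; the hostnames, accounts, and credentials are placeholders and not part of any NEXTLEAP protocol. Every hop in this sketch is a server that can, in principle, log or inspect the message unless it is encrypted end-to-end.

```python
import smtplib
import imaplib
from email.message import EmailMessage

# Alice composes a message; her client never contacts Bob's machine directly.
msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Hello"
msg.set_content("This message transits at least two SMTP servers before Bob reads it.")

# Step 1: Alice submits the message to her provider's SMTP server.
with smtplib.SMTP("smtp.example.org", 587) as submission:
    submission.starttls()                      # protects the hop, not the stored message
    submission.login("alice", "app-password")
    submission.send_message(msg)
# Her provider then relays the message to Bob's provider's SMTP server.

# Step 2: later, Bob's client fetches the stored copy over IMAP.
with imaplib.IMAP4_SSL("imap.example.net") as mailbox:
    mailbox.login("bob", "app-password")
    mailbox.select("INBOX")
    _, ids = mailbox.search(None, "UNSEEN")
    for msg_id in ids[0].split():
        _, data = mailbox.fetch(msg_id, "(RFC822)")
        print(data[0][1][:80])                 # raw bytes as stored on the server
```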

Since passing through an intermediate server has consequences, one must ask which server to use. A personal machine that we install and manage ourselves? The personal machine of a friend who knows how to take care of everything? A local collective that manages servers? A public administration? Or a Silicon Valley platform like Gmail, which now holds most of the e-mail in the world? The choice is not obvious. Of course, from the point of view of privacy and freedom of expression, Silicon Valley may be the worst. These problems are also not reducible to individual problems: even if one does not use Gmail, others who do leak their contact with your non-Gmail e-mail to Google. Still, any third party that controls a server can abuse its power. For example, machines managed by a particular individual, a local company, or even a public administration may not be safer than Silicon Valley, as they will simply have fewer resources to devote to security and privacy. There is no simple answer, but generally we must be wary of servers managed individually, and prefer cases where the administration of these machines is genuinely collective and has long-term sustainability. A machine run by a nice but overworked and not necessarily very competent amateur can present high risks, not because the amateur is not trustworthy, but because the system can be attacked successfully. However, it should be pointed out that today professional servers are not necessarily safer: a number of recent hacks of very big companies show that they present perhaps ultimately larger targets. Servers run by a public administration could be one option, but the problem is then that even a well-meaning administration may be replaced in the future by one intent on violating rights and liberties. Federated systems allow servers to be easily interchanged and so have some benefits in terms of sustainability. For example, Mastodon, the decentralized microblogging service (à la Twitter), is made up of hundreds of independently managed servers, some of which are administered by an individual (and their future is uncertain if this individual abandons them), others by associations, others by companies.

So what are the implications of decentralized and peer-to-peer architectures for rights? First, it must be noted that this term does not designate a particular protocol that could be analyzed in detail, but a family of very diverse protocols. The most well-known peer-to-peer application is obviously the exchange of media files, but peer-to-peer is a very general architecture that can be used for many things (Bitcoin, for example). And despite an accompanying rhetoric of openness and freedom (well described in D2.5 and D3.6), decentralized architectures also have their problems from the standpoint of net rights. In the era of Google and Facebook as centralized, totalizing platforms that control all user interactions, peer-to-peer is often presented as the ideal solution to all problems, including censorship. But the situation is more complicated than that.


First, peer-to-peer networks with no central certification authority for content are vulnerable to various forms of attack, ranging from “fake data” to “fake users.” It should be remembered that at one time rightsholders circulated fake MP3s on peer-to-peer networks, with promising names and disappointing content, which also allowed the users downloading the files to be identified. An attacker can also relatively easily corrupt the data in other ways, or at the very least the routing that leads to it. Furthermore, in terms of net neutrality, because peer-to-peer protocols account for a good deal of Internet traffic and are often identifiable on the network, an ISP may be tempted to limit their traffic. Many peer-to-peer protocols also do not hide the IP addresses of users: in BitTorrent, if you find a peer who has the file you are interested in and you contact them, this peer will learn your IP address. This can be used as the basis for threatening letters or legal proceedings (as with the HADOPI law in France). There are peer-to-peer networks that deploy protection against this leak of personal information, such as Freenet, but they remain sparsely used.

Another danger specific to peer-to-peer networks is “fake users”, also called Sybil attacks: in the absence of verification that an identity is linked to something expensive or difficult to obtain, nothing prevents an attacker from creating millions of identities and thus subverting the system. It is to fight against this type of attack that Bitcoin uses “proof of work”, and that organizations like the CAcert certification authority, or informal groups like PGP users, use certifications made during physical meetings, with verification of state identity, as “official” ways of joining a group. There is currently no general solution to Sybil attacks that is both ecologically sustainable (which proof of work is not) and fully peer-to-peer (which typical enrollment systems are not, as a privileged actor checks participants on entry). Solutions based on “social networks” (like that of PGP, for example) are bad for privacy, since they expose the social graph of the participants, the list of their correspondents. ClaimChain, a protocol developed by NEXTLEAP, allows gossip in social networks to help verify identity while remaining privacy-preserving.
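To make the cost of “fake users” concrete, the following minimal Python sketch (an illustration only, not part of any NEXTLEAP protocol) shows the proof-of-work idea used by Bitcoin: each new identity must burn computation before it is accepted, which deters the mass creation of Sybil identities but is also exactly why the approach is not ecologically sustainable.

```python
import hashlib
import itertools

def mint_identity(name: str, difficulty_bits: int = 18) -> int:
    """Find a nonce such that SHA-256(name:nonce) falls below a target.
    Each new identity now costs roughly 2**difficulty_bits hash computations."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{name}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_identity(name: str, nonce: int, difficulty_bits: int = 18) -> bool:
    """Verification is cheap: a single hash suffices to check the work."""
    digest = hashlib.sha256(f"{name}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = mint_identity("pseudonym-1")
assert verify_identity("pseudonym-1", nonce)
# Creating millions of Sybil identities now means millions of such searches,
# which is both the deterrent and the ecological cost.
```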

Decentralized networks that use end-to-end encryption and privacy-enhancing technologies, both federated and P2P, are far easier to make compliant with the GDPR and are superior to simplistic applications of blockchain technology. In the case of P2P networks, the user is the data controller and freely chooses their own data processors and whom to share data with. For federated networks, the data controller will be the entity that runs the server. For a public administration that runs open-source software, there can be transparency about compliance with the General Data Protection Regulation even if the data is not end-to-end encrypted. If end-to-end encryption is used, then in decentralized systems the personal data is hidden from the data controller in the federated setting and from other peers in a P2P setting, and so uses of the data that are not explicitly authorized by the user are rendered technically impossible. Privacy-enhancing technologies can then be used to limit the ability of third parties to determine even the metadata of communication, rendering the data anonymized by default and so not able to be processed without consent. Privacy-enhanced analytics such as private information retrieval allow some data processing even over anonymized data. Although more work must be done on fair and informed consent, decentralized and encrypted protocols provide a robust technological solution for data protection.
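A minimal sketch of why end-to-end encryption keeps personal data out of the hands of the intermediary: here the PyNaCl library stands in for a full secure-messaging protocol, and the keys and message are purely illustrative. The server (the data controller in a federated setting) only ever relays ciphertext it cannot read.

```python
# pip install pynacl  (library choice is illustrative, not mandated by NEXTLEAP)
from nacl.public import PrivateKey, SealedBox

# Bob generates a keypair on his own device; only the public half is published
# (for instance, via a ClaimChain entry).
bob_secret = PrivateKey.generate()
bob_public = bob_secret.public_key

# Alice encrypts on her device before anything reaches the server.
ciphertext = SealedBox(bob_public).encrypt(b"personal data the intermediary never sees")

# The federated server stores and forwards only `ciphertext`; it cannot decrypt it,
# so unauthorized processing of the content is technically impossible.
plaintext = SealedBox(bob_secret).decrypt(ciphertext)
assert plaintext == b"personal data the intermediary never sees"
```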


4. Conclusions: Protocols from NEXTLEAP

The NEXTLEAP project has helped create both free software and open standards with multiple implementations that should allow any individual or organization, from the ordinary person on the street to an entire nation-state, to maintain the privacy and security of their communications and so respect their fundamental net rights. By decentralizing the control of data and communication to users with the appropriate protocols, we can both honor the fundamental net rights embodied in the General Data Protection Regulation and reinforce user-centric cybersecurity that strengthens Europe’s data sovereignty. In concert with legal policy recommendations built on fundamental rights, we can present an alternative techno-social infrastructure that can defend Europe against data extractivism.

The first step in communication is for a user or organization to control their own identity in a secure and privacy-enhanced manner. In order to enable privacy, NEXTLEAP initially developed a privacy-enhanced federated identity system, similar to OAuth, called UnlimitID, that provided the same properties as popular services such as Facebook Connect and Sign-In with Google without the identity provider (the provider holding user information, such as a list of contacts) knowing the service a person was using.22 However, this still centralizes user data in an identity provider, and for true autonomy, the valuable data should be stored locally but be capable of being verified by anyone in a privacy-enhanced manner: in other words, a decentralized and privacy-enhanced blockchain. For this use-case in decentralized and P2P frameworks, we presented the privacy-enhanced blockchain identity solution ClaimChain created by NEXTLEAP in D2.2.
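The core idea behind such local yet verifiable storage can be sketched as a hash chain of claims. The toy Python code below only illustrates the append-only structure; it is not the ClaimChain protocol itself, which (as described in D2.2) additionally uses digital signatures, cross-references between chains, and capability-based access control to keep claims private.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash over everything except the hash field itself."""
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def new_block(claims: dict, prev_hash: str) -> dict:
    """Each block commits to its predecessor, so earlier claims
    (e.g. 'my current public key is X') cannot be silently rewritten."""
    block = {"claims": claims, "prev": prev_hash, "time": int(time.time())}
    block["hash"] = block_hash(block)
    return block

# Placeholder data: Alice publishes a key, then later rotates it.
genesis = new_block({"addr": "alice@example.org", "key": "BASE64-PUBKEY"}, "0" * 64)
update = new_block({"key": "BASE64-ROTATED-PUBKEY"}, genesis["hash"])
chain = [genesis, update]

# Anyone given the chain can replay it and check both the hashes and the links.
for block in chain:
    assert block["hash"] == block_hash(block)
for earlier, later in zip(chain, chain[1:]):
    assert later["prev"] == earlier["hash"]
```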

Once the user's identity is secured, the transport of messages between users needs to be secured. As secure communication requires both arbitrary user-centric data and cryptographic keys, the ClaimChain protocol for identity also handles key management. In order to secure messages in transit, the Messaging Layer Security (MLS) protocol was created, which gives the same guarantees as secure messaging protocols like the Signal Protocol deployed by centralized messaging services such as WhatsApp and Skype – and is probably more secure than Telegram – but supports even larger groups and decentralization. As the majority of messages in organizations are still sent over e-mail, NEXTLEAP helped create Autocrypt to make encrypted e-mail both more secure and easy to use. Both MLS and Autocrypt can be used with ClaimChains and in federated environments (which we also count as “decentralized”) where the user or organization controls their own server, as is often the case in public administrations. MLS and ClaimChains can also be used in purely peer-to-peer environments. One objection to secure communication is that local administrations will lose access to valuable data-driven insights. Although it is still an emerging field, privacy-enhancing data-mining and consensus algorithms, as given in D2.4, will allow these sorts of insights to be obtained while respecting the fundamental rights of users.
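To show how lightweight opportunistic key exchange is in practice, the following Python sketch attaches an Autocrypt-style header to an outgoing e-mail. The key bytes and addresses are placeholders, and a real client would take the key material from its OpenPGP keyring; the header layout follows the Autocrypt Level 1 conventions as a rough sketch rather than a normative example.

```python
import base64
from email.message import EmailMessage

def autocrypt_header(addr: str, openpgp_key: bytes, prefer_encrypt: str = "mutual") -> str:
    """Build an Autocrypt-style header value from a sender address and key bytes."""
    keydata = base64.b64encode(openpgp_key).decode()
    return f"addr={addr}; prefer-encrypt={prefer_encrypt}; keydata={keydata}"

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Opportunistic key exchange"
msg["Autocrypt"] = autocrypt_header("alice@example.org", b"...raw OpenPGP public key...")
msg.set_content("After a few exchanged mails, both clients hold each other's keys "
                "and can offer to encrypt without the users managing keys by hand.")
print(msg["Autocrypt"][:60])
```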

22 Isaakidis, M., Halpin, H. and Danezis, G., 2016, October. UnlimitID: Privacy-preserving federated identity management using algebraic MACs. In Proceedings of the 2016 ACM on Workshop on Privacy in the Electronic Society (pp. 139-142). ACM.


The technologies developed by NEXTLEAP are all compliant with the General Data Protection Regulation by default. These technologies are presented in Table 1, which lists each piece of software, its purpose, whether it is on track to become an open standard, whether it is open source, and whether it is decentralized.

Table 1: Status of NEXTLEAP technologies

Software   | Purpose       | Open Standard    | Open Source | Decentralized
ClaimChain | Personal Data | No               | Yes         | Yes
Autocrypt  | E-mail        | Semi             | Yes         | Yes
MLS        | Messaging     | Yes (in process) | Yes         | Yes

● ClaimChain is a blockchain without consensus that stores arbitrary data privately, including the cryptographic keys used to communicate with PGP or newer protocols like MLS. It is decentralized, and so works like a blockchain, but is privacy-enhanced and so can store personal data safely and securely. It is not standardized yet, but is available as open source, including a formally verified implementation, and has had extensive academic review of its encryption and privacy properties. More information is here: https://claimchain.github.io/

● Autocrypt improves upon the open standard PGP, making encrypted e-mail both usable and more secure. It encourages key exchange to be opportunistic, and is supported by most major open-source e-mail clients, although it is not yet a formal standard. Further development will focus on authentication, improving the security of e-mail even further; e-mail is itself naturally federated. More information is here: https://autocrypt.org/

● Messaging Layer Security (MLS) is a new standard for group messaging that offers superior security to encrypted e-mail for group messaging applications and the same level of security as Signal, with better scalability for groups. An IETF standard in process, it is expected to be finished within two years, to have multiple open-source implementations, and to support federation. More information is here: https://messaginglayersecurity.rocks/

As Europe is a federation of different nations and cities, decentralization makes sense as a winning social and technical strategy for Europe. Having open standards lets the “network effect” of Europe grow while maintaining interoperability, without forcing different countries onto the same platform. By building these protocols on top of free software, Europe can prevent capture by the large platforms of Silicon Valley and assert its sovereignty against data extractivism, while preventing world-historical catastrophes such as mass surveillance in the hands of present – and future – authoritarian regimes. By deploying decentralized and privacy-preserving protocols, Europe is handing power to the collective intelligence of its citizens. Although there have always been concerns from policy-makers over the new capabilities given to citizens by technology, we should trust that our citizens will find new ways to create real value and empower society. After all, music sharing empowered by BitTorrent eventually changed the entire paradigm of digital music, moving to a per-song revenue model as with iTunes and opening up music creation to the general population. Likewise, Bitcoin seems poised to change the entire paradigm of financial technology. Europe must take the bet that empowering future generations via decentralization, open source, and cryptography will enable yet-unknown new kinds of social innovation that will steer us all through the years of crisis, engendering not just the Web we want, but the world we want.
