The economics of meaningful assurance of computing services for civilian use

1. Meaningful assurance of computing services?
"Assurance": the level of informed confidence (trustworthiness) that a given service will not behave unexpectedly because of errors, flaws or malicious actions.
"Computing service": it is always about a computing service, because any critical vulnerability in the hardware, firmware, software or processes involved - client- or server-side, in a critical role in the service's lifecycle, provisioning and use - is sufficient to completely compromise the computing experience. Even all-P2P solutions have a service component.
"Meaningful": one undetectable, remotely-exploitable critical vulnerability, known to some set of actors, has the same effect as ten such vulnerabilities (see the short formalization after this slide). A "highest-assurance" tech that is not actually high enough may be worse than the lowest: it merely flags its users as targets while leaving them exploitable, and encourages overuse of unsafe techs.
How do we measure assurance? There are no standards or methods that allow comparing the overall assurance of products or services, even for the most common usage scenarios. The highest-assurance standards (Common Criteria EALs, FIPS, NIST) are of no use here: they measure against limited, arbitrary parameters, do not measure end-to-end, or rest on flawed (US) national standards.
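A short formalization of the "one vulnerability is as bad as ten" point (my illustration, not from the original slides): if a service has n critical vulnerabilities, each remotely exploitable by a given actor with probability p_i, then

```latex
P(\text{compromise}) \;=\; 1 - \prod_{i=1}^{n} (1 - p_i)
```

If even one p_i = 1 - one undetectable, reliably exploitable critical vulnerability known to that actor - then P(compromise) = 1 regardless of whether n is one or ten. Assurance is threshold-like, not additive, which is why "highest" assurance is worthless unless it is actually high enough end-to-end.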
2. Post-Snowden Privacy Techs: PERCEPTION
There is a wide consensus, shared by IT privacy experts, mainstream tech media and the US three-letter agencies, that:
- technologies resistant to low-cost remote continuous surveillance - by malicious state and private actors, as well as by legitimate court-mandated crime inquiries - are currently available and in use by a few high-worth and/or highly technical individuals;
- such techs are costly and/or hard to use, so most people cannot afford the expense or the inconvenience of using them;
- therefore, the problem can be solved by convincing people to use them, and possibly by making such techs cheaper, more user-friendly and incrementally safer. Examples: Blackphone, privacy apps, the "Merkel Phone".
3. Post-Snowden Privacy Techs: FACTS 1
A mid-level NSA contractor staff member, like Snowden, was able to access the NSA's most valuable information. The same could therefore happen to the most restricted, sophisticated surveillance methods. The CIA was able to perform undetected continuous spying on the US Senate Intelligence Committee. Snowden swears under oath that he could remotely spy on each and every member of the EU Parliamentary Committee. And such committees, by design, know more than any civilian on these matters, and have access to the best civilian and military technology. Imagine everybody else! A federal inquiry found that "several different groups" of unidentified hackers had "been operating freely" inside NASDAQ servers "for years". The Snowden revelations: are dated; have been heavily filtered by him and by journalists so as not to damage general US interests; and are based on documents that do not include the highest-clearance information.
4. Post-Snowden Privacy Techs: FACTS 2
Snowden: "Proper encryption works", but "endpoint security is so terrifically weak". The NSA's TURBINE and the Hacking Team platform enable efficient, semi-automated management of hundreds of millions of exploited end-user devices. Schneier: "we should assume all processors are compromised", with or without their vendors' knowledge. Adi Shamir: "we should design systems that assume some crucial hardware components are undetectably compromised". Therefore: continuous surveillance of millions, at very low cost and detectability, even of the most secure civilian (and military?!) services and devices, is most likely feasible by an unknown, and probably high, number of entities, individuals and groups.
5. Why is it so bad?!
The most intractable problems that have to date prevented even the best services from achieving meaningful assurance are:
- Hardware fabrication: fabrication of the high-performance, feature-rich devices needed to run the latest secure or commercial OSs necessarily happens in 4-10bn 300mm or 450mm mega-fabs, with hundreds of workers, and involves loads of non-verifiable software, hardware and processes. US National Science Board: "trust cannot be added to integrated circuits after fabrication".
- Hardware design: a large number of contractors, subcontractors, networks and devices are involved in the creation and manipulation of the designs of hardware components (software or hardware), which are then typically fabricated elsewhere. Flaws inserted in that process can be very hard to find.
- IPR and verifiability: hardware design and fabrication processes, and their firmware, involve cutting-edge performance and non-security technologies that are almost never independently verifiable, for various market and competitive reasons.
- Hosting room access management: the Snowden case, and the NSA's subsequent "two-man rule", have shown how crucial insider threats and hosting-room access-management policies are; these need to be indirectly but solidly accountable to end users.
6. Top-expert Solution: Pick Your Enemy
Bruce Schneier concludes that, given the depth and number of those vulnerabilities, we can at best "pick our enemy", or enemies, public and private, trusting that they will do a good job of keeping their capabilities to themselves and not abusing them. Jovan Golic asks: are government agencies discouraging support of privacy by technical means, in order to enable lawful interception? It seems so, as Germany and Brazil, following the US tracks, are aiming at becoming "your favorite enemy" - while still allowing their state security agencies to snoop when needed - through opaque processes to enforce national crypto standardization, requirements for the localization of datacenters, and the co-opting of leading private companies. This solution has several weaknesses: (a) it may be a good choice for protecting heads of state and top executives of national corporations from foreign snooping, but it is neither verifiable nor trustworthy for ordinary citizens, as it relies on trusting certain national security agencies and national private companies; (b) it creates internationally non-interoperable services; (c) it is very costly and inefficient for innovation in assurance levels and features. Is there another way?!
7. Alternative Solution: Trustless Computing
1. Assume no trust whatsoever in any (critical) component, person or entity, except in proper, effective, open and transparent organizational processes that cover every aspect of the service lifecycle, and whose validity is assessable by any willing informed citizen, or by experts he or she trusts.
2. Ensure complete public verifiability, and actual extreme verification relative to complexity, of all critical components and processes throughout the lifecycle (a sketch of what such verification could look like follows this slide).
3. No need to experiment with radical inventions: just apply current time-tested standards from successful mission-critical trustless organizational processes, such as proper democratic elections and mission-critical nuclear systems.
4. As opposed to Trusted Computing (tm), it does not rely on the trustworthiness (to the user and/or provider) of certain original trusted components and processes.
Nice, but how expensive would that be? And even if it worked, wouldn't it be made illegal?!
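One concrete, hedged illustration of point 2 above (my sketch, not taken from the slides): reproducible builds are an existing technique by which several independent parties compile the same audited source revision and compare artifact hashes, so that trust rests on an open, multi-party process rather than on any single builder. The builder names and artifact bytes below are hypothetical placeholders.

```python
# Minimal sketch of N-party reproducible-build verification (illustrative only).
# In a real deployment each party would independently compile the same audited
# source revision; here the resulting binaries are stubbed as byte strings.

import hashlib
from collections import Counter

def sha256_of(artifact: bytes) -> str:
    """Return the SHA-256 hex digest of a build artifact."""
    return hashlib.sha256(artifact).hexdigest()

# Hypothetical artifacts reported by independent builders.
artifacts = {
    "university_lab": b"\x7fELF...firmware-v1.0",
    "ngo_auditor":    b"\x7fELF...firmware-v1.0",
    "volunteer_3":    b"\x7fELF...firmware-v1.0",
}

digests = {who: sha256_of(blob) for who, blob in artifacts.items()}
tally = Counter(digests.values())

if len(tally) == 1:
    print("All builders agree; artifact is publicly verifiable:", next(iter(tally)))
else:
    print("Builds diverge; trust is withheld:")
    for who, digest in digests.items():
        print(f"  {who}: {digest}")
```

Trust here derives from the independence of the builders and the openness of the process, not from any single trusted component - the essence of the trustless approach described above.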
8. Meaningful Assurance at Ultra-low Cost (6-15M)
1. Extreme simplicity of the entire software and hardware stack, with low performance and few features, in order to enable adequate design verification, and fabrication oversight, of critical components at reasonable cost. Hardware is crucial: even the highest-assurance services rely on the proficiency and good faith of the several individuals, entities and/or groups involved in the design and fabrication of hardware, and in the verification of its firmware. Re-use free/open-source and verifiable techs to attract very expert, non-malicious volunteers. Rely on smaller, simpler, less IPR-encumbered 200mm foundries, which realistically can allow "user-controlled" complete oversight of the fabrication of ICs and all critical components.
2. Extreme architectural compartmentalisation to reduce the number of critical components, using hardware architecture and software virtualization techniques (a toy model of how this shrinks the critical set follows this slide).
3. Handheld form factor, in order to: (a) sustain the same hardware architecture for end-user (mobile and desktop), router and server equipment; (b) provide portability, for desktop and mobile use; (c) drive large-scale user adoption as a smartphone companion device.
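To make point 2 concrete, here is a toy model (my illustration; the component names are hypothetical and this is not the project's actual architecture): treat components and the channels between them as a graph, and call a component "critical" if secret data can flow through it. Removing channels via compartmentalisation shrinks the set of components that must be verified.

```python
# Toy model: which components does secret data reach, and thus which
# components are "critical" and must be verified?

from collections import deque

def critical_set(channels: dict[str, list[str]], source: str) -> set[str]:
    """All components reachable from the secret-holding source component."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for nxt in channels.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Monolithic design: every component can talk to every other one.
monolithic = {
    "crypto_core":  ["ui", "radio", "media_player", "app_store"],
    "ui":           ["crypto_core", "radio", "media_player", "app_store"],
    "radio":        ["crypto_core", "ui", "media_player", "app_store"],
    "media_player": ["crypto_core", "ui", "radio", "app_store"],
    "app_store":    ["crypto_core", "ui", "radio", "media_player"],
}

# Compartmentalised design: plaintext stays inside a small isolated partition;
# the radio only ever sees ciphertext (that channel is not modelled here).
compartmentalised = {
    "crypto_core":  ["ui"],
    "ui":           [],
    "radio":        [],
    "media_player": [],
    "app_store":    [],
}

print("monolithic critical set:       ", critical_set(monolithic, "crypto_core"))
print("compartmentalised critical set:", critical_set(compartmentalised, "crypto_core"))
```

In the monolithic design all five components are critical; in the compartmentalised one only two are, so verification effort concentrates on a far smaller stack.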
9. Will it, or should it, be made illegal?!
I.e., can we enable citizens to self-enforce their constitutional privacy rights and at the same time increase the ability of state agencies to fight crime?! Freedom and democracy require both security and privacy. On one side, most citizens, if forced to choose between privacy and security, will choose security. And rightly so, I would add. On the other, if citizens cannot regain some of the privacy prescribed by our constitutions, we may lose democracy, and in turn (citizen) security. So either we get both or we get neither! Yet we are told it is one or the other: a zero-sum game. We need solutions that can provide both! We could learn from the citizen juries of several democratic countries' judiciary systems. For example, onsite public jurors' approval could be technologically required to allow access to the intercepts of a given user of trustless computing services only when they are court-mandated, lawful AND constitutional, as interpreted by such jurors (a hedged sketch of such a mechanism follows this slide).
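The slides only outline this juror-approval idea; one known cryptographic technique that could technologically enforce it is threshold secret sharing (Shamir's scheme), sketched below under assumed parameters (5 jurors, any 3 required). All names and numbers are hypothetical.

```python
# Hedged sketch: an intercept key is split among n onsite jurors; any k who
# judge the request lawful AND constitutional can jointly reconstruct it,
# while fewer than k learn nothing. Uses Shamir secret sharing over a prime
# field. NOTE: `random` is used for brevity; a real system would use `secrets`.

import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo key

def make_shares(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# A court-mandated intercept key, split among 5 jurors with threshold 3.
intercept_key = random.randrange(PRIME)
shares = make_shares(intercept_key, k=3, n=5)

approving_jurors = shares[:3]  # jurors 1-3 approve the request
assert reconstruct(approving_jurors) == intercept_key
print("3 of 5 jurors approved: intercept key reconstructed.")
```

The design choice matters: access is not granted by any administrator or agency but emerges only from a quorum of jurors' shares, mirroring how the slide proposes that lawfulness and constitutionality be "interpreted by such jurors".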
10. User Verified Social Telematics: a trustless organizational process
11. Thanks! Dr. Rufo Guerreschi, Exec. Dir., Open Media Cluster - [email protected]