
Trading Privacy for Communication

by

Lila A. Fontes

A thesis submitted in conformity with the requirements
for the degree of Doctor of Philosophy

Graduate Department of Computer Science
University of Toronto

© Copyright 2013-2014 by Lila A. Fontes


Abstract

Trading Privacy for Communication

Lila A. Fontes

Doctor of Philosophy

Graduate Department of Computer Science

University of Toronto

2013-2014

There is a long history of the study of information-theoretic privacy within the context of communication complexity. Unfortunately, it has been shown that most interesting functions are not privately computable [Kus89, BS08]. The unattainability of perfect privacy for many functions motivates the study of approximate privacy.

This thesis explores several notions of approximate privacy. In both worst- and average-case situations, we obtain asymptotically tight bounds on the tradeoff between approximate privacy and communication cost. Further, we interrelate the disparate definitions of privacy, and link them to a standard measure of information cost of protocols. This enables the proof of exponential lower bounds on subjective approximate privacy, independent of communication cost. A new, operationalized definition of approximate privacy based on hints is proposed (more closely resembling cryptographic definitions), which may help answer some outstanding open questions in information theory and compression.


For my parents,

who generated all my vital private data

and me.


Acknowledgements

No dissertation springs unaided, Athena-like, from the mind of the Ph.D. student. Many thanks are due to everyone who supported, encouraged, and helped me in this massive undertaking.

I have received the invaluable backing of my supervisors Toniann Pitassi and Stephen A. Cook, whose questions and support have helped shape this project (and made it possible at all). They tolerated a mid-study course correction which sent me into the fascinating world of privacy and resulted in the document you hold in your hands (or view on your screen). Their feedback is always helpful and much appreciated. Thanks to Anil Ada, Arkadev Chattopadhyay, Michal Koucky, Sophie Laplante, Vinod Vaikuntanathan, and David Xiao for many interesting research discussions. Thanks also to NSERC, the DCA, the Alfred B. Lehman scholarship, and the Helen Sawyer Hogg GAA, for helping to fund my graduate research.

For many engaging conversations, both on-topic and off-, I thank my erstwhile officemates Yevgeniy Vahlis, Justin Ward, and Wesley George. You helped me procrastinate and you helped me research, so this is equal parts your fault and in spite of you. In a friendly way, on both counts.

Thanks are due, for the early inspirations which set me on this course, to the many educators who have taught me and exposed me to the joys of learning. To Michael Mitzenmacher, whose single piece of advice for graduate students (“start writing your dissertation as soon as possible”) I wholeheartedly endorse even though I did not accomplish it; to Michael Rabin, whose courses on cryptography got me interested in keeping secrets using mathematics; to Stuart Shieber, whose incredible slides are my continued aspiration; to Steve Weissburg, for passing along the relish of proof-writing; and to my non-math teachers Daniel Albright, Helen Spanswick, and Richard Anderson, who imbued in me an appreciation for writing and the attitude that it could be both feasible and enjoyable.

The final thanks, as always, goes to my family, who serve as proofreaders, emotional support, LaTeX reference, sounding board, career guides, and cheer section. Not necessarily in that order.


Contents

1 Introduction
  1.1 Contributions
  1.2 Overview

2 Perfect privacy
  2.1 Communication complexity
  2.2 Perfect privacy (2-privacy)
  2.3 Characterizing perfect privacy
  2.4 More than two players
  2.5 The privacy benefit of randomization
  2.6 Vickrey auction

3 Defining worst-case approximate privacy
  3.1 Defining approximate privacy with no error
    3.1.1 Privacy approximation ratio (PAR)
    3.1.2 Strong h-privacy
  3.2 Defining approximate privacy with error
    3.2.1 The privacy benefit of error
    3.2.2 (δ, t)-privacy with ε error
    3.2.3 Weak h-privacy and additional information

4 A worst-case privacy tradeoff
  4.1 Bookkeeping
  4.2 Adversary Strategy
  4.3 Analysis
  4.4 Implications and extensions
    4.4.1 A tighter tradeoff?
    4.4.2 Worst-case approximate privacy hierarchy

5 Defining average-case approximate privacy
  5.1 Average PAR and PARε
  5.2 Information theory review
  5.3 Information cost
  5.4 PRIV
  5.5 Information complexity
  5.6 Additional information

6 An average-case privacy tradeoff
  6.1 Counting cuts
  6.2 Simplifying
  6.3 The Ball Partition Problem
  6.4 Average-case approximate privacy hierarchy

7 Privacy and information theory
  7.1 IC, PRIV, Ii, and Ic−i
  7.2 PAR
  7.3 Set Intersection

8 Privacy, advice, and error
  8.1 Problems with weak and strong h-privacy
  8.2 Advice and hints
  8.3 Relating hint length to IC

9 Comparing definitions of privacy
  9.1 Average-case measures
  9.2 Separating information from communication
  9.3 Worst-case measures

10 Conclusion
  10.1 Open problems

Bibliography

Index


List of Figures

2.16 Matrix for two-player function fg(n)
2.17 A comparison of the 2- and k-player models
2.24 Matrix for two-player n-bit Vickrey auction
2.26 Protocol tree for English auction
2.28 Protocol tree for bisection protocol
4.49 Abstract protocol tree for English auction
4.50 Abstract protocol tree for binary search
4.58 Abstract protocol tree for Bisenglish protocol
6.91 An arbitrary node in the ball-partitioning tree
9.125 Flow chart: which privacy measure should I use?
9.126 Table comparing average privacy measures
9.127 Deterministic and zero-error average privacy measures compared
9.128 Error-permitting average privacy measures compared
9.130 Table comparing worst-case privacy measures


Chapter 1

Introduction

Many computational settings involve the participation and cooperation of multiple parties in order to achieve a desired outcome. Indeed, this description is so general as to encompass much of computer science. Such problems arise in communication complexity, cryptography, distributed computing, and mechanism design (among other areas), and involve settings where information is partitioned amongst many parties. The common thread is that parties must interact in order to solve such problems. Theoretical computer science is generally concerned with the relative difficulty of computing functions under different constraints, so it is natural to ask: how difficult is it to perform distributed computations, privately?

Consider a set of players who want to collaboratively compute a function of their joint inputs. This generalization captures many common scenarios, for example employees computing their average income, companies auctioning off spectrum, live bidding for digital advertising space, and interdomain network routing. Many research questions arise from this interactive multiparty setting. What sort of problems are computable (exactly or approximately)? At what communication cost? Against which types of adversarial behavior?

This dissertation considers these problems in the context of privacy: in addition to performing some joint computation, the players each want to preserve the privacy of their own inputs. The additional consideration of privacy makes these research questions relevant to real-world situations, in which such a large and important consideration (“will my data remain private?”) cannot be ignored. There is an important semantic distinction to be made between privacy, security, and anonymity (all modern banner-headline words). Anonymity describes a situation in which participants cannot be identified.¹ Security describes a situation in which malicious parties attempt to pervert the computation somehow.² Privacy, by contrast, is much weaker: it simply asks that particular data not be used or revealed beyond some specified bounds. The idea of privacy is much more fluid and reliant on the specifics of the situation at hand.

¹ Usually, identified by name.
² Typically by decoding encoded messages, sending false messages which appear authentic, etc.

Privacy in a distributed setting is an increasingly important problem. A key application is the setting of combinatorial auctions, where many agents have private information (e.g., their preferences) but would like to compute a function of their inputs without revealing any of their private information. There is a large body of research examining which functions can be computed securely, and how. Many of these results rely on an assumption, such as a computational complexity assumption, the assumption that more than some fixed fraction m of the players are trustworthy, or the assumption that the auctioneer (a third party) is trustworthy. As [BS08] point out, privacy which is based on an assumption of hardness can become outdated as computers become faster and more powerful; security parameters (like key length) need to be continuously updated to cope with increasing computational power. Hence, ideally one would like privacy based on stronger assumptions. Auctions are a natural setting where we would doubt the trustworthiness of fellow participants or an auctioneer; we nevertheless would like to compute on the internet. In this work, we focus on situations where each player is deterministic and honest-but-curious. Honest, because they obey the rules of the game. Curious, as they do not miss any opportunity to gain knowledge about others' inputs. (Honesty, while a large assumption, will be dependable when we consider functions which are truthful — self-interested players will obey the protocol.)

[Kus89] initiated the study of information-theoretic privacy in communication complexity, an appealing direction because it does not rely on the computational assumptions discussed above. Informally, a multi-player communication protocol for computing a function f(x1, x2, . . . , xk) is private if each player does not learn any additional information (in an information-theoretic sense) beyond what follows from knowing his/her private input and the function value f(x1, . . . , xk).³ A complete characterization of the privately computable functions was given, but unfortunately, early work ruled out private protocols for most interesting functions [Kus89, BS08]. For example, private second-price auctions are not possible with more than two participants,⁴ and are extremely inefficient even in the setting of two bidders [CK89, BS08].

³ A similar notion of privacy considers limiting an eavesdropper to learning only the function value f(x1, . . . , xk) from the protocol, and nothing more.
⁴ Even assuming that all players are honest-but-curious, secure multiparty computation is only applicable if fewer than half of the parties form a coalition [BOGW88].

The definition of perfect privacy is obvious, in the sense that the connotation of “perfect privacy” clearly implies that no player should learn any information beyond his own input and the function output. Similarly, it is clear that a total loss of privacy is implied by a player learning all other players' inputs (if this is beyond the function output). The unattainability of perfect privacy for many functions motivates the study of approximate privacy. This thesis focuses on trading privacy, so we will need an incremental measure of privacy to express the spectrum between perfect privacy and no privacy at all.

The relaxation from perfect to approximate privacy is appealing because it renders more functions computable privately, and more closely mirrors real-world situations in which some privacy loss may be acceptable. On the other hand, it is more subtle to capture the notion of approximate privacy. The question of how to define an approximate measure of privacy will be a sticking point, and the reasons why it is so challenging to settle on one definition will take the rest of this document to tease apart.

Under a relaxed notion of privacy, things are much more interesting [FJS10a, FJS10b, CDSS11]. For example, Feigenbaum et al. [FJS10a, FJS10b] study the Vickrey auction problem, and reveal a possible inherent tradeoff between privacy and communication complexity: they describe a family of protocols such that the privacy loss approaches 1 (perfect privacy) as the length of the protocol approaches exponential. They also study several prominent Boolean functions with respect to approximate privacy. Feigenbaum et al. further consider an average-case notion of approximate privacy, the “privacy approximation ratio” (PAR). In this setting, we are interested in the average privacy loss over a distribution on inputs. Here they describe a protocol for Vickrey auction that achieves exponentially smaller average-case PAR than its worst-case PAR. A similar protocol was described by [Kla02].

One major focus will be to examine and compare many measures of approximate privacy. Each should be categorized by answers to these thematic questions:

• Is the goal perfect privacy, or only a relaxation to an approximate version?

• Can we bound the worst-case privacy loss or the average-case privacy loss?

• Can the protocol make errors?

• Is the concern privacy loss to other players, or to an eavesdropper?

There exist privacy measures specifically tailored to nearly every combination of the possible answers to this set of questions. Each different measure has its own advantages and characteristics. Mercifully, we consider only a subset in this work. Keep these four questions — approximate privacy? average-case? error? eavesdropper? — in mind as we introduce and examine various measures of privacy. We revisit these thematic questions in chapter 9, when they are accompanied by a flow chart (figure 9.125) describing exactly which privacy measures apply for a given four-tuple of answers.


1.1 Contributions

We present several original lower bounds on the communication cost for achieving (approximate) privacy and establish many relationships between approximate privacy and other known measures. We also propose some new improved/extended measures. A major effort throughout the work is to organize and systematically compare the different definitions of approximate privacy. The three theorems below (tradeoff bounds and Set Intersection) were previously published in [ACC+12].

Theorem 51 For all n, for all p, 2 ≤ p ≤ n/4, any deterministic protocol for the two-player n-bit Vickrey auction problem with communication cost (length) less than n · 2^(n/(4p) − 5) obtains privacy loss (worst-case PARext) at least 2^(p−2).

This lower bound is technically interesting as it deals with super-polynomial communication protocols. The usual communication complexity techniques aim at protocols that are at most linear in their input size.

Our second contribution demonstrates a similar type of tradeoff for the case of average-case approximate privacy. We prove an asymptotically tight lower bound on the average-case approximate privacy of the Vickrey auction problem, showing that the upper bounds from [FJS10a] are essentially tight. This generalizes the result of [CDSS11] for Vickrey auctions. Again, [FJS10a] provided lower bounds only for the special case of bisection-type protocols. As a side note, we positively resolve an open question from [FJS10a] concerning arbitrary input distributions (proposition 85).

Theorem 82 For all n, r ≥ 1, any deterministic protocol for the two-player n-bit Vickrey auction problem (over the uniform distribution of inputs) with communication cost (length) less than r obtains average-case privacy loss⁵ (PARext) at least Ω(n / log(r/n)).

Our lower bounds show that the approximate privacy of any polynomial-length protocol is still as large as Ω(n / log n). Indeed, such super-linear protocols have been devised by [Kla02], who proved upper bounds for his measure of approximate privacy. To the best of our knowledge, Theorem 82 provides the first (tight) lower bounds on the communication cost of achieving good approximate privacy for Vickrey auctions. The proof of the theorem relates the loss of privacy to a certain Ball Partition Problem that may be of independent interest.

Furthermore, we modify the average-case privacy approximation measure of Feigenbaum et al. Our modification provides a rather natural measure that was disregarded in [FJS10a], but coincides with that of Feigenbaum et al. in the case of the uniform distribution on the inputs. Our modified measure has several advantages. It allows natural alternative characterizations, and it can be directly related to the (information-theoretic) privacy measure of Klauck.⁶ We can quantitatively connect Klauck's privacy measure to well-studied notions of (internal) information cost in communication complexity. This allows us to prove a new lower bound on the average-case internal⁷ privacy approximation measure of [FJS10a], and answers affirmatively a conjecture from their paper.

⁵ Under the original definition [FJS10a] or our alternate definition 62.
⁶ Theorem 22 shows that Klauck's measure provides a lower bound for average-case PAR. This bound is not tight: upper bounds on Klauck's measure do not necessarily upper-bound PAR.
⁷ “Internal” means that it measures privacy loss to the other player, not to the eavesdropper.


Theorem 104 For all n ≥ 1 and any protocol P computing the Set Intersection function INTERSECn, the average-case internal privacy loss (PARint) is exponential in n under the uniform distribution:

    avg_U PARint(P) = 2^Ω(n)

We contend that any of the mentioned measures could serve as a reasonable measure of privacy. Indeed, each of the measures seems to exhibit advantages over the others in some scenario, so each of the measures captures a certain aspect of privacy. For example, the English auction protocol for Vickrey auction achieves perfect privacy (under any measure) but at exponential communication cost. On the other hand, the Bisection protocol achieves linear average-case PAR with merely linear communication cost. However, the difference between these two protocols is not reflected well in Klauck's privacy measure, where both protocols lose a constant number of bits on average. (And in general it is not hard to come up with examples which are “distinguishable” — very far apart — using PAR, but the same order using Klauck's measure.)

Another contribution of this work is to compare and categorize the many measures of (approximate) privacy. Most average-case privacy measures are relatable (theorems 95, 97, 99, 100, 101, and lemmas 98 and 118; all comparisons are summarized in section 9.1). Results obtained with one average-case privacy measure can usually be translated into another average-case privacy measure with ease; this indicates a consensus about the meaning and definition of privacy in the average case. By contrast, the various worst-case measures of privacy are generally incomparable (except for trivial observations), a fact which reflects the relative lack of consensus surrounding the meaning of “worst-case” approximate privacy. We propose a new measure of worst-case privacy (maximum hint length) which captures a reasonable notion of privacy, avoids the weaknesses of other worst-case privacy definitions, and may serve as a useful tool for resolving open questions in compression.


1.2 Overview

The overture in chapter 2 presents the standard communication model and the long-standing definition of perfect privacy (for two or more players), with related results and hierarchies. The Vickrey auction problem defined there motivates approximate privacy and serves as our continual companion throughout the rest of the dissertation. Our general progression is from simpler to more complicated, so for example we consider zero error before protocols that make errors, perfect privacy before approximate privacy, and worst-case before average-case. (The exception to this rule is that we consider two- and k-player scenarios in chapter 2, and thereafter limit all discussion to two-player scenarios. Two players are perplexing enough to sustain the rest of the document.)

Several worst-case notions of approximate privacy are defined in chapter 3. Chapter 4 demonstrates the inherent tension between worst-case privacy and communication complexity by proving a tradeoff (theorem 51) between the two quantities.

Average-case notions of approximate privacy are defined in chapter 5. Many of these are relaxations or generalizations of the worst-case notions from earlier. Chapter 6 demonstrates another, similar tradeoff (theorem 82) between average-case privacy and communication complexity, although the averaging mitigates the extremity of privacy loss (as one might expect⁸).

The remaining chapters deal with comparisons of privacy measures. Chapter 7 interrelates the measures of privacy based on information theory; in the average case, the various measures of approximate privacy are interrelated. (Theorem 104 follows from these results.) This means that one may use whichever measure is convenient. By contrast, in the worst case, approximate privacy measures are divergent and incomparable. This motivates chapter 8, which illustrates the problems with some earlier worst-case definitions of privacy (as “additional information”) and offers some alternative measures using the idea of advice. Chapter 9 summarizes the known comparisons between different types of approximate privacy.

The open problems arising throughout this study are summarized in chapter 10.

An index at the end of the document provides a useful way to locate definitions, terms, and symbols in the text.

⁸ Pun intended.


Chapter 2

Perfect privacy

This chapter presents the communication complexity computational model for two players and its generalization to k players. This longstanding model is well-examined for perfect privacy.


2.1 Communication complexity

The usual number-in-hand communication complexity model serves as a basis for this research. In the two-player setting, this is Yao's basic model [Yao79].

Consider some function f : X × Y → Z of inputs distributed amongst two players: Alice and Bob, who have some means of communicating with each other synchronously.¹ Their task is to compute f(x, y), and the resource of interest is communication between Alice and Bob. They will communicate according to some fixed protocol π (depending on f) known to both parties.

¹ This is the so-called number-in-hand model, which realistically represents many actual situations. The meaning of “privacy” in the number-on-forehead model is unclear, and not the subject of this discussion.

In general, any protocol will proceed as follows. Alice knows x and sends a bitstring a1 to Bob; Bob knows y and a1 and sends bitstring b1 to Alice; Alice knows x and b1 and sends a2; Bob knows y and a1 and a2 and sends b2, and so on. A protocol to compute f is a set of functions

    { gi(x, π1, π2, . . . , πi−1), hi(y, π1, π2, . . . , πi) | i = 1, 2, 3, . . . }

specifying which messages Alice and Bob send: a1 = g1(x), b1 = h1(y, a1), a2 = g2(x, b1), b2 = h2(y, a1, a2), etc. The protocol continues until one player knows the value f(x, y) and sends a special “halt” message. The last message sent in the protocol is the value f(x, y). We assume that the players obey the protocol, sending messages according to the instructions they receive. The transcript of all messages sent in the protocol is π. (The protocol need not strictly alternate bits between the players.)
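
To make the message-function view concrete, here is a minimal Python sketch (not from the thesis; the function and the message format are arbitrary illustrative choices) in which Alice's only message reveals her input, and Bob's reply, the final message, is the answer.

    # Toy deterministic protocol computing f(x, y) = min(x, y) on 4-bit inputs.
    # Alice's message g_1 depends on x; Bob's message h_1 depends on y and a_1,
    # and (as in the convention above) the last message is the value f(x, y).

    def run_protocol(x, y):
        transcript = []
        a1 = format(x, "04b")                      # g_1(x): Alice reveals x
        transcript.append(a1)
        b1 = format(min(int(a1, 2), y), "04b")     # h_1(y, a_1): the answer
        transcript.append(b1)
        return transcript                          # the transcript pi(x, y)

    print(run_protocol(5, 9))   # ['0101', '0101']: both players now know f(5, 9) = 5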

Definition 1 [Kus89, Yao79, KN97] A protocol π over domain X × Y with range Z is a binary tree where each internal node v is labeled either by a function av : X → {0, 1} or bv : Y → {0, 1}, and each leaf is labelled with an element z ∈ Z.

The value p(x, y) of the protocol π on input (x, y) is the label of the leaf reached by starting from the root and walking down the tree. At each internal node v labeled by av, walk left to child node v0 if av(x) = 0 and right otherwise (to v1); at each node labeled by bv, walk left (to v0) if bv(y) = 0 and right otherwise (to v1).

The cost of π on input (x, y) is the length of the path taken on input (x, y), which generates the transcript π(x, y) listing each step (message sent). The communication cost CC(π) of the protocol π is the length of the longest possible transcript (often measured by the height of the tree).

Communication cost is a worst-case measure. It is almost always based on the length of the input, and common shorthand in the area uses this reference. For example, a protocol on inputs of length n which has communication cost O(n) will be described as having “linear communication cost.” Similarly, communication cost O(2^n) is often simply stated as “exponential communication cost,” omitting the particular mathematical expression. We will use these phrases below, and intend this specific meaning.

Most research describing protocols uses high-level English-language descriptions or pseudocode, avoiding the cumbersome definitions of gi and hi. This custom is observed below. The protocol is deterministic if gi and hi are all deterministic; the protocol is probabilistic if they are probabilistic.

Remark 2 (The last message of π(x, y) is f(x, y)?) The literature is divided concerning the value f(x, y). In some papers, the transcript π(x, y) is required to include the full value f(x, y) explicitly (as specified above). Elsewhere, the transcript need not explicitly include the value f(x, y), as long as both Alice and Bob are able to determine the correct value f(x, y) (using the transcript and their respective inputs). These two models differ in communication cost by at most |f(x, y)|, a factor which will recur throughout this document (as we compare results from both sides of this model divide). For example, theorems 4 and 5 do not include this additive factor in their measure of communication complexity.

Definition 3 [KN97] Every function f : X × Y → Z has a matrix Mf such that Mf[x, y] = f(x, y). A submatrix of Mf (or combinatorial rectangle, or simply rectangle) is a subset R ⊆ Mf such that R = A × B for some A ⊆ X and B ⊆ Y. A rectangle is monochromatic if f is constant on inputs in that rectangle.

Let π be a protocol and v be a node of the protocol tree. Then T(v) is the rectangle of inputs that reach node v. We write T(v) = TA(v) × TB(v) ⊆ X × Y. If the protocol is run on inputs in the rectangle TA(v) × TB(v), then it will eventually reach node v during its execution. Thus the root node r is associated with the entire matrix TA(r) × TB(r) = X × Y. Each leaf node l is associated with a monochromatic submatrix TA(l) × TB(l).

Examples of decision tree matrices Mf for two protocols can be seen in figures 4.49 and 4.50 (or in more detail in figures 2.26 and 2.28).

A deterministic protocol to compute f consists of a series of partitions of the matrix Mf into rectangles. The resulting protocol-induced tiling of the matrix Mf is a partition into monochromatic rectangles, which are precisely the rectangles associated with the leaves of the protocol's decision tree.
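
Concretely, the matrix view can be manipulated directly for small functions. The sketch below (not from the thesis; the function and the rectangles are arbitrary choices) builds Mf and tests whether a given rectangle A × B is monochromatic.

    # Build M_f for a toy function and test rectangles for monochromaticity.
    def build_matrix(f, X, Y):
        return {(x, y): f(x, y) for x in X for y in Y}

    def is_monochromatic(M, A, B):
        return len({M[(x, y)] for x in A for y in B}) == 1

    X = Y = range(4)
    f = lambda x, y: min(x, y)          # a miniature auction-like function
    M = build_matrix(f, X, Y)

    print(is_monochromatic(M, {0}, set(Y)))     # True: row 0 is constant (value 0)
    print(is_monochromatic(M, {1, 2}, {2, 3}))  # False: this rectangle holds values {1, 2}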

Framing protocols as trees yields a basic lower and upper bound for all functions.

Theorem 4 [Yao79] Let Mf have m rectangles in its smallest partition into monochromatic rectangles. Then f requires at least log₂ m bits of communication.

Theorem 5 Any function f : X × Y → Z can be computed in log |X| + log |Y| bits of communication² by the trivial protocol: both players reveal all bits of their values, then compute the function independently.

The trivial protocol corresponds to the full binary tree of depth log |X| + log |Y|; each leaf of the protocol tree corresponds to exactly one input (x, y). However, participants may wish to compute f while preserving the privacy of their values, so the trivial protocol is not satisfactory. (There are several functions for which this lower bound is essentially tight in the zero-error setting: identity, relative primeness, ≤, set intersection, and set disjointness [Yao79, KN97]. These will be candidate functions for consideration in settings with error or relaxed privacy requirements.)

² This calculation discounts the additional log |Z| bits required to send f(x, y) as the final message in the protocol.

Randomization. As above, Alice gets x and Bob gets y. In addition, each of them gets a coin to flip, giving each access to a random binary string of arbitrary length. They may also have a public coin, which both can see. Now the protocol tree has nodes labelled by functions of x and rA (or y and rB). When randomization is allowed, protocols may err. (Usually we do not talk about deterministic protocols which err.) The communication complexity of a randomized protocol is the maximum, over all (x, y), of the average length of a transcript π(x, y) over all coin flips.

Definition 6 (Protocols with error) [KN97, Bra11] Let π be a randomized protocol computing function p. Let µ be a distribution on X × Y. Let rA and rB denote the random strings of Alice and Bob, respectively.

π computes function f with zero error if for every (x, y):

    Pr_{rA,rB}[p(x, y) = f(x, y)] = 1

π computes function f with (standard, worst-case) error ε if for every (x, y):

    Pr_{rA,rB}[p(x, y) ≠ f(x, y)] ≤ ε

π computes function f with ε distributional error if:

    Pr_{(x,y)∼µ} Pr_{rA,rB}[p(x, y) ≠ f(x, y)] ≤ ε

Standard error is a strong definition of error. If even one input has a high probability of error, the overall error ε must also be high. We will consider only the communication complexity CC(π), which is a worst-case measure.³

³ It is also possible to define an averaged measure; see [Yao77, KN97].
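
The gap between the two error notions is easy to see numerically. The sketch below (not from the thesis; the "protocol" is an arbitrary toy that errs on a single input) estimates both quantities by sampling.

    # Standard (worst-case) error vs. distributional error under the uniform mu.
    import itertools, random

    def f(x, y):
        return min(x, y)

    def p(x, y, rng):                      # toy randomized protocol output
        if (x, y) == (3, 3) and rng.random() < 0.4:
            return -1                      # wrong answer with probability 0.4
        return f(x, y)

    rng = random.Random(0)
    inputs = list(itertools.product(range(4), repeat=2))
    trials = 10_000
    err = {xy: sum(p(*xy, rng) != f(*xy) for _ in range(trials)) / trials for xy in inputs}

    print(max(err.values()))                # standard error: about 0.4
    print(sum(err.values()) / len(inputs))  # distributional error: about 0.4 / 16 = 0.025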

Even for ε = 0, the randomized communication model is provably different from the deterministic model with respect to perfect privacy, as we'll see in section 2.5.


2.2 Perfect privacy (2-privacy)

We are concerned with the privacy of inputs: each player has a private input, and the players are trying to jointly compute some function of these inputs. When computing some function of the players' inputs, the output of the function may reveal some players' inputs partially or even fully. Since computing the function is the ultimate goal, any useful definition of privacy must permit this revealed information without classifying it as a loss of privacy. However, any unnecessary information revealed about players' inputs should be considered a loss of privacy.

The privacy of each player against the other player(s) is internal privacy. The privacy of the players against an eavesdropper is external privacy. (Eavesdroppers can overhear all exchanged messages but have no input and do not participate in the protocol.) These two notions – privacy from other players and privacy from eavesdroppers – may be identical or differ, depending on the function.

The most basic setting for privacy is when all participants follow the protocol. In communication complexity, this behavior is known as honest-but-curious: the players participate according to the protocol's rules, but keep track of received messages and attempt to extract information after the protocol has finished.⁴ This is a reasonable assumption, which relies upon the fact that the players do want to compute f correctly. Throughout this document, we assume that the players are honest-but-curious.

Privacy has many definitions. Beyond this chapter, we will focus exclusively on the two-player model. In general, privacy is a property possessed by protocols; for some notion of private, a function is private if it has a private protocol.

At a high level, privacy is a worst-case concept. If Alice can determine anything about Bob's input beyond what she knows from her input and the function output, then Bob has lost some privacy. Protecting against loss of privacy inspires a strong definition of privacy.

⁴ This notion of “honesty” is different from the mechanism design concept of “truthfulness,” discussed later. Honest-but-curious is a property of players; truthfulness is a property of mechanisms. It is unfortunate that these words have similar connotations in English.

Definition 7 (perfect privacy) [BOGW88, CK89] A function f : X × Y → Z is perfectly private (or 2-private) if:

• ∃π a protocol computing f such that

    ∀x, y   Pr_{rX,rY}[π errs] < 1/2

where the probability is over the random coins rX and rY of both players; and

• For every two inputs (x, y) and (x′, y′) such that f(x, y) = f(x′, y′) and x = x′,

    π(x, y) = π(x′, y′)

where π(x, y) is the transcript of messages sent between players during the protocol. (For randomized protocols, the requirement is that the distributions of π(x, y) are identical.) And

• Similarly, for every two inputs (x, y) and (x′, y′) such that f(x, y) = f(x′, y′) and y = y′,

    π(x, y) = π(x′, y′)


Perfect privacy is an internal notion: perfect privacy means that the other player will not learn anything other than the output of the function. This definition of privacy relies on distinguishability. A protocol is private if it does not allow other players to distinguish between different inputs which yield the same output for f.
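
The distinguishability condition of definition 7 can be checked mechanically for small deterministic protocols. Below is a minimal sketch (not from the thesis; the transcript table is the trivial protocol for one-bit AND, an arbitrary choice) of such a check.

    # Check definition 7 for a deterministic protocol given as a transcript table:
    # inputs agreeing on one player's input and on f's value must give equal transcripts.
    def is_perfectly_private(f, transcripts, X, Y):
        for x in X:
            for y in Y:
                for x2 in X:
                    for y2 in Y:
                        if f(x, y) == f(x2, y2) and (x == x2 or y == y2):
                            if transcripts[(x, y)] != transcripts[(x2, y2)]:
                                return False
        return True

    # Trivial protocol for one-bit AND: Alice announces x, Bob announces the answer.
    f = lambda x, y: x & y
    transcripts = {(x, y): (x, x & y) for x in (0, 1) for y in (0, 1)}
    print(is_perfectly_private(f, transcripts, (0, 1), (0, 1)))
    # False: transcripts of (0, 0) and (1, 0) differ although f(0, 0) = f(1, 0) and y is fixed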


2.3 Characterizing perfect privacy

Which functions are perfectly privately computable? This basic question has been the subject of study for many years. A necessary condition for privacy is given by lemma 8.

Lemma 8 (corners lemma) [CGGK94] Let X, Y, and Z be nonempty sets. Assume f : X × Y → Z is perfectly private. For all x, x′ ∈ X, y, y′ ∈ Y, and z ∈ Z, if f(x, y) = f(x, y′) = f(x′, y) = z then f(x′, y′) = z.

This lemma describes the “corners” of a monochromatic rectangle in the matrix Mf. If f is perfectly private and three of the corners of a rectangle have the same value for f, then the last must as well. For the sake of simplicity, assume that f : X × Y → Z is a two-input function.
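
The contrapositive gives a quick mechanical test (a sketch, not from the thesis): find three equal corners of a rectangle whose fourth corner differs, and f cannot be perfectly private. Two-player AND fails the test; XOR passes it.

    # Search M_f for a violation of the corners lemma.
    from itertools import product

    def violates_corners(f, X, Y):
        for x, x2, y, y2 in product(X, X, Y, Y):
            if f(x, y) == f(x, y2) == f(x2, y) != f(x2, y2):
                return True
        return False

    bit = (0, 1)
    print(violates_corners(lambda x, y: x & y, bit, bit))  # True: AND is not perfectly private
    print(violates_corners(lambda x, y: x ^ y, bit, bit))  # False: XOR passes this necessary test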

Definition 9 (≡) Given matrix M, define the relation ≡ as follows: x ≡ x′ iff ∃y such that M[x, y] = M[x′, y]. Similarly, y ≡ y′ iff ∃x such that M[x, y] = M[x, y′].

The symbol "≡" is used for historical reasons; note that ≡ is not an equivalence relation (unless Mf is privately computable).

Definition 10 (forbidden matrix) Let M = X × Y be a matrix. M is forbidden if M is not monochromatic, all its rows are ≡-equivalent, and all its columns are ≡-equivalent.

This yields an easy test (lemma 14) for whether a two-input function f is not privately computable. A forbidden matrix cannot be privately computed. Equivalent rows cannot be distinguished in a private protocol, as to distinguish them would reveal information about the inputs x and x′ where x ≡ x′ and the output of f is the same. Since neither equivalent rows nor columns can be separated in a private protocol, the protocol cannot distinguish between any of the rows and columns – yet the matrix is not monochromatic, so the protocol cannot correctly compute the function output! (This definition is easily generalized to k-input functions.)

Definition 11 (row-decomposition, column-decomposition) A matrix M = C × D is called row-decomposable if there exist nonempty sets C1 and C2 such that:

1. C1 and C2 partition C, and

2. ∀x, x′ ∈ C, if x ≡ x′ then x and x′ are in the same piece of the partition.

The column-decomposition is defined similarly.

Definition 12 (decomposable) Matrix M = C × D is decomposable iff

• M is monochromatic; or

• C can be row-decomposed into C1, C2 and for all i, Ci × D is decomposable; or

• D can be column-decomposed into D1, D2 and for all i, C × Di is decomposable.

Theorem 13 [Kus89] Let f : X × Y → Z be an arbitrary function. Mf is decomposable iff f is perfectly private. Further, if f is perfectly private, then f can be privately computed with a deterministic protocol. (This is not the case for functions of more than 2 inputs.)
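
Theorem 13 makes perfect privacy of a two-player function decidable by recursive search. The sketch below (not from the thesis, and exponential-time, so only suitable for tiny explicit matrices) follows definitions 11 and 12 directly: it groups rows (or columns) into ≡-connected components and tries every bipartition of those components.

    # Exhaustive test of definition 12 on a small matrix M (given as a list of rows).
    from itertools import combinations

    def components(M, rows, cols, by_rows):
        items = list(rows) if by_rows else list(cols)
        parent = {i: i for i in items}
        def find(i):
            while parent[i] != i:
                i = parent[i]
            return i
        def related(a, b):          # the relation ≡ restricted to this submatrix
            if by_rows:
                return any(M[a][c] == M[b][c] for c in cols)
            return any(M[r][a] == M[r][b] for r in rows)
        for a, b in combinations(items, 2):
            if related(a, b):
                parent[find(a)] = find(b)
        groups = {}
        for i in items:
            groups.setdefault(find(i), []).append(i)
        return list(groups.values())

    def decomposable(M, rows, cols):
        if len({M[r][c] for r in rows for c in cols}) == 1:     # monochromatic
            return True
        for by_rows in (True, False):
            comps = components(M, rows, cols, by_rows)
            n = len(comps)
            # try every bipartition of the ≡-components into two nonempty sides
            for k in range(1, 2 ** (n - 1)):
                s1 = [x for i in range(n) if k >> i & 1 for x in comps[i]]
                s2 = [x for i in range(n) if not k >> i & 1 for x in comps[i]]
                if by_rows and decomposable(M, s1, cols) and decomposable(M, s2, cols):
                    return True
                if not by_rows and decomposable(M, rows, s1) and decomposable(M, rows, s2):
                    return True
        return False

    XOR = [[0, 1], [1, 0]]          # decomposable, hence perfectly private
    AND = [[0, 0], [0, 1]]          # not decomposable (it is itself a forbidden matrix)
    print(decomposable(XOR, [0, 1], [0, 1]))   # True
    print(decomposable(AND, [0, 1], [0, 1]))   # False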


[Figure 2.16: Matrix for two-player function fg(n). Rows and columns are indexed 1, . . . , 2^n; the entries follow the case definition of fg(n) given below, with the constant value 0 filling the block where both inputs exceed g(n).]

For two-player functions, randomization does not make more functions privately computable, nor does it shorten the number of rounds in protocols for privately computable functions. This is not the case with functions of more than two inputs; see section 2.5.

Lemma 14 [Kus89] Let f : X × Y → Z be an arbitrary function. If Mf contains a forbidden submatrix, then f is not privately computable (even with a randomized protocol).

The privacy loss necessarily induced by a forbidden submatrix is both internal and external — the other player and the eavesdropper will both be able to distinguish some (x, y) from (x′, y′) where either x = x′ or y = y′.

The converse is also true. However, this is a very costly test to verify that f is privately computable. Checking that matrix Mf is decomposable is equivalent to finding a protocol to compute f privately. A complete decomposition of Mf into monochromatic submatrices yields a privacy-preserving protocol for computing f.

This characterization of private functions not only gives a tight bound on the bit and round complexity of private functions, but also establishes an interesting privacy hierarchy.⁵

⁵ This shows that t-privacy, definition 18 below, is a meaningful (non-trivial) measure of privacy.

Theorem 15 [Kus89] For all g(n) such that 1 ≤ g(n) ≤ 2^(n+1), there exists fg(n) : X × Y → Z privately computable in g(n) rounds but not g(n) − 1 rounds.

One function demonstrating theorem 15 is:

    fg(n)(x, y) = (0, x)   if x ≤ y ≤ g(n)
                  (1, y)   if y < x ≤ g(n)
                  0        otherwise

The matrix for this fg(n) is given in figure 2.16.

For some functions, the communication cost of a private protocol is exponentially higher than the cost of a protocol which simply computes f (with no regard for privacy). Vickrey auctions are one such function, discussed in section 2.6. This large disparity suggests a middle ground: perhaps some privacy can be preserved — perhaps partial information about the inputs can be revealed — without such a large blowup in communication cost. This will be the context for relaxing perfect privacy to approximate privacy. We will examine several definitions of approximate privacy.


  inputs: each player has a private input (both models)
  player behavior: follow the protocol (both models)
  output: the correct value of the function (both models)
  communication model: synchronous; a single channel (2 players) versus pairwise secure channels (k players)
  privacy concept for players: each player tries to learn information about the other player's input based on his own input and the transcript (2 players); players may form a coalition to learn information about non-coalition players' inputs based on their inputs and transcripts (k players)
  privacy concept for eavesdroppers: the eavesdropper tries to learn information about any inputs based on all transcripts (both models)
  (optional) randomization: private individual randomness; shared public randomness
  (optional) error: the function is computed with error ε

Figure 2.17: A comparison of the 2- and k-player models.

2.4 More than two players

Yao’s model is convenient, simple, and useful for the clean results it yields. The generalization (more

details in [CGGK94]) considers k players who evaluate a k-argument function

f :

k times︷ ︸︸ ︷

0, 1n × · · · × 0, 1n → Z

where the ith player only knows the ith argument xi. Each player has a personal source of randomness

(coin). Each pair of parties is connected by a communication channel. Players are synchronized and

computationally unbounded. Messages are sent in rounds; in each round, every player can send a message

to every other player (and this message can depend on his input xi, his coin flips ri, the messages received

in previous rounds, and the identity of the recipient).

Players in this setting are honest-but-curious and will follow the protocol, but may choose to form

coalitions after the protocol has finished. (Within a coalition, members share their inputs, coin flips, and

transcript of messages sent and received.) Players can send more than one bit per message and need not

strictly alternate. (We are usually interested in the number of rounds a multiplayer protocol takes, not

the number of bits.) The notions of monochromatic rectangle and protocol tree can be correspondingly

extended into the multiparty model.

Privacy loss in the multiplayer setting considers coalitions. Several players may convene to form a

coalition, which shares information with the goal of learning about the input(s) of some non-coalition

player(s).

Figure 2.17 summarizes the two- and k-player communication models discussed below. Where nec-

essary, the specific model will be noted (number of players, privacy concept, randomization, ǫ-error,

etc.).

If a single player’s input is (even partially) revealed unnecessarily,7 that is a loss of privacy. In

7Since all players learn the value of the function, some inputs may be fully or partially revealed without loss of privacy.For example, each player in a protocol to compute f(x, y) = x+ y can figure out the other player’s input – and this is nota loss of privacy.

Page 26: by LilaA.Fontes - IRIFfontes/papers/FontesThesis-UT.pdf · by LilaA.Fontes ... Honest, because they obey the rules of the game. Curious, as they do not miss any opportunity to gain

Chapter 2. Perfect privacy 17

the multiplayer setting, coalitions of players may convene to attempt to infringe the privacy of some

non-coalition player(s). In this case, the honest-but-curious constraint means that the coalition can

share their information (their own inputs, as well as their transcripts of messages sent and received)

after the protocol, but cannot share their information during the protocol (this might allow them to

adversarially shape their participation in order to learn more information about some non-coalition

player(s)). Protecting against loss of privacy to coalitions inspires a strong definition of privacy.

2-privacy can be generalized to a worst-case definition of privacy against coalitions. This is a very

strong notion.

Definition 18 (t-private) [BOGW88, CK89] A function f : X1 × · · · × Xk → Z is t-private if:

• ∃π a protocol computing f such that for every input (x1, . . . , xk),

    Pr_{r1,...,rk}[π errs] < 1/2

where the probability is over the random coins of all players; and

• For every coalition T of size ≤ t and every two inputs a = (a1, . . . , ak) and b = (b1, . . . , bk) such that f(a) = f(b) and a and b agree on their T entries (if i ∈ T then ai = bi),

    πT(a) = πT(b)

where πT denotes the transcript of messages sent between players in coalition T and players outside the coalition. (For randomized protocols, the requirement is that the distributions of πT are identical.)

This definition generalizes the notion of internal privacy.

t-privacy is a strong requirement. It states that there is no coalition of size up to t that can ever learn any information about the inputs of non-coalition players. (Members of the coalition share their inputs, communication transcripts, and private randomness when trying to learn.) This definition means that, for all inputs which appear identical to the coalition parties in T, the communication exchanged between players in T and players outside T is the same (or identically distributed).

In the multiplayer setting, a natural question is: which functions f are t-privately computable (and for which t)? There is a complete characterization of the privately computable Boolean functions, given by theorems 21 and 19. Our interest lies in determining which k-input functions are k-private, as this is the strongest guarantee of privacy: no coalition of any size can break it. Strong guarantees on t-privacy are possible using cryptographic techniques in the randomized setting. Theorems 19 and 21 answer this question for Boolean functions using randomized protocols; privately-computable arbitrary functions are characterized by theorem 22.

Theorem 19 [CK89] Every Boolean k-player function is ⌊(k−1)/2⌋-privately computable by a randomized protocol. Any function more private than this⁸ is also private against coalitions of size ≤ k (using a randomized protocol).

Note that k-private is the strongest possible privacy requirement for a k-player function. A k-private function on k inputs is private against coalitions of arbitrary size. Additionally, k-private functions are private against eavesdroppers: no eavesdropper who can observe the communication channels can learn any information about any of the inputs of a k-private function on k inputs. The randomization is necessary to prove theorem 19.

⁸ “More private” in the sense of: private against larger coalitions.

Lemma 20 (partition lemma) [CGGK94] Let f : X1 × · · · × Xk → Z be a k-player function. Let S ⊂ {1, . . . , k} be any subset of size t. Define the two-player function f′ as

    f′ : (∏_{i∈S} Xi) × (∏_{i∉S} Xi) → Z
    f′([xi]_{i∈S}, [xi]_{i∉S}) = f(x1, . . . , xk)

If f is t-private, then f′ is 1-private.
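
In code, the lemma is applied by currying the k-player function into a two-player one and then running any two-player impossibility test on it. Below is a minimal sketch (not from the thesis; the three-player majority function and the coalition are arbitrary illustrative choices) using the corners test from section 2.3.

    # Induce the two-player function f' from a k-player f and a coalition S,
    # then apply the corners-lemma test to f'.
    from itertools import product

    def induce_two_player(f, k, S, domain):
        S = sorted(S)
        T = [i for i in range(k) if i not in S]            # players outside the coalition
        def merge(xs, xt):
            full = [None] * k
            for i, v in zip(S, xs):
                full[i] = v
            for i, v in zip(T, xt):
                full[i] = v
            return full
        X = list(product(domain, repeat=len(S)))           # coalition side
        Y = list(product(domain, repeat=len(T)))           # non-coalition side
        return (lambda xs, xt: f(*merge(xs, xt))), X, Y

    def violates_corners(f2, X, Y):
        return any(f2(x, y) == f2(x, y2) == f2(x2, y) != f2(x2, y2)
                   for x, x2, y, y2 in product(X, X, Y, Y))

    # Three-player majority with coalition S = {0, 1}: the induced f' violates the
    # corners lemma, so majority is not 2-private (consistent with theorem 21,
    # since majority is not an XOR of per-player functions).
    maj = lambda a, b, c: int(a + b + c >= 2)
    fprime, X, Y = induce_two_player(maj, 3, {0, 1}, (0, 1))
    print(violates_corners(fprime, X, Y))   # True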

The partition lemma provides an easy way to show that multiplayer functions are not private. (Notice that it is the multiplayer analog of the corners lemma, lemma 8.) The privacy of two-player functions is better understood, and below we describe some easy tests which disqualify two-player functions from private computation. Simply by demonstrating one set S which induces a two-player function f′ for which privacy is impossible, it is easy to show that the original multiplayer function f is not privately computable. Thus the partition lemma simplifies the analysis of privacy for multiplayer functions.

In addition to defining the privacy hierarchy for Boolean functions, [CK89] fully characterize which functions are privately computable.

Theorem 21 [CK89] A Boolean k-player function f is ⌈k/2⌉-private iff ∃ Boolean fi such that f(x1, . . . , xk) = ⊕i fi(xi).

These results are hopeful: Boolean functions can be privately computed! (In fact, these results hold even if the notion of privacy is relaxed to “weak” privacy [CK89], which allows for inputs to be distinguishable with probability ≤ δ, and for the function to be computed with ε-error (for small ε and δ). See definition 42.) However, the proofs of theorems 19 and 21 strongly rely on the Boolean nature of the functions, and cannot be generalized to arbitrary functions.

Theorem 22 [BOGW88] Every k-player function is ⌊(k−1)/2⌋-privately computable by a randomized protocol.

Arbitrary functions are less well-behaved than Boolean functions. Accordingly, characterizing the privacy hierarchy for arbitrary functions is more complicated. The corners lemma can be applied to k-player functions by considering the coalition T and the non-coalition players as two players. In this case it gives a necessary condition for t-private functions: the induced two-player function must be 1-private for all coalitions T of size t. There is a full characterization of the privacy hierarchy for arbitrary-valued functions in the style of theorem 19.

Theorem 23 [CGGK94] Let t be an integer in ⌈k/2⌉ ≤ t ≤ k − 2. There exists a k-argument function

ft which is t-private but not (t+1)-private.

This inspires hope that there will be some analog of theorem 21 for arbitrary functions. No such

straightforward characterization is known. At best, Kushilevitz gives a combinatorial characterization of


private 2-argument functions [Kus89], along with an analysis of the minimum communication complexity

required to privately compute such functions.

Extending the rationale behind the corners lemma (lemma 8), Kushilevitz defines a forbidden submatrix, a condition on Mf which explains why privacy for f is unachievable [Kus89].


2.5 The privacy benefit of randomization

Even with zero error, there is a privacy benefit to using a randomized protocol instead of a deterministic

one. (There is no benefit with only two players, as theorem 13 states.)

The canonical example of such a function/protocol pairing is XOR for k ≥ 3 players. The function

is defined as:

XOR : {0, 1} × · · · × {0, 1} → {0, 1}    (k copies of {0, 1})

XOR(x1, . . . , xk) = x1 ⊕ x2 ⊕ · · · ⊕ xk

A brief consideration should suffice to convince the reader that there is no deterministic, perfectly private

protocol for this function. However, a randomized and perfectly private protocol for k-player XOR exists

[CK89].

Thus, even without the addition of error, more functions are privately computable by randomized

protocols than by deterministic protocols.
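To make the role of randomness concrete, here is a minimal Python sketch of a standard share-and-broadcast protocol for k-player XOR (not necessarily the protocol of [CK89]); it assumes private pairwise channels in the first round and a broadcast channel in the second, and the function names are illustrative only.

    import secrets

    def xor_bits(bits):
        out = 0
        for b in bits:
            out ^= b
        return out

    def private_xor(inputs):
        # Round 1: player i splits x_i into k random shares that XOR to x_i and
        # sends share[i][j] to player j over a private channel.
        k = len(inputs)
        shares = []
        for x in inputs:
            row = [secrets.randbits(1) for _ in range(k - 1)]
            row.append(x ^ xor_bits(row))
            shares.append(row)
        # Round 2: player j broadcasts the XOR of the shares it received.
        broadcasts = [xor_bits([shares[i][j] for i in range(k)]) for j in range(k)]
        # The XOR of the broadcasts equals x_1 xor ... xor x_k.
        return xor_bits(broadcasts)

    assert private_xor([1, 0, 1, 1]) == 1

The intuition is that, to an eavesdropper, the broadcast bits are uniformly random subject only to XOR-ing to the output, so observing them reveals nothing beyond XOR(x1, . . . , xk) – something no deterministic protocol can arrange.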


           1        2        3        4      ···    2^n − 1       2^n
 1       (1, B)   (1, B)   (1, B)   (1, B)   ···    (1, B)        (1, B)
 2       (1, A)   (2, B)   (2, B)   (2, B)   ···    (2, B)        (2, B)
 3       (1, A)   (2, A)   (3, B)   (3, B)   ···    (3, B)        (3, B)
 4       (1, A)   (2, A)   (3, A)   (4, B)   ···    (4, B)        (4, B)
 ⋮         ⋮        ⋮        ⋮        ⋮       ⋱       ⋮             ⋮
 2^n − 1 (1, A)   (2, A)   (3, A)   (4, A)   ···    (2^n − 1, B)  (2^n − 1, B)
 2^n     (1, A)   (2, A)   (3, A)   (4, A)   ···    (2^n − 1, A)  (2^n, B)

Figure 2.24: The matrix Mf for the two-player n-bit Vickrey auction. Rows are indexed by Alice's bid x, columns by Bob's bid y; the entry in cell (x, y) is f(x, y).

2.6 Vickrey auction

Vickrey auctions (also known as 2nd-price auctions) arise in mechanism design, and are a canonical

example of a truthful mechanism: neither player has incentive to cheat, as long as the auction is computed

correctly. For a positive integer n, the two-player n-bit Vickrey auction is defined as f : {0, 1}^n × {0, 1}^n → {0, 1}^n × {A, B} where

f(x, y) =
  (x, B)   if x ≤ y
  (y, A)   if y < x

The function is usually framed as an auction for a single item. The two players, Alice and Bob, have

private values x and y, respectively. These private values indicate the amount of money that the item is

worth to each of them. If x ≤ y, then Bob wins, and the price that he pays is x. (Thus, f(x, y) = (x,B)

means that Bob wins and pays x for the item.) Similarly, if x > y, then Alice wins, and the price

that she pays is y. This mechanism is also called “2nd-price auction” because the selling price is the

2nd-highest bid. The matrix Mf of the n-bit Vickrey auction is shown in figure 2.24 (notice the similarity

to figure 2.16).
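As a quick concrete check (illustrative only, not part of the analysis), the following Python snippet computes f and prints the matrix Mf for a small n, reproducing the staircase structure of figure 2.24.

    def vickrey(x, y):
        # Two-player Vickrey auction on bids in {1, ..., 2^n}: the higher bidder
        # wins and pays the other bid; ties go to Bob, matching the definition above.
        return (x, 'B') if x <= y else (y, 'A')

    n = 2
    for x in range(1, 2**n + 1):
        print([vickrey(x, y) for y in range(1, 2**n + 1)])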

The k-player Vickrey auction outputs the 2nd-highest player’s bid and the name of the highest-bidding

player. Although Vickrey auctions remain truthful for more than two players, they are not computable

with perfect privacy.

Lemma 25 [BS08] There is no private protocol for second-price auctions with more than two players.

First note that, due to the way the outputs of the Vickrey auction function are defined, internal and

external privacy are the same for Vickrey auctions.

This lemma is easily proven by application of the partition lemma and the corners lemma. First, partition the players so that k − 2 players are in one group and the remaining two in the other. Now we have a two-player function f′ : X × Y → Z where X = {0, 1}^n × {0, 1}^n and Y = ({0, 1}^n)^{k−2}. Let x = (8, 4), x′ = (8, 1), y = (4, 1, . . . , 1), and y′ = (1, 1, . . . , 1). Then f′(x, y) = f′(x′, y) = f′(x, y′) = (4, A) (the first bidder Alice wins at a price of 4) but f′(x′, y′) = (1, A). Thus f′ is not 1-private (by the corners lemma), so f is not private (by the partition lemma).

Perfect privacy for two-player Vickrey auctions is achieved by the successive English bidding protocol,

in which bids start at 1 and increase by 1 in each round, and the first player to drop out of bidding


reveals his entire private value. (Note that this incurs no loss of privacy, since that value is part of the

function output.) The protocol tree for this protocol is given in figure 2.26. This protocol takes 2^{n+1} rounds for the n-bit Vickrey auction, and is known to be the only protocol which obtains perfect privacy

for Vickrey auctions [Kus89].

Figure 2.26: The protocol tree for an English auction computing the two-player Vickrey auction. (Internal nodes test, in order, whether x = 1, y = 1, x = 2, y = 2, . . . ; each test that succeeds ends the protocol at a leaf labelled with the corresponding output.)

Theorem 27 [Kus89] Perfect privacy for the two-player n-bit Vickrey auction is only achievable by the 2^{n+1}-length English auction.

Notice that the range of f is of size 2^{n+1} and that f is surjective, so there must be at least 2^{n+1} distinct leaves in any protocol tree for f. Thus any protocol for f requires at least n + 1 rounds (and at least n + 1 bits of communication, by theorem 4; see figure 4.50 on page 36). An example of such a

protocol is the bisection protocol [FJS10a], which proceeds like binary search. Alice and Bob perform

binary search until they know which player has the smaller value. Then this player reveals his entire

value. (Once they know who has the smaller value, the larger-valued player need not send any more bits

in the protocol; indeed, if he does, they are either useless or unnecessarily revealing of his private value.)

Thus the (simplified) protocol tree for the bisection protocol looks like figure 2.28.

However, note that this necessarily reveals substantial private information about the winner’s bid

(for example, if x = y it will reveal the entirety of the winner’s bid, a grave loss of privacy indeed).

Protocols for Vickrey auction are more closely described and analyzed in chapters 4 and 6.
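The contrast between the two protocols can be checked by brute force for small n. The sketch below is illustrative only: the transcript encodings and helper names are ad hoc, and the ratio it measures, |Rx,y|/|Px,y|, is the quantity defined formally in definition 31 of the next chapter.

    from collections import defaultdict
    from itertools import product

    N = 3                         # bids are n-bit values in {1, ..., 2^n}
    VALUES = range(1, 2**N + 1)

    def vickrey(x, y):
        return (x, 'B') if x <= y else (y, 'A')

    def english_transcript(x, y):
        # Successive bidding: test "x = 1?", "y = 1?", "x = 2?", ... until someone drops out.
        bits = []
        for price in VALUES:
            bits.append(x == price)
            if x == price:
                break
            bits.append(y == price)
            if y == price:
                break
        return tuple(bits)

    def bisection_transcript(x, y):
        # Binary search for the smaller value, then that player reveals her value.
        bits, lo, hi = [], 1, 2**N
        while lo < hi:
            mid = (lo + hi) // 2
            a, b = x <= mid, y <= mid
            bits += [a, b]
            if a != b:
                break
            lo, hi = (lo, mid) if a else (mid + 1, hi)
        return tuple(bits) + tuple(format(min(x, y), 'b'))

    def worst_case_par(transcript):
        regions, rects = defaultdict(int), defaultdict(int)
        for x, y in product(VALUES, VALUES):
            regions[vickrey(x, y)] += 1
            rects[(vickrey(x, y), transcript(x, y))] += 1
        return max(regions[o] / rects[(o, t)] for (o, t) in rects)

    print('English  :', worst_case_par(english_transcript))    # 1.0
    print('Bisection:', worst_case_par(bisection_transcript))  # 2^n (here 8.0)

For small n this prints 1 for English bidding and 2^n for bisection, matching the two extremes described above.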

These two extremes – on the one hand perfect privacy at exponential communication cost, and on the

other, large privacy loss at linear communication cost – suggest that there is a tradeoff between privacy

and communication for Vickrey auctions. The structure of the function itself suggests this tradeoff as

well. Any move which differs from the English protocol must divide some monochromatic region into


two pieces. Thus inputs in the same monochromatic region are distinguishable by the protocol, and some privacy is lost.

Figure 2.28: The protocol tree for the bisection protocol, solving the two-player Vickrey auction.

Different privacy loss is achievable depending on the nature of the protocol. In the next chapters, we’ll

see the privacy approximation ratio (PAR, definition 31) proposed by [FJS10a], who use it to examine

a family of Bisection-type protocols, and extend it to average-case PAR (definition 62) to differentiate

amongst these protocols. Such protocols obtain worst-case PAR varying from 1 to 2^n, inversely related

to their length. This observation inspired the results of chapters 4 and 6. For all this material and more,

read on.


Chapter 3

Defining worst-case approximate

privacy

This chapter presents several definitions of worst-case privacy loss. These will be used throughout the

thesis. The list is not meant to be comprehensive; an exhaustive collection of every different mathematical

measurement of privacy is beyond the scope of this work. The select definitions collected below represent

(what the author perceives as) the main definitions of privacy for this model.

Preexisting results about privacy are provided. Some attempt is made to summarize the motivations

for each of the various definitions of privacy. A summary of worst-case privacy measures, and how they

relate to one another, is provided in chapter 9. Overall, the challenge in defining worst-case approximate privacy lies in deciding how heavily worst-case privacy losses should be weighed, and how they should be compared. Each measure presented in this chapter is reasonable, but the measures differ in their motivations and in the severity with which privacy loss is assessed.



3.1 Defining approximate privacy with no error

Several definitions of privacy have been proposed. We begin by examining the communication model

with no error.

3.1.1 Privacy approximation ratio (PAR)

Many functions are known to require linear communication complexity, including =, ≤, and set inter-

section [Yao79]. This is quite expensive in communication cost. The =, ≤, set disjointness and set

intersection functions all fail the corners lemma test, and so are not computable at all with perfect

privacy.

Variations of these problems which pass the corners lemma test do not fare much better. As we

have seen, Vickrey auctions (related to ≤) are computable in linear communication, but the only private

protocol requires exponential communication [Kus89]. Other similar auction problems are known to be

difficult or impossible to compute with perfect privacy [BS08, FJS10b] – in the sense of PAR = 1 for two-player functions, or in the stronger sense of k-privacy for k-player functions.

This motivates a relaxation from perfect privacy to approximate privacy.1 The relaxation is appealing

because it renders more functions computable, and more closely mirrors real-world situations in which

some privacy loss may be acceptable. This more fine-grained approach to privacy allows us to distinguish

one protocol as relatively more private than another, even when neither protocol is perfectly private.

Approximate privacy offers a method to evaluate and compare protocols for any function (even a function

for which perfect privacy is unattainable).

In order to “relax” the privacy requirement, we must consider: What is privacy? The notion of

t-privacy is a requirement of indistinguishability: inputs in the same preimage of f should be in the

same preimage of π. To define approximate privacy, we will relax this inclusion.

Definition 29 Given f : X × Y → Z, each input (x, y) is associated with the region Rx,y of all inputs

in the preimage of f(x, y).

Rx,y = {(x′, y′) ∈ X × Y | f(x, y) = f(x′, y′)}

Ideally, the protocol would compute the output f(x, y) without revealing any additional information

about the inputs x and y. Thus input (x, y) would be indistinguishable from any other input in the

region Rx,y. Depending on the protocol, different inputs in this region may have differing transcripts,

despite yielding the same function output. In this case there is some loss of privacy to an eavesdropper,

captured by definition 31.

Definition 30 Let deterministic, zero-error protocol π compute function f : X × Y → Z. Let π(x, y)

be the transcript of messages exchanged when the protocol is run on input (x, y). Each input (x, y) is

associated with the protocol-induced rectangle Px,y of all inputs which yield the same transcript.

Px,y = {(x′, y′) ∈ X × Y | f(x, y) = f(x′, y′) and π(x, y) = π(x′, y′)}

1 The word "approximate" refers to the privacy and not the function's outcome, which protocols must compute exactly (until chapter 8).


Note that Px,y ⊆ Rx,y. If these two sets are the same for all inputs, then the protocol is private and

reveals no additional information beyond the output of the function. A measure of privacy is obtained

by comparing the relative sizes of these sets. The set Px,y contains all the inputs which an eavesdropper

– who has access to the entire protocol transcript – cannot distinguish from (x, y).

Definition 31 [FJS10a] A protocol for f has worst-case privacy approximation ratio (PARext)

given by the maximum ratio, over all inputs, between the sizes of a region associated with an input and

the protocol-induced rectangle associated with that input.

(worst-case) PARext(π) = max_{(x,y)} |Rx,y| / |Px,y|

The worst-case PARext for f is the minimum, over all protocols π for f , of the worst-case PARext for

π.

From this definition, each input (x, y) has its own PARext(x, y) = |Rx,y| / |Px,y|. This is the privacy approximation ratio obtained for that input. The best PARext is 1; PARext is always ≥ 1, and the worst possible PARext depends on the function. PARext is always upper-bounded by 2^{2n}. Since a larger PARext value is

worse, PARext should be thought of as measuring privacy loss. As suggested by the superscript, PARext

is an external measure ([FJS10a] sometimes call this “objective”).

Notice the nice parallels of PAR with the visual intuition of the corners lemma (lemma 8). Both are concerned

with regions and rectangles.

Definition 32 The internal privacy approximation ratio measures the amount of privacy lost from

the perspective of one of the players.

For player 1, define:

• the 1-ideal monochromatic regions R^1_{x,y} as:

R^1_{x,y} = {(x, y′) ∈ X × Y | f(x, y) = f(x, y′)}

• the 1-induced protocol rectangles P^1_{x,y} as:

P^1_{x,y} = {(x, y′) ∈ X × Y | f(x, y) = f(x, y′) and π(x, y) = π(x, y′)}

(where π(x, y) is the protocol transcript as in definition 30)

• the worst-case PARint with respect to 1 as: max_{(x,y)} |R^1_{x,y}| / |P^1_{x,y}|, and

PARint with respect to player 2 is defined analogously.

The worst-case PARint for a protocol is the maximum of the worst-case PARint with respect to 1

and the worst-case PARint with respect to 2. We will denote this PARint(π) where π is a protocol.

The worst-case internal PAR for f is the minimum, over all protocols π for f , of the worst-case

internal PAR for π.

[FJS10a] sometimes refer to internal privacy as “subjective”, to parallel external privacy’s moniker

“objective.” We will try to stick with “internal” and “external.”
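Both quantities are easy to compute exhaustively for toy functions. The sketch below is illustrative only: the trivial "both players announce their inputs" protocol and the helper names are assumptions, not anything from [FJS10a]; it simply evaluates definitions 31 and 32 directly by brute force.

    from itertools import product

    def par_measures(f, transcript, X, Y):
        # Brute-force evaluation of worst-case PARext (definition 31) and PARint
        # (definition 32) for a deterministic zero-error protocol given by its
        # transcript function.
        def count(pred):
            return sum(1 for a, b in product(X, Y) if pred(a, b))
        par_ext = par_int = 1.0
        for x, y in product(X, Y):
            same_out = lambda a, b: f(a, b) == f(x, y)
            same_all = lambda a, b: same_out(a, b) and transcript(a, b) == transcript(x, y)
            par_ext = max(par_ext, count(same_out) / count(same_all))
            # Internal, w.r.t. player 1 (fix x) and player 2 (fix y).
            par_int = max(par_int,
                          count(lambda a, b: a == x and same_out(a, b)) /
                          count(lambda a, b: a == x and same_all(a, b)),
                          count(lambda a, b: b == y and same_out(a, b)) /
                          count(lambda a, b: b == y and same_all(a, b)))
        return par_ext, par_int

    # Toy usage: 2-bit greater-than, computed by the trivial protocol in which
    # both players simply announce their inputs.
    X = Y = range(4)
    print(par_measures(lambda x, y: int(x > y), lambda x, y: (x, y), X, Y))

Replacing the transcript function lets one compare different protocols for the same f under both measures.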


Internal privacy is a slightly more complicated notion of privacy than external privacy (definitions 31

and 62). The discussion below primarily considers external approximate privacy. External and internal

PAR measures for two-player Vickrey auction are covered at length in chapters 4 and 6.

3.1.2 Strong h-privacy

Bar-Yehuda et al. suggest yet another measure of privacy, with a different approach. Rather than

measuring the mutual information of the inputs and the transcript (as some measures we will see), or

comparing the sizes of rectangles (as PAR does), they suggest a function h on X ×Y which attempts to

capture the “revealed information” that the protocol tells about the input. Alternately, one can think

of h as revealing the unimportant2 parts of the input; a function is h-private if it reveals no more than

the information revealed by the value h(x, y).

Remark 33 (Do not confuse t-private with h-private) There is an unfortunate coincidence of nam-

ing style in preexisting literature. This makes t-private and h-private sound like related concepts. The t

in t-private refers to the number of players (in a coalition – recall definition 18); the h in h-private is a

function.

They are not related.

This approach to measuring privacy permits more flexibility than simple approximate privacy, because

it acknowledges the fact that players may care disproportionately about the privacy of some parts of

their input.

Definition 34 (strongly h-private) Let f, hA : {0, 1}^n × {0, 1}^n → Z be two functions. A protocol π for f is strongly hA-private for Alice when:

• π computes f with zero error, and

• ∀x, y, y′ ∈ {0, 1}^n, hA(x, y) = hA(x, y′) implies that for all Alice's random coins rx and for all possible transcripts t,

Pr_{ry}[t = π(x, y) | rx] = Pr_{ry}[t = π(x, y′) | rx]

where ry is the random variable for Bob's coin flips.

Strongly hB-private for Bob is defined analogously.

A protocol π is strongly (hA, hB)-private for f if it is strongly hA-private for Alice and strongly

hB-private for Bob.

A function is strongly h-private if it is strongly (h, h)-private.

Strong h-privacy is a guarantee of internal privacy.

The intuition here is that h must do something like subpartitioning the protocol-induced rectangles.

The regions of h (preimages) must all be unions of protocol-induced rectangles, although they need not

be rectangles themselves.
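For deterministic protocols the definition collapses to a simple combinatorial condition, which the following sketch checks by brute force. It is an illustration of the deterministic special case only; the randomized definition above also quantifies over Alice's coins, and the function names are assumptions.

    from itertools import product

    def is_strongly_hA_private(hA, transcript, X, Y):
        # Deterministic special case of definition 34: whenever hA(x, y) = hA(x, y'),
        # the transcripts on (x, y) and (x, y') must coincide.  (Zero error is
        # assumed separately.)
        for x, y, y2 in product(X, Y, Y):
            if hA(x, y) == hA(x, y2) and transcript(x, y) != transcript(x, y2):
                return False
        return True

Taking hA = f recovers the usual perfect-privacy condition from Alice's side: equal outputs force equal transcripts.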

Privacy is then a measurement relying on the sizes of the ranges of the functions h and f. Bar-Yehuda et al. then use this measure to establish a privacy hierarchy, comparing different functions according

2 Here, "unimportant" describes any part of the input where the players don't care about privacy. For example, the least significant digit of your bank balance is likely an "unimportant" piece of private information.


to the amount of additional information required to compute them using a communication protocol.

Further discussion of strong h-privacy continues in section 3.2.3, with the definition of weakly h-private.

Discussion of Bar-Yehuda et al.’s comparisons of the relative privacy achievable for different functions,

and the utility of the concept of “revealed information”, continues in chapter 8.


3.2 Defining approximate privacy with error

Introducing error-prone protocols requires an adjustment in our measures of approximate privacy. It is

not immediately clear how to define approximate privacy in the presence of protocol errors. It seems

clear that no binary-output protocol can err more than 1/2 the time.

Remark 35 An erring protocol will always choose to output the majority-correct answer for any protocol-

induced rectangle.

However, this is not as straightforward as it seems. Protecting privacy against eavesdroppers or other

players may affect how remark 35 is implemented, since the players’ view (of subjective protocol-induced

rectangles) differs from the eavesdropper’s view (of external protocol-induced rectangles). For example,

certain values of x may fully determine the value f(x, y), so that Alice knows the actual answer, but if

the protocol makes an error, Bob and an eavesdropper do not.

3.2.1 The privacy benefit of error

The introduction of randomness suggests another relaxation: permitting protocols to make some ǫ frac-

tion of errors. It seems reasonable to expect that erring protocols do not reveal more private information

than always-correct protocols. Further, randomized protocols naturally lead to protocols with some er-

ror. (Especially if an analog of theorem 13 holds for approximate privacy — in that case, the only hope of

using randomness to shorten protocols would be to introduce error, because always-correct randomized

protocols could be replaced with always-correct deterministic protocols. See open problem 133.)

The basic question is this: can we sacrifice accuracy for privacy?

Consider the function f : [100] × [100] → {1, 2, 3} defined as follows:

f(x, y) =
  3   if x = 7 and y = 20
  1   if (x, y) ≠ (7, 20) and y < 50
  2   if y ≥ 50

This f motivates the use of error-prone protocols. Deterministically calculating the function f without

error would require some loss of privacy to the inputs (x, y) where y < 50, since a protocol-induced

rectangle is necessary to contain the single input pair (7, 20). However, if some error is allowed, then

the protocol P can, in one round, calculate the function p : [100] × [100] → {1, 2, 3}, defined as:

p(x, y) =
  1   if y < 50
  2   if y ≥ 50

Notice that this protocol makes a single error3 on the input (7, 20), but overall demonstrates two useful

outcomes of error-prone computation:

• it finishes very quickly, in just one round, and

• it loses no privacy on the correctly-computed inputs.
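A short check (illustrative only) confirms the claim: p disagrees with f on exactly one of the 10,000 inputs, so under the uniform distribution the protocol's distributional error is 1/10000.

    f = lambda x, y: 3 if (x, y) == (7, 20) else (1 if y < 50 else 2)
    p = lambda x, y: 1 if y < 50 else 2
    errors = [(x, y) for x in range(1, 101) for y in range(1, 101) if f(x, y) != p(x, y)]
    assert errors == [(7, 20)]
    print(len(errors) / (100 * 100))   # 0.0001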

3And therefore would give a poor showing for worst-case error (recall 6).


This promising if simple example demonstrates that relaxing to a distributional ǫ-error protocol can

grant vast improvements in both communication cost and approximate privacy. (The promise of nearly-

correct, nearly-private computation is fulfilled for private multiparty approximations of many problems

[BCNW06].)

With worst-case error the examples are more sophisticated. The equality function EQ stands as an

example of possible benefits: it saves on both privacy and communication when allowed error.

Definition 36 (The EQ function) The equality function EQ : X × Y → {0, 1} is defined:

EQ(x, y) = 1 ⇐⇒ x = y

Definition 37 (The GT function) The greater than function GT : X × Y → {0, 1} is defined:

GT(x, y) = 1 ⇐⇒ x > y

Lemma 38 (EQ and GT communication complexity) [KN97] For both EQ and GT, the shortest

possible deterministic, zero error protocol on n-bit inputs requires n+ 1 bits of communication.

The deterministic case is proved using the basic fooling set argument. A similar argument shows that

the worst-case privacy is also bad. (For equality: the region where EQ = 1 is of size 2^n and gets broken into 2^n rectangles. Alternately, the region where EQ = 0 is of size 2^n(2^n − 1). Consider the subset of these consisting of the input pairs (x, x + 1). Each of these must be in its own induced rectangle, so the EQ = 0 region is divided into at least 2^n − 1 rectangles. For greater than: the fooling set {(x, x)} works here. Each input pair (x, x) must be in its own induced rectangle. Thus the GT = 0 region is divided into at least 2^n rectangles. At best this division is even.)

Lemma 39 For both EQ and GT, the worst-case PARext is ≥ 2^n.

On the other hand, short randomized erring protocols for EQ and GT are known. It is reasonable

to expect that a short protocol will be more private, since it has fewer bits or rounds during which to

lose privacy. We will see that this is the case.4

Lemma 40 There is a randomized public coin protocol τ with communication complexity O(log(n/ǫ))

such that on input two n-bit strings x and y, it outputs the first index i ∈ [n] such that xi ≠ yi with

probability ≥ 1− ǫ if such i exists. (Or a message “x = y” otherwise.)

The two parties use hashing and binary search to locate i, backtracking when they detect earlier

mistakes.

Protocol 41 (Feige’s ≤ [FRPU94]) Fixing some constant C, define a labelled binary tree of depth

C log(n/ǫ) as follows. Each node will be labelled with a span of indices. (These represent the span the

players think contains the earliest index i ∈ [n] such that xi ≠ yi.) The root is labelled with the interval [1, n]. For j from 0 to log n − 1, every node at depth j labelled [a, b] has two children, each labelled with half the interval: [a, b − n/2^{j+1}] and [a + n/2^{j+1}, b]. Every node at depth ≥ log n has exactly one child, with the same label as its parent.

4 Actually, [Bra11] gives a randomized protocol for EQ with zero error and constant information cost, so equality is not an ideal example of the privacy benefit of error. Most protocols solving GT also solve EQ, but the converse is not true. The Braverman et al. protocol does not solve GT, so greater than remains an example of the privacy benefit of error (as far as the author knows).

This binary tree will serve as the protocol tree, with one notable modification: since the protocol is

randomized and may make errors, the players are allowed to walk up the tree, effectively “taking back”

a previous round.

Starting at the root of the tree:

1. The players are at some node v labelled [a, b] in the tree.

2. The players check whether i ∈ [a, b], that is, whether their previous moves were correct. They use

public randomness to pick hash functions and use these to compare the prefixes of x and y of length

a and b, respectively.

If the tests indicate that i ∈ [1, a] or i ∉ [1, b], then the players move to the parent of v.

Otherwise, the players again use hash functions to decide which child of v to switch to. If these

tests are inconsistent, the players stay at v.

3. Repeat step two C log(n/ǫ) times.

4. If the final vertex is labelled [a, a], output a. Otherwise conclude that x = y.
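The following Python sketch is a simplified rendering of protocol 41 (0-indexed, with ad hoc parity hashes standing in for the hash functions, a rounds cap in place of the exact bookkeeping, and the "inconsistent tests" rule omitted; C, eps, and hash_bits are illustrative parameters). It is meant only to make the walk-with-backtracking concrete.

    import math, random

    def hashes_agree(x, y, length, hash_bits=4):
        # Public-coin comparison of the length-bit prefixes of x and y using a few
        # random parities: never errs on equal prefixes, and wrongly reports unequal
        # prefixes as equal with probability 2^(-hash_bits).
        if length == 0:
            return True
        for _ in range(hash_bits):
            h = [random.randrange(2) for _ in range(length)]
            if sum(c * int(b) for c, b in zip(h, x)) % 2 != \
               sum(c * int(b) for c, b in zip(h, y)) % 2:
                return False
        return True

    def first_difference(x, y, eps=0.05, C=40):
        # Walk the interval tree: verify the current node, back up when an earlier
        # move is detected to have been wrong, otherwise descend towards the first
        # index where x and y differ.  Returns that index (0-based), or None,
        # meaning "conclude x = y".
        n = len(x)
        lo, hi, stack = 0, n - 1, []
        for _ in range(int(C * math.log2(max(2, n / eps)))):
            if not hashes_agree(x, y, lo) or hashes_agree(x, y, hi + 1):
                if stack:
                    lo, hi = stack.pop()            # take back a previous round
                continue
            if lo == hi:
                continue                            # bottom chain: stay put
            mid = (lo + hi) // 2
            stack.append((lo, hi))
            if not hashes_agree(x, y, mid + 1):
                hi = mid                            # descend to the left child
            else:
                lo = mid + 1                        # descend to the right child
        return lo if lo == hi else None

    print(first_difference('0011010111', '0011110111'))   # 4, with high probability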

Proof of Lemma 40: We proceed by analysis of protocol 41. Step two takes constant communication

and is repeated C log(n/ǫ) times, giving the desired communication complexity.

If x = y, then the protocol is always correct.

Suppose x 6= y and i is the first index where they differ. As long as the protocol finishes at some

node labelled [i, i] it will be correct; there are many of these nodes, in a long chain, at the bottom of the

tree. Consider directing the edges of the tree to be pointed towards this chain of nodes.

Notice that at each step, the players move from v to a node w which is closer to this chain with

probability at least 2/3. (The exact probability depends on the way public randomness is used to create

a hash function. See appendix C of [BBCR10] for a more detailed analysis.)

The protocol succeeds as long as the number of correct steps on the tree is at least log n plus the

number of incorrect steps. Let A be the number of correct, and B the number of incorrect steps

(A + B = C log(n/ǫ)). We want:

A ≥ B + log n

The expected value of A is (2/3)C log(n/ǫ). The protocol errs only when A differs from this expectation by (1/6)C log(n/ǫ) − (1/2) log n. Using Chernoff bounds, we can set C to be large enough that the probability of erring is at most ǫ.

Thus there is a clear communication benefit of randomization. ‘Greater than’ goes from n + 1 (no

randomization) to O(log(n/ǫ)) (with randomization). And what about privacy? Because protocol 41 is

randomized and erring, it is more convenient to consider its privacy with some average-case measure.

Clearly the protocol cannot reveal more bits of information than there are bits of transcript. Hence an

easy upper bound of IC^ext_µ ≤ O(log(n/ǫ)) (with error), a dramatic improvement over the O(n) bits of

privacy revealed (with no error and no randomization).


3.2.2 (δ, t)-privacy with ǫ error

The notion of t-privacy can be extended to protocols which make mistakes. [CK89] relax this definition

even further, adding a δ parameter which measures how much privacy (distinguishability) is being

revealed. We will fix t = 2 since our main concern is the two-player case. These definitions can all be

generalized to the many player case in the obvious way.

Definition 42 ((δ, t)-private with ǫ error) [CK89] A function f : X × Y → Z is (δ, 2)-private with ǫ error if there is a protocol π (computing function p) such that:

• (error condition)

∀(x, y) ∈ X × Y, Pr(p(x, y) ≠ f(x, y)) ≤ ǫ, and

• (privacy condition)

For any two inputs (x, y) and (x′, y′) such that f(x, y) = f(x′, y′) and either x = x′ or y = y′,

(1/2) ∑_{transcripts s} |Pr[s | x, y] − Pr[s | x′, y′]| ≤ δ.

With probability taken over the players' randomness.

This is a measure of internal privacy.

This generalization relaxes t-privacy. In the multiplayer case, it means that no coalition of ≤ t

players (who share their inputs and transcripts) can learn “much” (depending on δ) about the inputs

of non-coalition players. When two inputs agree on their T entries, δ bounds the statistical distance

between the messages passed between T and T̄.

Note that every deterministic k-player protocol is either (0, k)-private or (1, k)-private. The mea-

surement (δ, t)-private seems most useful when studying randomized protocols. Even for only two-player

protocols, (δ, 2)-privacy is meaningful. This measure is relatable to t-privacy.

Theorem 43 [CK89] If there are ǫ, δ ≥ 0 such that ǫ + δ < 1/2 and k-input Boolean function f is

(δ, ⌈k/2⌉)-private with ǫ error, then f is k-private.

There is an analog of the corners lemma for this weaker notion:

Lemma 44 [CK91] Let ǫ, δ ≥ 0 satisfy ǫ + δ < 1/2. Let X, Y , and Z be nonempty sets and let

f : X × Y → Z be a function that can be computed (δ, 1)-privately with ǫ error. For every z ∈ Z,

x, x′ ∈ X, and y, y′ ∈ Y , the following holds:

f(x, y) = f(x, y′) = f(x′, y) = z ⇒ f(x′, y′) = z

3.2.3 Weak h-privacy and additional information

Bar-Yehuda et al. define a version of h-privacy for protocols with error. This is another approximate

measure of privacy, since h can vary from h ≡ f to h ≡ 0, each conveying a different amount of

information about the inputs. Recall that h is a function on X × Y which captures the “revealed

information” that the protocol tells about the input; roughly, a function f is h-private if there is a

protocol computing f(x, y) which reveals at most h(x, y). (See section 3.1.2 for the definition of strongly

h-private, by contrast.)


Definition 45 (weakly h-private) [BYCKO93] Let f, hA : {0, 1}^n × {0, 1}^n → Z be two functions. A protocol π for f is weakly hA-private for Alice when ∀x, y, y′ ∈ {0, 1}^n, hA(x, y) = hA(x, y′) implies that for all possible transcripts t,

Pr_{rX, rY}[t = π(x, y)] = Pr_{rX, rY}[t = π(x, y′)]

where the probability is over both players' coin flips.

Weakly h-private for Bob is defined analogously.

A protocol π is weakly (hA, hB)-private for f if it is weakly hA-private for Alice and weakly hB-private

for Bob.

A function is weakly h-private if it is weakly (h, h)-private.

Like strong h-privacy, weak h-privacy is an internal measure.

The main difference between strongly and weakly h-private protocols is that the latter can err.

([BYCKO93] constrain erring protocols to be correct ≥ 1/2 of the time.) Additionally, the probability

for strongly/weakly h-private protocols is taken over slightly different sets of coins; weak privacy considers

both players’ randomness. Both are worst-case privacy measures.

Of note while developing an intuition about h-privacy is the fact that the protocol for f does not need

to compute h. The function h can actually reveal very sophisticated information, e.g., whether x = y or

the first bit where xi ≠ yi.

The notions of weakly h-private and strongly h-private are related.

Lemma 46 Let f and h be functions on X × Y. Let fh be their concatenation: fh(x, y) = f(x, y)h(x, y). The following hold:

1. If f is weakly h-private, then f is weakly fh-private.

2. If f is strongly h-private, then f is strongly fh-private.

3. If fh is weakly fh-private, then f is strongly fh-private. [BYCKO93]

4. f is strongly fh-private if and only if f is weakly fh-private. [BYCKO93]

5. If f is weakly fh-private, then there is a function g with |range(g)| ≤ |range(h)|^2 such that fg is strongly fg-private. [BYCKO93]

Note that the converse of 3 is not true, as a protocol for f need not compute the value of h.

The following extends theorem 13.

Lemma 47 [Kus89] Let f be a function on X × Y . The following are equivalent:

• f is weakly f -private.

• f is strongly f -private.

• Mf is decomposable.

• Mf does not contain any forbidden submatrix.


This is interrelated with the proof that randomization does not help preserve privacy for 2-player func-

tions (theorem 13).

The notion of weak h-privacy is used to define Bar-Yehuda et al.’s three measures of additional

information.

Definition 48 (Additional information Ic(f)) [BYCKO93]

Let f and h be some functions on X × Y. Let µ be some distribution on X × Y. Let X and Y be random variables drawn from X × Y according to µ. A protocol π computes f if:

• both parties can determine an output value p(x, y) from the transcript5, and

• Pr[p(x, y) = f(x, y)] > 1/2.

Define the combinatorial measures

Ic(f) = min{ log |range(h)| s.t. f is weakly h-private }

Ic^det(f) = min{ log |range(h)| s.t. ∃ a deterministic protocol π for f with
                 h(x, y) = h(x, y′) ⇒ π(x, y) = π(x, y′) and
                 h(x, y) = h(x′, y) ⇒ π(x, y) = π(x′, y) }

These definitions describe internal privacy.

There seem to be several problems with using this definition to evaluate approximate privacy. They

are explored in section 8.1.

5The transcript need not explicitly include the value p(x, y), which we think of as f(x, y); remember remark 2.


Chapter 4

A worst-case privacy tradeoff

Recall the Vickrey auction function discussed in section 2.6. Two-player Vickrey auctions are known

to be perfectly privately computable (that is, PAR = 1), but only by an exponential length algorithm

(theorem 27, protocol tree figure 4.49). By contrast, they are computable in linear time if there are no privacy restrictions (worst-case PARext = 2^n). The bisection protocol (figure 2.28 on page 23) resembles the full binary search protocol tree (figure 4.50), with some limbs pruned. The bisection protocol proceeds like binary search on the smaller input. These two extremes – on the one hand PAR = 1 at exponential communication cost, and on the other, exponential PAR at linear communication cost – suggest that there is a tradeoff between privacy and communication for Vickrey auctions.

Figure 4.49: English bidding achieves PAR = 1 at exponential communication cost.

Figure 4.50: The perfectly balanced tree is the shortest possible protocol tree with 2^{n+1} leaves. The bisection protocol has worst-case PARext = 2^n (the maximum possible) but linear communication cost (the minimum possible).

Theorem 51

establishes a tradeoff between protocol length and approximate privacy achieved for 2-player Vickrey

auctions. The intuition behind this theorem is that protocol steps which resemble those of the ascending

English bidding protocol partition the inputs in an unbalanced way, so that most inputs follow one

branch of the protocol tree, and few inputs follow the other branch. Such steps preserve privacy but


increase the length of the protocol, as the larger branch must be lengthened in order to accommodate

sufficient leaves to compute the function. On the other hand, protocol steps that resemble binary search

partition the inputs in a nearly balanced way. Such steps make good progress, as a balanced tree can

be shallower and still have sufficiently many leaves. But these steps are bad for privacy. Dividing

the remaining inputs in half increases the PARext by a factor of 2.

Theorem 51 For all n, for all p, 2 ≤ p ≤ n/4, any deterministic zero-error protocol for the two-player n-bit Vickrey auction problem with communication cost (length) less than n·2^{n/4p − 5} obtains privacy loss (worst-case PARext) at least 2^{p−2}.

Here the variable p serves as a parameter, explicitly linking the protocol length to the achievable

PARext. For instance, if we put p = √n, then we conclude by theorem 51 that either the protocol communicates 2^{Ω(√n)} bits in the worst case, or the worst-case privacy loss is 2^{Ω(√n)}. This theorem shows

that Vickrey auctions are expensive (in bits of communication) to compute while maintaining worst-case

approximate privacy. Thus, for some problems there is an inherent tradeoff between communication

complexity and privacy.1 This theorem is not tight; see the upper bound in lemma 59.

The rest of this chapter consists of a proof of theorem 51 and further study of worst-case PARext for

two-player Vickrey auctions.

We will assume without loss of generality that in the protocol, the players take turns and send one bit

per message. (Any protocol can be put into this form by at most doubling the length of the protocol.)

Moreover, the protocol is assumed to be deterministic and to have zero error.

Recall that every such protocol can be considered as a binary decision tree (see page 8). For every

deterministic zero-error communication protocol, we describe an adversary strategy that follows a path

through the protocol tree and finds some input (x, y) such that either: (i) the privacy loss of (x, y) is large, i.e., PARext(x, y) ≥ 2^{p−2}, or (ii) the communication protocol on (x, y) requires at least n·2^{n/4p−5} bits to

compute. Which particular input (x, y) loses the worst privacy will vary depending upon the protocol;

for any fixed (x, y), it is easy to construct a short protocol which preserves privacy for that particular

input.

1 Note that the same is not true of average-case privacy approximation ratios. Good (linear) average-case PARext is achievable for two-player Vickrey auction with short protocols. See chapter 6.


4.1 Bookkeeping

Let M denote the matrix corresponding to the Vickrey auction problem, as drawn in figure 2.24 (page 21).

Bob wins for inputs in the horizontal regions; Alice wins for inputs in the vertical regions. Fix a

communication protocol, and corresponding protocol tree, P . Recall from section 1 (page 8) that each

node v in the tree is associated with a set of inputs T (v) = TA(v)× TB(v). For every node v in P that

our adversary strategy selects, we will maintain three sets: S(v), AL(v), BL(v) ⊆ [2^n]. At node v, the

adversary will be interested in tracking the privacy loss on the set of inputs S(v)×S(v). The privacy loss

for these inputs will be measured with the help of the two auxiliary sets AL(v) and BL(v), respectively.

Initially, at the root r of the protocol tree, S(r) = [2^{n−p}]. This initial set of inputs S(r) × S(r) are the "small" inputs that sit in the upper left submatrix of M (see figure 2.24 on page 21). As we move down the protocol tree, we will update S(v) so that it is always a subset of [2^{n−p}] ∩ TA(v) ∩ TB(v). We

are interested in these small inputs since the regions that they are contained in are very large, and thus

have the potential to incur a large (exponential) privacy loss. There is a careful balance to be struck.

S(r) needs to be large enough that not all inputs in S(r)×S(r) can obtain perfect privacy. S(r) needs to

be small enough that |S(r)| privacy loss can be given away without affecting the desired privacy bound.

The inputs (x, y) ∈ S(r) × S(r) stand to lose a lot of privacy – they come from large regions (of size ≥ 2^n − 2^{n−p}), and so the fixed numerator of the PARext is large (recall definition 31). Our strategy will force the denominator to be small, resulting in a large (bad) loss of privacy. As we traverse the protocol tree, we will update S(v) so that, at vertex v, S(v) is always a subset of [2^{n−p}] ∩ TA(v) ∩ TB(v).

There will always be some input in S(v)× S(v) which has the potential for a large loss of privacy.

The set AL(v) is a subset of TA(v), and similarly BL(v) is a subset of TB(v). The sets AL(r) and BL(r) are initially [2^n] \ [2^{n−p}], the "large" inputs. At vertex v, the set AL(v) describes the set of large inputs of Alice that have survived so far; thus AL(v) = TA(v) ∩ ([2^n] \ [2^{n−p}]). Similarly, BL(v) describes the set of large inputs of Bob that have survived so far; thus BL(v) = TB(v) ∩ ([2^n] \ [2^{n−p}]). As we

traverse the protocol tree, these sets track the loss of privacy for Alice and Bob (respectively) on inputs

in S(v)× S(v).

When we reach node v, S(v) × S(v) are those inputs in S(r) × S(r) that have survived so far, and

that lie in either large horizontal regions (regions where Bob wins) or large vertical regions (regions

where Alice wins). If the protocol has small cost, then it has to segment these large regions into many

small monochromatic rectangles. This segmentation incurs a large loss of privacy. The set AL(v) keeps

track of how badly the protocol is segmenting the vertical regions. The set BL(v) does likewise for the

horizontal regions.

We can measure the loss of privacy so far in the protocol. For any (x, y) ∈ T (v),

PARv(x, y) = |Rx,y| / |Rx,y ∩ T(v)|.

If v is a leaf, then for any (x, y) ∈ T (v), PARv(x, y) = PARext(x, y). The following simple claim will be

useful:

Claim 52 ∀(x, y) ∈ T (v), PARext(x, y) ≥ PARv(x, y).

This claim puts in mathematical terms the observation that the protocol can only reveal more about

the players’ inputs as it proceeds. Once it happens, privacy loss can never be “undone” by later steps

of the protocol.


In particular, the following fact is crucial to our argument, and motivates the strategy in the next

section.

Fact 53 For any (x, y) in (S(r) × S(r)) ∩ T(v), if (x, y) is in a vertical region (y < x, a win for Alice), then

PARext(x, y) = |Rx,y| / |Px,y| ≥ PARv(x, y) ≥ (2^n − 2^{n−p}) / (|AL(v)| + 2^{n−p}).

This holds because |Rx,y| ≥ 2^n − 2^{n−p} and |Rx,y ∩ T(v)| ≤ |AL(v)| + 2^{n−p}. Similarly, if (x, y) ∈ S(r) × S(r) is in a horizontal region (x ≤ y, a win for Bob), then

PARext(x, y) ≥ PARv(x, y) ≥ (2^n − 2^{n−p}) / (|BL(v)| + 2^{n−p}).

The above inequality shows how AL(v) and BL(v) track the privacy loss of inputs S(v)×S(v): for those

inputs (x, y) ∈ S(v)×S(v) where Alice wins, the privacy loss for (x, y) increases as AL(v) decreases, and

similarly for those inputs where Bob wins, the privacy loss increases as BL(v) decreases. We track the

privacy loss for Alice and Bob separately, because the protocol may have them perform asymmetrically.


4.2 Adversary Strategy

We are now ready to describe the adversary strategy. The strategy starts at the root of the protocol

tree and follows a path through the tree, stopping at some node which either indicates a large loss of

privacy or is at a deep level of the tree. The strategy uses a general rule to construct this path, deciding

at each node which of its children to pick (or to simply stop at that node).

There are two cases, depending on whether it is Alice’s or Bob’s turn to send a message. We will

first describe the case where at node v, it is Alice’s turn to speak. Alice sends Bob some bit b which

partitions her inputs TA(v) into two pieces. Since S(v) and AL(v) are always subsets of TA(v), this

induces a partition of S(v) into S0(v) and S1(v), and of AL(v) into AL0(v) and AL1(v).

Let α = 2^{−n/4p}. We determine whether a step made progress or was useless in the following way:

• If α|S(v)| ≤ |S0(v)| ≤ (1 − α)|S(v)| (hence α|S(v)| ≤ |S1(v)| ≤ (1 − α)|S(v)|), then we say this

step made progress on S(v). In this case, the set S(v) is partitioned into roughly balanced pieces.

(Thus the inputs (x, y) ∈ Si × Si where Alice wins have lost a factor of > 2 in privacy, as the

denominator of the PARext ratio has decreased.)

Select i such that |ALi(v)| ≤ (1/2)|AL(v)|.

• Otherwise, pick i such that |Si(v)| ≥ (1− α)|S(v)|. In this case, we call it a useless step.

We update sets in the obvious way: if w is the new node in the protocol tree that we traverse to, then

S(w) = Si(v) and AL(w) = ALi(v).

The second case is when it is Bob’s turn to speak. Our adversary strategy is entirely symmetric. Now

TB(v) is partitioned into two pieces, inducing a partition of S(v) into S0(v) and S1(v), and a partition

of BL(v) into BL0(v) and BL1(v). We pick i as before, but with ALi replaced with BLi.

The strategy continues as described above, traversing the protocol tree until one of the two events

happens for the first time:

• Alice (or Bob) has made p progress steps, so AL(v) (or BL(v)) has been halved at least p times.

• The strategy reaches a leaf node, and can go no further.

This completes the description of the strategy.
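The strategy is purely mechanical, so it can be phrased as code. The sketch below is only an illustration: next_bit is an assumed interface describing a deterministic alternating protocol (it returns the bit the current speaker sends, given the transcript so far and her value in {1, ..., 2^n}), and a rounds cap stands in for "reaching a leaf".

    def adversary_walk(next_bit, n, p, max_rounds=10_000):
        alpha = 2 ** (-n / (4 * p))
        S = set(range(1, 2 ** (n - p) + 1))                      # the "small" values
        large = {pl: set(range(2 ** (n - p) + 1, 2 ** n + 1)) for pl in 'AB'}
        progress = {'A': 0, 'B': 0}
        transcript, speaker = [], 'A'
        for _ in range(max_rounds):
            if max(progress.values()) >= p:
                break                                            # large privacy loss certified
            split = lambda vals: (
                {v for v in vals if next_bit(transcript, speaker, v) == 0},
                {v for v in vals if next_bit(transcript, speaker, v) == 1})
            S0, S1 = split(S)
            L0, L1 = split(large[speaker])
            if alpha * len(S) <= len(S0) <= (1 - alpha) * len(S):
                i = 0 if len(L0) <= len(L1) else 1               # progress step: halve AL (or BL)
                progress[speaker] += 1
            else:
                i = 0 if len(S0) >= (1 - alpha) * len(S) else 1  # useless step: keep S large
            S, large[speaker] = (S0, S1)[i], (L0, L1)[i]
            transcript.append(i)
            speaker = 'B' if speaker == 'A' else 'A'
        return transcript, progress

Either some player accumulates p progress steps (and lemma 54 below gives the privacy bound), or the walk consists almost entirely of useless steps (and lemma 55 bounds the protocol length).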


4.3 Analysis

The following are the two main ideas in analyzing our strategy.

Lemma 54 Suppose our strategy reaches node v and finds that Alice (or Bob) took p progress steps on

the way. Then, for each (x, y) ∈ S(v) × S(v) such that x > y (or x ≤ y), PARv(x, y) ≥ 2^{p−2}.

We can exit our strategy at this point and invoke Claim 52 to finish the argument. In the other case,

we make the following claim:

Lemma 55 If our strategy reaches a leaf node v without Alice or Bob taking p progress steps, then for

every (x, y) ∈ T(v), the protocol communicates at least (ln 2 / 4)(n − 2p)·2^{n/4p} bits.

Thus, we would conclude that in this case the cost of the protocol is larger than n·2^{n/4p−5} bits of

communication. (The extra factor of 1/2 is because we might have doubled the communication artificially

by forcing Alice and Bob to alternate.) Hence, all that remains to finish the proof of theorem 51 is to

prove lemma 54 and lemma 55.

Proof of Lemma 54: Let r be the root node of our protocol tree. For each input (x, y) ∈ S(r) × S(r), note that |Rx,y| ≥ 2^n − 2^{n−p} and |AL(r)| = 2^n − 2^{n−p}. Let ϕ be the path in the protocol tree from r to v that our strategy chooses such that Alice takes p progress steps along ϕ. Consider any pair of adjacent nodes u, w in path ϕ such that Alice makes progress in going from u to w. Then, by definition of our strategy, |AL(w)| ≤ (1/2)|AL(u)|. Hence, |AL(v)| ≤ (1/2^p)|AL(r)|. Thus, any input (x, y) in S(v) × S(v) on which Alice would win is contained in an induced rectangle of size at most 2^{n−p} + |AL(v)|. Claim 52 yields:

PARv(x, y) ≥ (2^n − 2^{n−p}) / ((2^n − 2^{n−p})/2^p + 2^{n−p}) ≥ 2^{p−2}

The analysis when Bob makes p progress steps proceeds very similarly.

Proof of Lemma 55: The strategy reaches a leaf node v traversing a path ϕ, and |S(v)| = 1. (If |S(v)| > 1, then there is more than one possible answer, and so the computation is not yet finished.) In this case, Alice and Bob each took fewer than p progress steps. Let q be the total number of useless steps followed to get to v. (The path to v is at most 2p + q long.) On each progress step (u, w) in path ϕ, by definition, |S(w)| ≥ α|S(u)|. On each useless step (u, w), the updated size is |S(w)| ≥ (1 − α)|S(u)|. This gives a lower bound on the size of the set S(v): |S(v)| ≥ 2^{n−p} α^{2p} (1 − α)^q.

Assume that q < (ln 2 / 4)(n − 2p)·2^{n/4p} and consider the size of S(v).

|S(v)| ≥ 2^{n−p} α^{2p} (1 − α)^q
       > 2^{n−p} (2^{−n/4p})^{2p} (1 − 2^{−n/4p})^{(ln 2 / 4)(n−2p)·2^{n/4p}}      [by the assumption about q]
       = 2^{n/2−p} (1 − 2^{−n/4p})^{(ln 2 / 4)(n−2p)·2^{n/4p}}                      [simplifying algebra]
       > 2^{n/2−p} e^{−2·2^{−n/4p}·(ln 2 / 4)(n−2p)·2^{n/4p}}                       [as (1 − x) > e^{−2x} for x ∈ (0, 1/2]]
       = 2^{n/2−p} e^{−(ln 2)(n/2 − p)}                                             [by simplification]
       = 1

This chain of inequalities holds for all 2 ≤ p ≤ n/4. This calculation shows that |S(v)| > 1, thus deriving a

contradiction to the fact that v is a leaf node where the protocol ends.


Thus the strategy proves theorem 51, either by finding some large loss of privacy or by finding an

input on which the protocol takes exponentially many steps. That is, either there is an input with

privacy loss at least 2^{p−2} or an input with communication at least (ln 2 / 4)(n − 2p)·2^{n/4p} ≥ (n/12)·2^{n/4p} bits.

Since we might have doubled the communication cost by alternating Alice and Bob bits, we obtain the

lower bound of theorem 51.


4.4 Implications and extensions

The proof of theorem 51 is subtly tricky. It deeply relies on the underlying structure of two-player Vickrey

auctions (especially the fact that, as long as there is a set S(v) of values which Alice and Bob both still

hold in consideration, the problem is self-similar: it is a Vickrey auction on inputs from S(v)× S(v)).

Tracking Alice’s and Bob’s privacy loss separately assures that, even for protocols in which the two

players act asymmetrically, the strategy still finds an input on which one of them loses a lot of privacy.

Asymmetrical protocols offer many more challenges than symmetrical protocols. The asymmetry of

privacy loss is also considered in [FJS10b], as a means to distinguish “fair” protocols from “unfair”

protocols (in which one player may be reluctant to participate). We do not consider it further in this

work.

The tradeoff of theorem 51 holds for both the external and internal definitions of PAR. For Vickrey

auctions they coincide, because all regions are rectangles with width or depth one.

Lemma 56 For n ≥ 1, let π be any protocol for two-player 2n-bit Vickrey auction. Then PARext(π) =

PARint(π).

Proof: PARext(π) ≤ PARint(π). Let (x, y) be the input which maximizes |Rx,y| / |Px,y|. If x ≤ y, then PARext(π) = |Rx,y| / |Px,y| ≤ max_{y′} ( |Rx,y′ ∩ ({x} × Y)| / |Px,y′ ∩ ({x} × Y)| ) ≤ PARint(π). The case for y < x is similar.

PARint(π) ≤ PARext(π). Let (x, y) be the input which maximizes PARint(π). If x ≤ y, then PARint(π) = |Rx,y ∩ ({x} × Y)| / |Px,y ∩ ({x} × Y)| ≤ max_{(x,y)} |Rx,y| / |Px,y| = PARext(π). The case for y < x is similar.

4.4.1 A tighter tradeoff?

The strategy described above suggests that a protocol can be constructed to achieve particular privacy-

communication bounds within the parameters of this tradeoff.

Protocol 57 (Bisenglish protocol) Set a parameter 0 ≤ ℓ ≤ n. Perform the bisection protocol for

ℓ rounds. If, after ℓ rounds, the players still do not know who has the lesser value, switch to English

bidding. Continue until finished.

The protocol tree for protocol 57 is given in figure 4.58. Analysis of this protocol is very similar to the

proof above. After ℓ rounds of bisection, the largest region has an induced rectangle of size 2^{n−ℓ}. This

is the biggest loss of privacy of any region. Further English bidding steps result in no additional privacy

loss. The total length of the protocol is at most ℓ rounds (2ℓ bits) plus the remainder of the English

bidding. This remainder section is a (smaller) Vickrey auction on inputs of length n− ℓ.

Lemma 59 Protocol 57 has worst-case PARext = 2^ℓ and communication cost 2ℓ + 2^{n−ℓ}.
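The claimed tradeoff can be observed directly for small n. The sketch below is illustrative only: the transcript encoding is ad hoc and the counts are only indicative of orders of magnitude; it simulates protocol 57 for each ℓ and measures its worst-case PARext and its longest transcript.

    from collections import defaultdict
    from itertools import product

    N = 4
    VALUES = range(1, 2**N + 1)
    out = lambda x, y: (x, 'B') if x <= y else (y, 'A')      # Vickrey outcome

    def bisenglish(x, y, l):
        # l rounds of bisection, then English bidding inside the surviving interval.
        bits, lo, hi = [], 1, 2**N
        for _ in range(l):
            if lo == hi:
                break
            mid = (lo + hi) // 2
            a, b = x <= mid, y <= mid
            bits += [a, b]
            if a != b:                      # the smaller-valued player is now known:
                return tuple(bits) + tuple(format(min(x, y), 'b'))   # she reveals her value
            lo, hi = (lo, mid) if a else (mid + 1, hi)
        for price in range(lo, hi + 1):     # English bidding on what is left
            bits.append(x == price)
            if x == price:
                break
            bits.append(y == price)
            if y == price:
                break
        return tuple(bits)

    for l in range(N + 1):
        regions, rects, longest = defaultdict(int), defaultdict(int), 0
        for x, y in product(VALUES, VALUES):
            t = bisenglish(x, y, l)
            longest = max(longest, len(t))
            regions[out(x, y)] += 1
            rects[(out(x, y), t)] += 1
        par = max(regions[o] / rects[(o, t)] for (o, t) in rects)
        print(l, par, longest)              # PAR grows like 2^l as the length shrinks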

Notice that this tradeoff does not exactly match the lower bound of theorem 51. The Bisenglish

protocol seems optimal from the viewpoint of the adversarial strategy above: it either takes steps which

make the most progress (by dividing exactly in half) or are perfectly useless (and perfectly privacy-

preserving). However, the analysis in the proof of theorem 51 does not obtain tight bounds which match

the Bisenglish protocol’s upper bounds. Several approximations may be the source of this gap.

Privacy calculation. Lemma 54 provides one possible reason for this “wiggle room” in the theorem’s

bounds: the estimate of PARext ≥ 2p−2 is not tight. Further, estimating that the privacy loss increases

Figure 4.58: The Bisenglish protocol switches from the shortest protocol to the most private protocol: it starts by running the bisection protocol, then switches and finishes with English bidding.

by a factor of 2 on every progress step is imprecise; more closely tracking the actual privacy loss may

lead to tighter bounds.

Communication calculation. The more serious problem is the bound on communication. In the

calculations proving lemma 55, there is considerable rounding. The use of the approximation (1 − x) ≥ e^{−2x} accounts for some of this lack of precision. Another source of approximation imprecision is the use

of real-valued approximation for the size of S(v). In an actual deterministic protocol, S(v) is partitioned

into two integer-sized pieces, whereas α|S(v)| and (1− α)|S(v)| are real numbers.

Open problem 60 Can theorem 51 be tightened to match the upper bound of lemma 59? That is,

does Vickrey auction represent a function for which tight worst-case privacy-communication tradeoffs

are provable?

4.4.2 Worst-case approximate privacy hierarchy

Recall that, for perfect privacy, the structure of the privacy hierarchy of functions is known: for arbitrary

functions there is a full rounds hierarchy with every level distinct (theorem 15, page 14).

Theorem 51 implies that the Vickrey auction function itself answers the hierarchy question for worst-

case PARext. We can restate the theorem in the style of theorem 15 for comparison, combining our upper-

and lower-bounds.

Theorem 61 For all $0 \le \ell \le \frac{n}{4} - 2$, there exists a function $f$ on $\{0,1\}^n \times \{0,1\}^n$ for which worst-case $\text{PAR}^{\text{ext}} = 2^\ell$ is achievable in $2\ell + 2^{n-\ell}$ bits of (deterministic) communication, but not in fewer than $n2^{\frac{n}{4p}-5}$ bits of communication.

Thus, for some problems there is an inherent tradeoff between communication complexity and privacy.

Notice that a resolution of problem 60 would tighten the gap in this theorem and make the hierarchy

more precise.


Chapter 5

Defining average-case approximate privacy

This chapter presents several definitions of average-case privacy loss. Many of these are averaged gen-

eralizations of worst-case measures from chapter 3. This list is not meant to be comprehensive; an

exhaustive collection of every different mathematical measurement of privacy is beyond the scope of this

work. The select definitions collected below represent (what the author perceives as) the main definitions

of privacy for this model.

Preexisting results about privacy are provided. Some attempt is made to summarize the motivations

for each of the various definitions of privacy. A summary of average-case privacy measures, and how

they relate to one another, is provided in chapters 7, 8, and 9.


5.1 Average PAR and $\text{PAR}^\epsilon$

Recall the definitions of region (29), rectangle (30), and worst-case $\text{PAR}^{\text{ext}}$ (definition 31). Now we

extend them to an average notion of privacy loss. Worst-case PARext is extremely sensitive: if a single

input loses privacy, it can balloon the worst-case PARext. It makes sense to average this measure so that

low-probability privacy loss has less of an influence on the overall evaluation of the privacy of a protocol.

For a probability distribution D on X × Y and a protocol P for a function f : X × Y → Z, [FJS10a] define the average-case $\text{PAR}^{\text{ext}}$ as follows:

$\text{avg}_D \text{PAR}(P) = E_D\!\left[\frac{|R_{x,y}|}{|P_{x,y}|}\right] \qquad (*)$

where |S| measures the number of inputs in the set S.

We instead consider the following definition, which is weighted.

Definition 62 A protocol for f has average-case privacy approximation ratio associated with distribution D (of inputs) given by the average ratio (weighted by D) between the sizes of a region associated with an input and the protocol-induced rectangle associated with that input:

$\text{avg}_D \text{PAR}^{\text{ext}}(\pi) = E_{(x,y)\sim D}\!\left[\frac{|R_{x,y}|_D}{|P_{x,y}|_D}\right]$

The average-case $\text{PAR}^{\text{ext}}$ for f is the minimum, over all protocols π for f, of the average-case $\text{PAR}^{\text{ext}}$ for π. Here $|S|_D$ is the total weight of elements in set S according to distribution D.
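As a brute-force illustration of definition 62 (added here, not part of the thesis), the following Python sketch computes $\text{avg}_D \text{PAR}^{\text{ext}}$ for a small function by tabulating region and rectangle masses; the names f, transcript, and D are placeholders for a concrete function, a concrete protocol's transcript map, and an input distribution.

from collections import defaultdict

def avg_par_ext(inputs, f, transcript, D):
    """Compute avg_D PAR^ext by brute force.  `inputs` lists the pairs (x, y);
    f(x, y) is the output (inputs with equal output form a region);
    transcript(x, y) is the full protocol transcript (which determines the
    protocol-induced rectangle); D[(x, y)] is the probability of the pair."""
    region_mass = defaultdict(float)     # |R_{x,y}|_D, keyed by output value
    rect_mass = defaultdict(float)       # |P_{x,y}|_D, keyed by transcript
    for (x, y) in inputs:
        region_mass[f(x, y)] += D[(x, y)]
        rect_mass[transcript(x, y)] += D[(x, y)]
    # D-weighted average of the ratio (region mass) / (rectangle mass).
    return sum(D[(x, y)] * region_mass[f(x, y)] / rect_mass[transcript(x, y)]
               for (x, y) in inputs if D[(x, y)] > 0)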

Remark 63 (Correctly defining average PARext)

As opposed to Feigenbaum et al. we measure the size of subsets of X × Y relative to the measure D.

This “corrected” definition coincides with the definition of [FJS10a] for the uniform distribution. Their

paper does not give any results for distributions other than uniform, so our definition is consistent with

their results. Similarly, most of our results for concrete functions are for the uniform distribution, so

they hold under both definitions.

Despite the fact that Feigenbaum et al. argue the opposite, we believe this definition by probability

mass is a natural choice. When the players (or an eavesdropper) have previous knowledge of D, the loss

of privacy of a protocol should be related to the perceived size of regions and not their actual size. This

is true for example when all the measure in each protocol rectangle is concentrated on a single input.

Then knowing D and the protocol transcript reveals everything about the input. Another example is

when all the measure of each region of a function is concentrated in a single protocol rectangle (where

we assume that all these rectangles are of the same size). Then the protocol reveals very little about its

actual input drawn from D except for the function value, and so ought to be considered “private”. The

original measure of Feigenbaum et al. does not make any distinction between these examples.

Definition 62 has interesting mathematical properties (see chapter 6) and is related to other known

measures (see chapter 9). For further discussion of alternative definitions of average-case PAR, see

section 8.1 of [FJS10a].

Worst-case PARext provides an upper bound for average-case PARext. Both worst- and average-case

PARext measure the privacy loss with respect to an eavesdropper who overhears the entire protocol

transcript, but has no additional access to the inputs. This means that perfect worst- or average-case


PAR = 1 is a weaker requirement than perfect privacy (2-privacy) for functions of two inputs. It is

possible to define internal PARint, which corresponds more closely with the concept of t-privacy.

Definition 64 The internal privacy approximation ratio measures the amount of privacy lost from

the perspective of one of the players.

The average-case internal PAR for a protocol is the maximum of the average-case $\text{PAR}^{\text{int}}$ with respect to player 1 and the average-case $\text{PAR}^{\text{int}}$ with respect to player 2. We will denote this $\text{avg}_D \text{PAR}^{\text{int}}(\pi)$ where

π is a protocol.

$\text{avg}_D \text{PAR}^{\text{int}}(\pi) = \max\left\{\, E_{(x,y)\sim D}\!\left[\frac{|R_{x,y}\cap (X\times\{y\})|_D}{|P_{x,y}\cap (X\times\{y\})|_D}\right],\ E_{(x,y)\sim D}\!\left[\frac{|R_{x,y}\cap (\{x\}\times Y)|_D}{|P_{x,y}\cap (\{x\}\times Y)|_D}\right] \right\}$

The average-case internal PAR for f is the minimum, over all protocols π for f , of the average-case

internal PAR for π.

Internal privacy is a slightly more complicated notion of privacy than external privacy (definitions 31

and 62). The discussion below primarily considers external approximate privacy. External and internal

PAR measures for two-player Vickrey auction are covered at length in chapters 4 and 6.

The deterministic lower bounds in chapter 2 motivate study of randomized protocols. The intuition

is that the players may be able to use their private random bits to protect their approximate privacy

against an eavesdropper; use of randomness may also shorten expensive, lengthy protocols.

The privacy concept PAR can be extended to the randomized setting. The following is a suggestion

for how to define PARǫ, the privacy approximation ratio when the protocol can err. A similar definition

was independently suggested by [KLX13]. This definition attempts to extend the visually intuitive

definition of PAR to situations with randomness and error.

Our motivation is as follows. Let f : X × Y → Z be some function which randomized/deterministic

erring protocol π computes. Protocol π can err, so it computes some function p : X × Y → Z which is

not exactly f . For some inputs (x, y), the final computed result p(x, y) 6= f(x, y). We essentially propose

measuring the PAR with respect to p (rather than the function f) and moving all consideration of error

into the error term ǫ (which will reflect how often p differs from f). The motive for measuring this term

is that the players and eavesdropper only learn the (possibly wrong) outcome of the protocol p(x, y), not

the function f(x, y). If p(x, y) ≠ f(x, y) then the players and eavesdropper still learn something about (x, y) – they learn that it lies in the rectangle $P^\epsilon_{x,y}$, which may help them distinguish it from other inputs

in the same region. (Even though they do not learn the correct output value.)

Remark 65 (π(x, y) determines p(x, y)) The PAR measure evaluates the information learned by an

eavesdropper who gets to see the computed output of the protocol. Hence we require that p(x, y) be

completely determined by the transcript π(x, y).

This avoids possible problems if an analog of theorem 13 does not hold for approximate privacy, that

is, if there are some two-party functions not approximately private with deterministic protocols, which

nevertheless have approximately private randomized protocols.

Definition 66 (PAR with error) Let f : X × Y → Z and let π be some protocol computing f over

distribution µ. Let p : X × Y → Z be the (possibly randomized) function which π actually computes.


The region $R^\epsilon_{x,y}$ associated with input (x, y) is the set of all inputs in the preimage of p(x, y):

$R^\epsilon_{x,y} = \{(x', y') \in X \times Y \mid p(x, y) = p(x', y')\}$

Let Π(x, y) be the random variable for the transcript of messages exchanged on input (x, y). The input (x, y) is associated with the protocol-induced rectangle $P^\epsilon_{x,y}$ of all inputs which yield the same transcript:

$P^\epsilon_{x,y} = \{(x', y') \in X \times Y \mid p(x, y) = p(x', y') \text{ and } \Pi(x, y) = \Pi(x', y')\}$

The privacy approximation ratio is defined by comparing the likelihood of transcripts and outputs of p. For some transcript t, let $z_t$ be the output of p which that transcript specifies.

$\text{avg}_\mu \text{PAR}^{\epsilon,\text{ext}}(\pi) = E_{(x,y)\sim\mu,\,t}\!\left[\frac{\Pr_{x,y,t}\big(p(x, y) = z_t\big)}{\Pr_{x,y,t}\big(\pi(x, y) = t\big)}\right]$

The internal measure $\text{avg}_\mu \text{PAR}^{\epsilon,\text{int}}$ can be defined by extension.

If the protocol π is randomized and computes a function p which is also randomized, it is unclear how

to organize a reasonable definition of worst-case PAR. In some sense, randomized functions innately

require average-case analysis, since a single run of the function may receive coins which make it run

poorly (e.g., by computing a transcript which contains the entire values of x and y), but the quality of

the protocol is considered as an average over all coin flips.

Notice that if π is deterministic or zero-error, this definition is essentially identical to the original

definition of PAR. [KLX13] define the internal measure with addition instead of maximization, to more

closely parallel IC.

The motivation for this definition is the extension of PAR. Defining against the computed function

p (instead of f) seems to capture the notion of privacy against an eavesdropper. Since the eavesdropper

knows π and can see the transcripts, it makes sense to measure the privacy loss against this knowledge,

and not against the actual preimages of f (the eavesdropper will not know which preimage is correct).

Like deterministic PAR, erring PARǫ ≥ 1. This separates consideration of accuracy (error) from

privacy. Note however that the players may learn that they are in an error case, even when the eaves-

dropper doesn’t know this. For example, suppose the protocol finishes in a large-size rectangle where

the answer (the value p computes) is 0, but there are a few inputs (x, y) inside this rectangle where

f(x, y) = 1. In this case, the eavesdropper learns that p(x, y) = 0 but the players may learn both that

p(x, y) = 0 and that f(x, y) = 1 (which each player is able to compute independently, based on their

private inputs). In this case, the players know that the protocol erred, but the eavesdropper does not.

(This insight is related to remark 35.) Consider for example:

$f(x, y) = \begin{cases} 1, & \text{if } x = 0 \text{ or } y = 0\\ 0, & \text{otherwise} \end{cases}$

If $X = Y = [2^n]$ then the constant protocol π computing p(x, y) = 0 makes only exponentially few

errors on f . Vickrey auctions provide another example: if a protocol for two-player Vickrey auction

errs, it is possible that the wrong player would win, learning all of the other player’s information. To

the eavesdropper this looks like a usual auction; only the false winner knows that something is awry.

The issue of players learning more than the eavesdropper seems more closely related to game theory and


mechanism design. (An erring implementation of Vickrey auction may no longer be truthful!) As long

as we are sated by definition 66, we can proceed to compare PARǫ to other measures of privacy with

error.

Just as for deterministic PAR, erring average PAR is trivially upper-bounded in terms of the communication

cost. (Worst-case PAR, however, cannot be neatly bounded; it can be arbitrarily larger than communi-

cation, up to a maximum.)

Lemma 67 For any f on inputs in $\{0,1\}^n \times \{0,1\}^n$, for any distribution µ, for any protocol π computing f with error ε,

$\text{avg}_\mu \text{PAR}^{\epsilon,\text{ext}}(\pi) \le 2^{CC(\pi)}$

Proof: The worst-case proof is simple, as the maximum size of any region is $2^{2n}$ and the minimum size of any rectangle is 1.

For the average case,

$\text{avg}_\mu \text{PAR}^{\epsilon,\text{ext}}(\pi) = E_{(x,y)\sim\mu,\,t}\!\left[\frac{\Pr_{x,y,t}(p(X,Y) = z_t)}{\Pr_{x,y,t}(\pi(X,Y) = t)}\right]$   (definition)

$= \sum_t \sum_{x,y} \Pr(\pi(X,Y) = t)\cdot\Pr\big((X,Y) = (x,y)\mid \pi(X,Y) = t\big)\cdot\frac{\Pr_{x,y,t}(p(X,Y) = z_t)}{\Pr_{x,y,t}(\pi(X,Y) = t)}$   (expanding E)

$= \sum_t \sum_{x,y} \Pr\big((X,Y) = (x,y)\mid \pi(X,Y) = t\big)\cdot\Pr_{x,y,t}(p(X,Y) = z_t)$   (cancel)

$\le$ number of possible transcripts t   (always true: $\Pr \le 1$)

$\le 2^{CC(\pi)}$


5.2 Information theory review

This section reviews some fundamentals of information theory. See [CT91] for more details on the origin

and study of information. Most uncited definitions in this area originate with [Sha48].

For any random variable X, we denote its probability distribution over its range $\mathcal{X}$ by $\mu_X$. The entropy of X, denoted by H(X), is defined as follows:

$H(X) =_{\text{def}} -\sum_{x\in\mathcal{X}} \Pr_{\mu_X}[X = x]\,\log\!\big(\Pr_{\mu_X}[X = x]\big) = -E_{\mu_X}\big[\log(\mu_X(x))\big]$

Let Y be another random variable over $\mathcal{Y}$. For any $y\in\mathcal{Y}$, the conditional entropy of X given Y = y is:

$H(X\mid Y = y) \equiv_{\text{def}} -\sum_{x\in\mathcal{X}} \Pr[X = x \mid Y = y]\,\log\!\big(\Pr[X = x \mid Y = y]\big).$

By extension we have the definition of conditional entropy of X given Y:

$H(X\mid Y) =_{\text{def}} E_{\mu_Y}\big[H(X\mid Y = y)\big]$

The mutual information between X and Y, denoted by I(X ; Y), is a symmetric quantity, defined as:

$I(X ; Y) = H(X) - H(X\mid Y) = H(Y) - H(Y\mid X)$

Mutual information is measured over a distribution of inputs, often omitted when the distribution is obvious. Just like entropy, one can define the conditional mutual information between random variables. Let W be another random variable with range $\mathcal{W}$.

$I(X ; Y \mid W) = H(X \mid W) - H(X \mid Y, W) = E_{\mu_W}\big[I(X ; Y \mid W = w)\big]$

As intuition suggests, conditioning a random variable X on another random variable Y cannot

increase its uncertainty. Formally,

Fact 68 For any two random variables X and Y, $H(X\mid Y) \le H(X)$.

Fact 68 implies that mutual information between two random variables is always non-negative.

This fact will prove useful; see page 83.

Fact 69 For any variables A, B, and C,

$H(A, B\mid C) = H(A, B, C) - H(C) = H(A\mid B, C) + H(B, C) - H(C) = H(A\mid B, C) + H(B\mid C)$

The (standard) statistical distance measures the variation between two distributions.

Definition 70 (Statistical distance) For distributions P and Q over some set S, the statistical distance between P and Q is:

$\delta(P, Q) = \frac{1}{2}\sum_{x\in S} |P(x) - Q(x)| = \frac{1}{2}\,\|P - Q\|_1$

Statistical distance is symmetric, and will always be a value in [0, 1].

Another useful definition will be the Kullback-Leibler divergence.

Definition 71 (Kullback-Leibler divergence/relative entropy) For two distributions P and Q over a finite set S, the Kullback-Leibler divergence (relative entropy) is:

$D_{KL}(P\,\|\,Q) = \sum_{x\in S} P(x)\,\ln\frac{P(x)}{Q(x)}$

The KL divergence is always non-negative, and is zero if and only if P ≡ Q. However, it is not symmetric

and does not satisfy the triangle inequality.

Intuitively, the KL divergence is often explained as the expected number of additional bits which must be used to transmit a value drawn from P when using a code optimized for Q (rather than one optimized for P).

Fact 72 For two random variables A and B,

$I(A; B) = D_{KL}\big(p(a, b)\,\|\,p(a)p(b)\big) = E_a\Big[D_{KL}\big(p(b\mid a)\,\|\,p(b)\big)\Big]$
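As a quick sanity check on these definitions (an illustration, not part of the thesis), the following Python sketch computes entropy, mutual information, and KL divergence for finite distributions represented as dictionaries. All logarithms are taken base 2, so the KL divergence here is measured in bits; definition 71 uses the natural logarithm, which only changes the constant, and fact 72 holds in any fixed base.

import math

def entropy(p):
    """Shannon entropy (in bits) of a distribution given as {value: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def mutual_information(joint):
    """I(X;Y) for a joint distribution {(x, y): probability},
    computed as H(X) + H(Y) - H(X,Y)."""
    px, py = {}, {}
    for (x, y), q in joint.items():
        px[x] = px.get(x, 0.0) + q
        py[y] = py.get(y, 0.0) + q
    return entropy(px) + entropy(py) - entropy(joint)

def dkl(P, Q):
    """Kullback-Leibler divergence D_KL(P || Q), in bits."""
    return sum(p * math.log2(p / Q[x]) for x, p in P.items() if p > 0)

# Spot-check of fact 72 on a toy joint distribution:
# I(A;B) = D_KL(p(a,b) || p(a)p(b)).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
pa = {0: 0.5, 1: 0.5}
pb = {0: 0.5, 1: 0.5}
product = {(a, b): pa[a] * pb[b] for a in pa for b in pb}
assert abs(mutual_information(joint) - dkl(joint, product)) < 1e-9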


5.3 Information cost

An alternative approach to privacy measures the extracted information in terms of bits. This method

directly uses information-theoretic tools. Information theory provides powerful tools to reason about

random variables – in this case, the players’ inputs are the variables. The measurement of privacy in

this setting is how much information about the inputs can be extracted from the protocol transcript. As

usual, we will differentiate between privacy from an eavesdropper (external privacy) or another player

(internal privacy).

Mutual information can be used as a measure of privacy. The mutual information between the

protocol transcript π(X,Y) and the players’ inputs is essentially the information that the transcript

reveals about the inputs. Because we have specified that the transcript includes the value f(x, y), or at

least that the value f(x, y) is extractable from the transcript (see remark 2), any information which is

revealed by the output f(X,Y) is not considered a loss of privacy (just as in the definition of PAR).

The external information cost was defined in [CWYS01] where the internal cost was also used implic-

itly. Later, using this measure, [BYJKS04] obtained Ω(n) lower bounds on the randomized communi-

cation complexity of DISJn. The internal information cost was formalized in [BBCR10]; we follow their

standardized notation.

Definition 73 (Information cost) [BBCR10, BR10, Bra11] Given a two-party protocol π and a distribution µ of inputs, the information cost $\text{IC}^{\text{int}}_\mu(\pi)$ is defined as:

$\text{IC}^{\text{int}}_\mu(\pi) = I_\mu\big(Y; \pi(X,Y) \mid X\big) + I_\mu\big(X; \pi(X,Y) \mid Y\big)$

where π(x, y) denotes the transcript of protocol π on input (x, y). The external information cost $\text{IC}^{\text{ext}}_\mu(\pi)$ is defined as:

$\text{IC}^{\text{ext}}_\mu(\pi) = I_\mu\big(X, Y; \pi(X,Y)\big)$
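As a toy illustration (an example added here, not one from the thesis), consider the two-bit protocol for $f(x,y) = x\oplus y$ in which Alice sends x and Bob sends y, with µ uniform on $\{0,1\}\times\{0,1\}$. Since the transcript is exactly the pair of inputs,

$\text{IC}^{\text{ext}}_\mu(\pi) = I(X,Y; X,Y) = H(X,Y) = 2, \qquad \text{IC}^{\text{int}}_\mu(\pi) = I(Y; X,Y \mid X) + I(X; X,Y \mid Y) = 1 + 1 = 2,$

so for this (product) distribution the eavesdropper and the players learn exactly the same amount, matching the remark about product distributions made just below.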

Internal and external information cost are related. The eavesdropper will always learn at least as

much as the players themselves can learn from the transcript. (If µ is a product distribution, then he

learns exactly the same amount.)

Lemma 74 (Lemma 3.12 in [Bra11]) $\forall \pi\ \forall \mu:\ \text{IC}^{\text{int}}_\mu(\pi) \le \text{IC}^{\text{ext}}_\mu(\pi)$

If the protocol can err, see the definition of information complexity below (definition 77).


5.4 PRIV

Klauck et al. define another information-theoretic measure of privacy for zero-error protocols.

Definition 75 (PRIV) [Kla02] Let µ be a probability distribution on X × Y . Let (X,Y) ∼ µ be the

random variable obtained by sampling according to µ. For a function f on X × Y , its protocol π, and

inputs (x, y) ∈ X × Y , recall that we let π(x, y) be the transcript of the protocol on input (x, y). Then

Πµ(X,Y) (or simply Π(X,Y)) is the random variable for a transcript obtained by sampling a random

input according to µ and running π.

The privacy of π on distribution µ is measured as:

$\text{PRIV}^{\text{int}}_\mu(\pi) = \max\big\{\, I(X; \Pi(X,Y) \mid Y, f(X,Y)),\ I(Y; \Pi(X,Y) \mid X, f(X,Y)) \,\big\}$

We can extend definition 75 to an external measure in the obvious way:

$\text{PRIV}^{\text{ext}}_\mu(\pi) = I\big(X, Y; \Pi(X,Y) \mid f(X,Y)\big)$

As one can see the internal information cost is closely related to PRIV. The only substantial difference

is that PRIV is conditioned on the value of the function whereas IC is not. When f is a Boolean function,

they are asymptotically identical. This relationship is formalized in chapter 7.

Lemma 76 $\forall \pi\ \forall \mu:\ \text{PRIV}^{\text{int}}_\mu(\pi) \le \text{PRIV}^{\text{ext}}_\mu(\pi)$.

This is easy to see directly or as a consequence of fact 68.


5.5 Information complexity

Extending information cost (definition 73) yields a measure of privacy (in bits) for protocols that err.

Definition 77 (information complexity) [Bra11] Let f : X × Y → Z be a function, and µ be a

distribution over X × Y , and P (ǫ) be the set of all protocols π computing f with error ≤ ǫ. Let p be the

function computed by a protocol π.

$P(\epsilon) = \big\{\text{protocol } \pi : X\times Y \to Z \ \big|\ \Pr_{(x,y)\sim\mu}[p(x,y)\ne f(x,y)] \le \epsilon\big\}$

The information complexity $\text{IC}_\mu(f, \epsilon)$ is defined as the infimum, over all protocols $\pi\in P(\epsilon)$ (that is, protocols computing f with error at most ε with respect to µ), of the internal information cost of π:

$\text{IC}_\mu(f, \epsilon) = \inf_{\pi\in P(\epsilon)} \text{IC}^{\text{int}}_\mu(\pi)$

The prior-free information complexity IC(f, ε) of f with error ε is the value such that, for each I > IC(f, ε), there is a protocol π which on each input computes f except with error ≤ ε, and for any distribution µ of inputs, reveals at most I bits of information about the inputs to the players:

$\text{IC}(f, \epsilon) = \inf_{\pi\in P(\epsilon)}\ \max_\mu\ \text{IC}^{\text{int}}_\mu(\pi)$

The max-distributional information complexity $\text{IC}_D(f, \epsilon)$ of f with error ε is the value such that, for each $I > \text{IC}_D(f, \epsilon)$, for each distribution µ on inputs, there is a protocol $\pi_{\mu,I}$ that computes f with error at most ε (with respect to µ), and reveals at most I bits of information about the inputs to the players:

$\text{IC}_D(f, \epsilon) = \max_\mu\ \text{IC}_\mu(f, \epsilon)$

Because they are based on internal information cost, all three of these measures of information complexity

are internal.

IC(f, ǫ) is a strong definition of information complexity: one protocol must work for all distributions.

ICD(f, ǫ) is a weaker definition, where each distribution gets its own custom-tailored protocol. They are

related by lemma 79, so we can effectively choose whichever one is most convenient to our purposes.

Remark 78 Notice that information complexity uses a distributional notion of error: a protocol com-

putes f with error ǫ over a given input distribution µ. This differs from the standard definition of error

(see definition 6).

Lemma 79 [Bra11] Let $f : X \times Y \to \{0,1\}$ be any function, and ε ≥ 0. Then

$\text{IC}(f, 0) = \text{IC}_D(f, 0)$

and

$\text{IC}_D(f, \epsilon) \le \text{IC}(f, \epsilon) \le 2\cdot\text{IC}_D(f, \epsilon/2)$

Thus we can use the definition of information complexity that suits our purposes.


5.6 Additional information

Bar-Yehuda et al. define an averaged notion of additional information (again, it is a measure of internal

privacy).

Definition 80 (Additional information Ii(f) and Ic−i(f)) [BYCKO93]

Let f and h be some functions on X × Y . Let µ be some distribution on X × Y . Let X and Y be

random variables drawn from X × Y according to µ. A protocol π computes f if:

• both parties can determine an output value p(x, y) from the transcript,¹ and

• $\Pr[p(x, y) = f(x, y)] > 1/2$.

Define the following measures:

• the information-theoretic:

$I_i(f) = \sup_\mu\Big( \min_{\pi\text{ protocol for }f} \big( \max\{\, I(X; \pi(X,Y) \mid Y),\ I(Y; \pi(X,Y) \mid X) \,\} \big) \Big)$

$I^{\text{det}}_i(f)$ is defined identically, but for deterministic protocols only.

• the mixed:²

$I_{c-i}(f) = \min_{\pi\text{ protocol for }f}\Big( \sup_\mu \big( \max\{\, I(X; \pi(X,Y) \mid Y),\ I(Y; \pi(X,Y) \mid X) \,\} \big) \Big)$

$I^{\text{det}}_{c-i}(f)$ is defined identically, but for deterministic protocols only.

The motivation is to extend the worst-case definition 48 of additional information. Both $I^{\text{det}}_i$ and $I^{\text{det}}_{c-i}$ are information-theoretic, but $I^{\text{det}}_i$ is intended to be closer to the combinatorial definition $I^{\text{det}}_c$ from earlier. They write,

For $I^{\text{det}}_{c-i}$ to be low, f must have a single protocol that divulges little information regardless of the probability distribution underlying (X, Y). For $I^{\text{det}}_i$ to be low, it suffices that for every distribution, there is a protocol divulging little information. $I^{\text{det}}_i$ is more pertinent when the probability distribution underlying (X, Y) is known before the communication protocol is designed. $I^{\text{det}}_{c-i}$ is more applicable when the underlying distribution is not known, and in that respect, is more closely related to the combinatorial measure $I^{\text{det}}_c$, thereby further justifying the notation. [BYCKO93]

Notice that the information-theoretic and mixed definitions look a bit like versions of information

complexity (definition 77 above), with a few differences. These average-case measures of privacy are

related in chapter 7.

Theorem 81 [BYCKO93] For all functions f,

$I^{\text{det}}_i \le I^{\text{det}}_{c-i} \le I^{\text{det}}_c$

$I_i \le I_{c-i} \le I_c$

¹The transcript need not explicitly include the value p(x, y), which we think of as f(x, y); remember remark 2.
²The subscript here is admittedly confusing. No actual substitution is intended. We follow the original naming convention for historical purposes.


Chapter 8 continues the discussion of weak and strong h-privacy, the utility of the concept of “re-

vealed information”, and Bar-Yehuda et al.’s comparisons of the relative privacy achievable for different

functions.


Chapter 6

An average-case privacy tradeoff

In this chapter we prove a tradeoff for the two-player Vickrey auction function between communication

complexity and average-case privacy. For notational convenience, capital N = 2n below. For a distri-

bution D and a region or rectangle R, let |R|D denote the probability mass that D places on inputs in

R.

For the uniform distribution we can prove the following tradeoff between the length and average-case

PARext of any protocol. This is the average-case analog of theorem 51.

Theorem 82 For all n, r ≥ 1, any deterministic protocol for the two-player n-bit Vickrey auction

problem (over the uniform distribution of inputs) with communication cost (length) less than r obtains

average-case $\text{PAR}^{\text{ext}}$ at least $\Omega\!\left(\frac{n}{\log(r/n)}\right)$.

This is a better tradeoff than is possible in the worst-case, as one might expect: a single input with

large privacy loss does not dominate the average, and in order for the average privacy loss to be high,

many inputs must have large privacy loss.

This bound is asymptotically tight for the uniform distribution (the n/r-bisection protocol, defined

below, achieves asymptotically the same upper-bound). Our lower bound holds only for the uniform

distribution on inputs. This is not surprising; if the distribution is concentrated (e.g., on a single input),

we do not expect large loss of privacy. (The assumption is that the players and the eavesdropper know

the distribution on inputs.) We can extend this tradeoff to average-case internal PAR using lemma 86.

Protocol 83 (k-bisection protocol) [FJS10a] The k-bisection protocol (for two-player Vickrey auc-

tion) proceeds as follows. The parameter k ∈ (0, 1) is a fraction. In each round, each player sends 0 if

his input lies in the first k of the inputs. (E.g., in the first round, send 0 if your input is in the first

[0, k(2n − 1)].) The players continue until they reach a round where they send different bits. Then the

player with the smaller input sends his full input value.

Thus the standard bisection protocol is a 1/2-bisection protocol.
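A minimal Python sketch of the k-bisection protocol follows (an illustration, not the thesis's formal definition); the tie-break when the two values are equal and the convention that the loser sends its value in full n bits are assumptions of the sketch.

def k_bisection(x, y, n, k):
    """Sketch of protocol 83 for the two-player n-bit Vickrey auction.
    Each round both players report whether their value lies in the first
    k-fraction of the surviving interval; once they disagree, the player
    with the smaller value announces it.  Returns (winner, price, bits_sent)."""
    lo, hi = 0, 2 ** n - 1
    bits = 0
    while lo < hi:
        cut = lo + int(k * (hi - lo))          # last index of the "first k" block
        bx, by = int(x <= cut), int(y <= cut)
        bits += 2
        if bx != by:
            bits += n                          # the smaller value is sent in full
            return (2 if x < y else 1), min(x, y), bits
        lo, hi = (lo, cut) if bx else (cut + 1, hi)
    return 2, lo, bits                         # equal values: tie goes to player 2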


6.1 Counting cuts

We begin by examining the definition of average-case PARext (definition 62). One benefit to our improved

definition is that it makes average-case PARext easily relatable to another natural measure on protocols.

Consider a protocol π for a function f. For a region $R_z = \{(x, y) \mid f(x, y) = z\}$ of f, let

$\text{cut}_\pi(R) = \big|\{\, P_{x,y} \mid (x, y) \in R \,\}\big|$

be the number of protocol-induced rectangles contained within R.

Proposition 84 For any function $f : X \times Y \to Z$, protocol π for f and any probability distribution D on $X \times Y$,

$\text{avg}_D \text{PAR}^{\text{ext}}(\pi) = \sum_{R\in\mathcal{R}(f)} |R|_D \cdot \text{cut}_\pi(R)$

$\text{avg}_D \text{PAR}^{\text{int}}(\pi) = \max\left\{ \sum_{y\in Y,\,R\in\mathcal{R}(f)} |R\cap (X\times\{y\})|_D \cdot \text{cut}_\pi\big(R\cap (X\times\{y\})\big),\ \sum_{x\in X,\,R\in\mathcal{R}(f)} |R\cap (\{x\}\times Y)|_D \cdot \text{cut}_\pi\big(R\cap (\{x\}\times Y)\big) \right\}$

Proof: For any protocol-induced rectangle A, $\sum_{(x,y)\in A} D(x, y)\cdot\frac{1}{|A|_D} = 1$. Hence,

$\text{avg}_D \text{PAR}^{\text{ext}}(\pi) = E_D\!\left[\frac{|R_{x,y}|_D}{|P_{x,y}|_D}\right] = \sum_{(x,y)\in X\times Y} D(x, y)\cdot\frac{|R_{x,y}|_D}{|P_{x,y}|_D} = \sum_{R\in\mathcal{R}(f)}\ \sum_{(x,y)\in R} D(x, y)\cdot\frac{|R|_D}{|P_{x,y}|_D} = \sum_{R\in\mathcal{R}(f)} |R|_D\left(\sum_{(x,y)\in R} D(x, y)\cdot\frac{1}{|P_{x,y}|_D}\right) = \sum_{R\in\mathcal{R}(f)} |R|_D \cdot \text{cut}_\pi(R).$

The case of internal PAR is analogous.

In the setting of our definition, this characterization of average-case external PAR provides a simple

answer to the conjecture [FJS10a] that for any probability distribution D on inputs, there is a protocol

that has average-case PARext at most n for the n-bit Vickrey auction. Recall that the bisection protocol

for the Vickrey auction proceeds by binary search on the input domain (see page 23).

Proposition 85 For any probability distribution D, the bisection protocol for the two-player n-bit Vick-

rey auction satisfies:

avgD PARext(bisection protocol) ≤ n+ 1.

Proof: Each region R of the n-bit Vickrey auction is covered by at most n + 1 rectangles induced by the bisection protocol, i.e., $\text{cut}_{\text{bisection protocol}}(R) \le n + 1$. The claim follows by the previous proposition.
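The cut bound in this proof can be checked by brute force for small n. The following Python sketch simulates a plausible implementation of the bisection protocol (the exact message format on page 23 may differ, so this is an assumption) and verifies that no region of the 4-bit Vickrey auction is split into more than n + 1 protocol-induced rectangles; ties are assumed to go to player 2.

from collections import defaultdict

def bisection_transcript(x, y, n):
    """Transcript of a 1/2-bisection protocol run on (x, y), as a tuple."""
    lo, hi, t = 0, 2 ** n - 1, []
    while True:
        mid = (lo + hi) // 2
        bx, by = int(x > mid), int(y > mid)
        t += [bx, by]
        if bx != by:
            t.append(min(x, y))      # the smaller value is sent in full
            return tuple(t)
        if lo == hi:                 # equal values: interval has collapsed
            return tuple(t)
        lo, hi = (mid + 1, hi) if bx else (lo, mid)

def vickrey(x, y):
    """Output of the two-player Vickrey auction; ties go to player 2."""
    return ('B', x) if x <= y else ('A', y)

n = 4
cuts = defaultdict(set)              # region -> set of induced rectangles
for x in range(2 ** n):
    for y in range(2 ** n):
        cuts[vickrey(x, y)].add(bisection_transcript(x, y, n))
assert max(len(s) for s in cuts.values()) <= n + 1   # the bound used above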

The relation between external and internal privacy approximation ratios (lemma 56) for Vickrey

auctions extends to the average-case setting.

Lemma 86 For n ≥ 1, let π be any protocol for two-player n-bit Vickrey auction. If U is the uniform

probability distribution, then

$\text{avg}_U \text{PAR}^{\text{int}}(\pi) \le \text{avg}_U \text{PAR}^{\text{ext}}(\pi) \le 2\,\text{avg}_U \text{PAR}^{\text{int}}(\pi).$

Proof: To prove the relationship for the average-case PAR, consider input (x, y). If x ≤ y then $R_{x,y}\cap(\{x\}\times Y) = R_{x,y}$ and $R_{x,y}\cap(X\times\{y\}) = \{(x, y)\}$. If x > y then $R_{x,y}\cap(\{x\}\times Y) = \{(x, y)\}$ and $R_{x,y}\cap(X\times\{y\}) = R_{x,y}$. Identically for $P_{x,y}$ instead of $R_{x,y}$. Hence,

$\frac{|R_{x,y}\cap(\{x\}\times Y)|}{|P_{x,y}\cap(\{x\}\times Y)|} = \frac{|R_{x,y}|}{|P_{x,y}|}$ if x ≤ y, and $\frac{|R_{x,y}\cap(\{x\}\times Y)|}{|P_{x,y}\cap(\{x\}\times Y)|} = 1 \le \frac{|R_{x,y}|}{|P_{x,y}|}$ otherwise. On the other hand, $\frac{|R_{x,y}\cap(X\times\{y\})|}{|P_{x,y}\cap(X\times\{y\})|} = 1 \le \frac{|R_{x,y}|}{|P_{x,y}|}$ if x ≤ y, and if x > y then $\frac{|R_{x,y}\cap(X\times\{y\})|}{|P_{x,y}\cap(X\times\{y\})|} = \frac{|R_{x,y}|}{|P_{x,y}|}$.

Thus, $\text{avg}_U \text{PAR}^{\text{int}}(\pi) \le \text{avg}_U \text{PAR}^{\text{ext}}(\pi)$. For the upper bound,

$\sum_{x,y} \frac{1}{N^2}\cdot\frac{|R_{x,y}|}{|P_{x,y}|} = \sum_{x\le y} \frac{1}{N^2}\cdot\frac{|R_{x,y}\cap(\{x\}\times Y)|}{|P_{x,y}\cap(\{x\}\times Y)|} + \sum_{x>y} \frac{1}{N^2}\cdot\frac{|R_{x,y}\cap(X\times\{y\})|}{|P_{x,y}\cap(X\times\{y\})|},$

and each of the two sums on the right is at most the corresponding average in the definition of $\text{avg}_U \text{PAR}^{\text{int}}(\pi)$. Hence, $\text{avg}_U \text{PAR}^{\text{ext}}(\pi) \le 2\,\text{avg}_U \text{PAR}^{\text{int}}(\pi)$.


6.2 Simplifying

The rest of this chapter is devoted to the proof of theorem 82.

Note that for average-case PARext we cannot devise a strategy like that in chapter 4. It is not

sufficient to find a single input which loses privacy or requires lengthy communication. The calculation

is on average, and so our bookkeeping must keep track of many inputs throughout the protocol. Several

simplifications will make our calculations easier.

• We will use the cut-characterization of average-case PARext given by proposition 84 (since counting

cuts is usually easier than tracking the size of all the different induced rectangles).

• We will sum only over regions $R_{x,y}$ for $x, y \le 2^{n-1}$. Call this collection of regions L. These are the largest regions in X × Y, and together cover 3/4 of the area of X × Y. (Since U is uniform, this is 3/4 of the total probability mass.) Hence the loss of privacy on these regions will be significant in calculating the overall average privacy loss. Each of the regions is of size between $2^{n-1}$ and $2^n$, so they all have the same weight up to a factor of at most 2.

• To estimate $\text{cut}_\pi(R)$ for various regions R we will track only the set of “diagonal” inputs $\text{Diag} = \{(x, x) \mid x \in [2^{n-1}]\}$ as they progress in the protocol tree, and count protocol-induced rectangles that intersect regions $R_{x,x}$ and $R_{x+1,x}$. (Thus we undercount the number of cuts.)

Combining these simplifications gives a lower bound on the average-case PARext for the uniform

distribution and any protocol π:

$\text{avg}_U \text{PAR}^{\text{ext}}(\pi) \ge \frac{2^{n-1}}{4^n}\sum_{R\in L} \text{cut}_\pi(R). \qquad (6.2.1)$

Note that each input pair (x, x) ∈ Diag must finish the protocol in a separate induced rectangle.

The problem of counting the cuts of interest (in order to get a lower bound) can be abstracted away

into the Ball Partition Problem. By lemma 90, a lower bound on the Ball Partition Problem will yield

a lower bound on the average-case PARext for the uniform distribution on Vickrey auctions.

Definition 87 (Ball Partition Problem) For integers N and r ≥ 1, there are N balls and r rounds.

All of the balls begin in one big set. In each round, the balls in each current set are partitioned into (at

most) two new sets. The cost of partitioning the balls in any set S into sets $S_1$ and $S_2$ is $\min(|S_1|, |S_2|)$. After r rounds, each of the N balls shall be in a singleton set. The total cost of the game is the sum of

the cost, over all r rounds, of every partition made during each round. We denote the minimal possible

cost by B(N, r).

The interesting values of r lie in a particular range. For $r < \log_2 N$, the game cannot be finished at

any cost. For r > N , the game can easily be finished with minimal cost B(N, r) = N − 1: cut away 1

ball from the largest set at every round. However, for intermediate values logN ≤ r ≤ N , one might

ask: what is the smallest possible cost c achievable in r rounds?

Theorem 88 For the Ball Partition Problem, $B(N, r) \ge \frac{N\log N}{4\log\left(\frac{4r}{\log N}\right)}$.

This lower bound is asymptotically optimal.


Proposition 89 Let N and r be integers such that $2\log N \le r$. For the Ball Partition Problem,

$B(N, r) \le O\!\left(\frac{N\log N}{\log\left(\frac{r}{\log N}\right)}\right).$

Proof: Ignoring the rounding issues, at each round we can split each non-singleton set S into two sets of sizes $\alpha|S|$ and $(1-\alpha)|S|$, for $\alpha = (\log N)/r \le 1/2$. It follows that within r rounds, each set contains at most one element, as $(1-\alpha)^r N \le Ne^{-\alpha r} < 1$. The total cost of the ball partitioning is the sum of sizes of all the smaller sets obtained in each partition. This corresponds to the number of elements in these sets (counting multiplicity). Each element can appear in at most $\log_{1/\alpha} N = (\log N)/\log(r/\log N)$ of the smaller sets, as the size of the set containing the element shrinks by a factor of α on each such occasion. Hence, the total cost is at most $N\cdot(\log N)/\log(r/\log N)$. Always rounding the size of the smaller set up will introduce a constant factor in the final bound.
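A small Python sketch of this greedy strategy (rounding included) is given below; it is illustrative only and assumes $2\log N \le r$, as in the proposition.

import math

def ball_partition_cost(N, r):
    """Greedy strategy from proposition 89: each round, split every
    non-singleton set into an alpha-fraction piece (rounded up) and the rest,
    where alpha = log2(N)/r.  Returns (total cost, rounds used)."""
    alpha = math.log2(N) / r
    sets, cost, rounds = [N], 0, 0
    while any(s > 1 for s in sets) and rounds < r:
        new_sets = []
        for s in sets:
            if s <= 1:
                new_sets.append(s)
                continue
            small = max(1, math.ceil(alpha * s))     # the cheaper side of the cut
            cost += min(small, s - small)
            new_sets.extend([small, s - small])
        sets, rounds = new_sets, rounds + 1
    return cost, rounds

# Example: for N = 2**10 and r = 100 the cost stays within a constant
# factor of N*log(N) / log(r / log N).
print(ball_partition_cost(2 ** 10, 100))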

Lemma 90 relates a lower bound for the Ball Partition Problem (N balls in r rounds) to a lower

bound for the average-case Vickrey auction on the uniform distribution (N possible inputs for each

player and r bits of communication).

Lemma 90 Let N, r ≥ 1 be integers where $N = 2^n$ is a power of two. Let B(N, r) be the minimal cost of the Ball Partition Problem on N balls in r rounds. Then for any deterministic r-bit protocol π for two-player n-bit Vickrey auction over the uniform distribution U, the average-case $\text{PAR}^{\text{ext}}$ is

$\text{avg}_U \text{PAR}^{\text{ext}}(\pi) \ge \frac{B(N/2,\, r)}{2N}.$

Intuition for lemma 90. We think of each ball as an input pair in

$D = \{(x, y) \mid x = y \le 2^{n-1} \text{ or } x + 1 = y \le 2^{n-1}\}.$

These are the “diagonal” inputs from the larger regions of the Vickrey auction. (The x = y pairs are

when Bob wins, and the x + 1 = y pairs are when Alice wins.) Any protocol which solves the 2-player

Vickrey auction must have a separate induced rectangle for each of these inputs (the balls must be

partitioned into singleton sets). Hence any protocol which solves the two-player n-bit Vickrey auction

also solves the Ball Partition Problem (on up to N = 2n balls). Giving a lower bound on protocols

which solve the Ball Partition Problem will also lower-bound protocols which solve the Vickrey auction.

N.b.: The converse is not true; a protocol for the ball partition problem is not necessarily a protocol for

Vickrey auctions.

Proof of Lemma 90: Our goal is to establish that

$\sum_{R\in L} \text{cut}_\pi(R) \ge B(N/2, r).$

The lemma easily follows from this since each region R in L contains probability mass at least $\frac{1}{2N}$ under the uniform distribution.

The Ball Partition Problem is an abstraction of the calculation of average-case $\text{PAR}^{\text{ext}}$ for Vickrey auctions. Recall the following notation used in the proof of theorem 51. Protocol π is associated with a protocol tree where each node v corresponds to a combinatorial rectangle $T(v) = T_A(v)\times T_B(v) \subseteq X\times Y$. For $t = 0, \ldots, r$, let $\mathcal{R}(\pi, t)$ be the set of rectangles associated with nodes at level t of the tree. The root is level 0. For $R \subseteq X\times Y$, let

$\text{cut}_\pi(R, t) = \big|\{\, S \in \mathcal{R}(\pi, t) : S\cap R \ne \emptyset \,\}\big|$


be the number of rectangles intersecting R after round t of the protocol. Clearly, $\text{cut}_\pi(R, r) = \text{cut}_\pi(R)$. We want to estimate $\sum_{R\in L} \text{cut}_\pi(R)$ from below.

We associate every node v of the protocol tree with two sets

$D_v = [N/2] \cap T_A(v) \cap T_B(v)$

$L_v = \{\, R_{x,y} : x, y \in D_v \,\}$

For each leaf node v, $|D_v| \le 1$, as no two distinct inputs (x, x) and (x′, x′) can finish in the same protocol-induced rectangle of the leaf. Notice that $L_v \subseteq L$. It is easy to see by induction on the level of the tree that the sets $D_v$ associated with nodes at the same level partition [N/2], and hence the sets $L_v$ associated with nodes at the same level are disjoint. Let v be a node at level t, 0 ≤ t < r, with $D_v \ne \emptyset$. Let $v_1$ and $v_2$ be its two children. If $D_{v_1} \ne \emptyset \ne D_{v_2}$ then we claim that

$\sum_{R\in L_v} \text{cut}_\pi(R, t+1) \ \ge\ \sum_{R\in L_v} \text{cut}_\pi(R, t) + \min(|D_{v_1}|, |D_{v_2}|) - 1.$

We prove the claim. Assume that v is a node where Alice speaks. Hence,

$T_A(v) = T_A(v_1)\cup T_A(v_2), \qquad T_B(v) = T_B(v_1) = T_B(v_2).$

Clearly, $D_v = D_{v_1}\,\dot\cup\, D_{v_2}$. Let $x_1 = \max(D_{v_1})$ and $x_2 = \max(D_{v_2})$. Without loss of generality, $x_1 < x_2$. For every $y \in D_{v_1}$, $y \ne x_1$, we have $(x_1, y) \in R_{y+1,y}\cap T(v_1)$ and also $(x_2, y) \in R_{y+1,y}\cap T(v_2)$, so both are non-empty. Thus

$\text{cut}_\pi(R_{y+1,y}, t+1) \ge \text{cut}_\pi(R_{y+1,y}, t) + 1.$

As there are $|D_{v_1}| - 1$ such y's, the claim follows in this case.

If v is a node where Bob speaks, the argument is similar. Let $y_1 = \max(D_{v_1})$ and $y_2 = \max(D_{v_2})$, and assume without loss of generality that $y_1 < y_2$. Then for every $x \in D_{v_1}$, $(x, y_1) \in R_{x,x}\cap T(v_1)$ and also $(x, y_2) \in R_{x,x}\cap T(v_2)$. Thus in this case one does not even lose the −1 additive term.

Hence, each node v for which $D_v$ is split into two non-empty sets $D_{v_1}$ and $D_{v_2}$ contributes at least $\min(|D_{v_1}|, |D_{v_2}|) - 1$ to the overall increase of $\sum_{R\in L} \text{cut}_\pi(R)$. There are exactly N/2 − 1 nodes like that, as $|D_{\text{root}}| = N/2$. These sets $D_v$ constitute a solution to the Ball Partition Problem in r rounds, and given the cost function for the Ball Partition Problem it is immediate that the overall increase of $\sum_{R\in L} \text{cut}_\pi(R)$ is thus at least $B(N/2, r) - (N/2 - 1)$, as the −1 terms add up to N/2 − 1. Since $\sum_{R\in L} \text{cut}_\pi(R, 0) = N - 1$ we get $\sum_{R\in L} \text{cut}_\pi(R) \ge B(N/2, r)$.


Figure 6.91: An arbitrary node in the ball-partitioning tree: a node labelled $N_i$ has children labelled $c_iN_i$ and $(1-c_i)N_i$, reached along edges labelled $c_i$ and $1-c_i$.

6.3 The Ball Partition Problem

All that remains to prove the lower bound on average-case PARext for Vickrey auctions (theorem 82) is

to prove the lower bound on the Ball Partition Problem (theorem 88).

Proof of Theorem 88: We will examine the entropy of the partitions at each round. This permits an

abstraction away from a particular ball-partitioning instance, in order to obtain general properties. This

will lead to a lower bound on the objective function B(N, r), the cost of the Ball Partition Problem.

It will be useful to associate with the Ball Partition Problem in r rounds a full binary tree of depth

r where each set obtained at round t is associated to a distinct node at level t, and remaining nodes are

associated with the empty set. The association should be so that a node associated with a set S has its

children associated with sets $S_1$ and $S_2$ obtained from S during the partitioning. We label each node i by the size of the associated set $N_i$, and we label edges by the fraction of balls that travel “over” that edge from the parent to the child node. (See figure 6.91: a node labelled $N_i$ with children labelled $c_iN_i$ and $(1-c_i)N_i$ will have edges to those children labelled $c_i$ and $1-c_i$, respectively.)

The tree’s root node is labelled N ; each leaf is labelled 1 or 0. (The 0 leaves are a result of assuming

the binary tree is full; if some ball is partitioned into a singleton set in round i < r, then in each

subsequent round it is “partitioned” into two sets: the singleton set and the empty set.)

Remark 92 At each level of the tree, the sum of the node labels = N . Thus the sum of labels of all

the non-leaf nodes in the tree is rN .

Consider the path followed by any ball b from the root to a leaf. It traverses edges labelled $d^b_1, d^b_2, \ldots, d^b_r$, where $\prod_{i=1}^r d^b_i = \frac{1}{N}$.

Multiplying this number for all balls gives a nice symmetrization which is true for all trees representing

solutions to the Ball Partition Problem.

$\left(\frac{1}{N}\right)^N = \prod_{b\text{ a ball}}\ \prod_{i=1}^r d^b_i \qquad (6.3.1)$

Consider some non-leaf node i of the tree, with edges to its children labelled $c_i$ and $1-c_i$ (figure 6.91). Together, these edges contribute $(c_i)^{c_iN_i}(1-c_i)^{(1-c_i)N_i}$ to the right-hand side of equation (6.3.1). (If $c_i = 0$ this term equals 1 by definition.) Without loss of generality assume each $c_i \le 1/2$. Equation


(6.3.1) can be rewritten as:

$\left(\frac{1}{N}\right)^N = \prod_{\text{non-leaf node } i} (c_i)^{c_iN_i}(1-c_i)^{(1-c_i)N_i}$

$-N\log N = \sum_i N_i\big(-H(c_i)\big) \qquad (6.3.2)$

Here $H(x) = x\log\frac{1}{x} + (1-x)\log\frac{1}{1-x}$ is the binary entropy of x.

Since the leaf nodes are not included in the sum, $\sum_{\text{non-leaf node } i} N_i = rN$ (by remark 92). Let $c = \frac{\sum_i c_iN_i}{rN}$ be the average cost of a cut in the Ball Partition Problem. Then the cost of the entire tree is $B(N, r) = crN$. Since H is concave, $\sum_i \frac{N_i}{rN}H(c_i) \le H\!\left(\frac{\sum_i c_iN_i}{rN}\right) = H(c)$.

$N\log N = rN\sum_i \frac{N_i}{rN}H(c_i) \le rN\,H(c) \qquad (6.3.3)$

For the sake of contradiction, suppose that the cost of the tree is $B(N, r) = crN < \frac{N\log N}{4\log\left(\frac{4r}{\log N}\right)}$. Then the average cost of a cut is $c < \frac{\log N}{4r\log\left(\frac{4r}{\log N}\right)}$. This bound can be rewritten as $\frac{x}{-\log x}$ for $x = \frac{\log N}{4r}$, so $c < \frac{x}{-\log x}$. Combining equation (6.3.3) and lemma 93 (below),

$\frac{\log N}{r} \le H(c) \le H\!\left(\frac{x}{-\log x}\right) < 4x = 4\cdot\frac{\log N}{4r} = \frac{\log N}{r}.$

The strict inequality makes this a contradiction. Therefore every tree of depth ≤ r must incur cost at least $\frac{N\log N}{4\log\left(\frac{4r}{\log N}\right)}$.

Lemma 93 For $0 < x \le \frac{1}{2}$, the binary entropy satisfies $H\!\left(\frac{x}{-\log x}\right) < 4x$.

Proof: For $0 < x \le \frac{1}{2}$, $\log\frac{1}{x} \ge 1$, so clearly $0 < \frac{x}{-\log x} \le \frac{1}{2}$. Let $y = \frac{x}{-\log x}$. Expanding,

$H(y) = y\log\frac{1}{y} + (1-y)\log\frac{1}{1-y}.$

For $0 < y \le \frac{1}{2}$, it is not difficult to see that $-\log(1-y) \le 2y$ and $1-y < 1$. Hence

$H(y) \le y\log\frac{1}{y} + (1-y)\cdot 2y < y\log\frac{1}{y} + 2y.$

Substituting for y and expanding,

$H\!\left(\frac{x}{\log\frac{1}{x}}\right) < x\left(\frac{\log\log\frac{1}{x}}{\log\frac{1}{x}}\right) + x\left(\frac{\log\frac{1}{x}}{\log\frac{1}{x}}\right) + 2x\left(\frac{1}{\log\frac{1}{x}}\right).$

Examination reveals that for $0 < x \le \frac{1}{2}$, the parenthesized coefficients are each at most 1. Hence $H\!\left(\frac{x}{\log\frac{1}{x}}\right) < 4x$.
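A quick numerical spot-check of lemma 93 (added for illustration; all logarithms base 2):

import math

def binary_entropy(y):
    """Binary entropy H(y) in bits."""
    return 0.0 if y <= 0 or y >= 1 else -y * math.log2(y) - (1 - y) * math.log2(1 - y)

# Check H(x / log2(1/x)) < 4x on a grid of x values in (0, 1/2].
for i in range(1, 5001):
    x = i / 10000
    y = x / math.log2(1 / x)
    assert binary_entropy(y) < 4 * x, (x, binary_entropy(y), 4 * x)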


6.4 Average-case approximate privacy hierarchy

As in section 4.4.2, we can again apply our privacy tradeoff to establish a PAR-communication hierarchy.

Recall that, for perfect privacy, the structure of the privacy hierarchy of functions is known: for arbitrary

functions there is a full rounds hierarchy with every level distinct (theorem 15, page 14).

Theorem 82 implies that the Vickrey auction function partially answers the hierarchy question for

average-case PARext. We can restate the theorem in the style of theorem 15 for comparison.

Theorem 94 For all n, r ≥ 1 such that n ≤ r, there exists a function f on $\{0,1\}^n \times \{0,1\}^n$ which in r rounds can obtain at best $\Omega\!\left(\frac{n}{\log(r/n)}\right)$ average-case $\text{PAR}^{\text{ext}}$ over the uniform distribution.

The hierarchy for average-case PARext may be further refined by considering other functions and

other distributions.


Chapter 7

Privacy and information theory

In this chapter, we relate average PAR to information cost (both internal and external measures). Of

necessity we will confine ourselves to deterministic computations of two players with zero error (so

that PAR and ICµ(π) are defined). The complications of randomization and error are postponed until

chapter 8. Worst-case measures of privacy are discussed in chapter 9.


7.1 IC, PRIV, $I_i$, and $I_{c-i}$

We have described several information-theoretic measures of privacy: information cost $\text{IC}^{\text{int}}_\mu(\pi)$ and $\text{IC}^{\text{ext}}_\mu(\pi)$ (definition 73), $\text{PRIV}_\mu(\pi)$ (definition 75), and additional information $I_i(f)$ and $I_{c-i}(f)$ (definition 80). These measures use the mutual information between the inputs and the transcript as the basis of measuring privacy loss. The main difference is whether the mutual information is conditioned on the value of f, and whether the sum or maximum is considered.

Theorem 95 For any probability distribution µ on $X\times Y$ and any protocol π for a function $f : X\times Y \to Z$:

$\text{PRIV}^{\text{int}}_\mu(\pi) - \log|Z| \le \text{IC}^{\text{int}}_\mu(\pi) \le 2\cdot\left(\text{PRIV}^{\text{int}}_\mu(\pi) + \log|Z|\right).$

Notice that this is the difference highlighted by remark 2: the two measures differ by the information

contained in the output f(X,Y). Thus it may be convenient to use PRIV for models in which the function

output is contained in the transcript, and information cost for other models. The theorem follows from claim 96.

Claim 96 Let X, Y, V, W be any random variables. Then

$\left|I(X ; Y \mid W) - I(X ; Y \mid W, V)\right| \le H(V)$

Proof: First, notice that

$I(X; Y \mid W) - I(X; Y \mid W, V) = \big(H(X\mid W) - H(X\mid W, V)\big) - \big(H(X\mid W, Y) - H(X\mid Y, W, V)\big).$

The first quantity in brackets above is

$H(X\mid W) - H(X\mid W, V) = I(X; V \mid W) = H(V\mid W) - H(V\mid W, X) \le H(V\mid W) \le H(V).$

The first equality uses the symmetry of information. The second bracketed quantity can be likewise re-written using the symmetry of information:

$H(X\mid W, Y) - H(X\mid Y, W, V) = E_{\mu_Y}\big[H(X\mid W, Y = y) - H(X\mid V, W, Y = y)\big] = E_{\mu_Y}\big[I(V; X \mid W, Y = y)\big] \le H(V).$

Both bracketed quantities are non-negative and at most H(V), so their difference has absolute value at most H(V). Combining the two, we are done.

Theorem 97 For any probability distribution µ on $X\times Y$ and any protocol π for a function $f : X\times Y \to Z$:

$\text{PRIV}^{\text{ext}}_\mu(\pi) \le \text{IC}^{\text{ext}}_\mu(\pi) \le \text{PRIV}^{\text{ext}}_\mu(\pi) + \log|Z|$


Proof: By simple expansion of definitions.

$\text{PRIV}^{\text{ext}}_\mu(\pi) = I(X,Y; \Pi(X,Y) \mid f(X,Y))$
$= H(X,Y \mid f(X,Y)) - H(X,Y \mid f(X,Y), \Pi(X,Y))$
$= H(X,Y \mid f(X,Y)) - H(X,Y \mid \Pi(X,Y))$   (the transcript determines f)
$\le H(X,Y) - H(X,Y \mid \Pi(X,Y))$   (by fact 68)
$= I(X,Y; \Pi(X,Y)) = \text{IC}^{\text{ext}}_\mu(\pi)$   (definition 73)

Similarly,

$\text{IC}^{\text{ext}}_\mu(\pi) - \text{PRIV}^{\text{ext}}_\mu(\pi) = H(X,Y) - H(X,Y \mid f(X,Y)) = I(X,Y; f(X,Y)) \le \log|Z|$
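Both inequalities can be tight. As a toy illustration (an example added here, not from the thesis), take $f(x,y) = x\oplus y$ with µ uniform on $\{0,1\}\times\{0,1\}$ and the trivial protocol in which both bits are announced: then $\text{IC}^{\text{ext}}_\mu(\pi) = H(X,Y) = 2$, while $\text{PRIV}^{\text{ext}}_\mu(\pi) = H(X,Y \mid X\oplus Y) = 1$, and since $\log|Z| = 1$ the upper bound $\text{IC}^{\text{ext}}_\mu(\pi) \le \text{PRIV}^{\text{ext}}_\mu(\pi) + \log|Z|$ holds with equality.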

“Additional information” blends the maximum-taking of $\text{PRIV}^{\text{int}}(\pi)$ and the omission of the given correct value f(x, y) used in $\text{IC}_\mu(\pi)$. Also, “additional information” considers all protocols computing

f with error < 1/2, whereas information complexity parametrizes the error.

Lemma 98 Let f be a function on $X\times Y$ and µ be a distribution on $X\times Y$.

$I_i(f) \le \text{IC}(f, 1/2) \le 2\cdot I_i(f)$

$I_{c-i}(f) \le \text{IC}(f, 1/2) \le 2\cdot I_{c-i}(f)$

The proof is by examination of definitions. This is very similar to the relation between $\text{PRIV}^{\text{int}}_\mu(\pi)$ and $\text{IC}^{\text{int}}_\mu(\pi)$ (theorem 95).

Since information complexity is a more general notion than additional information, we will prefer it

in this document.


7.2 PAR

Average-case PAR (definition 62) is closely related to previously studied concepts in communication com-

plexity such as information cost (definition 73) and information-theoretic privacy PRIV (definition 75).

The main distinction is that these concepts measure in terms of bits, and PAR does not. We show the

relationship of these measures to average-case PAR. This connection can be used to prove new lower

bounds for average-case PAR (below).

Among these notions, Klauck’s privacy measure [Kla02] is most closely related to average-case PAR.

The relationship between PRIV and average-case PAR is given by the following theorem.

Theorem 99 For a probability distribution µ on $X\times Y$ and a protocol π for a function $f : X\times Y\to Z$, the following holds:

$\text{PRIV}^{\text{int}}_\mu(\pi) \le \log\left(\text{avg}_\mu \text{PAR}^{\text{int}}(\pi)\right)$

Proof: By symmetry, it suffices to show that $I(X; \Pi_\mu(X,Y) \mid Y, f(X,Y)) \le \log(\text{avg}_\mu \text{PAR}^{\text{int}}(\pi))$.

$I(X; \Pi_\mu(X,Y) \mid Y, f(X,Y)) \le H(\Pi_\mu(X,Y) \mid Y, f(X,Y)) \le \sum_{y\in Y,\, z\in Z} |R_z\cap (X\times\{y\})|_\mu \cdot \log\!\left(\text{cut}_\pi\big(R_z\cap (X\times\{y\})\big)\right) \le \log\left(\text{avg}_\mu \text{PAR}^{\text{int}}(\pi)\right).$

The first inequality holds by simple algebra. The second inequality holds because, for any y ∈ Y and z ∈ Z,

$\Pr[Y = y, f(X,Y) = z] = |R_z\cap (X\times\{y\})|_\mu$

and

$H(\Pi_\mu(X,Y) \mid Y = y, f(X,Y) = z) \le \log\!\left(\text{cut}_\pi\big(R_z\cap (X\times\{y\})\big)\right).$

The final inequality follows from the concavity of the logarithm (together with the cut-characterization of proposition 84).

Hence, one can use lower bounds on PRIV to derive lower bounds for average-case PAR. For example, consider the function $\text{DISJ}_n : \{0,1\}^n\times\{0,1\}^n \to \{0,1\}$ on inputs $x, y \in \{0,1\}^n$, which is defined to be one if $\{i\in[n] : x_i = y_i = 1\}$ is empty and zero otherwise. [Kla02] shows that for any protocol P for the disjointness problem, $\text{PRIV}_D(P) \in \Omega(\sqrt{n}/\log n)$, where D is uniform on strings of Hamming weight $\sqrt{n}$. Using the above lower bound, we immediately obtain $\text{avg}_D \text{PAR}^{\text{int}}(P) \in 2^{\Omega(\sqrt{n}/\log n)}$ for any protocol P for $\text{DISJ}_n$.

Theorem 100 For a distribution µ on $X\times Y$ and a deterministic protocol π for a function $f : X\times Y\to Z$, the following holds:

$\text{PRIV}^{\text{ext}}_\mu(\pi) \le \log\left(\text{avg}_\mu \text{PAR}^{\text{ext}}(\pi)\right)$

This theorem was independently shown by [KLX13].

Proof of Theorem 100: If π is deterministic, the inputs x and y completely determine the transcript

π(x, y). Hence H(Π(X,Y) |X,Y) = 0 and the first inequality is tight. Otherwise, the first inequality is

by algebra.


So simplify, as above in theorem 99:

$I(X,Y; \Pi(X,Y) \mid f(X,Y))$
$= H(\Pi(X,Y) \mid f(X,Y)) - H(\Pi(X,Y) \mid X,Y)$
$\le H(\Pi(X,Y) \mid f(X,Y))$
$= \sum_{z\in Z} |R_z|_\mu\, H(\Pi(X,Y) \mid f(X,Y) = z)$   (as $\Pr[f(X,Y) = z] = |R_z|_\mu$)
$\le \sum_{z\in Z} |R_z|_\mu \log\!\left(\text{cut}_\pi(R_z)\right)$   (as $H(\ldots) \le \log(\text{cut}_\pi(R_z))$)
$\le \log\left(\text{avg}_\mu \text{PAR}^{\text{ext}}(\pi)\right)$   (by concavity of log)

As [KLX13] point out, an analog of theorems 99 and 100 holds if PRIV is redefined as given p(x, y),

not f(x, y).

Theorem 101 [KLX13] In the style of definition 75, define

$\text{PRIV}^{\epsilon,\text{int}}_\mu(\pi) = \max\big\{\, I(X; \Pi(X,Y) \mid Y, p(X,Y)),\ I(Y; \Pi(X,Y) \mid X, p(X,Y)) \,\big\}$

$\text{PRIV}^{\epsilon,\text{ext}}_\mu(\pi) = I\big(X, Y; \Pi(X,Y) \mid p(X,Y)\big)$

For a probability distribution µ on $X\times Y$ and a protocol π for a function $f : X\times Y\to Z$, the following holds:

$\text{PRIV}^{\epsilon,\text{int}}_\mu(\pi) \le \log\left(\text{avg}_\mu \text{PAR}^{\epsilon,\text{int}}(\pi)\right)$

$\text{PRIV}^{\epsilon,\text{ext}}_\mu(\pi) \le \log\left(\text{avg}_\mu \text{PAR}^{\epsilon,\text{ext}}(\pi)\right)$

Note that PRIVǫ retains the close relationship to IC from above (theorem 97). Information complex-

ity is the most general and widespread measure. This consensus in the literature indicates its benefits

(and ease of use) over other measures, and suggests that we follow suit, if for no other reason than to

be easily comparable with the plurality of other results. Hopefully the comparisons in chapter 9 will

provide a nail in the coffin of any doubt as to the broad utility of information complexity.


7.3 Set Intersection

The relationship described in theorem 95, together with the known lower bounds on the internal information cost of $\text{DISJ}_n$, allows us to prove one of the conjectures of [FJS10a] for the intersection function $\text{INTERSEC}_n$. The function $\text{INTERSEC}_n : \{0,1\}^n\times\{0,1\}^n \to \mathcal{P}([n])$ on inputs $x, y \in \{0,1\}^n$ outputs the set $\{i\in[n] : x_i = y_i = 1\}$.

Feigenbaum et al. conjecture that the average-case internal PAR for the intersection function under

the uniform distribution is exponential in n. This can be proven using the above tools and the following

result, which strengthens an earlier work by [BYJKS04]. Let ν be the uniform distribution supported

on (0, 1), (1, 0), (0, 0). Let τ be the distribution generated by taking the n-fold product of ν. In other

words, τ is the uniform distribution supported on pairs of strings that are disjoint.

Theorem 102 [Bra11] Let P be any randomized protocol that computes disjointness DISJn with error

probability < 1/3. Then, ICτ

(P)= Ω(n).

Using the above theorem, we show the following bound for Intersection.

Theorem 103 Let P be any deterministic protocol that computes set intersection INTERSECn. Then,

for U the uniform distribution, PRIVU

(P)= Ω(n).

Proof: We prove this by a contradiction. Assume that we have a protocol P to solve INTERSECm

on m-bit inputs with little privacy loss under the uniform distribution. The main idea of the argument

is to come up with an appropriate reduction from set disjointness DISJn on n bits to set intersection

INTERSECm. This reduction will need to satisfy the following features: solving intersection on the

reduced instance should solve set-disjointness on the original input instance. The reduced instance

should not blow up too much in size, i.e. m = Θ(n). Finally, and most importantly, distribution τ

on input instances to set-disjointness should generate (by our reduction) the uniform distribution on

Intersection. This last step seems difficult to do via a deterministic reduction. So we aim to get a

workaround as follows.

Let Π be the random variable denoting the transcript generated by P. Then our assumption on P gives the following, for some constant β which we fix at the end:

$\beta m > I_U\big(X;\Pi \mid Y, \text{INTERSEC}(X,Y)\big) + I_U\big(Y;\Pi \mid X, \text{INTERSEC}(X,Y)\big).$

The uniformly distributed pairs of m-bit random strings (X,Y) can be alternatively generated by first selecting a random subset A of [m] where each element is in the set independently with probability 1/4. For each i ∈ A, we set (X_i, Y_i) = (1, 1). Then, for each coordinate i ∈ Ā = [m] − A, (X_i, Y_i) is picked independently according to ν. Consider the joint distribution of (X, Y, A) sampled as described. Let (X, Y | A) denote the pair of random variables distributed according to (X, Y) conditioned on A as above, and let the underlying distribution on this pair be denoted τ_A. Thus, our assumption becomes, equivalently:

E_{µ_A}[ I_{τ_A}(X : Π | Y, A) + I_{τ_A}(Y : Π | X, A) ] < βm,

where µ_A is the distribution on A. Applying the Chernoff bound on the deviation of |A| from its expectation, one concludes:

E_{µ_A}[ I_{τ_A}(X : Π | Y, A) + I_{τ_A}(Y : Π | X, A)  |  |A| ≤ m/2 ]  <  βm / (1 − exp(−Ω(m)))


Thus, there exists some fixed set a of size at most m/2 such that

I_{τ_a}(X : Π | Y, A = a) + I_{τ_a}(Y : Π | X, A = a) < β′m.    (7.3.1)

This set a is going to provide us with the workaround needed for the deterministic reduction. We define our reduction now with respect to a. Set n = m − |a| ≥ m/2. Let P′ be a protocol that solves set-disjointness as follows: given two n-bit strings (u, v), protocol P′ first embeds u and v naturally into a^c = [m] − a. Let the embedded strings be called X(u) and Y(v); each player can generate its embedded string privately on its own. Then, the players run the protocol P on (X(u), Y(v)). Let J be the intersection set that P returns. Clearly, DISJ_n(u, v) = 1 iff |J| = |a|. Finally, note that if (U, V) are generated according to τ, then the mapped strings (X(U), Y(V)) ∼ (X, Y | A = a). Hence, (7.3.1) implies that IC_τ(P′) ≤ β′m ≤ 2β′n. By setting β′ to be a small enough constant, we derive a contradiction to Theorem 102. This completes the argument.
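(To make the shape of this reduction concrete, here is a minimal Python sketch of our own, included only for illustration. The stand-in intersec below simply computes the intersection directly, whereas in the proof P is an arbitrary low-privacy-loss protocol; the sketch shows only the embedding into a^c and the decision rule |J| = |a|.)

def embed(u, v, a, m):
    """Embed n-bit disjointness inputs (u, v) into m-bit intersection inputs:
    coordinates in the fixed set a are forced to (1, 1); the remaining
    coordinates a^c = [m] - a carry u and v in order."""
    a = set(a)
    free = [i for i in range(m) if i not in a]
    assert len(free) == len(u) == len(v)
    X = [1 if i in a else 0 for i in range(m)]
    Y = list(X)
    for j, i in enumerate(free):
        X[i], Y[i] = u[j], v[j]
    return X, Y

def intersec(X, Y):
    """Stand-in for the intersection protocol P: returns {i : X_i = Y_i = 1}."""
    return {i for i, (xi, yi) in enumerate(zip(X, Y)) if xi == 1 and yi == 1}

def disj_via_intersec(u, v, a, m):
    """Protocol P': DISJ(u, v) = 1 exactly when the returned set J equals a,
    i.e. when |J| = |a|."""
    X, Y = embed(u, v, a, m)
    J = intersec(X, Y)
    return 1 if len(J) == len(a) else 0

# Example: m = 8, a fixed set a of size at most m/2, and n = m - |a| free coordinates.
m, a = 8, [1, 4, 6]
u, v = [1, 0, 0, 1, 0], [0, 1, 0, 0, 1]   # disjoint on the free coordinates
print(disj_via_intersec(u, v, a, m))      # prints 1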

By using theorem 99, this immediately yields the following theorem.

Theorem 104 (Conjectured by [FJS10b].) For all n ≥ 1, and any protocol P computing the Set Intersection INTERSEC_n on n bits, the average-case internal PAR is exponential in n under the uniform distribution: avg_U PAR^int(P) = 2^{Ω(n)}.

This theorem differs from the other main privacy results. Theorems 51 and 82 demonstrate privacy

tradeoffs: for any protocol with some specified communication complexity, they restrict the degree

of privacy that is achievable by that protocol. Theorem 104 is different: the communication cost of

the protocol does not affect the limitation on achievable approximate privacy. This is traceable to

theorem 102, which holds for any protocol (subject to conditions), regardless of communication cost.


Chapter 8

Privacy, advice, and error

In this chapter, we consider privacy in the setting with randomization and error. Our main motivation is the lack of a good, generally agreed-upon definition of approximate privacy in the worst-case setting with error. We discuss some issues with the preexisting definitions of strong and weak h-privacy. Recall that

h-privacy is an attempt to capture some privacy loss, by examining the function h, which can either be

thought of as revealing the “unimportant” parts of the input or as describing the “revealed information”

from computing f . Both strong and weak h-privacy are worst-case measures.

We propose instead an average-case measurement of privacy, using the idea of a function h which

captures the “advice” necessary for computing f . In a situation where players are considering the privacy

of part of their inputs — the important part — as with weak and strong h-privacy, an average measure

seems more useful than a worst-case measure. We relate this new average-case measure to the other

preexisting average-case privacy measures.


8.1 Problems with weak and strong h-privacy

Recall the definitions of strong and weak h-privacy (definitions 34 and 45), as well as the related definition of worst-case additional information I_c(f) and I^det_c(f) (definition 48). This section examines several issues with these definitions; most stem from the use of strong and weak h-privacy as worst-case measures.

The subsequent sections propose improved definitions to achieve the intended ends, using average-case

measures.

One main problem with using strong and weak h-privacy as a way to compare the relative privacy

possible for different protocols or for different functions is measuring the range of h (often to compare it

to the range of f). We contend that this is not a useful measure, as it may hide important considerations.

Consider the 'everything' function h_e : X × Y → argmin_{S∈{X,Y}} |S| defined as:

h_e(x, y) = x + y mod min(|X|, |Y|)

The precondition for being h_e-private is trivially true: if h_e(x, y) = h_e(x, y′) then y = y′, and so the transcripts on the two sets of inputs will necessarily be identically distributed. (And similarly for the h_e(x, y) = h_e(x′, y) condition.) Note that range(h_e) = argmin_{S∈{X,Y}} |S| is maximal here. Because h_e(x, y) reveals each player's full input value to the other player, every function f on X × Y is strongly h_e-private. But this is not saying anything useful. (Much like saying every function reveals at most linearly many bits of information about its inputs, or has privacy loss at most exponential PAR.) If f is h-private for some h with |range(h)| > |range(h_e)|, the range of h can be 'deflated' to coincide with that of h_e.

Another problem with measuring range to evaluate privacy is that the range of f can be artificially inflated. In [BYCKO93] the model considered requires only that both parties can determine f(x, y) from the transcript (which thus need not explicitly include the value f(x, y); remember remark 35?). Consider f(x, y) = |x − y| and f′(x, y) = x − y. Any protocol computing f′ is also a protocol computing f, under the requirement that both players can compute the output from the transcript. The two functions will be h-private for similar h. However, |range(f)| = 2^n and |range(f′)| = 2^{n+1}. Hence comparing the range seems meaningless. Some illustrative examples follow.

The following is falsely claimed in [BYCKO93]:

False proposition 105 [BYCKO93] A function f is privately computable if and only if

log |range(f)| = min{ log |range(h)| : f is weakly h-private }

This proposition is not true in either direction. It was claimed (without proof) in [BYCKO93]. We assume that "privately computable" takes its standard meaning, namely, f on X × Y is privately computable if and only if it is perfectly private (in the sense of definition 18). This is a standard baseline of worst-case privacy, due to many of the same authors as the suspect proposition above.

Insofar as the authors' intent can be guessed, they might reasonably have meant, "a function f is privately computable if and only if it is f-private." This seems true (under our assumption that both participants and the eavesdropper get to learn the value f(x, y) at the end of the protocol), and is proved as proposition 106. However, the phrasing above (false proposition 105) makes the statement untrue.

Disproof: We will disprove both directions using the domain {0,1} × {0,1} and the function h_e(x, y) = x ⊕ y. The protocol π will be the deterministic, zero-error protocol where both players simply reveal their inputs.

(⇍) Let f(x, y) = min(x, y) = x ∧ y. This f is a canonical example of a not-privately-computable function (consider the corners lemma 8). Now π is a strongly h_e-private protocol for f, and range(f) = range(h_e). (Note also that every h with a smaller range is the constant function, and f is not h-private for any constant h.)

(⇏) Let f(x, y) = x + y. This f is privately computable with protocol π. Again, π is a strongly h_e-private protocol for f. (And f is not strongly h-private for any h with smaller range.) But |range(f)| = 4 ≠ |range(h_e)| = 2.

Proposition 106 A two-party function f is privately computable if and only if f is weakly f -private.

What follows is a direct proof. (Note that this proposition is a direct consequence of lemma 47 and

theorem 13.)

Proof: Note that an intuitive explanation of "f is weakly h-private" interprets h as a function which partitions the matrix M_f. The definition of weakly h-private requires that h's induced partition is a refinement of the partition induced by some protocol π. Thus if f fails the corners lemma (and thus is not privately computable), f's induced partition also fails to be a refinement of a protocol partition, so f cannot be weakly f-private.

Observe that for two-player functions, the only cases which need to be considered for 1-privacy (definition 18) are when f(x, y) = f(x, y′) or f(x, y) = f(x′, y).

⇒ Suppose f is privately computable. Then f is perfectly private. Thus by theorem 13, M_f is decomposable, and f is privately computable by a deterministic protocol π with no error. That is,

f(x, y) = f(x, y′) ⇒ π(x, y) = π(x, y′)
f(x, y) = f(x′, y) ⇒ π(x, y) = π(x′, y)

Since π makes no errors and 0 < 1/2, this suffices to satisfy the definition: f is weakly f-private. (Even more, f is strongly f-private.)

⇐ Suppose f is weakly f-private. Then there exists a protocol π with error ǫ < 1/2 such that ∀x, x′, y, y′ and for all possible transcripts t,

f(x, y) = f(x, y′) ⇒ Pr_{r_X, r_Y}[t = π(x, y)] = Pr_{r_X, r_Y}[t = π(x, y′)]    (8.1.1)
f(x, y) = f(x′, y) ⇒ Pr_{r_X, r_Y}[t = π(x, y)] = Pr_{r_X, r_Y}[t = π(x′, y)]    (8.1.2)

Notice that the cases f(x, y) = f(x, y′) and f(x, y) = f(x′, y) are all possible cases for coalitions of size 1. Thus f is perfectly private (i.e., 1-private; recall definition 18).

(Alternately, if f is Boolean we can use theorem 43 to show that f is 1-private. There are two cases to consider, both easily true:

f(x, y) = f(x, y′) ⇒ (1/2) Σ_t | Pr_{r_X, r_Y}[t | x, y] − Pr_{r_X, r_Y}[t | x, y′] | = 0    by (8.1.1) above
f(x, y) = f(x′, y) ⇒ (1/2) Σ_t | Pr_{r_X, r_Y}[t | x, y] − Pr_{r_X, r_Y}[t | x′, y] | = 0    by (8.1.2) above

Thus f is (δ = 0, 1)-private with ǫ < 1/2 error (in the sense of definition 42).)


8.2 Advice and hints

Motivated by cryptographic definitions, we define an "operationalized" notion of privacy. The idea is to measure the length of a hint necessary to sample a transcript of the protocol, without actually performing the communication protocol. This is an approach adopted from cryptography; we think of private computation in terms of simulations. A function which is perfectly privately computable has a protocol where each party, given the value of the function, can exactly simulate the entire communication transcript using its own input. Thus h-privacy is a relaxation; essentially, the idea is that a function is h-privately computable if each party, given the value of h, can exactly simulate the communication transcript.

The problems with h-privacy seem to stem from the measurement of the size of the range of h. This

measurement forces h-privacy to be a worst-case concept. Instead, we propose adapting the concept to

an average-case concept as follows.

The notation {a}_r denotes a distribution of elements a as indexed by r. Identical distributions are denoted as {a}_r ≡ {b}_q, meaning that the statistical distance between the two distributions is zero.

Definition 107 (Average hint) A randomized protocol π for function f : X × Y → Z with input distribution µ has average hint length AHL_µ(π) = ℓ if there exist randomized functions h_A and h_B, and two simulators S_A and S_B, such that

{S_A(x, h_A(x, y, r_{h_A}), f(x, y))}_{r_{h_A}} ≡ {π(x, y)}_r
{S_B(y, h_B(x, y, r_{h_B}), f(x, y))}_{r_{h_B}} ≡ {π(x, y)}_r

and, for all (x, y), the average length of the hints is bounded by ℓ:

avg_{coin flips r_{h_A}} |h_A(x, y)| ≤ ℓ
avg_{coin flips r_{h_B}} |h_B(x, y)| ≤ ℓ

That is, there exist hint functions h_A and h_B (with their own sources of randomness) which are given the inputs for π. The simulators S_A and S_B take this hint, one input, and the function output, and sample a transcript from the actual distribution of transcripts of π on x, y (over the protocol's random coins r).
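(As a sanity check on definition 107, the following Python sketch verifies the simulation conditions for a deliberately trivial toy protocol in which both players announce their inputs; the choice of f, the hint functions, and the simulators below are hypothetical and chosen only for illustration.)

from itertools import product

def transcript(x, y):
    """Toy deterministic protocol: both players reveal their inputs,
    so the transcript is simply the pair (x, y)."""
    return (x, y)

def f(x, y):          # the function being computed (AND, for illustration)
    return x & y

def h_A(x, y):        # hypothetical hint for Alice's simulator: the other input
    return (y,)

def h_B(x, y):
    return (x,)

def S_A(x, hint, fxy):     # simulator: own input + hint + output -> transcript
    (y,) = hint
    return (x, y)

def S_B(y, hint, fxy):
    (x,) = hint
    return (x, y)

# The protocol is deterministic, so matching distributions reduces to equality.
for x, y in product([0, 1], repeat=2):
    assert S_A(x, h_A(x, y), f(x, y)) == transcript(x, y)
    assert S_B(y, h_B(x, y), f(x, y)) == transcript(x, y)

# Each hint is a single symbol, so the average hint length here is 1
# (measured in symbols rather than bits, purely for illustration).
print("definition 107 conditions hold for the toy protocol")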

As usual, we will want to eventually discuss the average hint for a function without any reference

to a particular protocol. This is defined in the expected way: the average hint for function f is the

infimum, over all protocols π computing f , of the average hint length of π. Error constraints may also

be considered, restricting us to the infimum over protocols π computing f with error ≤ ǫ.

Definition 108 Function f has average hint length ℓ with error ǫ if:

ℓ = inf_{π computes f with distributional error ≤ ǫ}  max_µ { i : π has average hint length i under µ }

In this case, we write AHL(f, ǫ) = ℓ.

Several remarks are in order. Definition 107's use of public coins is specific. Certain parts of this definition can be argued or changed (as more convenient usage situations arise). And finally, the output value f(x, y) deserves a mention (harking back to remark 2).

Remark 109 (The random coins r) There is a choice to be made here about randomness. As written above, it is important that the transcript contain any public random coins used during the protocol. (The private random coins are not included in the transcript; the simulator is not expected to generate the private random coins of the other player.) Thus the simulator must generate a transcript and the random coins that accompany it. This increases the length of the hint by at most |r| over the situation where the transcript does not include the random coins. (The hint function can generate the random coins, and append them to the hint.)

If the transcript does not contain a record of the public coins, then it must be that the hint functions and simulators can all "see" the public random coins, and so r must be an input to h_A, h_B, S_A, and S_B. This type of setup is common in cryptography.

Our setup, with the public random coins included in the transcript, avoids a potential problem: if transcripts were uniformly distributed (for the input distribution) across all strings, then a simulator which simply picked any string would have generated a correctly distributed transcript, without any hint required.

Remark 110 (Possible variations of definition 107) This definition of hint length as a measure-

ment evinces some choices. Averaging over random coins, but not over the distribution of inputs, will

allow us to compare average hint length with information complexity (below).

It may also be interesting to consider these variant definitions of hint:

• Usually the hint is short, but the hint is long with exponentially small probability.

• The hint is always or on average short, but with exponentially small probability, the hint is wrong

— that is, the simulator doesn’t get exactly the same distribution of transcripts as the protocol.

• The hint definition is the same, but the simulator needs to be within statistical distance ζ of the

actual distribution of transcripts. (This is preferred in definition 112 below.)

Remark 111 (The output value f(x, y)) The simulator could take simply one input and the hint

— without the output value f(x, y) — to more closely parallel the Braverman definition. Choosing to

give the simulator the output of the function makes sense in the context of privacy because we want to

measure how much information is leaked besides the output, which everyone gets to learn anyway. (If

the simulator doesn’t get the value f(x, y), then the hint will increase by at most |f(x, y)|. The transcript

needs to include the output of the function.)

The main benefit of our proposed definition of average hint is that we measure the average length

of the hint, whereas the original strong/weak h-privacy requirement measured the log of the range of

h, effectively the maximum length of the hint. So of course we can phrase the original definitions in

alignment with the new definition 107.

Definition 112 (Max hint) A randomized protocol π for function f : X × Y → Z has max hint length MHL(π, ζ) = ℓ if there exist randomized functions h_A and h_B, and two simulators S_A and S_B, such that:

log |range(h_A)| ≤ ℓ
log |range(h_B)| ≤ ℓ



and, for some ζ ≥ 0,

δ( {S_A(x, h_A(x, y), f(x, y))}_{r_{h_A}} , {π(x, y)}_r ) ≤ ζ
δ( {S_B(y, h_B(x, y), f(x, y))}_{r_{h_B}} , {π(x, y)}_r ) ≤ ζ

That is, there exist hint functions h_A and h_B (with their own sources of randomness) which are given the inputs for π. The simulators S_A and S_B take this hint, one input, and the function output, and sample a transcript within ζ statistical distance of the actual distribution of transcripts of π on x, y (over the protocol's random coins r).

The motivation for using ζ to bound the statistical distance, rather than requiring ζ = 0, is the possibility

of anomalous behavior of randomized protocols. Consider a randomized protocol which, on a small

fraction of random coins, reveals a huge amount of information (for example all of (x, y)) while on the

rest of the random coins it is reasonably private. This fraction could be small enough that the overall

information complexity of the protocol is small. Yet it is hopeless to attempt a hint function which works

for these “bad” random coins — it will have to reveal the full inputs. Relaxing to statistical distance

ζ permits the hint function to simply send a message meaning “instance of bad coins” instead of a full

hint, keeping the maximum hint length low.

Lemma 113 Fix protocol π for f : X × Y → Z. If there exist functions h_A and h_B such that π is weakly (h_A, h_B)-private for f, and log |range(h_A)| ≤ ℓ, and log |range(h_B)| ≤ ℓ, then π has max hint length ℓ.

Proof: Let π be a protocol for function f : X × Y → Z. Suppose that π is weakly (h_A, h_B)-private for f.

Protocol π partitions the matrix M_f into rectangles, where each rectangle corresponds to a particular transcript of π. (If π is randomized, it effectively partitions the matrix into rectangles where each rectangle is defined by having the same distribution of transcripts.) By the definition of weakly h-private, the function h subpartitions these π-induced rectangles. Hence knowing the values h(x, y) and x will allow a simulator to obtain the transcript π(x, y) (or the distribution of transcripts {π(x, y)}_{r_π}, and sample accordingly). Similarly for a simulator knowing h(x, y) and y. Thus π has max hint length log |range(h)| ≤ ℓ.

The converse of lemma 113 is less clear. The hint functions can be randomized, but the definition of

weakly h-private does not include randomized functions.

Open problem 114 Fix protocol π for f : X × Y → Z. If π has max hint length ℓ, then is it the case that there exist functions h_A and h_B such that:

• log |range(h_A)| ≤ ℓ, and

• log |range(h_B)| ≤ ℓ, and

• π is weakly (h_A, h_B)-private for f?

This last bullet point might be rephrased more broadly: "is there another protocol π′ which is weakly (h_A, h_B)-private for f?" In this case, it may be interesting to compare the communication complexity of π and π′.

The two definitions of hint length are related.


Lemma 115 If protocol π for f has max hint length ℓ, then it has average hint length at most ℓ.

average hint length ≤ max hint length

The proof is obvious. It seems likely that this bound is not tight, for instance with highly non-uniform

distributions of inputs. (On such distributions, there could be a large number of low-probability inputs

which require long hints. Thus the max hint length will be long, but the average hint length will be

short.)

Open problem 116 Is there any function f which separates average hint and max hint? That is, is

there some f and ǫ such that

average hint length ≪ max hint length

for all protocols π computing f with error ≤ ǫ?

This problem holds implications for the information versus communication problem.

Lemma 117 If protocol π for f has communication complexity C, then it has max hint length at most C. (And there exist functions for which max hint length C is optimal.)

This follows from the observation that the hint can simply be the full communication transcript. (Indeed,

for some functions and protocols the max hint length will be exactly C. For example, the projection

function f(x, y) = x requires |x| bits of communication, and the y-player will need to receive a hint of

length H(X). For x drawn from the uniform distribution, the max hint will be exactly the same as the

communication complexity.)


8.3 Relating hint length to IC

The hint length required by a function is related to its information complexity. One direction is easy to see: the information content can be no larger than the hint (that is, the hint length is a clear upper bound on information complexity).

Lemma 118 (IC ≤ hint) For any f and ǫ ≥ 0, for any distribution µ, and for any protocol π for f with error ≤ ǫ,

IC^int_µ(π) ≤ 2 · (AHL_µ(π) + log |Z|).

And further,

IC(f, ǫ) ≤ 2 · (AHL(f, ǫ) + log |Z|).

Notice that the additional term log |Z| (that is, |f(X,Y)|, the length of the output) is due to the way we treated the function output in our definition (see remark 111).

Proof of Lemma 118: Fix any ǫ-error protocol π and input distribution µ for f. The information content is defined as:

IC^int_µ(π) = I(Π(X,Y); X | Y) + I(Π(X,Y); Y | X)

Let's consider the terms of information content one at a time.

I(Π(X,Y); Y | X)
  = H(Y | X) − H(Y | X, Π(X,Y))                                   [definition of I]
  ≤ H(Y | X) − H(Y | X, Π(X,Y), f(X,Y), h_A(X,Y))                 [entropy decreases with more conditioning]
  = H(Y | X) − H(Y | X, f(X,Y), h_A(X,Y))                         [as Π(X,Y) can be sampled using h_A(X,Y)]
  ≤ H(Y | X) − (H(Y | X, f(X,Y)) − ℓ(n))                          [as H(Y | X, f, h) ≥ H(Y | X, f) − |h|]
  = I(Y; f(X,Y) | X) + ℓ(n)                                       [by definition of I]
  ≤ |f(X,Y)| + ℓ(n)                                               [as I(Y; f(X,Y) | X) ≤ |f(X,Y)|]

The other term is bounded in the same way, symmetrically:

I(Π(X,Y); X | Y) ≤ |f(X,Y)| + ℓ(n)

Thus the information content IC^int_µ(π) ≤ 2(ℓ(n) + |f(X,Y)|). Combining this with definitions 108 and 77 proves the lemma.

Conjecture 119 (average hint ≤ IC) For any f and ǫ ≥ 0,

AHL(f, ǫ) ≤ IC(f, ǫ)

This average hint conjecture appears to be nearly within reach via the discussion that follows (culminating in equation 8.3.3). We might also consider a stronger statement.


Open problem 120 (short max hints for any π) Fix distribution µ and protocol π for f : X × Y → Z with error ǫ ≥ 0. Then for which values of ζ ≥ 0 is it the case that

MHL(π, ζ) ≤ IC^int_µ(π)?

For the remainder of this section, we sketch a brief exploration of these problems.

One approach to proving conjecture 119 is to compress a particular protocol. For any function and ǫ, for any distribution µ, [BBCR10] give a protocol π which has information cost (over every distribution µ) less than any I > IC(f, ǫ) and communication cost CC. Since this protocol works for every distribution, it works for a product distribution (say, the uniform distribution U). Hence a short hint can be generated by considering this protocol over the uniform distribution and compressing it using a theorem of [BBCR10], yielding a new protocol with communication ≤ I · poly log(CC/ǫ)/ǫ. This entire transcript could be given to both Alice and Bob as the hint. It is short, but not short enough: the communication CC of this protocol may tend to infinity (as it does for the AND function [BGPW10]) as I approaches IC(f, ǫ).

However, the appeal of the hint definition is that hints are not protocols. Each hint is a one-way

message. There is no give-and-take between the advice giver and the advice recipient. It seems likely to

be possible to leverage this (using tools of information theory) to show that hints are bounded above by

information complexity.

Fix f and ǫ. Let π be a protocol which has error ≤ ǫ on distribution µ. Our goal is to give hint and

simulator functions for the same protocol π, and show that the max hint length is bounded above by

the information complexity of π.

Without loss of generality, consider giving hints to Bob. Let R be the random variable for the coins used during the protocol π.

I(X; Π(X,Y), R | Y)
  = H(Π(X,Y), R | Y) − H(Π(X,Y), R | X, Y)                        [definition]
  = H(Π(X,Y), R | Y) − (H(Π(X,Y) | R, X, Y) + H(R | X, Y))        [by fact 69]
  = H(Π(X,Y), R | Y) − H(R | X, Y)                                [as H(Π(X,Y) | R, X, Y) = 0]
  = H(Π(X,Y), R | Y) − |R|                                        [R is independent of X and Y]
  = H(Π(X,Y) | R, Y) + H(R | Y) − |R|                             [by fact 69 again]
  = H(Π(X,Y) | R, Y) + |R| − |R|                                  [R is independent of Y]
  = H(Π(X,Y) | R, Y)                                              (8.3.1)

The entropy of the transcript, given R and Y , will be an important quantity under consideration.

Two basic techniques may be useful. One approach is to use Huffman coding to compress the transcript to a hint whose average length is (relatable to) the entropy of the transcript. Another approach is to use rejection sampling to give a hint which will allow Bob to produce a transcript from nearly the correct distribution. (The field of length-limited Huffman coding considers a similar problem, but it does not appear to have considered the ζ-statistically-close relaxation yet; see for example [Meh75, LH90, Gag03, Bae07].)



Theorem 121 [CT91] Let X be a random variable over some distribution. The Huffman code for X

has expected length L(X), where H(X) ≤ L(X) ≤ H(X) + 1.

Theorem 122 (Wrong Code) [CT91] Let P and Q be two distributions over the same space. The expected length of a sample from P encoded with an optimal code for Q is ℓ, where:

H(P) + D_KL(P||Q) ≤ ℓ ≤ H(P) + D_KL(P||Q) + 1

The rejection sampling procedure of Harsha et al. gives an implementation of this theorem.

Lemma 123 (Rejection sampling lemma) [HJMR07] Let P and Q be two distributions such that D_KL(P||Q) is finite. Then there exists a sampling procedure which, on input a sequence ⟨s_1, s_2, . . . , s_i, . . .⟩ of independently drawn samples from the distribution Q, outputs (with probability 1) an index i* such that the sample s_{i*} is distributed according to the distribution P, and the expected encoding length of the index i* is at most

D_KL(P||Q) + 2 log(D_KL(P||Q)) + O(1),

where the expectation is taken over the sample sequence and the internal random coins of the procedure. The constant 2 can be reduced to 1 + ǫ for any ǫ > 0.
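(The following Python sketch, which is ours and purely illustrative, shows the basic rejection-sampling idea in its classical form: draw samples from Q and accept with probability proportional to P(s)/Q(s), reporting the index of the first accepted sample. It uses the simple acceptance rule based on M = max_s P(s)/Q(s), so its expected index is about M rather than the sharper D_KL-based bound achieved by the procedure of [HJMR07]; the distributions P and Q below are made-up examples.)

import random

def rejection_sample_index(P, Q, rng=random):
    """Draw s_1, s_2, ... ~ Q and return (i, s_i) where s_i ~ P.

    P and Q are dicts mapping outcomes to probabilities; every outcome
    with P(s) > 0 must have Q(s) > 0 (so the ratio bound M is finite)."""
    outcomes = list(Q)
    weights = [Q[s] for s in outcomes]
    M = max(P.get(s, 0.0) / Q[s] for s in outcomes)   # worst-case ratio P/Q
    i = 0
    while True:
        i += 1
        s = rng.choices(outcomes, weights=weights)[0]  # s ~ Q
        # accept with probability P(s) / (M * Q(s)); accepted samples are ~ P
        if rng.random() < P.get(s, 0.0) / (M * Q[s]):
            return i, s

# Toy example distributions (hypothetical, for illustration only).
P = {"a": 0.7, "b": 0.2, "c": 0.1}
Q = {"a": 0.4, "b": 0.4, "c": 0.2}

counts = {}
for _ in range(10000):
    _, s = rejection_sample_index(P, Q)
    counts[s] = counts.get(s, 0) + 1
print(counts)   # empirical frequencies should approximate P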

Thus it seems that our conjectures for average hint length are nearly proven using existing techniques.

Let’s separate our consideration of these conjectures into cases.

Case 1: π is deterministic.

Bob knows the protocol π as well as the distributions of X and Y. He also knows the correct value of y. He and the hint function can both generate the set of all possible transcripts {π(x, y) | x ∈ X} and calculate the probability of each. The hint function then gives Bob the hint which is just the Huffman code for the correct transcript π(x, y) (there's only one correct transcript!) coded according to the distribution Bob can compute. By theorem 121, the expected length of this code is ≤ H(Π(X,Y) | Y) + 1. The hints for Alice work similarly. Hence the average hint length will be ℓ where:

ℓ ≤ max{ H(Π(X,Y) | Y) + 1, H(Π(X,Y) | X) + 1 } ≤ IC^int_µ(π) + 1

However, the maximum hint length could still be quite long. For example, if with Y = y fixed there

is a different transcript for each value of x, then the maximum hint length will be n = |X| when ζ = 0.

In order for the simulator to get the correct answer, the hint needs to contain the full value of X. This

is an upper bound on the maximum hint length for any distribution, for any protocol (deterministic or

randomized) because the hint can always simply be the value of X. (However note that if this is the

case, a more sophisticated hint/simulator and analysis may still make it plausible that there are short

max hints for any π.)
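(To make Case 1 concrete, the following Python sketch is ours and uses a hypothetical toy deterministic protocol and input distribution. Bob and the hint function both build the same Huffman code for the conditional distribution of transcripts given Y = y, and the hint is simply the codeword of the actual transcript, as in the argument above.)

import heapq

def huffman_code(dist):
    """Build a (deterministic) Huffman code for a distribution {symbol: probability}."""
    if len(dist) == 1:
        return {next(iter(dist)): "0"}
    # heap entries: (probability, tiebreak, {symbol: partial codeword})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(sorted(dist.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        count += 1
        heapq.heappush(heap, (p1 + p2, count, merged))
    return heap[0][2]

def pi(x, y):
    """Toy deterministic protocol: Alice announces x, then Bob announces x AND y."""
    return x + format(int(x, 2) & int(y, 2), "02b")

def transcript_dist_given_y(y, x_dist):
    """The distribution of transcripts that Bob (and the hint function) can compute."""
    d = {}
    for xp, p in x_dist.items():
        t = pi(xp, y)
        d[t] = d.get(t, 0.0) + p
    return d

def bob_hint(x, y, x_dist):
    """Hint: the Huffman codeword of the actual transcript."""
    code = huffman_code(transcript_dist_given_y(y, x_dist))
    return code[pi(x, y)]

def bob_simulator(y, hint, x_dist):
    """Bob rebuilds the same Huffman code and decodes the hint into a transcript."""
    decode = {w: t for t, w in huffman_code(transcript_dist_given_y(y, x_dist)).items()}
    return decode[hint]

# Hypothetical input distribution on Alice's 2-bit input.
x_dist = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
x, y = "10", "11"
hint = bob_hint(x, y, x_dist)
assert bob_simulator(y, hint, x_dist) == pi(x, y)
print("hint =", hint, "of length", len(hint))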

Case 2: π uses public random coins.

If the protocol uses only public random coins, then both players as well as the hint function can

see the public random coins. Thus we can consider the coins as fixed, reducing the protocol to the

deterministic case above.

Case 3: π uses mixed coins.

Hopefully, a combination of techniques from the public and private coins cases will be sufficient to

cover this case.


Case 4: π uses private random coins.

In this case, let Q = Π(X,Y) | Y = y be the distribution of transcripts that Bob can compute given his input y. Note that this distribution depends on the distribution of X as well as on the random coins r_X and r_Y.

Let P_x = Π(X,Y) | Y = y, X = x be the distribution of transcripts for the actual input values; this distribution depends on the private random coins of both participants.

By the wrong code theorem 122, it is possible to give a hint to Bob which will allow him to correctly decode the hint into a sample from P_x. This hint for input (x, y) will have expected length ℓ (the expectation is over X and the random private coins of the protocol), where:

H(P_x) + D_KL(P_x||Q) ≤ ℓ ≤ H(P_x) + D_KL(P_x||Q) + 1    (8.3.2)

The KL divergence of P_x and Q is an unbounded nonnegative quantity. Even if H(Π(X,Y)|Y) is bounded below some small quantity (e.g., we know it is ≤ IC^int_µ(π)), the KL divergence D_KL(P_x||Q) can be quite large. (Consider this simpler example: let P be the result of tossing a fair coin and Q the result of tossing a very biased coin. The entropy H(P|Q) ≤ H(P) = 1, but the KL divergence D_KL(P||Q) is unbounded; recall the definition 71 of KL divergence. Even the ζ weakening in the definition 112 of max hint length is not sufficient, since for any P′ which is statistically close to P, D_KL(P′||Q) will also be unbounded.) We will need to do something more sophisticated to characterize the maximum hint length.
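(A quick numeric illustration of the fair-coin/biased-coin example just given; the code and parameter values are ours: even though H(P) = 1 bit, D_KL(P||Q) grows without bound as Q becomes more biased.)

from math import log2

def dkl(P, Q):
    """KL divergence D_KL(P || Q) in bits, for dicts of probabilities."""
    return sum(p * log2(p / Q[s]) for s, p in P.items() if p > 0)

P = {"heads": 0.5, "tails": 0.5}              # fair coin: H(P) = 1 bit
for eps in (0.1, 0.01, 0.001, 1e-6):
    Q = {"heads": 1 - eps, "tails": eps}      # very biased coin
    print(eps, round(dkl(P, Q), 2))           # divergence grows without bound as eps -> 0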

Is it possible to bound the term D_KL(P_x||Q)? By fact 72, we know that

I(X; Π(X,Y) | Y) = E_x[ D_KL( p(π | x, y) || p(π | y) ) ] = E_x[ D_KL(P_x || Q) ]

We can combine this with equation 8.3.1 and the obvious upper bound of information complexity:

H(Π(X,Y) | Y) = I(X; Π(X,Y) | Y) = E_x[ D_KL(P_x || Q) ] ≤ IC^int_µ(π)

This seems sufficient to demonstrate the conjecture for average hint length: since H(P_x) = H(Π(X,Y) | X = x, Y = y) ≤ H(Π(X,Y) | Y = y) ≤ IC^int_µ(π), equation 8.3.2 says that we can give hints to Bob with expected length ℓ (the expectation is over X and the random coins), where:

AHL_µ(π) ≤ ℓ ≤ H(P_x) + D_KL(P_x||Q) + 1 ≤ 2 · IC_µ(π) + 1    (8.3.3)

Bounding maximum hint length does not succumb to this technique. There may be certain outlier values of X for which the hint using rejection sampling is arbitrarily long (because, again, D_KL(P_x||Q) is arbitrarily large). Any technique which cannot guarantee a maximum hint length of at most n (the length of X) is not interesting for the purposes of answering open problem 120. Thus rejection sampling alone is insufficient.

The hint problem is nearly solved. Our next approach attempts iterated Huffman coding. The idea is to use the Huffman code guarantee on the average length of codewords to successively winnow down the set of inputs which require long hints. Once this set is small enough (a quantity relating to ζ), our hint function can simply abort. This solution seems likely to work; all that remains is to work through the calculations.

The hint will have three parts: h_B(x, y) = (p, q, r). The first part p will describe how many iterations of the procedure to do; the second part q will be a Huffman encoding of a transcript π(x, y); r specifies some random coins. The simulator will work by receiving the hint, then using the p part to determine how to decode the q part. The procedure will begin with the uniform distribution over (X, y) (it could even begin with the actual distribution (X,Y) | Y = y, since we assume that Bob knows the input distribution) and Huffman code all transcripts. If the code for a correct transcript is short enough, we're finished; otherwise, we prune the distribution and Huffman code again. After a limited number of rounds we abort.

The hint procedure. (The hint function knows both x and y.)

  Set the input distribution µ_0 where Y = y is a singleton and X is uniform. Set i = −1.
  Fix ℓ (the limit on hint length).
  Pick uniform r ← R.
  repeat
    i = i + 1   (i is the iteration we're on)
    h_{i,π(x,y)} = Huffman encoding of π(x, y) according to µ_i, with random coins r
    µ_{i+1} = uniform over { (x, y) | Y = y, X s.t. |h_{i,π(x,y)}| > ℓ }
  until |h_{i,π(x,y)}| ≤ ℓ or i > ℓ
  if i > ℓ then
    return the "abort" hint
  else
    return the hint (i, h_{i,π(x,y)}, r)
  end if

The simulator procedure. The simulator receives a hint in the form (a, b, r) of total length ≤ 2ℓ + |r|. If the "abort" hint is received, the simulator returns any transcript. Otherwise, it decodes the hint as follows.

  Set the input distribution µ_0 where Y = y is a singleton and X is uniform. Set i = −1.
  repeat
    i = i + 1   (i is the iteration we're on)
    Huffman encode all π(x, y) with support in µ_i, according to µ_i, with random coins r
    µ_{i+1} = uniform over { (x, y) | Y = y, X s.t. |h_{i,π(x,y)}| > ℓ }
  until i = a
  t = Huffman decoding of b according to µ_i
  return (t, r)

The analysis of this hint procedure has several pieces.

Length of hint. The protocol parametrizes this so that the maximum hint has length ≤ 2ℓ (dis-

counting the length of the appended randomness; see remark 109). Whether this is ≤ IC^int_µ(π) is the

core of open problem 120. The particular parameters of ℓ for which this is possible will certainly be

related to the error parameter ζ.

Correctness. In non-abort cases, the simulator can perfectly decode the hint and obtain a transcript

π(x, y) and the random coins that accompany it. Thus the overall correctness of this procedure depends

entirely on the probability of aborting.

Abort probability. The target is to determine a value ζ which upper-bounds the probability of

aborting. This will yield an answer to open problem 120.

Open problem 124 How effective is the iterated Huffman procedure? That is, how are the values of ℓ

(the bound on the max hint length) and the abort probability related?



This is an ongoing avenue of research. Separating or collapsing average hints and max hints holds

implications for separating information from communication. This is the motivation behind proposing

that hint length serve as the worst-case approximate privacy measure in settings with error. As has

been explored in the pages above, the research literature in this area is unresolved. There is no general

consensus on what makes a "good" definition of worst-case approximate privacy in the ǫ-error setting.

The proposed hint measures seek to fill this absence, and justify their utility by easy comparisons with

other measures of privacy (as we will see in the next chapter).


Chapter 9

Comparing definitions of privacy

Given the plethora of approaches to privacy, it seems useful to gather the comparisons of the differing

approximate privacy measures and their motivations. In this chapter, we summarize approximate privacy

for two players (only!), with its various definitions and their interrelations.

In general, internal privacy loss (to the other player) is less than external privacy loss (to the eaves-

dropper). Average-case measures are usually comparable, whereas worst-case measures are often incom-

parable. Several diagrams and tables will summarize the knowledge spread throughout the preceding

pages.

Recall our formative questions from the introduction.

• Is the goal perfect privacy, or only a relaxation to an approximate version?

• Can we bound the worst-case privacy loss or the average-case privacy loss?

• Can the protocol make errors?

• Is the concern privacy loss to other players, or to an eavesdropper?

We have seen and compared many privacy definitions, and now stand well-equipped to sort them according to these themes. Figure 9.125 should prove helpful whenever one needs to decide which privacy measure to use.


What sort of privacy?
  perfect → t-private
  approximate → Which case?
    worst → Error?
      no → Privacy from?
        external → PAR^ext, strongly h-private
        internal → PAR^int
      yes → (internal only) weakly h-private, (δ, t)-private, MHL
    average → Error?
      no → Privacy from?
        external → avg_µ PAR^ext, PRIV^ext
        internal → avg_µ PAR^int, PRIV^int
      yes → Privacy from?
        external → avg_µ PAR^{ǫ,ext}, PRIV^{ǫ,ext}, IC^ext_µ
        internal → avg_µ PAR^{ǫ,int}, IC^int_µ, PRIV^{ǫ,int}, IC(f, ǫ), I_i(f), I_{c−i}(f), AHL

Figure 9.125: Flow chart: which privacy measure should I use?


definition | randomness? | error? | privacy type | theorems
avg_µ PAR^ext, avg_µ PAR^int | no | no | internal or external | ∝ PRIV: theorems 99, 100; tradeoff with CC(π): theorem 82
avg_µ PAR^{ǫ,ext} | yes | yes | internal or external | ∝ CC(π): lemma 67
information cost IC^int_µ(π), IC^ext_µ(π) | yes | yes | internal or external | int ∝ ext: lemma 74; ∝ PRIV: theorems 95 & 97
PRIV^int_µ(π), PRIV^ext_µ(π) | yes | no | internal or external | int ∝ ext: lemma 76; ∝ IC^int_µ(π): theorem 95; ∝ IC^ext_µ(π): theorem 97; ∝ PAR: theorems 99 & 100
PRIV^ǫ(π) | yes | yes | internal or external | ∝ PAR^ǫ: theorem 101
information complexity IC_µ(f, ǫ), IC(f, ǫ), IC_D(f, ǫ) | yes | yes | internal or external | comparing: lemma 79; ∝ I_i, I_{c−i}: lemma 98; ∝ AHL_µ(π): lemma 118; conjecture 119
additional information I_i(f), I_{c−i}(f) | yes | yes | internal only | ∝ each other: theorem 81; ∝ IC(f, ǫ): lemma 98
average hint length AHL_µ(π) | yes | yes | internal only | ∝ MHL: lemma 115; ∝ IC^int_µ(π): lemma 118; conjecture 119

Figure 9.126: Table comparing average privacy measures.

9.1 Average-case measures

Average-case measures of approximate privacy are generally comparable, with or without error. Whether

internal or external, and whether given the function value or not, most average-case measures are closely

related. The table in figure 9.126 permits a side-by-side inspection of the average-case measures.

In the deterministic or zero-error setting, PRIV, IC, and AHL all measure the privacy loss (roughly)

in terms of information. Average PAR measures something like information, although it is not quite the

same. (The theorems 99 and 100 comparing PRIV and PAR are not tight.) Deterministic or zero-error

measures are compared in table 9.126 and figure 9.127, where a→ b means a ≥ b and arrows are labeled

by theorem number. (Relative placement in the diagram should also help the reader decode which

measures dominate which other measures.)

When error is permitted, the comparison of measures is essentially the same; figure 9.128 looks nearly the same as figure 9.127. The only difference is that PAR and PRIV have been switched to their error-permitting counterparts. (All other measures can be considered over protocols with or without error, so the comparisons remain the same. Because the measures I_i(f) and I_{c−i}(f) are only defined when ǫ ≤ 1/2, they are omitted from the figure; lemma 98 shows that they are essentially equivalent to information complexity, so no critical information has been omitted.)

As [KLX13] illustrate, the informational measure PRIV(π) is smooth in the sense that an ǫ change (by statistical distance) in µ incurs at most an ǫn change in PRIV(π), whereas the quantity min_π avg_µ PAR^ǫ(π) can dramatically change over the same close distributions. (They give an example where the minimum PAR^ǫ over all protocols π balloons from constant to exponential.) This makes PAR, even with randomized and erring protocols, a more conservative measure in the sense that it heavily penalizes large privacy loss, even when it occurs with very low probability.



[Figure 9.127: Deterministic and zero-error average privacy measures compared. The original figure is a diagram of inequalities (an arrow a → b means a ≥ b, with arrows labeled by theorem number) relating CC(π), 2n, log avg_µ PAR^ext(π), log avg_µ PAR^int(π), PRIV^ext_µ(π) ± log |Z|, PRIV^int_µ(π) ± log |Z|, AHL^int_µ(π), IC^ext_µ(π), IC^int_µ(π), and IC_µ(f).]

[Figure 9.128: Error-permitting average privacy measures compared. The original figure has the same structure as figure 9.127, with PAR and PRIV replaced by their ǫ-error counterparts (avg_µ PAR^{ǫ,ext}, avg_µ PAR^{ǫ,int}, PRIV^{ǫ,ext}_µ, PRIV^{ǫ,int}_µ).]


9.2 Separating information from communication

One main open problem in this area asks whether communication complexity can be compressed to

(nearly) information complexity.

Open problem 129 [BBCR10] Given a communication protocol π over distribution µ with communication cost C, is there a generic way to convert it into a new protocol with communication only IC^int_µ(π) · poly log C?

The answer to this question has strong implications for direct-sum theorems in randomized commu-

nication complexity. A separation (or collapse) of the inequalities in figures 9.127 and 9.128 would be

a major step towards answering this question. PAR is poised in the center of these towers of inequal-

ities. Other approaches to lower bounds for communication and information use alternative measures,

e.g. zero communication protocols. Almost all lower bounds currently known on probabilistic commu-

nication complexity can be used to give lower bounds on information complexity [KLL+12]. The AND

function has a series of protocols which converge to an information-optimal protocol, but this unfortu-

nately has an infinite number of rounds [BGPW10]. The separation of information from communication

remains a compelling question in this area.

A weaker hierarchy is established by hints.

IC^int_µ(π) ≤ AHL_µ(π) ≤ MHL(π, ζ = 0) ≤ CC(π)    (9.2.1)

The discussion in section 8.3 already suggests that it may be possible to separate average from maximum

hint length. It may still be independently interesting to examine this series of inequalities.


definition | randomness? | error? | privacy type | theorems
perfectly private (perfect privacy) | yes | yes | internal only | Boolean functions: theorems 19 & 21; any function: theorems 22 & 23; if k = 2: theorems 13 & 15
PAR, PAR_sub | no | no | internal or external | tradeoff with CC(π): theorem 51
strongly h-private | yes | no | internal only | strongly ∝ weakly: lemma 46
weakly h-private | yes | yes | internal only |
(δ, t)-private w/ ǫ error | yes | yes | internal only | Boolean ∝ t-private: theorem 43
I_c(f) | yes | yes | internal only | ∝ I_i(f), I_{c−i}(f): theorem 81
MHL(π, ζ) | yes | yes | internal only | ∝ AHL: lemma 115; ∝ weak h-privacy: lemma 113; ∝ IC: question 120

Figure 9.130: Table comparing worst-case privacy measures.

9.3 Worst-case measures

Worst-case measures of approximate privacy vary. Some consider error, randomness, or eavesdroppers;

some do not. More particularly, the “worst” part of “worst-case” is far from consensus. (In contrast

with the average case, where nearly every measure is ≈ IC.) What does “worst” refer to, and how should

it be measured?

Researchers have not generally attempted to compare worst-case measures. It is easy to see why.

When defining an average-case measure, one must pick some quantity to examine the average of. Not so

with worst-case measures. Some worst-case measures give us a quantified measurement (PAR, I_c, and MHL), but other worst-case privacy "measures" are actually parametrized properties. A function either

is or is not perfectly private; strongly/weakly h-private; or (δ, 2)-private with ǫ error. We may call it a

“measure of approximate privacy”, but actually these definitions are binary indicators. And although

the parameters of these latter privacy properties can be used as measurements, this does not prove

particularly insightful. Measuring the range of the function h such that f is strongly/weakly h-private

turns out not to be effective (or true! remember section 8.1). And t-privacy is relatable to (δ, t)-privacy

with ǫ error for particular functions and values of ǫ and δ; these are neither intuitive nor illuminating.

This means that a tidy summary diagram like figure 9.127 is not forthcoming, although the measures can be listed for side-by-side comparison of their features in a table (figure 9.130). At best we can say that each worst-case measure relaxes perfect privacy, and usually has a trivial upper bound. (PAR^ext(π) ≤ 2^{2n}, PAR^int(π) ≤ 2^{2n}, and MHL(π, ζ) ≤ CC(π). Every f is strongly h-private for an h with range of size 2^n. Every deterministic protocol is either (0, 2)-private or (1, 2)-private.)

Worst-case measures are not widely relatable. We come up short on nontrivial comparisons of worst-case measures to their average-case counterparts (where they exist). Theorem 43 (which is more than twenty years old) represents the most substantive comparison, and it is limited to Boolean functions.

• avg_µ PAR^ext(π) ≤ PAR^ext(π)

• avg_µ PAR^int(π) ≤ PAR^int(π)

• AHL_µ(π) ≤ MHL(π, ζ) by lemma 115.

• Lemma 46 variously relates f being strongly/weakly h-private to f being strongly/weakly fh-private.


• f is perfectly (2-)private ⇔ f is weakly f-private ⇔ f is strongly f-private, by proposition 106 and lemma 47.

• Theorem 81 shows that I^det_i ≤ I^det_{c−i} ≤ I^det_c and I_i ≤ I_{c−i} ≤ I_c.

• Theorem 43 nontrivially relates 2-privacy to (δ, 2)-privacy with ǫ error, subject to some conditions on δ, ǫ, and f.

We also know that some of these are incomparable. For example, disproof 105 shows both: f is perfectly (2-)private ⇏ f is weakly h-private for some h such that |range(f)| = |range(h)|, and f is perfectly (2-)private ⇍ f is weakly h-private for some h such that |range(f)| = |range(h)|.

Each worst-case measure defines a hierarchy. For perfect privacy, the two-player privacy hierarchy is thoroughly characterized (see chapter 2). Chapter 4 uses Vickrey auctions to describe the (separate!) levels of the PAR privacy hierarchy.


Chapter 10

Conclusion

These techniques hold the promise of similar length-privacy tradeoffs for other functions. Further possible extensions include settings with more players, randomization, and ǫ-error. With the restriction of perfect privacy for two-player functions, [Kus89] shows that the set of functions with deterministic protocols and the set of functions with randomized protocols are the same. Perhaps there is a similar result for any fixed constant PAR, or perhaps, as the PAR requirement is relaxed, the two sets gradually differ. Privacy optimists might aim to show that randomization makes most functions privately computable with PAR 1 + ǫ, nearly perfect privacy; a negative result would show that even approximate privacy is not achievable for some functions.

There are several related fields beyond the scope of this project.

Differential privacy. Consider a different privacy scenario in which a trusted curator manages a

large collected database of many items of private information (e.g., a hospital with health records). If

this curator wants to release some functions of the database (like statistics), it will need to do so in a way

which does not compromise the privacy of individuals in the database. One guarantee of such protection

is differential privacy, a property which roughly makes the promise that the inclusion/exclusion of

one individual’s data from the collection does not affect the released statistics (much) [Dwo08]. The

motivation for this definition is to justify the choice of each participant to allow their private information

to be included in the statistic. If the released statistic doesn’t change much, then not much can be inferred

about that individual’s private data.

This represents another type of tradeoff for privacy: differential privacy trades accuracy (error) in return for a partial privacy guarantee ("partial" here simply denotes "not perfect"). Differential privacy is related to the information-theoretic concepts above, and can be considered in the two-party communication setting [MMP+11].

Game theory/mechanism design. The Vickrey auction problem is widely studied because it

possesses the property of truthfulness: neither player has incentive to lie about his input. This mechanism

design concept is defined in a setting where the players are sending their inputs to some trusted third

party, who computes the output. (Recall that this differs from the communication complexity concept

of “honest-but-curious” players.) Vickrey auctions form a canonical example of a truthful mechanism.

One strong reason to study the privacy of Vickrey auctions and other problems in the communication

setting is the difficulty of finding a truly trusted third party. However, once the computation of the

function f is distributed amongst several players who must communicate for many rounds, even Vickrey



auctions lose their truthfulness: players (prompted by standard greed or more nuanced privacy concerns)

may decide to stop participating, or begin participating dishonestly, during the protocol [GHMV05].

There is a known gap between truthfulness and computational efficiency [PSS08]; selfish behavior is also

known to increase communication complexity [FS09].

Given these considerations, it makes sense to revisit our early assumption that players are honest-but-

curious. Mechanism design provides a wealth of results and problems in which players are self-interested.

They may seek to affect the outcome of the function, or learn about the other inputs, in addition to

attempting to protect their own privacy. Rather than being constrained to follow the protocol, players

may choose to stop participating, or participate maliciously. [SB11] study a relaxation of PAR which

attempts to preserve the (close-to-)truthfulness of a mechanism while examining its approximate privacy

loss.

This additional degree of freedom in the model raises new questions: Which functions are com-

putable, and at what cost of communication complexity, privacy, and accuracy? Several approaches to

this problem have examined mechanism design in the light of differential privacy [MT07, MMP+11]. In

some settings it is possible to convert truthful mechanisms into truthful mechanisms which are differen-

tially private, but overall differential privacy is not sufficient to motivate truthful behavior in arbitrary

mechanisms [Xia11]. Maximizing value exactly may require the revelation of additional information

(and hence loss of privacy), although approximating this maximum is possible in certain scenarios with

dramatically less privacy loss [NS06]. A mechanism which considers (differential) privacy loss to have

cost (in the mechanism-design sense) can be severely limited in accuracy [GR11].

Cryptography studies similar settings to our own, with eavesdroppers overhearing messages sent

between parties on a channel. There is an apparent similarity between secure multiparty computation

[GMW87] and privacy in communication complexity, but the two types of research have very different

concerns. It nevertheless may be useful to attempt to bridge this divide and connect the two areas,

especially in privacy settings where players have randomness (as they do in most cryptographic scenarios).


10.1 Open problems

We conclude with some new and some restated open problems.

Open problem 60 Can theorem 51 be tightened to match the upper bound of lemma 59? That is,

does Vickrey auction represent a function for which tight worst-case privacy-communication tradeoffs

are provable?

Open problem 114 Fix protocol π for f : X × Y → Z. If π has max hint length ℓ, then is it the case that there exist functions h_A and h_B such that:

• log |range(h_A)| ≤ ℓ, and

• log |range(h_B)| ≤ ℓ, and

• π is weakly (h_A, h_B)-private for f?

That is, is there another protocol π′ which is weakly (h_A, h_B)-private for f?

Open problem 116 Is there any function f which separates average hint and max hint? That is, is

there some f and ǫ such that

average hint length ≪ max hint length

for all protocols π computing f with error ≤ ǫ?

This problem holds implications for the information versus communication problem.

Open problem 120 (short max hints for any π) Fix distribution µ and protocol π for f : X×Y →Z with error ǫ ≥ 0. Then for which values of ζ ≥ 0 is it the case that

MHL(π, ζ) ≤ IC^int_µ(π)?
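
As a reminder of the quantity on the right-hand side (stated here in the standard form, which is assumed to coincide with the definition used earlier in the thesis): IC^int_µ(π) = I(X; Π | Y) + I(Y; Π | X), where Π denotes the transcript of π on inputs (X, Y) drawn from µ.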

Open problem 124 How effective is the iterated Huffman procedure? That is, how are the values of ℓ

(the bound on the max hint length) and the abort probability related?
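
The following Python sketch is not the iterated Huffman procedure itself; it is a simplified stand-in, under the assumption that the distribution over hints is known in advance, which builds an ordinary Huffman code and treats every hint whose codeword exceeds ℓ bits as an abort. It only illustrates how the abort probability can shrink as ℓ grows; the hint distribution below is hypothetical.

import heapq

def huffman_code(dist):
    # Ordinary binary Huffman code for a distribution {symbol: probability}.
    heap = [(p, i, [s]) for i, (s, p) in enumerate(dist.items())]
    heapq.heapify(heap)
    code = {s: "" for s in dist}
    next_id = len(heap)
    while len(heap) > 1:
        p0, _, left = heapq.heappop(heap)
        p1, _, right = heapq.heappop(heap)
        for s in left:
            code[s] = "0" + code[s]   # prepend a bit as the subtrees merge
        for s in right:
            code[s] = "1" + code[s]
        heapq.heappush(heap, (p0 + p1, next_id, left + right))
        next_id += 1
    return code

def abort_probability(dist, max_len):
    # Probability mass of hints whose codeword is longer than max_len;
    # in this simplified reading, the protocol aborts on exactly these hints.
    code = huffman_code(dist)
    return sum(p for s, p in dist.items() if len(code[s]) > max_len)

# Hypothetical hint distribution.
hints = {"h1": 0.5, "h2": 0.25, "h3": 0.125, "h4": 0.0625, "h5": 0.0625}
for ell in range(1, 5):
    print(ell, abort_probability(hints, ell))

For the dyadic distribution above the abort probability halves with each additional allowed bit; characterizing this tradeoff for the actual iterated procedure is what open problem 124 asks.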

Open problem 129 [BBCR10] Given a communication protocol π over distribution µ with communi-

cation cost C, is there a generic way to convert it into a new protocol with communication only IC^int_µ(π) · polylog(C)?

Open problem 131 Is there a randomized version of theorem 13; that is, is there a way to convert any

randomized two-player protocol with approximate privacy K into a deterministic protocol with approxi-

mate privacy K (for any measure of approximate privacy)? Alternatively, is there a two-player function

which has a randomized protocol with approximate privacy K but does not have any deterministic protocol

with approximate privacy K?

Open problem 132 Is a combinatorial characterization of approximate privacy (in the style of theo-

rem 13) possible for two-player functions?


Open problem 133 For two players, is it the case that any function f which is computable with ap-

proximate privacy Q (for some measure of approximate privacy) is computable deterministically with

approximate privacy Q? That is, does randomness help to make more functions approximately private?

Does it help to shorten protocols?

Recall that randomness does not help with two-player perfect privacy.


Bibliography

[ACC+12] Anil Ada, Arkadev Chattopadhyay, Stephen A. Cook, Lila Fontes, Michal Koucky, and Toniann Pitassi. The Hardness of Being Private. In Conference on Computational Complexity, 2012.

[Bae07] Michael B. Baer. D-ary Bounded-Length Huffman Coding. arXiv preprint cs/0701012v2, 2007.

[BBCR10] Boaz Barak, Mark Braverman, Xi Chen, and Anup Rao. How to compress interactive communication. ACM Symposium on the Theory of Computing, 2010.

[BCNW06] Amos Beimel, Paz Carmi, Kobbi Nissim, and Enav Weinreb. Private approximation of search problems. ACM Symposium on the Theory of Computing, pages 119–128, 2006.

[BGPW10] Mark Braverman, Ankit Garg, Denis Pankratov, and Omri Weinstein. From Information to Exact Communication. Electronic Colloquium on Computational Complexity, TR12-171, 2010.

[BOGW88] Michael Ben-Or, Shafi Goldwasser, and Avi Wigderson. Completeness Theorems for Non-Cryptographic Fault-Tolerant Distributed Computation. ACM Symposium on the Theory of Computing, pages 1–10, 1988.

[BR10] Mark Braverman and Anup Rao. Information Equals Amortized Communication. (submitted), 2010.

[Bra11] Mark Braverman. Interactive information complexity. Electronic Colloquium on Computational Complexity, (123), 2011.

[BS08] Felix Brandt and Tuomas Sandholm. On the Existence of Unconditionally Privacy-Preserving Auction Protocols. ACM Transactions on Information and System Security, 11(2):1–21, May 2008.

[BYCKO93] Reuven Bar-Yehuda, Benny Chor, Eyal Kushilevitz, and Alon Orlitsky. Privacy, Additional Information, and Communication. IEEE Transactions on Information Theory, 39:55–65, 1993.

[BYJKS04] Ziv Bar-Yossef, T. S. Jayram, Ravi Kumar, and D. Sivakumar. An information statistics approach to data stream and communication complexity. Journal of Computer and System Sciences, 68(4):702–732, June 2004.


[CDSS11] Marco Comi, Bhaskar Dasgupta, Michael Schapira, and Venkatakumar Srinivasan. On Communication Protocols That Compute Almost Privately. Symposium on Algorithmic Game Theory (SAGT), pages 44–56, 2011.

[CGGK94] Benny Chor, Mihaly Gereb-Graus, and Eyal Kushilevitz. On the structure of the privacy hierarchy. Journal of Cryptology, 7(1):53–60, 1994.

[CK89] Benny Chor and Eyal Kushilevitz. A Zero-One Law for Boolean Privacy (extended abstract). In ACM Symposium on the Theory of Computing, pages 62–72, 1989.

[CK91] Benny Chor and Eyal Kushilevitz. A Zero-One Law for Boolean Privacy. SIAM Journal of Discrete Math, 4:36–47, 1991.

[CT91] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley-Interscience, New York, 1991.

[CWYS01] Amit Chakrabarti, Anthony Wirth, Andrew Yao, and Yaoyun Shi. Informational complexity and the direct sum problem for simultaneous message complexity. IEEE Symposium on Foundations of Computer Science, pages 270–278, 2001.

[Dwo08] Cynthia Dwork. Differential Privacy: A Survey of Results. Lecture Notes in Computer Science, 4978:1–19, 2008.

[FJS10a] Joan Feigenbaum, Aaron D. Jaggard, and Michael Schapira. Approximate Privacy: Foundations and Quantification. ACM Conference on Electronic Commerce, pages 167–178, 2010.

[FJS10b] Joan Feigenbaum, Aaron D. Jaggard, and Michael Schapira. Approximate Privacy: PARs for Set Problems. DIMACS Technical Report 2010-01, pages 1–34, 2010.

[FRPU94] Uriel Feige, Prabhakar Raghavan, David Peleg, and Eli Upfal. Computing with noisy information. SIAM Journal on Computing, 23(5):1001–1018, 1994.

[FS09] Ronald Fadel and Ilya Segal. The communication cost of selfishness. Journal of Economic Theory, 144(5):1895–1920, September 2009.

[Gag03] Travis Gagie. New Ways to Construct Binary Search Trees. In 14th International Symposium on Algorithms and Computation (ISAAC), pages 537–543, 2003.

[GHMV05] Elena Grigorieva, P. Jean-Jacques Herings, Rudolf Muller, and Dries Vermeulen. The private value single item bisection auction. Journal of Economic Theory, 30(1):107–118, November 2005.

[GMW87] Oded Goldreich, Silvio Micali, and Avi Wigderson. How to play any mental game. ACM Symposium on the Theory of Computing, pages 218–229, 1987.

[GR11] Arpita Ghosh and Aaron Roth. Selling Privacy at Auction. In ACM Conference on Electronic Commerce, pages 119–208, 2011.


[HJMR07] Prahladh Harsha, Rahul Jain, David McAllester, and Jaikumar Radhakrishnan. The Communication Complexity of Correlation. Twenty-Second Annual IEEE Conference on Computational Complexity (CCC'07), pages 10–23, June 2007.

[Kla02] Hartmut Klauck. On quantum and approximate privacy. Symposium on Theoretical Aspects of Computer Science, 2002.

[KLL+12] Iordanis Kerenidis, Sophie Laplante, Virginie Lerays, Jeremie Roland, and David Xiao. Lower bounds on information complexity via zero-communication protocols and applications. 2012.

[KLX13] Iordanis Kerenidis, Mathieu Lauriere, and David Xiao. New lower bounds for privacy in communication protocols. In Electronic Colloquium on Computational Complexity, volume 15, 2013.

[KN97] Eyal Kushilevitz and Noam Nisan. Communication Complexity. Cambridge University Press, 1997.

[Kus89] Eyal Kushilevitz. Privacy and communication complexity. IEEE Symposium on Foundations of Computer Science, pages 416–421, 1989.

[LH90] Lawrence L. Larmore and Daniel S. Hirschberg. A fast algorithm for optimal length-limited Huffman codes. Journal of the ACM, 37(3):464–473, July 1990.

[Meh75] Kurt Mehlhorn. Nearly Optimal Binary Search Trees. Acta Informatica, 5:287–295, 1975.

[MMP+11] Andrew McGregor, Ilya Mironov, Toniann Pitassi, Omer Reingold, Kunal Talwar, and Salil Vadhan. The Limits of Two-Party Differential Privacy. Electronic Colloquium on Computational Complexity, 106(106), 2011.

[MT07] Frank McSherry and Kunal Talwar. Mechanism Design via Differential Privacy. IEEE Symposium on Foundations of Computer Science, 2007.

[NS06] Noam Nisan and Ilya Segal. The communication requirements of efficient allocations and supporting prices. Journal of Economic Theory, 129(1):192–224, July 2006.

[PSS08] Christos H. Papadimitriou, Michael Schapira, and Yaron Singer. On the Hardness of Being Truthful. pages 1–10, 2008.

[SB11] Xin Sui and Craig Boutilier. Efficiency and Privacy Tradeoffs in Mechanism Design. In Proceedings of the 25th Annual Conference on Artificial Intelligence (AAAI), pages 738–744, 2011.

[Sha48] C. E. Shannon. A Mathematical Theory of Communication. The Bell System Technical Journal, 27:379–423, 623–656, July and October 1948.

[Xia11] David Xiao. Is privacy compatible with truthfulness? 2011.

[Yao77] Andrew Chi-chih Yao. Probabilistic Computations: Toward a Unified Measure of Complexity. IEEE Symposium on Foundations of Computer Science, pages 222–227, 1977.


[Yao79] Andrew Chi-chih Yao. Some Complexity Questions Related to Distributive Computing. ACM Symposium on the Theory of Computing, pages 209–213, 1979.


Index

·r notation, 78

| · |µ notation, 47

2-private, see perfectly private

2nd-price auction, see Vickrey auction

additional information, 33, see Ic(f), 56

AHL(f, ǫ), 78, 82

AHLµ(π), 78, 82

average hint length

of a function, 78

of a protocol, 78

avgµ PAR(π), 47, 70

avgµ PARint(π), 48

avgµ PARǫ(π), 50

Ball Partition Problem, 61

binary search, 31, 36

bisection protocol, 23, 36

k bisection protocol, 58

Bisenglish protocol, 43, 44

coalitions, 17

communication cost, 8

last message, 8

corners lemma, 13

cryptography, 96

cut, 59

cutπ(R), 59

decomposable, see matrix

(δ, t)-private with ǫ error, 33

differential privacy, 95

distinguishability, 12

English auction, 21, 22, 36

entropy, 51

conditional, 51

relative, see Kullback-Leibler divergence

EQ(uality) function, 31

error

distributional, 10, 55

standard, 10

worst-case, 10

zero, 10

game theory, 95

GT (>) function, 31

H(X), 51

h-private

different from t-private, 28

strongly, 28, 74

weakly, 33, 74

hashing, 31

hierarchy

average-case approximate privacy, 66

information & communication, 92

perfect privacy, 14

worst-case approximate privacy, 45

honest-but-curious, 2, 11, 16, 95

Huffman code

bounds, 84

iterated, 85

I(X;Y), 51

Ic(f), 35

ICD(f, ǫ), 55

Idetc (f), 35

ICextµ (π), 53, 69

IC(f, ǫ), 55, 69, 82

Ic−i(f), 56, 69

ICintµ (π), 53, 68, 82, 83

ICµ(f, ǫ), 55

Ii(f), 56, 69

information complexity, 55


max-distributional, 55

prior-free, 55

information cost, 53

external, 53

DKL(P ||Q), 52

Kullback-Leibler divergence, 52

matrix

decomposable, 13

forbidden, 13

Mf , 9

max hint, 79

mechanism design, 21, 95

MHL(π, ζ), 79, 83

monochromatic, 9

mutual information

conditional, 51


definition, 51

number-in-hand model, 8

number-on-forehead model, 8

PAR

average-case, external, 47

average-case, internal, 48

internal, 48

tradeoff, 37

with error (PARǫ), 48, 50

worst-case, 36

worst-case external, 27

worst-case internal, 27

partition lemma, 18

partitions, 9

perfectly private, 11

PRIV, 54

PRIVǫ, 71

privacy

differential, see differential privacy

external, 11

internal, 11

perfect, see perfectly private

privacy approximation ratio, see PAR

PRIVextµ (π), 54, 69, 70

PRIVǫ,extµ (π), 71

PRIVintµ (π), 54, 68

PRIVǫ,intµ (π), 71

progress, 40

protocol, 8

erring, 9

protocol tree, 8

randomization, 9

rectangle, 9

protocol-induced, 26

region, 26

rejection sampling, 84

relative entropy, see Kullback-Leibler divergence

set intersection, 72

statistical distance, 51, 78

strategy, 40

strongly h-private, see h-private, strongly

synchronous, 8

t-private, 17

different from h-private, 28

tradeoff

average-case approximate privacy, 58

worst-case approximate privacy, 37

tree, see protocol tree

truthful, 2, 21, 96

useless, 40

Vickrey auction, 21

average-case PAR, 58

definition, 21

worst-case PAR, 36

weakly h-private, see h-private, weakly

wrong code, 84

zero error, see error