

Delegatable Pseudorandom Functions and Applications

Aggelos Kiayias
Dept. of Informatics & Telecommunications
National and Kapodistrian U. of Athens, Greece
[email protected]

Stavros Papadopoulos
Dept. of Computer Science & Engineering
HKUST, Hong Kong
[email protected]

Nikos Triandopoulos
RSA Laboratories
Cambridge MA, USA
[email protected]

Thomas Zacharias
Dept. of Informatics & Telecommunications
National and Kapodistrian U. of Athens, Greece
[email protected]

ABSTRACT

We put forth the problem of delegating the evaluation of a pseudorandom function (PRF) to an untrusted proxy and introduce a novel cryptographic primitive called delegatable pseudorandom functions, or DPRFs for short: A DPRF enables a proxy to evaluate a pseudorandom function on a strict subset of its domain using a trapdoor derived from the DPRF secret key. The trapdoor is constructed with respect to a certain policy predicate that determines the subset of input values which the proxy is allowed to compute. The main challenge in constructing DPRFs is to achieve bandwidth efficiency (which mandates that the trapdoor is smaller than the precomputed sequence of the PRF values conforming to the predicate), while maintaining the pseudorandomness of unknown values against an attacker that adaptively controls the proxy. A DPRF may be optionally equipped with an additional property we call policy privacy, where any two delegation predicates remain indistinguishable in the view of a DPRF-querying proxy: achieving this raises new design challenges, as policy privacy and bandwidth efficiency are seemingly conflicting goals.

For the important class of policy predicates described as (1-dimensional) ranges, we devise two DPRF constructions and rigorously prove their security. Built upon the well-known tree-based GGM PRF family [17], our constructions are generic and feature only logarithmic delegation size in the number of values conforming to the policy predicate. At only a constant-factor efficiency reduction, we show that our second construction is also policy private. Finally, we describe how the new security and efficiency properties of our DPRF schemes render them particularly useful in numerous security applications, including RFID, symmetric searchable encryption, and broadcast encryption.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
CCS’13, November 4–8, 2013, Berlin, Germany.
Copyright 2013 ACM 978-1-4503-2477-9/13/11 ...$15.00.
http://dx.doi.org/10.1145/2508859.2516668.

Categories and Subject Descriptors

C.2.4 [Communication Networks]: Distributed Systems—client/server, distributed applications; E.3 [Data Encryption]; G.3 [Probability and Statistics]: random number generation; K.6.5 [Management of Computing and Information Systems]: Security and Protection—authentication, cryptographic controls

General Terms

Algorithms, Security, Theory

Keywords

delegation of computation; pseudorandom functions; RFIDs; searchable encryption; broadcast encryption; authentication

1. INTRODUCTION

Due to its practical importance, the problem of securely delegating computational tasks to untrusted third parties comprises a particularly active research area. Generally speaking, secure delegation involves the design of protocols that allow the controlled authorization for an—otherwise untrusted—party to compute a given function while achieving some target security property (e.g., the verifiability of results or privacy of inputs/outputs) and also preserving the efficiency of the protocols so that the delegation itself remains meaningful. Beyond protocols for the delegation of general functionalities (e.g., [18, 33]), a variety of specific cryptographic primitives have been considered in this context (see related work in Section 2).

Surprisingly enough, pseudorandom functions (PRFs), a fundamental primitive for emulating perfect randomness via keyed functions and thus one with numerous applications in information security, have not been explicitly studied in the context of delegation of computations (of PRF values). We hereby initiate a study on this matter.

A new PRF concept. We introduce delegatable pseudorandom functions, or DPRFs for short, a novel cryptographic primitive which enables the delegation of the evaluation of a PRF to an untrusted proxy according to a given predicate that defines the inputs on which the proxy will evaluate the PRF.

Specifically, let F be a PRF family, and P a set of predicates, called the delegation policy, defined over the domain of F. A DPRF is a triplet (F, T, C) constructed with respect to P, which provides the elementary functionality shown in Figure 1: For any secret key k and a predicate P ∈ P, the delegator computes a trapdoor τ via algorithm T, and τ is transmitted to the proxy. The latter then runs algorithm C on τ to derive exactly the set of PRF values BP = {fk(x) | P(x)}, where fk ∈ F, i.e., the PRF value on every input x satisfying predicate P, overall correctly enabling the evaluation of PRF fk subject to predicate P without explicit knowledge of secret key k, or even the input values AP = {x | P(x)}.

Figure 1: The DPRF functionality. (The delegator, holding k and P, computes τ via T and sends it to the proxy, which runs C on τ to obtain BP = {fk(x) | P(x)}.)

What motivates the above scheme is bandwidth efficiency: As long as the trapdoor τ is sublinear in the size |BP| of delegated PRF values, the delegation is meaningful as the delegator conserves resources (otherwise, the proxy could be provided directly with all the PRF values BP).

At the same time, the DPRF must retain the security properties of the underlying PRF, namely, (i) pseudorandomness for any value x conforming to the delegation predicate P, i.e., P(x), and (ii) unpredictability for any nonconforming value x such that ¬P(x). In addition, a DPRF can optionally satisfy a policy-privacy property which prevents the proxy from inferring information about P or the delegated (input) set AP from the trapdoor τ.

Our definitional framework. We introduce a formal definitional framework for the new primitive of DPRFs, carefully capturing all the above technical requirements. We first rigorously define the correctness and security requirements that any DPRF should meet. Correctness captures the ability of the proxy to successfully evaluate the PRF on exactly those inputs (set AP) specified by predicate P. Security captures the requirement that the delegation capabilities of the PRF do not compromise its core pseudorandomness property. But this condition goes beyond the standard security definition of PRFs, since the pseudorandomness attacker, in addition to the PRF oracle, may now also adaptively query a trapdoor oracle for delegation on predicates of its choice.

Equally important is the policy-privacy property that a DPRF may optionally satisfy, intuitively capturing the inability of a malicious proxy to learn any (non-trivial) property about the delegation set AP (that is not implied by BP). Our security notion postulates that for any two predicates P, P′ the corresponding trapdoors are indistinguishable, provided that |AP| = |AP′| and the adversary issues no delegation queries that trivially separate BP and BP′.

GGM-based realization for range predicates. We devise two bandwidth-efficient and provably secure DPRF constructions for the case where the delegation policy contains predicates described by 1-dimensional ranges. Range-based policies are an interesting use case, as many applications maintain an ordering over the PRF inputs, and delegation rights are defined with respect to ranges of such inputs.

Our first DPRF scheme is called best range cover, or BRC for short, and relies on the well-known GGM PRF family [17]. This family defines a PRF based on the hierarchical application of any length-doubling pseudorandom generator (PRG) according to the structure induced by a tree, where input values are uniquely mapped to root-to-leaf paths. By exploiting the above characteristics, our BRC scheme features logarithmic delegation size in the number |BP| of values conforming to the policy predicate, by having the trapdoor τ comprise a subset IP of internal PRG values that optimally cover the target range BP of PRF values. We provide a formal security proof for the above scheme which, interestingly, does not trivially follow from the GGM security proof. The reason is that the adversary can now employ delegation queries to learn any internal PRG value in the tree, and not simply leaf PRF values as in the original security game in [17]. We note that, although similar “range-covering” GGM-based constructions appear in the literature, e.g., in [30], no formal security analysis has been given.
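As an illustration, the range-cover delegation idea can be sketched in a few lines of Python. This is a minimal sketch, not the paper's exact scheme: SHA-256 with a domain-separation byte stands in for the length-doubling PRG (an assumption made purely for illustration; any secure PRG works).

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    # Stand-in length-doubling PRG: G(s) = (G0(s), G1(s)).
    return (hashlib.sha256(seed + b"\x00").digest(),
            hashlib.sha256(seed + b"\x01").digest())

def ggm_eval(key: bytes, x: int, n: int) -> bytes:
    # GGM PRF: walk the depth-n tree along the n-bit binary expansion of x.
    node = key
    for i in range(n - 1, -1, -1):
        node = prg(node)[(x >> i) & 1]
    return node

def brc_trapdoor(key: bytes, a: int, b: int, n: int):
    # Best range cover: the minimal set of maximal aligned subtrees covering
    # [a, b]; each entry is (partial GGM value, remaining subtree height).
    cover = []
    def walk(node: bytes, lo: int, hi: int, height: int):
        if lo > b or hi < a:
            return                      # subtree disjoint from the range
        if a <= lo and hi <= b:
            cover.append((node, height))  # subtree fully inside the range
            return
        mid = (lo + hi) // 2
        left, right = prg(node)
        walk(left, lo, mid, height - 1)
        walk(right, mid + 1, hi, height - 1)
    walk(key, 0, 2 ** n - 1, n)
    return cover

def expand(trapdoor):
    # Proxy side: derive all delegated leaf PRF values from the cover nodes.
    out = []
    def leaves(node: bytes, height: int):
        if height == 0:
            out.append(node)
            return
        left, right = prg(node)
        leaves(left, height - 1)
        leaves(right, height - 1)
    for node, height in trapdoor:
        leaves(node, height)
    return out
```

The trapdoor contains O(log |BP|) partial PRG values, yet the proxy recovers exactly the delegated leaf values and nothing outside the range.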

However, our BRC scheme does not satisfy policy privacy, as the structure of the PRG values in IP leaks information about predicate P. This motivates our second construction, called uniform range cover, or URC for short. This scheme augments BRC in a way that renders all trapdoors corresponding to ranges of the same size indistinguishable. This is achieved by carefully having τ comprise a subset I′P of PRG values that cover the target range BP of PRF values less optimally: I′P contains PRG values that are descendants of those values in IP, at a tree height that depends solely on |BP| (which is by definition leaked to the adversary). More interestingly, by adopting the above change, URC retains both the asymptotic logarithmic bandwidth complexity of BRC and its DPRF security, but it crucially achieves a policy-privacy notion appropriately relaxed to our setting of range predicates. Inherently, as we argue, no efficient tree-based DPRF scheme can satisfy policy privacy, so we have to resort to a sufficient relaxation which we call union policy privacy.
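To see the leakage concretely: the shape of a BRC trapdoor is determined by the multiset of heights of the maximal aligned subtrees covering [a, b], and this shape varies among ranges of equal size. The following self-contained sketch (purely combinatorial; the tree parameters are illustrative) computes that height profile.

```python
def dyadic_cover(a: int, b: int, n: int):
    # Heights of the maximal aligned subtrees covering [a, b] inside a
    # depth-n binary tree over [0, 2^n - 1] (the shape of a BRC trapdoor).
    heights = []
    def walk(lo: int, hi: int, h: int):
        if lo > b or hi < a:
            return                 # disjoint subtree
        if a <= lo and hi <= b:
            heights.append(h)      # maximal fully-contained subtree
            return
        mid = (lo + hi) // 2
        walk(lo, mid, h - 1)
        walk(mid + 1, hi, h - 1)
    walk(0, 2 ** n - 1, n)
    return heights
```

For instance, [0, 3] and [1, 4] both contain four leaves, yet their covers have height profiles [2] and [0, 1, 0] respectively, so a proxy can tell the two ranges apart from the trapdoor shape alone; URC removes this disparity by replacing each cover node with descendants at heights determined only by the range size.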

Main applications. Finally, our DPRF schemes, equipped with bandwidth efficiency, security and policy privacy (URC only), lend themselves to new schemes that provide scalable solutions to a wide range of information security and applied cryptography settings involving the controlled authorization of PRF-based computations. Generally, DPRFs are particularly useful in applications that rely on the evaluation of (secret) key-based cryptographic primitives on specific inputs (according to an underlying policy): Using a DPRF scheme then allows cost-efficient, yet secure and private, key management for an untrusted proxy who is otherwise capable of executing a particular computational task.

We outline several such applications in which DPRFs are useful, including authentication and access control in RFIDs, efficient batch querying in searchable encryption, as well as broadcast encryption. Due to the underlying GGM building component, our DPRFs are extremely lightweight, as their practical implementation entails a few repeated applications of any efficient candidate instantiation of a PRG that is length doubling.

Summary of contributions. Our main contributions are:

• We initiate the study of policy-based delegation of the task of evaluating a pseudorandom function on specific input values, and introduce the concept of delegatable PRFs (DPRFs).


• We develop a general and rigorous definitional framework for the new DPRF primitive, capturing properties such as bandwidth efficiency, correctness, security and policy privacy, and also offering a relaxed union policy privacy that is arguably necessary for tree-wise DPRF constructions.

• We present a framework for building DPRF schemes for the important case where the delegation policy is governed by range predicates over inputs; our framework augments the generic GGM construction framework of [17] to provide two concrete DPRF schemes, namely BRC and URC.

• We prove the security of our constructions in a modular way, thus also yielding the first security analysis of similar GGM-based key-delegation schemes, and the union-policy privacy of URC.

• We describe several key applications of DPRFs in the context of efficient key-delegation protocols for authentication, access control and encryption purposes.

Paper organization. Section 2 reviews related work. Section 3 formulates the DPRF primitive, and Section 4 presents our two DPRF constructions for the case of range policies. Section 5 elaborates on the applicability of DPRFs. Finally, Section 6 concludes our paper with directions for future work. Selected proofs appear in the Appendix.

2. RELATED WORK

Secure delegation of computations. The notion of delegation of cryptographic operations is already mature: Starting from early work on proxy signatures [28] and proxy cryptography [5], basic primitives such as signatures (e.g., [6, 28]) and encryption (e.g., [2, 19, 22]) have been studied in the context of an untrusted proxy who is authorized to operate on signatures or ciphertexts. Recently, there has also been an increased interest in verifiability and privacy of general outsourced computations (e.g., [1, 9, 10, 18, 33]) or specific crypto-related operations (e.g., [4, 15, 20]). To the best of our knowledge, however, no prior work explicitly and formally examines the delegation of PRFs.1

PRF extensions. Closer to our new DPRF primitive are the known extensions of PRFs, namely verifiable PRFs (VPRFs) (e.g., [12, 27, 29]) and oblivious PRFs (OPRFs) (e.g., [16, 23]). A VPRF provides a PRF value along with a non-interactive proof, which enables anyone to verify the correctness of the PRF value. Although such proofs can be useful in third-party settings, they are not related to the delegation of the PRF evaluation without the secret key. Similarly, an OPRF is a two-party protocol that securely implements the functionality (k, x) → (⊥, fk(x))—that is, a party evaluates a PRF value without knowledge of the secret key. Yet, although the party that provides the key can be viewed as preserving resources, this setting does not match ours because there is no (i) input privacy (it is the second party who provides the input x), or (ii) bandwidth efficiency when applied over many values. Other related work includes algebraic PRFs, employed in [4] to achieve optimal private verification of outsourced computation.

1 Delegating PRF evaluation in a similar vein as we consider here was considered in [7, 8] after our work was submitted for publication.

GGM framework. This refers to the seminal work by Goldreich et al. [17], which shows how to generically construct a PRF given black-box access to a length-doubling PRG. The approach is based on a tree structure, over which hierarchical invocations of the PRG produce the PRF values at the leaves. Our constructions extend this framework in non-trivial ways: First, we support delegation of PRF evaluations; second, our security is proved in a much stronger adversarial setting where the adversary gets to adaptively learn also internal PRG values. The GGM framework, along with its tree-based key-derivation structure, has been widely used in the literature, also for special/limited delegation purposes in the context of access control [30] and forward security [21]. Yet, to the best of our knowledge, no such known key-derivation method has been analyzed fully in a security model like ours, which allows for adaptive PRG/delegation queries and addresses policy-privacy issues.

3. DEFINITIONS

A pseudorandom function (PRF) family F is a family of functions {fk : A → B | k ∈ K} such that K is efficiently samplable and all of F, K, A, B are indexed by a security parameter λ. The security property of a PRF is as follows: For any probabilistic polynomial-time (PPT) algorithm A running in time polynomial in λ, it holds that

|Pr[A^{fk(·)} = 1] − Pr[A^{R(·)} = 1]| = negl(λ),

where negl denotes a negligible function, and the probability above is taken over the coins of A and the random variables k and R, which are uniform over the domains K and (A → B), respectively.

Delegatable PRFs. The notion of delegation for a PRF family F as above is defined with respect to a delegation policy P, i.e., P is a set of predicates defined over A, also indexed by λ, where each predicate P has an efficient encoding; the set of elements in A that satisfy P is denoted as AP = {x ∈ A | P(x)}.

Definition 1 (Delegatable PRF) A triple (F, T, C) is a delegatable PRF, or DPRF scheme, for family F = {fk : A → B | k ∈ K} with respect to policy P of predicates over A, provided it satisfies two properties, correctness and security, that are defined individually below.

Correctness. We define T to be a PPT algorithm such that, given a description of P ∈ P and a key k ∈ K, it outputs a “trapdoor” τ. The latter is intended to be used along with the deterministic PPT algorithm C for the computation of the PRF value of every element of A that satisfies the predicate P. For fixed P, k, the algorithm C can be viewed as a function

C : StP,k −→ B × StP,k ,

where StP,k is a set of states, and for any s ∈ StP,k the output C(s) is a pair that consists of a PRF value and a (new) state. We denote C(s) = 〈CL(s), CR(s)〉 and define recursively the set of reachable states from a subset S of StP,k as

R(S) ≜ S ∪ R(CR(S)).

The elements of the DPRF that are produced given an initial state s will be defined using the complete set of reachable states given s. For a singleton S = {s}, we will write R(s) instead of R({s}), and we will denote by R(s) the closure of s under R, i.e., the fixpoint of the recursive equation for R that also contains s.

Definition 2 (Correctness) The DPRF scheme (F, T, C) is correct for a policy P if for every P ∈ P:

1. {τ | τ ← T(P, k)} ∪ {⊥} ⊆ StP,k, for any k ∈ K (initialization).

2. CR(⊥) = ⊥ (termination condition).

3. There is a polynomial q such that for every k ∈ K and τ ← T(P, k):

(i) ⊥ ∈ R(τ) (termination guarantee).

(ii) |R(τ)| ≤ q(λ) (feasibility).

(iii) {fk(x) | P(x)} = BP = {CL(s) | s ∈ R(τ)} (completeness).

According to the above conditions, all possible trapdoors corresponding to a certain policy predicate P are valid initial inputs for C. Moreover, starting from an arbitrary trapdoor for P as initial input, the proxy can execute a number of steps of algorithm C, during which it successively computes the DPRF image of every argument x that satisfies P, and eventually terminate when it reaches the final state ⊥, where no further useful information can be derived.

We note that condition 3.(ii) implies the restriction that the size of every policy predicate is polynomial. This stems from the fact that we consider the setting where the proxy wishes to eventually compute all the delegated PRF values. If this is not necessary (or desirable) for the DPRF application, the condition can be relaxed to any size of AP (including super-polynomial sizes). In this case, the completeness condition (item 3.(iii) above) would be infeasible to meet, since the proxy cannot hope to be able to compute all the delegated PRF values. There are a number of ways to capture this by suitably modifying the way C works; for instance: (i) C may sample the uniform distribution over BP, (ii) C may be given the value x as input, and return fk(x) if x ∈ AP and ⊥ otherwise, (iii) C may be given the lexicographic rank of an element x within AP and return fk(x) or ⊥ otherwise.

Security. We consider the case where the proxy is malicious, and accordingly model DPRF security as an oracle indistinguishability game G^A_SEC carried out between an attacker A and a challenger C, indexed by a security parameter λ. The game proceeds as shown in Figure 2. Note that due to the delegation capabilities of DPRFs, the security game G^A_SEC is more elaborate than the security game of a plain PRF: Indeed, in addition to issuing standard PRF queries, the pseudorandomness attacker may now also adaptively query a trapdoor oracle for delegation on predicates of its choice.

Definition 3 (Security) A DPRF scheme (F, T, C) is secure for a policy P if for any PPT A, it holds that

Pr[G^A_SEC(1^λ) = 1] ≤ 1/2 + negl(λ).

DPRF Security Game G^A_SEC(1^λ)

1. The challenger C randomly selects k from K.

2. The adversary A is allowed to interact with C and ask two types of queries:
(a) PRF queries for a value x ∈ A; to those queries C responds with fk(x) and adds the value x to a set Lque.
(b) Delegation queries for a policy predicate P ∈ P; to those queries C responds with τ ← T(P, k) and adds P to a set Lpol.

3. The adversary A submits a challenge query x∗, to which the challenger C responds as follows: It flips a coin b and, if b = 1, it responds with y∗ = fk(x∗); otherwise, it responds with a random value y∗ from B.

4. The adversary A continues as in step 2.

5. The adversary A terminates by returning a single bit b′. Subsequently the game returns a bit which is 1 if and only if the following holds true:

(b′ = b) ∧ (x∗ ∉ Lque) ∧ ∀P ∈ Lpol : ¬P(x∗).

Figure 2: The DPRF security game.

We make the following observations about the definition. First, it is easy to see that a delegatable PRF is indeed a PRF. Specifically, any PRF attacker A against fk can be turned into an attacker A′ that wins the DPRF game G^A_SEC(1^λ). We provide only a simple sketch of this, which follows by a standard “walking” argument (the reader familiar with such arguments may skip to the next paragraph). Fix some PPT A and let α be its non-negligible distinguishing advantage. There will be some polynomial q so that, for any λ, q(λ) is an upper bound on the number of queries submitted by A to its oracle. Given such q and for fixed λ, we define the hybrid oracle (fk/R)j for any j ∈ {0, . . . , q(λ)} that operates as follows: (fk/R)j responds as fk(·) in the first j queries and as R(·) in the last q(λ) − j queries. Taking such a sequence of oracles into account, by the triangle inequality it follows that there exists some value j ∈ {0, . . . , q(λ) − 1} for which the probability of A distinguishing between the two successive hybrid oracles (fk/R)j and (fk/R)j+1, where R is a random function, is at least α/q(λ). This follows from the fact that A distinguishes the “extreme” hybrids R(·) and fk(·) with probability α. We now construct A′ out of A as follows: A′ plays the DPRF game and queries the DPRF function for the first j queries of A. Then it submits the (j + 1)-th query of A as the challenge. Finally, it completes the simulation of A by answering any remaining queries of A with random values drawn from B (w.l.o.g. the queries to the oracle are all distinct), and outputs whatever A does. It is easy to see that the distinguishing advantage of A′ is α/2q(λ), i.e., non-negligible in λ.
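The advantage bound at the end of this sketch can be written out explicitly; writing H_j for the hybrid oracle (fk/R)j (so H_0 = R(·) and H_{q(λ)} = fk(·)), the triangle inequality gives

```latex
\alpha \;=\; \Bigl|\Pr[\mathcal{A}^{H_{q(\lambda)}}=1]-\Pr[\mathcal{A}^{H_0}=1]\Bigr|
\;\le\; \sum_{j=0}^{q(\lambda)-1}\Bigl|\Pr[\mathcal{A}^{H_{j+1}}=1]-\Pr[\mathcal{A}^{H_j}=1]\Bigr| ,
```

so some index j satisfies |Pr[A^{H_{j+1}} = 1] − Pr[A^{H_j} = 1]| ≥ α/q(λ). Since the challenger answers the embedded (j + 1)-th query with fk(x∗) or with a random value with probability 1/2 each, A′ simulates oracle H_{j+1} or H_j accordingly, yielding the stated distinguishing advantage of (1/2) · α/q(λ) = α/2q(λ).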

Second, we observe that there is a trivial construction of a delegatable PRF from any PRF: Consider an ordering ≤ over A, e.g., the lexicographic order. For fixed P, k, set T(P, k) = 〈fk(x1), . . . , fk(x|AP|)〉 = τ, where xi is the i-th element of AP according to ≤. Given τ, the set of states can be StP,k = {τ, (2, τ), . . . , (|AP|, τ), ⊥}, and the reconstruction function C can simply be defined as a table lookup in τ. It is straightforward to show that (F, T, C) is a DPRF as long as the underlying family F is a PRF, since any delegation query can be interpreted as a series of polynomially many PRF queries.
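This trivial construction can be written out directly. The sketch below makes the states and the table-lookup C explicit; HMAC-SHA256 stands in for the PRF family and an 8-bit domain is assumed, both illustrative choices.

```python
import hmac
import hashlib

def f(k: bytes, x: int) -> bytes:
    # Stand-in PRF family fk (HMAC-SHA256); the construction is generic.
    return hmac.new(k, x.to_bytes(8, "big"), hashlib.sha256).digest()

def T(P, k: bytes):
    # Trivial trapdoor: the precomputed PRF values of all x satisfying P,
    # listed in the order of the (polynomially sized, here 8-bit) domain.
    return tuple(f(k, x) for x in range(2 ** 8) if P(x))

def C(state):
    # C(s) = <C_L(s), C_R(s)>: a PRF value plus the successor state.
    # The terminal state ⊥ is modeled as None, so C_R(⊥) = ⊥.
    if state is None:
        return None, None
    tau, i = state
    nxt = (tau, i + 1) if i + 1 < len(tau) else None
    return tau[i], nxt

def delegated_values(tau):
    # Proxy side: iterate C from the initial state until ⊥, collecting B_P.
    values, state = [], ((tau, 0) if tau else None)
    while state is not None:
        v, state = C(state)
        values.append(v)
    return values
```

The trapdoor here is exactly as large as the delegated value set, which is precisely why this construction is uninteresting from a bandwidth standpoint.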


The existence of a trivial DPRF construction with respect to arbitrary policies from any given PRF motivates our primitive: Interesting DPRF constructions will be those that are bandwidth efficient, i.e., those that provide trapdoors with size sublinear in the number of elements that satisfy the corresponding policy predicate.

Policy privacy. We next introduce an additional privacy property that a DPRF scheme may optionally satisfy. This property goes beyond the standard (D)PRF security and is relevant in the context of PRF delegation for protecting the associated policy predicates over which the delegation is defined. We accordingly model this new privacy condition as a predicate indistinguishability game G^A_PP carried out between an attacker A and a challenger C, indexed by a security parameter λ. The game proceeds as shown in Figure 3.

DPRF Policy Privacy Game G^A_PP(1^λ)

1. The challenger C randomly selects k from K.

2. The adversary A is allowed to interact with C and ask delegation queries on policy predicates P ∈ P; to those queries C responds with τ ← T(P, k) and adds P to a set Lpol.

3. The adversary A submits two policy predicates P0, P1 to C. The challenger flips a bit b and computes τ∗ ← T(Pb, k). It returns τ∗ to the adversary.

4. The adversary A continues as in step 2.

5. The adversary A terminates by returning a single bit b′. Subsequently the game returns a bit that is 1 if and only if the following holds true:

(b′ = b) ∧ (|AP0| = |AP1|) ∧ (AP0 ≠ AP1) ∧ ∀S ⊆ Lpol : |(⋂P∈S AP) ∩ AP0| = |(⋂P∈S AP) ∩ AP1|.

Figure 3: The DPRF policy privacy game.

Definition 4 (Policy privacy) A DPRF scheme (F, T, C) is policy private for a policy P if for any PPT A, it holds that

Pr[G^A_PP(1^λ) = 1] ≤ 1/2 + negl(λ).

The above definition suggests that the trapdoor that corresponds to a certain policy predicate hides the predicate itself (i) at least among all (distinct) policy predicates that enable evaluations of the PRF on the same number of elements in its domain, and (ii) when the adversary does not make queries whose responses leak unequal parts of the PRF image of the two challenge predicates. Observe that all the stated restrictions are necessary: If |AP0| ≠ |AP1|, then the adversary can distinguish P0 from P1 by counting the number |{CL(s) | s ∈ R(τ∗)}| of new PRF values it computes starting from state τ∗ and ending in ⊥. In addition, if the number of elements that satisfy simultaneously any set S of delegation queries and P0 is different from the number of elements that satisfy S and P1, then the adversary can distinguish the two predicates by counting the size of {CL(s) | s ∈ R(τ∗)} ∩ (⋂P∈S BP).

We note that while the above formulation can capture a wide set of attacks against privacy, it can be strengthened further by allowing multiple challenge pairs of policy predicates. In this case, the policy-privacy game allows multiple adversarial actions submitting pairs of policies (as in step 3 of the game) to be interleaved with delegation queries. The challenger always responds based on the same coin flip b. Interestingly, this multiple-challenge policy-privacy formulation can be shown to be strictly stronger than the one corresponding to Definition 4.

On efficiently achieving policy privacy. We next argue that, even though desirable, the above formulations of privacy conflict with our efficiency considerations (sublinear-size trapdoors) for a wide class of schemes. This motivates relaxing the policy-privacy property, as we will see at the end of the section; the reader unwilling to go over the details of the lower-bound type of argument we sketch below may skip directly to the policy-privacy relaxation at the end of the section.

Assume that the policy predicates are ranges [a, b] that lie in an interval [0, 2n − 1], where n is a positive integer. Then, no efficient and policy-private DPRF scheme exists if the trapdoor-generation algorithm T is deterministic, and the delegated-computation algorithm C is deterministic and tree-wise. This means that for each range, the trapdoor consists of a data structure of initial keys and meta-data that enable, in a deterministic way, the calculation of the final set of PRF values through a tree-like derivation process, where the final underlying tree structure of this process depends only on the meta-data (and not on any of the initial keys). Indeed, if policy privacy is satisfied, then for every 0 < j ≤ b − a, the delegated computation of the PRF values in the intersection [a + j, b] must be indistinguishable for [a, b] and [a + j, b + j]. Otherwise, an adversary can make queries for all the PRF values in [a + j, b] and, since it knows the way that each unique trapdoor of [a, b] and [a + j, b + j] computes the PRF values of the range [a + j, b], can distinguish the two range predicates by making the corresponding equality tests. By induction, this implies that for any j ∈ {0, . . . , b − a}, in the trapdoor of [a, b] there exist distinct keys d0, . . . , dj that allow the computation of fk(a), . . . , fk(a + j), respectively. Thus, the trapdoor of the range [a, b] consists of at least r = b − a + 1 = #[a, b] keys, which means that the DPRF cannot be efficient (i.e., enabling the delegation of the range with a trapdoor smaller than the set of values being delegated).

The above argument suggests that, if policy privacy is to be attained, we need trapdoors at least as long as the values we delegate. Note that the trivial construction for ranges mentioned above, i.e., where the trapdoor of [a, b] is 〈fk(a), fk(a + 1), . . . , fk(b)〉, does not satisfy policy privacy. For instance, when delegating [1, 3] and [3, 5], the attacker can easily distinguish them by obtaining the value fk(3) (it belongs in the intersection, hence the attacker is allowed to have it) and checking its location within the trapdoor vector. To achieve policy privacy for the trivial construction, one needs to additionally permute the PRF values in the trapdoor in some fashion: It can be easily shown that a lexicographic ordering of these values suffices.
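As a toy illustration of the remark above, the trivial linear-size trapdoor can be made position-hiding simply by sorting the delegated values lexicographically. The sketch below is not one of our constructions; the PRF is instantiated, purely for illustration, with HMAC-SHA256.

```python
import hmac
import hashlib

def trivial_trapdoor(k: bytes, a: int, b: int) -> list:
    # Linear-size trapdoor for [a, b]: the delegated PRF values themselves,
    # sorted lexicographically so that a value's position in the trapdoor
    # reveals nothing about where its input lies inside the range.
    values = [hmac.new(k, x.to_bytes(8, 'big'), hashlib.sha256).digest()
              for x in range(a, b + 1)]
    return sorted(values)
```

With this permutation, the attack on [1, 3] versus [3, 5] described above no longer applies: fk(3) appears in both trapdoors, but its position carries no information.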

In our upcoming constructions, we obtain efficient DPRFs that use tree-wise derivations where the PRF values computed are at the leaves of a full binary tree, and the policy predicates are ranges of the form [a, b], covered by one or more proper subtrees. In this case, even allowing a probabilistic trapdoor generator does not suffice to provide policy privacy for a very efficient scheme. To see this, let [a, b] be a range of size r and τ be a trapdoor for [a, b] of size g(r) = O(logc r), i.e., g(r) is polylogarithmic in r, hence also in λ. By the pigeonhole principle, there exists a delegation key y in τ that computes all the values corresponding to a set T ⊆ [a, b] of size at least r/g(r), that is, a subtree T of depth at least d ≥ log(r) − log(g(r)) = ω(1). The latter implies that there exists a value x ∈ [a, b] with an all-one suffix of length d that is computed by this key y. (Here, a tree node contributes a 0 or 1 to root-to-leaves path labelings if it is a left or right child, respectively.) Since there are no more than g(r) such possible values x ∈ [a, b], an adversary has significant probability, greater than 1/g(r), of guessing x. Then it can make a query x, receive fk(x) and submit as the challenge the two policies P0 = [a, b] and P1 = [x, b + (x − a)]. After it receives the challenge trapdoor τ∗, it can locate all keys that correspond to subtrees of depth ≥ d and check whether there is a minimum leaf value (an all-zero path) in some of these subtrees that equals fk(x). Then, the adversary can effectively distinguish P0, P1, since x cannot be covered by any subtree of depth d in a trapdoor for [x, b + (x − a)] (it must be covered by a leaf, while d > 1). This argument can be extended to more general tree-like delegation schemes and to the case g(r) = o(r), but we omit further details, as it already demonstrates that efficient tree-like constructions require a somewhat more relaxed definition of policy privacy, which we introduce next.

Union policy privacy. We finally introduce the notion of union policy privacy, where the adversary is restricted from making queries in the union of the challenge policy predicates (but is allowed to query at arbitrary locations outside the targeted policy set). This privacy condition is a strict relaxation of the one corresponding to Definition 4, and we model it by a game GAUPP(1λ) that proceeds identically to GAPP(1λ), but terminates with 1 provided that the following (weaker set of) conditions are all met: (i) b′ = b, (ii) |AP0| = |AP1|, (iii) AP0 ≠ AP1, and (iv) ∀P ∈ Lpol : AP ∩ (AP0 ∪ AP1) = ∅. To see the connection with Definition 4, observe that the latter condition is equivalent to

∀S ⊆ Lpol : |(⋂P∈S AP) ∩ AP0| = |(⋂P∈S AP) ∩ AP1| = 0 .

Note that for policies P consisting of disjoint predicates, thegames GAPP(1λ) and GAUPP(1λ) are equivalent.

4. CONSTRUCTIONS

In this section we present DPRF schemes for range policy predicates. In Section 4.1 we describe a first construction, called best range cover (BRC), which satisfies the correctness and security properties of DPRFs, achieving trapdoors of logarithmic size in the range size. However, BRC lacks the policy-privacy property. In Section 4.2 we build upon BRC to obtain a DPRF scheme, called uniform range cover (URC), that is additionally (union) policy private, while retaining the trapdoor size complexity of BRC. In Section 4.3 we include the security proofs of the two schemes. In Section 4.4 we prove the union policy-privacy property of URC.

4.1 The BRC Construction

Let G : {0, 1}λ → {0, 1}2λ be a pseudorandom generator and G0(k), G1(k) be the first and second half of the string G(k), where the specification of G is public and k is a secret random seed (of bit length λ). The GGM pseudorandom function family [17] is defined as F = {fk : {0, 1}n → {0, 1}λ}k∈{0,1}λ, such that

fk(xn−1 · · · x0) = Gx0(· · · (Gxn−1(k))) ,

where n is polynomial in λ and xn−1 · · · x0 is the input bitstring of size n.

The GGM construction defines a binary tree over the PRF domain. We illustrate this using Figure 4, which depicts a binary tree with 5 levels. The leaves are labeled with a decimal number from 0 to 15, sorted in ascending order. Every edge is labeled with 0 (resp. 1) if it connects a left (resp. right) child. We label every internal node with the binary string determined by the labels of the edges along the path from the root to this node. Suppose that the PRF domain is {0, 1}4. Then, the PRF value of 0010 is fk(0010) = G0(G1(G0(G0(k)))). Observe that the composition of G is performed according to the edge labels in the path from the root to leaf 2 = (0010)2, selecting the first (resp. second) half of the output of G when the label of the visited edge is 0 (resp. 1) and using this half as the seed for the next application of G. Based on the above, the n-long binary representations of the leaf labels constitute the PRF domain, and every leaf is associated with the PRF value of its label; these values constitute the PRF range.
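The GGM evaluation described above can be sketched directly. This is a minimal illustration, not our actual instantiation: we use the two halves of a SHA-256 digest as a stand-in for the length-doubling PRG G, with λ = 128 bits.

```python
import hashlib

LAMBDA = 16  # illustrative seed length in bytes (λ = 128 bits)

def G(k: bytes) -> bytes:
    # Stand-in for the length-doubling PRG G : {0,1}^λ -> {0,1}^{2λ}.
    return hashlib.sha256(k).digest()

def G0(k: bytes) -> bytes:
    return G(k)[:LAMBDA]

def G1(k: bytes) -> bytes:
    return G(k)[LAMBDA:]

def ggm_prf(k: bytes, x: int, n: int) -> bytes:
    # f_k(x_{n-1} ... x_0) = G_{x_0}( ... (G_{x_{n-1}}(k))): bits are
    # consumed from the most significant one (the root edge) downwards.
    s = k
    for i in range(n - 1, -1, -1):
        s = G1(s) if (x >> i) & 1 else G0(s)
    return s
```

For the running example, ggm_prf(k, 2, 4) composes G along the path 0, 0, 1, 0 from the root to leaf 2, i.e., it returns G0(G1(G0(G0(k)))).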

Figure 4: A GGM tree example. (The figure shows a depth-4 GGM tree with leaves labeled 0–15; node 00 carries the partial PRF value fk(00) = G0(G0(k)), leaf 2 = (0010)2 carries fk(0010) = G0(G1(G0(G0(k)))), and the subtrees rooted at nodes 001 and 01, shown in grey, cover the range [2, 7].)

Note that we can also associate every internal node of the GGM tree with a partial PRF value, by performing the composition of G as determined by the path from the root to that node. For example, node 00 in Figure 4 is associated with partial PRF G0(G0(k)). Henceforth, for simplicity, we denote by fk(xn−1 · · · xj) the partial PRF Gxj(· · · (Gxn−1(k))). Observe that if a party has the partial PRF fk(xn−1 · · · xj), then it can compute the PRF values of all 2j inputs that have prefix xn−1 · · · xj, simply by following a DFS traversal in the subtree rooted at (the node labeled by) xn−1 · · · xj and composing with seed fk(xn−1 · · · xj). In our running example, using the partial PRF value at node 00, we can derive the PRF values of the inputs in (decimal) range [0, 3] as fk(0000) = G0(G0(fk(00))), fk(0001) = G1(G0(fk(00))), fk(0010) = G0(G1(fk(00))), and fk(0011) = G1(G1(fk(00))).
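The partial-PRF idea above can be sketched as follows, again with SHA-256 halves as an illustrative stand-in for the PRG:

```python
import hashlib

def G0(k: bytes) -> bytes:
    return hashlib.sha256(k).digest()[:16]

def G1(k: bytes) -> bytes:
    return hashlib.sha256(k).digest()[16:]

def partial_prf(k: bytes, prefix: int, plen: int) -> bytes:
    # Partial PRF value at the internal node labeled by the plen-bit prefix.
    s = k
    for i in range(plen - 1, -1, -1):
        s = G1(s) if (prefix >> i) & 1 else G0(s)
    return s

def expand(seed: bytes, depth: int) -> list:
    # All 2^depth leaf PRF values under a node, left to right, obtained by
    # composing G on the remaining suffix bits.
    if depth == 0:
        return [seed]
    return expand(G0(seed), depth - 1) + expand(G1(seed), depth - 1)
```

Expanding the partial PRF value at node 00 with depth 2 yields exactly the PRF values of the inputs in [0, 3], in order.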

For any range [a, b] of leaf labels, there is a (non-unique) set of subtrees in the GGM tree that cover exactly the corresponding leaves. For instance, [2, 7] is covered by the subtrees rooted at nodes 001 and 01 (colored in grey). According to our discussion above, a party having the partial PRF values of these subtree roots and the subtree depths can derive all the PRF values of the leaves with labels in [a, b]. In our example, having (fk(001), 1) and (fk(01), 2), it can derive the PRF values of the leaves with labels in [2, 7]. Our first construction is based on the above observations. In particular, given a range policy predicate [a, b] ∈ P with size r = b − a + 1 = |AP|,2 it finds the minimum number of subtrees that cover [a, b]. As such, we call this scheme best range cover (BRC). A formal description follows.

The BRC DPRF construction is a triplet (F, T, C), where F is the GGM PRF family described above with tree depth n. The delegation policy is P = {[a, b] | 0 ≤ a ≤ b ≤ a + λγ < 2n}, where γ is a constant integer. Note that when a = b, the trapdoor sent to the proxy is (essentially) simply the PRF value fk(a). In the rest of the section, we focus on the non-trivial case where the range is not a singleton. When a < b, the trapdoor generation algorithm T of BRC, henceforth denoted by TBRC, is given below.

The Trapdoor Generation Algorithm TBRC
Input: a, b : 0 ≤ a < b ≤ a + λγ ≤ 2n − 1 and k ∈ {0, 1}λ
Output: Trapdoor τ for computing {fk(x) | x ∈ [a, b]}
1.  τ ← 〈〉
2.  t ← max{i | ai ≠ bi}
3.  if (∀i ≤ t : ai = 0) then
4.      if (∀i ≤ t : bi = 1) then
5.          return (fk(an−1 · · · at+1), t + 1)
6.      else
7.          Append (fk(an−1 · · · at), t) to τ
8.  else
9.      µ ← min{i | i < t ∧ ai = 1}
10.     for i = t − 1 to µ + 1
11.         if ai = 0 then
12.             Append (fk(an−1 · · · ai+1 1), i) to τ
13.     Append (fk(an−1 · · · aµ), µ) to τ
14. if (∀i ≤ t : bi = 1) then
15.     Append (fk(bn−1 · · · bt), t) to τ
16. else
17.     ν ← min{i | i < t ∧ bi = 0}
18.     for i = t − 1 to ν + 1
19.         if bi = 1 then
20.             Append (fk(bn−1 · · · bi+1 0), i) to τ
21.     Append (fk(bn−1 · · · bν), ν) to τ
22. return τ

This algorithm takes as input a secret key k and a range predicate [a, b] ∈ P. It outputs a delegation trapdoor τ that enables the computation of fk(x) for every x whose decimal representation is in [a, b] (i.e., the PRF values of the leaves in the GGM tree with labels in [a, b]). TBRC initially finds the first bit in which a and b differ (Line 2), which determines the common path from the root to leaves a and b. Suppose that this common path ends at node u. If [a, b] is the set of labels that belong to the subtree with root u, then TBRC outputs a single pair consisting of the partial PRF value that corresponds to u along with the depth of the subtree (Lines 3-5). Otherwise, TBRC traverses the left and right subtrees of u separately (Lines 7-13 and 14-21, respectively). We will describe only the left traversal (the right one is performed symmetrically). TBRC considers the path pv→a starting from the left child of u, denoted by v, to a. It checks whether a is the leftmost leaf of v's subtree. In this case the PRF value of v is included in τ along with the depth of v's subtree, and the traversal terminates (Lines 3, 7). Otherwise, if pv→a proceeds to the left child of v, it includes in the trapdoor the PRF value of the right child of v together with the depth of the subtree rooted at that child. In any case, it continues the traversal in the same fashion with the next node of pv→a (Lines 8-13).

2In the sequel, we use symbols r and |AP| interchangeably.

In the example of Figure 4, for input [2, 7], TBRC outputs trapdoor τ = 〈(fk(001), 1), (fk(01), 2)〉. By construction, it is easy to see that the algorithm covers the input range with maximal subtrees, thus using a minimum number of them.
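The cover that TBRC produces can also be computed by a greedy aligned-subtree decomposition, which yields the same set of (partial PRF value, depth) pairs, although possibly in a different order than the left/right path traversal of TBRC. A sketch, with SHA-256 halves standing in for the PRG:

```python
import hashlib

def G0(k: bytes) -> bytes:
    return hashlib.sha256(k).digest()[:16]

def G1(k: bytes) -> bytes:
    return hashlib.sha256(k).digest()[16:]

def prefix_prf(k: bytes, prefix: int, plen: int) -> bytes:
    # Partial PRF value at the node labeled by the plen-bit prefix.
    s = k
    for i in range(plen - 1, -1, -1):
        s = G1(s) if (prefix >> i) & 1 else G0(s)
    return s

def t_brc(k: bytes, a: int, b: int, n: int) -> list:
    # Greedy cover of [a, b] by maximal full subtrees: at each step take the
    # largest aligned block starting at x that still fits inside [a, b].
    tau, x = [], a
    while x <= b:
        d = 0
        while x % (2 ** (d + 1)) == 0 and x + 2 ** (d + 1) - 1 <= b:
            d += 1
        tau.append((prefix_prf(k, x >> d, n - d), d))  # subtree root, depth
        x += 2 ** d
    return tau
```

For [2, 7] this produces the pairs for node 001 (depth 1) and node 01 (depth 2), matching the trapdoor in the example above.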

We next describe the reconstruction algorithm C of BRC. For fixed P = [a, b] of size r and key k, we define the set of states as StP,k = {τ, (1, τ), . . . , (r − 1, τ), ⊥}, where τ = 〈(y1, d1), . . . , (ym, dm)〉 is the trapdoor produced by algorithm TBRC. Note that every yi, i = 1, . . . , m, corresponds to a partial PRF value associated with the root of a GGM subtree. Therefore, there is a natural ordering of PRF values for a given τ, starting from the leftmost leaf of the subtree of y1 to the rightmost leaf of the subtree of ym. This order is not necessarily the same as the order of the leaves in the GGM tree.3 Starting from the PRF value of the leftmost leaf of the y1 subtree, C computes in every next step the PRF value of the next leaf in the ordering discussed above. Specifically, C starts with state τ and computes C(τ) = 〈G0(· · · (G0(y1))), (1, τ)〉, where the G0 composition is performed d1 times. Then, generally, given state (i, τ), C locates the unique subtree that covers the (i + 1)-th leaf x in the ordering of τ. In particular, it finds the pair (yt, dt) in τ, where t is such that ∑_{ρ=1}^{t−1} 2^{dρ} ≤ i < ∑_{ρ=1}^{t} 2^{dρ}. Then, the suffix xdt−1 · · · x0 is the binary representation of i − ∑_{ρ=1}^{t−1} 2^{dρ} ∈ [0, 2^{dt} − 1]. Given this suffix, C can compute fk(x) = Gx0(· · · (Gxdt−1(yt))) and output C((i, τ)) = 〈Gx0(· · · (Gxdt−1(yt))), (i + 1, τ)〉. For the case where dt = 0, C simply returns fk(x) = yt. Finally, for input state (r − 1, τ), C outputs 〈G1(· · · (G1(ym))), ⊥〉, where the G1 composition is performed dm times.

In the example of Figure 4, the trapdoor for range [2, 14] is 〈(fk(01), 2), (fk(001), 1), (fk(10), 2), (fk(110), 1), (fk(1110), 0)〉. Algorithm C computes the PRF values for leaves 4, 5, 6, 7, 2, 3, 8, 9, 10, 11, 12, 13, 14 in this order, i.e.,

C(τ) = 〈G0(G0(fk(01))), (1, τ)〉 = 〈fk(0100), (1, τ)〉 →
C((1, τ)) = 〈G1(G0(fk(01))), (2, τ)〉 = 〈fk(0101), (2, τ)〉 →
...
C((12, τ)) = 〈fk(1110), ⊥〉 .

As described above, during the execution of C some partial PRF values in the GGM tree are computed multiple times; e.g., partial PRF value fk(011) is computed twice, to derive fk(0110) and fk(0111). However, this can be easily avoided by employing a cache of size O(r): Every PRF value is computed once, stored in the cache, and retrieved at a later step if necessary. In this manner, the computational cost of C becomes O(r), since we compute one PRF value for every node in the subtrees covering the range (of size r).
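The index arithmetic of C can be sketched as a single lookup function: given a trapdoor and a state index i, it locates the pair (yt, dt) whose subtree covers the (i + 1)-th leaf in the trapdoor's ordering, and composes G along the binary suffix of the offset. The PRG is again illustratively instantiated with SHA-256 halves.

```python
import hashlib

def G0(k: bytes) -> bytes:
    return hashlib.sha256(k).digest()[:16]

def G1(k: bytes) -> bytes:
    return hashlib.sha256(k).digest()[16:]

def c_value(tau: list, i: int) -> bytes:
    # PRF value of the (i+1)-th leaf in the trapdoor's ordering: find t with
    # sum_{rho<t} 2^{d_rho} <= i < sum_{rho<=t} 2^{d_rho}, then compose G
    # along the d_t-bit binary representation of the offset.
    acc = 0
    for y, d in tau:
        if i < acc + 2 ** d:
            suffix, s = i - acc, y   # offset in [0, 2^d - 1]
            for j in range(d - 1, -1, -1):
                s = G1(s) if (suffix >> j) & 1 else G0(s)
            return s
        acc += 2 ** d
    raise IndexError("state index outside the delegated range")
```

The stateful C of the paper simply calls this for i = 0, 1, . . . , r − 1 and threads the state (i, τ) along; a cache as described above removes the repeated partial evaluations.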

The correctness of BRC is stated in the following theorem.

Theorem 1 The BRC DPRF construction (F, T, C) w.r.t. policy P = {[a, b] | 0 ≤ a ≤ b ≤ a + λγ ≤ 2n − 1}, where n is the depth of the underlying GGM tree, γ is a constant integer, and λγ is the maximum range size, is correct.

Proof. We show that all conditions of Definition 2 hold. Condition 1 can be easily seen to hold by the definition of the set of states StP,k. Moreover, since R(τ) = StP,k, conditions 2, 3.(i) and 3.(ii) are directly satisfied by the construction of algorithm C, and because the size r of the range (which determines the number of values conforming to the range policy predicate) is upper bounded by λγ, which is a polynomial.

3This by-design feature of algorithm C comes at no efficiency cost in BRC, but it is crucial for policy privacy in URC.

Therefore, it remains to show that condition 3.(iii) holds. Note that, by construction, algorithm C computes exactly the PRF values of the leaves covered by the subtrees corresponding to the partial PRF values included in τ by algorithm TBRC. Hence, it suffices to show that the labels of these leaves comprise exactly the set of values in [a, b].

Let t = max{i | ai ≠ bi}. Also, let V1, V2 be the sets of arguments of the PRF values that are computed in Lines 3-13 and 14-21 of algorithm TBRC, respectively. If V2 = ∅, then the input range is exactly covered by a single subtree (the checks in Lines 3 and 4 are true) and V1 = [a, b]. Otherwise, it holds that

V1 = {x ∈ [0, 2n − 1] | xn−1 · · · xt = an−1 · · · at ∧ ((∃i < t : xi = 1 ∧ ai = 0) ∨ x = a)}
   = {x ∈ [0, 2n − 1] | a ≤ x ≤ an−1 · · · at1 · · · 1}
   = [a, an−1 · · · at1 · · · 1] .

Similarly, we get that V2 = [bn−1 · · · bt0 · · · 0, b]. Observe that, by the definition of t, at = 0 and bt = 1. Thus, it holds that an−1 · · · at1 · · · 1 + 1 = bn−1 · · · bt0 · · · 0. This means that [a, b] = V1 ∪ V2, which concludes our proof.

We next discuss the trapdoor size complexity of BRC. Let V1, V2 be defined as in the proof of Theorem 1. Then, |V1| + |V2| = r. We will analyze |V1| (the analysis for V2 is similar). Observe that, in every step in Lines 3-13, the algorithm covers more than ⌊|V1|/2⌋ values of V1 with a maximal subtree of the sub-range defined by V1. Iteratively, this means that the algorithm needs no more than log(r) maximal subtrees to cover the entire sub-range of V1. Consequently, the total number of elements in τ is O(log(r)).
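The logarithmic bound can be checked exhaustively for a small tree. The sketch below uses the greedy aligned-subtree cover (which covers any range with maximal subtrees, as TBRC does) and verifies, over all ranges in a depth-6 tree, the bound 2B(r) + 3 derived in Section 4.2 with B(r) = ⌈log(r + 2)⌉ − 2.

```python
import math

def cover_depths(a: int, b: int) -> list:
    # Depths of the greedy maximal-subtree cover of [a, b].
    out, x = [], a
    while x <= b:
        d = 0
        while x % (2 ** (d + 1)) == 0 and x + 2 ** (d + 1) - 1 <= b:
            d += 1
        out.append(d)
        x += 2 ** d
    return out

# Exhaustive check for n = 6: every range of size r >= 2 is covered by
# at most 2*B(r) + 3 subtrees, i.e., O(log r) of them.
for a in range(64):
    for b in range(a + 1, 64):
        r = b - a + 1
        bound = 2 * (math.ceil(math.log2(r + 2)) - 2) + 3
        assert len(cover_depths(a, b)) <= bound
```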

We have explained the correctness of BRC and its efficient trapdoor size. We will also prove its security in Section 4.3. Nevertheless, BRC does not satisfy policy privacy, even for non-intersecting policy predicates. We illustrate this with an example using the tree of Figure 4. Consider the ranges [2, 7] and [9, 14], both of size 6, which generate trapdoors 〈(fk(001), 1), (fk(01), 2)〉 and 〈(fk(101), 1), (fk(1001), 0), (fk(110), 1), (fk(1110), 0)〉, respectively; these are trivially distinguishable due to their different sizes. This motivates our second DPRF construction, presented next.

4.2 The URC Construction

Consider again the ranges [2, 7] and [9, 14] in the tree of Figure 4, for which BRC produces two distinguishable trapdoors, namely trapdoor 〈(fk(001), 1), (fk(01), 2)〉 and trapdoor 〈(fk(101), 1), (fk(1001), 0), (fk(110), 1), (fk(1110), 0)〉.

Now, instead of computing the trapdoor of [2, 7] as above, assume that we generate an alternative trapdoor equal to 〈(fk(010), 1), (fk(0010), 0), (fk(011), 1), (fk(0011), 0)〉. Observe that this trapdoor appears indistinguishable from that of [9, 14]: Indeed, the two trapdoors have the same number of elements, the first parts of their elements are all partial PRFs, whereas their second parts (i.e., the depths) are pairwise equal. This suggests that we could achieve policy privacy if we could devise a trapdoor algorithm T such that, for any range predicate of a fixed size r, it always generates a trapdoor with a fixed number of elements and a fixed sequence of depths. More simply stated, the algorithm should produce uniform trapdoors for ranges of the same size. The challenge is to design such an algorithm while retaining the logarithmic trapdoor size of BRC. Next, we present our second DPRF construction, called uniform range cover (URC), which enjoys the efficiency of BRC and the union policy-privacy property.

URC builds upon BRC. In particular, it starts by producing a trapdoor as in BRC, and then modifies it to generate a uniform trapdoor for the given range size r. Before embarking on its detailed description, we must investigate some interesting properties of the trapdoors of BRC, and provide some important definitions. Recall that a trapdoor in BRC is a sequence of elements, where the first part of each element is a (full or partial) PRF value, and the second is a depth value.

Definition 5 For an integer r > 1, a pair of non-negative integral sequences D = ((k1, . . . , kc), (l1, . . . , ld)) is called a decomposition of r, if the following hold:

(i) ∑_{i=1}^{c} 2^{ki} + ∑_{j=1}^{d} 2^{lj} = r, and

(ii) k1 > · · · > kc and l1 > · · · > ld.

A decomposition D = ((k1, . . . , kc), (l1, . . . , ld)) of r is a worst-case decomposition (w.c.d.) of r if it is of maximum size, i.e., for every decomposition D′ = ((k′1, . . . , k′c′), (l′1, . . . , l′d′)) of r, we have that c′ + d′ ≤ c + d. We define MD ≜ max{k1, l1} as the maximum integer that appears in D.

By the description of algorithm TBRC in BRC, for a fixed range size r, the depths in the trapdoor can be separated into two sequences that form a decomposition of r, unless the input range is exactly covered by a single subtree. Each sequence corresponds to a set of full binary subtrees of decreasing size that cover leaves in the range predicate. The usage of the worst-case decomposition will become clear soon.

The next lemma shows that the maximum integer that appears in any decomposition of r (hence, the maximum depth of a subtree in a cover of a range of size r) can only take one of two consecutive values that depend only on r.

Lemma 1 Let D = ((k1, . . . , kc), (l1, . . . , ld)) be a decomposition of r. Define B(r) ≜ ⌈log(r + 2)⌉ − 2. Then MD ∈ {B(r), B(r) + 1}. In addition, if MD = B(r) + 1, then the second largest value in D is less than MD.

Proof. By the two properties of D, we have that

r = ∑_{i=1}^{c} 2^{ki} + ∑_{j=1}^{d} 2^{lj} ≤ 2^{k1+1} + 2^{l1+1} − 2 ≤ 2^{MD+2} − 2 ⇔ 2^{MD+2} ≥ r + 2 ⇒ MD ≥ B(r) .

Since 2^{B(r)+2} ≥ 2^{log(r+2)} > r ≥ 2^{k1} + 2^{l1}, we have that the maximum value MD ∈ {k1, l1} is less than B(r) + 2, and k1, l1 cannot both be equal to B(r) + 1.
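Lemma 1 can also be verified by brute force on small instances: the sketch below enumerates all decompositions with exponents up to 6 and checks both claims of the lemma for every value of r they decompose.

```python
import itertools
import math

def B(r: int) -> int:
    # B(r) = ceil(log(r + 2)) - 2, as in Lemma 1.
    return math.ceil(math.log2(r + 2)) - 2

# All strictly decreasing exponent sequences over {6, ..., 0} (including
# the empty one); a decomposition is a pair of such sequences.
vals = list(range(6, -1, -1))
seqs = [s for n in range(8) for s in itertools.combinations(vals, n)]

for s1 in seqs:
    for s2 in seqs:
        if not (s1 or s2):
            continue
        r = sum(2 ** k for k in s1 + s2)
        if r > 1:
            M = max(s1 + s2)
            assert M in (B(r), B(r) + 1)   # first claim of Lemma 1
            if M == B(r) + 1:              # second claim: M appears only once
                assert (s1 + s2).count(M) == 1
```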

By Lemma 1, the trapdoor that is generated by BRC for a range P = [a, b] of size |AP| = r, even if this trapdoor corresponds to a w.c.d. of r, consists of at most |{0, . . . , B(r)}| + |{0, . . . , B(r) + 1}| = 2B(r) + 3 pairs. Hence, the trapdoor size is O(log(r)), which complies with the bound we described in Section 4.1. Moreover, since |AP| ≤ λγ, every trapdoor has no more than 2⌈log(λγ + 2)⌉ − 1 pairs.


Observe that two trapdoors with depths that form the same decomposition appear indistinguishable. Moreover, we have proven that a trapdoor following a w.c.d. retains the logarithmic size in r. Therefore, our goal for the trapdoor algorithm T in URC, hereafter referred to as TURC, is to construct a converter that takes as input a BRC trapdoor and produces an alternative trapdoor that complies with a fixed worst-case decomposition. Before proceeding to the details of TURC, we must prove the following vital theorem.

Theorem 2 Let D = ((k1, . . . , kc), (l1, . . . , ld)) be a decomposition of r. Then all the elements in {0, . . . , MD} appear in D iff D is a w.c.d. of r.

Proof. Assume that the implication does not hold and let x be the maximum integer in {0, . . . , MD} that does not appear in D. Since x < MD, x + 1 is in {0, . . . , MD}. Assume w.l.o.g. that lj = x + 1. If x > k1, then the decomposition of r, ((x, k1, . . . , kc), (l1, . . . , lj−1, x, lj+1, . . . , ld)), is of greater size than that of D. Similarly, if i = min{i | x < ki}, then the decomposition of r, ((k1, . . . , ki, x, ki+1, . . . , kc), (l1, . . . , lj−1, x, lj+1, . . . , ld)), is of greater size than that of D. Both cases contradict the hypothesis; hence, x must appear in D.

For the converse, let D′ = ((k′1, . . . , k′c′), (l′1, . . . , l′d′)) be a w.c.d. of r. Then, the integers 0, . . . , MD′ appear in D′. By Lemma 1, all integers 0, . . . , B(r) appear in D and D′. By removing the integers 0, . . . , B(r) from D and D′, the remaining integers are y1 ≥ . . . ≥ ys and z1 ≥ . . . ≥ zt, respectively. Since an integer cannot appear more than twice in a decomposition of r and, by Lemma 1, the maximum possible value B(r) + 1 cannot appear more than once, we have that y1, . . . , ys and z1, . . . , zt are sequences of distinct integers such that ∑_{i=1}^{s} 2^{yi} = ∑_{j=1}^{t} 2^{zj}. Assume that there exists a minimum index ρ ≤ s such that yρ ≠ zρ and, w.l.o.g., that yρ > zρ. Then we have the contradiction

∑_{i=1}^{s} 2^{yi} ≥ ∑_{i<ρ} 2^{yi} + 2^{yρ} > ∑_{i<ρ} 2^{yi} + 2^{zρ+1} − 1 ≥ ∑_{i<ρ} 2^{zi} + ∑_{i≥ρ} 2^{zi} = ∑_{j=1}^{t} 2^{zj} = ∑_{i=1}^{s} 2^{yi} .

Thus, {y1, . . . , ys} and {z1, . . . , zt} are equal, and therefore D is a w.c.d. of r.

A consequence of Theorem 2 and Lemma 1 is that, for every integer r > 1, all w.c.d.'s of r contain the same multiset of integers; hence, each w.c.d. of r is a proper re-arrangement of these integers. We fix the particular arrangement in which the first sequence is (B(r), . . . , 0) and the second sequence consists of the remaining integers in the decomposition in decreasing order, and we term this unique w.c.d. the uniform decomposition of r. The main idea in URC is to always generate a trapdoor that complies with the uniform decomposition of r.
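The uniform decomposition can be computed directly from r: the left sequence (B(r), . . . , 0) accounts for 2^{B(r)+1} − 1, and writing the remainder in binary yields the right sequence. A sketch:

```python
import math

def uniform_decomposition(r: int):
    # Uniform decomposition of r: left sequence is (B(r), ..., 0), and the
    # remainder is written in binary to give the right sequence, decreasing.
    assert r > 1
    B = math.ceil(math.log2(r + 2)) - 2
    left = list(range(B, -1, -1))
    rem = r - (2 ** (B + 1) - 1)   # part not covered by the left sequence
    right = [i for i in range(B + 1, -1, -1) if (rem >> i) & 1]
    assert sum(2 ** i for i in left + right) == r
    return left, right
```

For example, for r = 6 this yields ((1, 0), (1, 0)), matching the depths of the uniform trapdoor for [2, 7] shown below.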

We are ready to describe algorithm TURC in URC, whose pseudocode is provided below. The process starts by invoking the TBRC algorithm of BRC to get an initial trapdoor τ (Line 1). If the input range is covered by a single subtree, then τ is reformed into two pairs that correspond to the child subtrees, so that the depths in τ always form a decomposition (Lines 2-3). Let D be the decomposition implied by τ (Line 4). The loop in Lines 5-7 utilizes Theorem 2 and works as follows. It finds the highest depth x that does not exist in D, and "splits" the partial PRF value yi in the rightmost pair in τ with depth x + 1, producing two new partial PRF values with depth x, i.e., y0i = G0(yi) and y1i = G1(yi) (Line 6). Note that these values correspond to the two children of the subtree of yi. Thus, if we substitute element (yi, x + 1) by (y0i, x), (y1i, x) in τ, then the trapdoor can still produce the same leaf PRF values as before. However, at all times we wish the sequence of depths in τ to form a decomposition. Hence, after removing (yi, x + 1), we appropriately insert (y0i, x) in the left pair sequence and (y1i, x) in the right pair sequence of τ (Line 7). Upon termination of the loop, all the values in {0, . . . , MD} appear in D and, thus, we have reached a w.c.d. according to Theorem 2. The process terminates after properly re-arranging the elements of τ, such that they comply with the unique uniform decomposition of r (Lines 8-9). This is done deterministically, by simply filling the missing depths from {0, . . . , B(r)} in the left sequence with the unique appropriate pair that exists (by Theorem 2) in the right sequence.

The Trapdoor Generation Algorithm TURC
Input: a, b : 0 ≤ a < b ≤ a + λγ ≤ 2n − 1 and k ∈ {0, 1}λ
Output: Trapdoor τ for computing {fk(x) | x ∈ [a, b]}
1.  Invoke TBRC(a, b, k) and receive the output τ
2.  if τ consists only of pair (y, d) then
3.      τ ← 〈(G0(y), d − 1), (G1(y), d − 1)〉
4.  Let τ = 〈(y1, d1), . . . , (ym, dm)〉 and D = ((d1, . . . , dc), (dc+1, . . . , dm)) be the corresponding decomposition of r = b − a + 1
5.  while there is a maximum integer x in {0, . . . , MD} that does not appear in D:
6.      Find the rightmost pair (yi, x + 1) and compute values y0i = G0(yi), y1i = G1(yi)
7.      Remove (yi, x + 1) from τ, insert the pairs (y0i, x) and (y1i, x) in τ respecting the strictly decreasing order, and update D accordingly
8.  if the leftmost sequence of D is not (B(r), . . . , 0) then
9.      Fill the leftmost sequence with values from the rightmost sequence until it complies with the uniform decomposition of r
10. return τ

In our running example, for the range [2, 7], the TURC algorithm in URC converts the original trapdoor produced by the trapdoor algorithm of BRC, τ = 〈(fk(001), 1), (fk(01), 2)〉, as follows (we underline a newly inserted element to the left sequence, and depict in bold a newly inserted element to the right sequence):

〈(fk(001), 1), (fk(01), 2)〉
↓
〈(fk(0010), 0), (fk(01), 2), (fk(0011), 0)〉
↓
〈(fk(010), 1), (fk(0010), 0), (fk(011), 1), (fk(0011), 0)〉
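The splitting loop of TURC (Lines 5-7) can be sketched as follows. This is an illustration only: the PRG is instantiated with SHA-256 halves, the input is assumed to be a BRC trapdoor for a non-singleton range (so a lone pair has depth ≥ 1), and the final deterministic re-arrangement into the two sequences of the uniform decomposition (Lines 8-9) is omitted.

```python
import hashlib

def G0(k: bytes) -> bytes:
    return hashlib.sha256(k).digest()[:16]

def G1(k: bytes) -> bytes:
    return hashlib.sha256(k).digest()[16:]

def t_urc(tau: list) -> list:
    # Split subtrees of a BRC trapdoor until every depth in {0, ..., M_D}
    # occurs, i.e., the depth multiset is a w.c.d. (Theorem 2).
    tau = list(tau)
    if len(tau) == 1:                  # single covering subtree (Lines 2-3)
        y, d = tau[0]
        tau = [(G0(y), d - 1), (G1(y), d - 1)]
    while True:
        present = {d for _, d in tau}
        missing = [x for x in range(max(present)) if x not in present]
        if not missing:
            return tau
        x = max(missing)               # highest absent depth
        i = max(j for j, (_, d) in enumerate(tau) if d == x + 1)
        y, _ = tau.pop(i)              # split that subtree into its children
        tau += [(G0(y), x), (G1(y), x)]
```

Applied to a BRC trapdoor with depths (1, 2), as for [2, 7], the loop yields four pairs with depth multiset {0, 0, 1, 1}, the uniform decomposition of r = 6, mirroring the conversion shown above.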

The C algorithm of URC is identical to that of BRC, because the trapdoor in URC has exactly the format expected by this algorithm, i.e., pairs of PRF values corresponding to GGM subtrees along with their depths. Moreover, during the evolution of the initial BRC trapdoor into one of uniform decomposition in the T algorithm of URC, a partial PRF value y is substituted by two new PRF values that can generate the same leaf PRF values as y. As such, the correctness of the BRC scheme is inherited by URC. Finally, due to Lemma 1, the size of a w.c.d. (and, hence, also of a uniform decomposition) of r is O(log(r)), which means that the trapdoor size in URC is also O(log(r)).


4.3 Security

In this section we prove the security of BRC and URC. Both proofs rely on the security of the pseudorandom generator of the underlying GGM construction. Note that the security proof does not follow straightforwardly from the GGM proof because, contrary to the case of GGM where the adversary obtains only leaf PRFs, the adversary in a DPRF can also obtain partial PRF values in the GGM tree (via delegation queries). Note that the tree structure of a trapdoor (which is independent of k) for a range predicate P of size r is deterministic and public in both BRC and URC. Thus, when querying the oracle in the security game, the adversary can map the partial or full PRF values appearing in τ for a selected P to subtrees in the GGM tree. Based on this observation, we first prove the security of BRC against adversaries that query only subtrees, or equivalently prefixes of strings in {0, 1}n, where n is the depth of the GGM tree. We call this type of security subtree security. We conclude our proofs by showing that the subtree security of BRC implies the security of both schemes.

In order to prove the subtree security property of BRC, we ensure and exploit the subtree security for two special cases. First, we consider the case where the adversary is non-adaptive, i.e., it poses all its queries in advance.

Lemma 2 The BRC scheme with depth n and range size atmost λγ is subtree secure against PPT adversaries that makeall their queries, even the challenge query, non-adaptively.

Proof. Let A be a non-adaptive prefix-only PPT adversary against a BRC scheme with depth n and maximum range size λγ. Without loss of generality, we assume that A always advances to step 3 (i.e., submits a challenge to the challenger).

We define recursively two sequences of PPT algorithms A = An, . . . , A1 and Sn, . . . , S1 as follows.

For i = n − 1, . . . , 1, Ai on input 1λ initially invokes Ai+1, receiving all of its non-adaptive queries, and chooses a random value k′ in {0, 1}λ. If a query xi · · · xt has the same most significant bit (MSB) as the challenge query, Ai makes the query xi−1 · · · xt and responds with the received value y. Otherwise, it responds with the value Gxt(· · · (Gxi−1(k′))). It returns Ai+1's output.

For i = n, . . . , 1, on input (z0, z1) ∈ {0, 1}2λ, Si invokes Ai and receives all of its queries. For every query xi−1 · · · xt, it responds with Gxt(· · · (Gxi−2(zxi−1))) (or zx0 for i = 1). In the challenge phase, it flips a coin b and acts as the challenger in a DPRF security game with Ai. It returns 1 iff Ai returns b.

Let qi and pi be the probabilities that Si outputs 1 when it receives its input from the uniform distribution U2λ on {0, 1}2λ and the pseudorandom distribution G(Uλ), respectively. By the definition of Si, pn = Pr[GASEC(1λ) = 1], while q1 ≤ 1/2, since it corresponds to a totally random game.

We next observe that An−1, . . . , A1 behave like attackers against the subtree security of BRC schemes with respective depths n − 1, . . . , 1. The behavior of Ai as an attacker is the same as Ai+1's, when the latter interacts with a modified challenger that replaces the two possible partial PRF values for the MSB of a prefix with two random values. Thus, following the previous notation, we have that pi = qi+1. Since G(Uλ) and U2λ are indistinguishable, it holds that |pi − qi| ≤ εi(λ), where εi(·) is a negligible function. Finally:

|pn − q1| = |∑_{i=1}^{n} (pi − qi)| ≤ ∑_{i=1}^{n} |pi − qi| ≤ ∑_{i=1}^{n} εi(λ) ,

hence Pr[GASEC(1λ) = 1] = pn ≤ q1 + n · ε(λ) ≤ 1/2 + n · ε(λ), where ε(λ) = maxi{εi(λ)}.

We use the above lemma to prove the security of a special category of BRC schemes, where the maximum range size is at least half of A = [0, 2^n − 1], the biggest interval over which range predicates are defined. This will serve as a stepping stone for our final security proof.

Lemma 3 The BRC scheme with depth n and maximum range size λ^γ is subtree secure if 2^{n−1} ≤ λ^γ < 2^n.

Proof. Let A be a prefix-only adversary. We construct a non-adaptive prefix-only adversary B for the subtree security game that on input 1^λ chooses a random challenge x* in [0, 2^n − 1] and makes the n queries that cover all possible values except x*. Namely, B makes queries (x*_{n−1} ⊕ 1), ..., (x*_{n−1}···x*_i ⊕ 1), ..., (x*_{n−1}···x*_0 ⊕ 1) and submits challenge x*. It receives responses y_{n−1}, ..., y_0 respectively, along with y*, which is the response to x*. Then it invokes A and plays the security game with A as a challenger that can respond appropriately for every value that is not x* and every range that does not contain x*. If A sets x* as its challenge, then B responds with y* and returns A's guess. Otherwise, A has either made a query which is a prefix of x* or submitted a challenge different from x*, so B terminates the game it plays with A and returns a random bit as its guess for its own challenge.

Let E be the event that B guesses A's challenge, i.e., A's challenge is x*. By the description of B we have that

Pr[G^B_SEC(1^λ) = 1 ∧ ¬E] = 1/2 · (1 − 1/2^n) and

Pr[G^B_SEC(1^λ) = 1 ∧ E] = 1/2^n · Pr[G^A_SEC(1^λ) = 1] .

Since B is non-adaptive, by Lemma 2 we get that for some negligible function ε(·), Pr[G^B_SEC(1^λ) = 1] ≤ 1/2 + ε(λ). By adding the above equations we have that

1/2^n · (Pr[G^A_SEC(1^λ) = 1] − 1/2) + 1/2 ≤ 1/2 + ε(λ) ,

so Pr[G^A_SEC(1^λ) = 1] ≤ 1/2 + 2^n · ε(λ) ≤ 1/2 + 2λ^γ · ε(λ).
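The n covering queries made by B have a simple mechanical description: for each bit position of x*, keep the higher-order bits and flip that bit, yielding the roots of the n subtrees that together partition [0, 2^n − 1] \ {x*}. The following is an illustrative sketch (prefixes are MSB-first bit strings; function names are ours, not the paper's):

```python
def complement_cover(x_star: int, n: int) -> list[str]:
    """Return the n prefixes (x*_{n-1} XOR 1), (x*_{n-1} x*_{n-2} XOR 1), ...,
    whose subtrees partition [0, 2^n - 1] minus {x*}."""
    bits = format(x_star, f"0{n}b")  # MSB-first binary expansion of x*
    return [bits[:i] + str(1 - int(bits[i])) for i in range(n)]

def leaves(prefix: str, n: int) -> set[int]:
    """All leaves of the depth-n tree lying under a given prefix."""
    pad = n - len(prefix)
    base = int(prefix, 2) << pad
    return set(range(base, base + (1 << pad)))

# Example: n = 4, x* = 5 -> the 4 subtrees cover every leaf except 5.
cover = complement_cover(5, 4)
covered = set().union(*(leaves(p, 4) for p in cover))
assert covered == set(range(16)) - {5}
```

Each flipped prefix is the sibling of a node on the root-to-x* path, so the n subtrees are pairwise disjoint and miss exactly the leaf x*.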

We apply Lemma 3 to prove the subtree security of BRC.

Lemma 4 The BRC scheme with depth n and maximum range size λ^γ is subtree secure.

Proof. See Appendix.

Finally, we prove that the subtree security of BRC implies the security of both BRC and URC.

Theorem 3 The BRC and URC schemes with depth n and maximum range size λ^γ are secure.

Proof. Let A be a PPT adversary that wins the DPRF security game of either BRC or URC with non-negligible advantage α(·). As noted in the beginning of this section, any query A makes can be represented as a sequence of O(log(λ^γ)) subtrees, or equivalently of O(log(λ^γ)) prefixes. Thus, we can construct a prefix-only adversary A′ that invokes A and, when it receives a query sequence 〈x^1_{n−1}···x^1_{t_1}, ..., x^m_{n−1}···x^m_{t_m}〉 from A, makes all prefix queries separately, receives y_1, ..., y_m, and answers with 〈y_1, ..., y_m〉. A′ also transfers A's challenge and outputs its guess. Therefore, it wins the DPRF security game with advantage α(·), which contradicts Lemma 4.

4.4 Policy Privacy

This section analyzes the policy privacy of URC. According to the lower bound argument we gave in Section 3, URC cannot be expected to satisfy the general policy-privacy property, because it is efficient. We illustrate this explicitly with a toy example. For challenge ranges [2, 5] and [4, 7], the trapdoors will contain PRF values corresponding to subtrees covering the ranges as [2, 3], {4}, {5} and [4, 5], {6}, {7}, respectively. Therefore, the adversary can issue a query for leaf 4 and receive a PRF value y. Having 〈(y_1, 1), (y_2, 0), (y_3, 0)〉 as challenge trapdoor, it can check whether y_2 = y, which happens only when [2, 5] was chosen by the challenger.

Nevertheless, in the theorem below we prove that URC achieves union policy privacy. The above attack is circumvented because, in the union policy-privacy game, the adversary cannot obtain a PRF value for a leaf in the intersection of the challenge ranges, i.e., for 4 and 5.

Theorem 4 The URC scheme with depth n and maximum range size λ^γ is a DPRF with union policy privacy.

Proof. See Appendix.

5. APPLICATIONS

In this section we discuss interesting applications of the general DPRF primitive and our specialized constructions for range-based policies. We stress, though, that their applicability is not limited to these scenarios; we are confident that they can capture a much wider set of applications.

Authentication and access control in RFID. Radio Frequency Identification (RFID) is a popular technology that is expected to become ubiquitous in the near future. An RFID tag is a small chip with an antenna. It typically stores a unique ID along with other data, which can be transmitted to a reading device lying within a certain range of the tag. Suppose that a trusted center (TC) possesses a set of RFID tags (attached to books, clothes, etc.), and distributes RFID readers to specified locations (e.g., libraries, restaurants, etc.). Whenever a person or object carrying a tag lies in proximity to a reader, the tag transmits its data (e.g., the title of a book, the brand of a jacket, etc.). The TC can then retrieve these data from the RFID readers and mine useful information (e.g., hotlist books, clothes, etc.).

Despite its merits, RFID technology is challenged by security and privacy issues. For example, due to the availability and low cost of RFID tags, one can easily create tags with arbitrary information. As such, an adversary may impersonate other tags and provide falsified data to legitimate readers. On the other hand, a reader can receive data from any tag in its vicinity. Therefore, sensitive information may be leaked to a reader controlled by an adversary. For example, the adversary may learn the ID and the title of a book stored in a tag, match it with public library records, and discover the identity and reading habits of an individual.

Motivated by the above, authentication and access control in RFID-based systems have been studied in the literature. A notable paradigm was introduced in [30], which can directly benefit from DPRFs. At a high level, every tag is associated with a key, and the TC delegates to a reader a set of these keys (i.e., the reader is authorized to authenticate and access data from only a subset of the tags). The goal is for the TC to reduce certain costs, e.g., the size of the delegation information required to derive the tag keys, while maintaining a high number of distinct keys in order to ensure that attacks can be compartmentalized.

Observe that a DPRF (F, T, C) is directly applicable to the above setting. F is defined on the domain of the tag IDs, and its range is the set of tag keys. Given a delegation predicate on the tag IDs, the TC generates a trapdoor via algorithm T and sends it to the reader. The latter runs C on the trapdoor to retrieve the tag keys. In fact, for the special case where the access policy is a range of IDs, the delegation protocol suggested in [30] is identical to the non-private BRC scheme (we should stress, though, that [30] lacks rigorous definitions and proofs). Range-based policies are meaningful, since tag IDs may be sequenced according to some common theme (e.g., books on the same topic are assigned consecutive tag IDs). In this case, a range policy concisely describes a set of tags (e.g., books about a certain religion) and, hence, the system can enjoy the logarithmic delegation size of BRC. However, as explained in Section 4, BRC leaks the position of the IDs in the tree, which may further leak information about the tags. Although [30] addresses tag privacy, it provides no privacy formulation and overlooks the above structural leakage. This can be mitigated by directly applying our policy-private URC construction for delegating tag keys to the readers. To sum up, DPRFs find excellent application in authentication and access control in RFIDs, enabling bandwidth-efficient tag key delegation from the TC to the reader. Moreover, policy-private DPRFs provide stronger protection for the tag IDs against the readers.
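The logarithmic delegation size stems from covering a range of IDs with a few maximal full subtrees of the underlying tree. The sketch below computes such a cover with a greedy dyadic decomposition; it is an illustrative simplification under our own naming, not the paper's exact BRC algorithm:

```python
def dyadic_cover(a: int, b: int) -> list[tuple[int, int]]:
    """Greedily cover [a, b] with maximal aligned dyadic intervals
    [k*2^d, (k+1)*2^d - 1], i.e., full subtrees of the ID tree."""
    cover = []
    while a <= b:
        d = 0
        # Grow the subtree rooted at a while it stays aligned and inside [a, b].
        while a % (1 << (d + 1)) == 0 and a + (1 << (d + 1)) - 1 <= b:
            d += 1
        cover.append((a, a + (1 << d) - 1))
        a += 1 << d
    return cover

# The range [2, 5] is covered by the two subtrees [2, 3] and [4, 5]; a reader
# holding the two corresponding partial keys can derive all four tag keys.
assert dyadic_cover(2, 5) == [(2, 3), (4, 5)]
```

For a range of N consecutive IDs the cover has O(log N) intervals, so the TC ships O(log N) partial keys instead of N individual tag keys.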

Batch queries in searchable symmetric encryption. Searchable symmetric encryption (SSE) (e.g., [11, 24]) enables processing queries directly on ciphertexts generated with symmetric encryption. Although SSE corresponds to a general paradigm, here we focus on the definitions and schemes of [11]. Previous works support the special case of keyword queries and provide an acceptable level of provable security. The general framework underlying [11] is as follows. In an offline stage, a client encrypts his data with his secret key k and uploads the ciphertexts c to an untrusted server. He also creates and sends a secure index I on the data for efficient keyword search, which is essentially an encrypted lookup table or inverted index. Given a keyword w, the client generates a query token τ_w using k and forwards it to the server. It is important to stress that this “trapdoor” is merely comprised of one or more PRF values computed on w with k, which were used as keys to encrypt I. For simplicity and w.l.o.g., we assume that τ_w is a single PRF value. The server uses τ_w on I and retrieves the IDs of the ciphertexts associated with w. The results c_1, ..., c_m are retrieved and transmitted back to the client, who eventually decrypts them with his key. The security goal is to protect both the data and the keyword from the server.

Suppose that the client wishes to search for a batch of N keywords w_1, ..., w_N. For instance, the client may ask for documents that contain multiple keywords of his choice (instead of just a single one). As another example, assume that the client's data are employee records, and each record contains a salary attribute that takes as values intervals of the form [iK, (i+1)K] (i.e., instead of exact salaries). These intervals can serve as keywords in SSE search. Suppose that the client wishes to retrieve records with salaries in a range of intervals, e.g., [1K, 10K]. Observe that this cannot be processed with a single keyword query (no record is associated with [1K, 10K]). To overcome this while utilizing the SSE functionality, the client can ask 9 distinct queries with keywords “[1K, 2K]”, “[2K, 3K]”, ..., “[9K, 10K]”, which cover the query range [1K, 10K]. Such scenarios are handled with “traditional” SSE as shown in Figure 5(a). Given a predicate P that describes the keywords w_1, ..., w_N in the batch query, the client generates N trapdoors τ_{w_1}, ..., τ_{w_N} using the standard SSE trapdoor algorithm. These trapdoors are sent to the server, which then searches I with every τ_{w_i} following the SSE protocol. Obviously, for large N, the client's computational and communication cost is greatly impacted.

Figure 5: Batch keyword query processing in SSE. [Figure: in panel (a), “traditional” SSE, the client (holding k, P) sends trapdoors τ_{w_1}, ..., τ_{w_N} to the server (holding I, c), which searches and returns c_1, ..., c_m; in panel (b), SSE augmented with DPRFs, the client sends a single trapdoor τ, which the server expands via C into τ_{w_1}, ..., τ_{w_N} before searching.]

We can augment the discussed SSE framework with DPRF functionality, in order to support batch queries with sublinear (in N) processing and communication cost at the client, while providing provable security along the lines of [11]. Figure 5(b) illustrates our enhanced SSE architecture. Recall that τ_{w_i} is practically a PRF value computed on w_i with key k. Therefore, instead of computing a PRF value for every w_i itself, the client delegates the computation of these PRF values to the server by employing a DPRF scheme (F, T, C), where F is defined over the domain of the keywords. Given predicate P and k, the client runs T and generates a trapdoor τ, which is sent to the server. The latter executes C on τ to produce τ_{w_1}, ..., τ_{w_N}. Execution then proceeds in the same manner as in “traditional” SSE. Observe that, for the special case of ranges, if URC is used as the DPRF, the computational and communication cost at the client decreases from O(N) to O(log(N)). This transformation works “out of the box” in combination with any SSE scheme that uses a PRF for creating tokens.
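Concretely, the server-side C algorithm is plain GGM subtree evaluation: each partial key in the trapdoor is expanded downwards to the leaves beneath its node, one token per keyword. The following is a toy sketch under our own assumptions (HMAC-SHA256 stands in for the two halves G_0, G_1 of a length-doubling PRG; the node keys and depths are made up):

```python
import hashlib
import hmac

def G(key: bytes, bit: int) -> bytes:
    """One half of a length-doubling PRG: G_bit(key)."""
    return hmac.new(key, bytes([bit]), hashlib.sha256).digest()

def expand(partial_key: bytes, depth_below: int) -> list[bytes]:
    """C algorithm for one covering node: derive the 2^depth_below
    leaf tokens under the node, given its partial GGM key."""
    level = [partial_key]
    for _ in range(depth_below):
        level = [G(k, b) for k in level for b in (0, 1)]
    return level

# A trapdoor of two partial keys (subtrees of height 2 and 1)
# expands into 4 + 2 = 6 per-keyword search tokens.
trapdoor = [(b"node-key-A", 2), (b"node-key-B", 1)]
tokens = [t for pk, d in trapdoor for t in expand(pk, d)]
assert len(tokens) == 6
```

The client thus transmits O(log N) partial keys while the server performs the O(N) leaf derivations, matching the cost shift described above.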

The above framework can be proven secure against adaptive adversaries along the lines of [11]. The most important alteration in the security game and proof is the formulation of the information leakage of the C algorithm of the DPRF. In particular, the level of keyword privacy provided by the construction is dictated by the level of policy privacy attained by the underlying DPRF. Specifically, given a DPRF with multi-instance policy privacy, it is easy to show that the same level of keyword privacy as in [11] can be achieved. Weaker notions of policy privacy (such as single-instance or union) result in correspondingly weaker notions of keyword privacy. For instance, recall that union policy privacy ensures indistinguishability of delegation queries not intersecting with previously asked queries. Thus, combining PRF-based SSE with our URC construction will provide this level of keyword privacy. Given the lower bound arguments for tree-wise constructions we sketched in Section 3, the privacy loss incurred by this tree-based design is the unavoidable cost for the exponential efficiency improvement in client-to-server communication. Efficient constructions attaining a higher level of policy privacy might be feasible, but they will have to follow a non-tree-based design paradigm.

Broadcast encryption. In a broadcast encryption scheme (e.g., [14, 31, 32]), a sender wishes to transmit data to a set of receivers so that at each transmission the set of receivers excluded from the recipient set can be chosen on the fly by the sender. In particular, this means that the sender has to be able to make an initial key assignment to the recipients, and then suitably use the key material so that only the specific set of users of its choosing can receive the message. In such schemes, it was observed early on that there is a natural tradeoff between receiver memory storage and ciphertext length (e.g., see the lower bounds in [26]). The intuition is that if the receivers have more keys, the sender has more flexibility in the way it can encrypt a message to be transmitted.

In the above sense, one can think of the key assignment step of a broadcast encryption scheme as a PRF defined over the set Φ that contains all distinct subsets to which the broadcast encryption scheme assigns a distinct key. Given this configuration, a user u will have to obtain all the keys corresponding to subsets S ∈ Φ for which it holds that u ∈ S (we denote those subsets by S_u ⊆ Φ). In DPRF language this corresponds to a delegation policy for a PRF: users will need to store the trapdoor that enables the evaluation of any value in the delegated key set S_u.

Seen in this way, any DPRF is a key assignment mechanism for a broadcast encryption scheme that saves space in receiver storage. For example, our constructions for range-based policies give rise to the following broadcast encryption scheme. Receivers [n] = {1, ..., n} are placed in sequence; each receiver u ∈ [n] is initialized with the trapdoor for a range [u − t, u + t] for some predetermined parameter t ∈ Z. In this broadcast encryption scheme the sender can very efficiently enable any range of receivers positioned at distance at most t from a fixed location v ∈ [n]. This is done with a single ciphertext (encrypted with the key of location v). Any receiver of sufficient proximity t to location v can derive the corresponding decryption key from its trapdoor. Furthermore, given the efficiency of our construction, storage on receivers is only logarithmic in t. While the semantics of this scheme are more restricted than those of a full-fledged broadcast encryption scheme (which enables the sender to exclude any subset of users on demand), it succinctly illustrates the relation between broadcast encryption and DPRFs; our notion will motivate further investigation of the relation between the two primitives from a construction point of view. Specifically, an efficient DPRF with domain Φ over a policy set P = {S_u | u ∈ [n]} will provide an efficient key assignment for a broadcast encryption scheme operating over Φ. Interestingly, the reverse is also true: a broadcast encryption scheme with efficient key assignment will provide a DPRF with domain Φ and the policy set {S_u | u ∈ [n]}.
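The enablement semantics of this range-based scheme can be sketched in a few lines (illustrative only; actual key derivation would run the DPRF's C algorithm on the stored trapdoor):

```python
def can_decrypt(u: int, t: int, v: int) -> bool:
    """Receiver u holds the trapdoor for the range [u - t, u + t];
    it can derive the key for location v iff v lies in that range."""
    return u - t <= v <= u + t

# The sender encrypts under the key of location v = 10 with t = 3:
# exactly the receivers 7, ..., 13 can derive the decryption key.
enabled = [u for u in range(1, 21) if can_decrypt(u, 3, 10)]
assert enabled == list(range(7, 14))
```

Note the symmetry: receiver u can open ciphertexts for location v exactly when |u − v| ≤ t, so a single ciphertext enables a contiguous window of 2t + 1 receivers.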

Regarding policy privacy, it is interesting to point out that this security property is as yet unstudied in the domain of broadcast encryption. A different privacy notion, considered in [3, 13, 25], deals with the structure of ciphertexts in such schemes. Our policy privacy, on the other hand, deals with the privacy of memory contents from the receiver's point of view. Maintaining the indistinguishability of storage contents is a useful security measure in broadcast encryption schemes, and our DPRF primitive will motivate the study of this security property in the context of broadcast encryption; note that none of the tree-like key-delegation methods used in broadcast encryption schemes prior to our work satisfies policy privacy.

6. CONCLUSION

We have introduced the concept of delegatable pseudorandom functions (DPRFs), a new cryptographic primitive that allows for policy-based computation of PRF values at an untrusted proxy, without knowledge of the secret key or even the input values. We provided formal definitions of the core properties of DPRFs: (1) correctness, the ability of the proxy to compute PRF values only for inputs that satisfy a given predicate; (2) security, the standard pseudorandomness guarantee, but against a stronger adversary that also issues delegation queries; and (3) policy privacy, which prevents leakage of the secret preimages of the computed PRF values. Moreover, we presented two DPRF constructions for PRF delegation controlled by range-based policies, along with a comprehensive analysis of their security and privacy guarantees and some inherent trade-offs with efficiency. Our proposed DPRF schemes are generic yet practical, based on the well-understood and widely adopted GGM design framework for PRFs and, as we showed, they find direct application in many key-delegation and key-derivation settings, providing interesting new results.

Further exploration of DPRFs holds promise for many directions of interest. Open problems include designing DPRFs for other classes of predicates, establishing upper and lower bounds on the connection between efficiency and policy privacy, and studying applications in other settings.

Acknowledgments

We thank all anonymous reviewers for providing detailed comments and suggestions. The first and fourth authors were supported by the projects CODAMODA of the European Research Council, Marie Curie RECUP, and FINER of GSRT, respectively.

7. REFERENCES

[1] M. J. Atallah and K. B. Frikken. Securely outsourcing linear algebra computations. In Proc. of the 5th Symposium on Information, Computer and Comm. Security (ASIACCS), pages 48–59. ACM, 2010.

[2] G. Ateniese, K. Fu, M. Green, and S. Hohenberger. Improved proxy re-encryption schemes with applications to secure distributed storage. ACM Trans. Inf. Syst. Secur., 9(1):1–30, Feb. 2006.

[3] A. Barth, D. Boneh, and B. Waters. Privacy in encrypted content distribution using private broadcast encryption. In Proc. of the 10th Int. Conf. on Financial Cryptography and Data Security, pages 52–64. Springer, 2006.

[4] S. Benabbas, R. Gennaro, and Y. Vahlis. Verifiable delegation of computation over large datasets. In Proc. of the 31st Annual Conf. on Advances in Cryptology (CRYPTO), pages 111–131. Springer, 2011.

[5] M. Blaze, G. Bleumer, and M. Strauss. Divertible protocols and atomic proxy cryptography. In Proc. of the 17th Int. Conf. on the Theory and Application of Cryptographic Techniques (EUROCRYPT), pages 127–144. Springer, 1998.

[6] A. Boldyreva, A. Palacio, and B. Warinschi. Secure proxy signature schemes for delegation of signing rights. Journal of Cryptology, 25:57–115, 2012.

[7] D. Boneh and B. Waters. Constrained pseudorandom functions and their applications. In Proc. of the 19th Int. Conf. on the Theory and Application of Cryptology and Information Security (ASIACRYPT), 2013. To appear; full version in Cryptology ePrint Archive, 2013:352.

[8] E. Boyle, S. Goldwasser, and I. Ivan. Functional signatures and pseudorandom functions. IACR Cryptology ePrint Archive, 2013:401, 2013.

[9] R. Canetti, B. Riva, and G. N. Rothblum. Practical delegation of computation using multiple servers. In Proc. of the 18th Conf. on Computer and Comm. Security (CCS), pages 445–454. ACM, 2011.

[10] K.-M. Chung, Y. T. Kalai, and S. P. Vadhan. Improved delegation of computation using fully homomorphic encryption. In Proc. of the 30th Annual Conf. on Advances in Cryptology (CRYPTO), pages 483–501. Springer, 2010.

[11] R. Curtmola, J. Garay, S. Kamara, and R. Ostrovsky. Searchable symmetric encryption: Improved definitions and efficient constructions. In Proc. of the 13th Conf. on Computer and Comm. Security (CCS), pages 79–88. ACM, 2006.

[12] Y. Dodis and A. Yampolskiy. A verifiable random function with short proofs and keys. In Proc. of the 8th Int. Workshop on Public Key Cryptography (PKC), pages 416–431. Springer, 2005.

[13] N. Fazio and I. M. Perera. Outsider-anonymous broadcast encryption with sublinear ciphertexts. In Proc. of the 15th Int. Conf. on Public Key Cryptography (PKC), pages 225–242. Springer, 2012.

[14] A. Fiat and M. Naor. Broadcast encryption. In Proc. of the 13th Annual Conf. on Advances in Cryptology (CRYPTO), pages 480–491. Springer, 1993.

[15] D. Fiore and R. Gennaro. Publicly verifiable delegation of large polynomials and matrix computations, with applications. In Proc. of the 19th Conf. on Computer and Comm. Security (CCS), pages 501–512. ACM, 2012.

[16] M. J. Freedman, Y. Ishai, B. Pinkas, and O. Reingold. Keyword search and oblivious pseudorandom functions. In Proc. of the 2nd Theory of Cryptography Conference (TCC), pages 303–324. Springer, 2005.

[17] O. Goldreich, S. Goldwasser, and S. Micali. How to construct random functions. J. ACM, 33(4):792–807, 1986.

[18] S. Goldwasser, Y. T. Kalai, and G. N. Rothblum. Delegating computation: interactive proofs for muggles. In Proc. of the 40th Annual Symposium on Theory of Computing (STOC), pages 113–122. ACM, 2008.

[19] M. Green and G. Ateniese. Identity-based proxy re-encryption. In Proc. of the 5th Int. Conf. on Applied Cryptography and Network Security (ACNS), pages 288–306. Springer, 2007.

[20] S. Hohenberger and A. Lysyanskaya. How to securely outsource cryptographic computations. In Proc. of the 2nd Theory of Cryptography Conference (TCC), pages 264–282. Springer, 2005.

[21] G. Itkis. Handbook of Information Security, chapter Forward Security: Adaptive Cryptography—Time Evolution. John Wiley and Sons, 2006.

[22] A.-A. Ivan and Y. Dodis. Proxy cryptography revisited. In Proc. of the 10th Network and Distributed System Security Symposium (NDSS). The Internet Society, 2003.

[23] S. Jarecki and X. Liu. Efficient oblivious pseudorandom function with applications to adaptive OT and secure computation of set intersection. In Proc. of the 6th Theory of Cryptography Conference (TCC), pages 577–594. Springer, 2009.

[24] S. Kamara, C. Papamanthou, and T. Roeder. Dynamic searchable symmetric encryption. In Proc. of the 19th Conf. on Computer and Comm. Security (CCS), pages 965–976. ACM, 2012.

[25] B. Libert, K. G. Paterson, and E. A. Quaglia. Anonymous broadcast encryption: Adaptive security and efficient constructions in the standard model. In Proc. of the 15th Int. Conf. on Public Key Cryptography (PKC), pages 206–224. Springer, 2012.

[26] M. Luby and J. Staddon. Combinatorial bounds for broadcast encryption. In Proc. of the 17th Int. Conf. on the Theory and Application of Cryptographic Techniques (EUROCRYPT), pages 512–526. Springer, 1998.

[27] A. Lysyanskaya. Unique signatures and verifiable random functions from the DH-DDH separation. In Proc. of the 22nd Annual Conf. on Advances in Cryptology (CRYPTO), pages 597–612. Springer, 2002.

[28] M. Mambo, K. Usuda, and E. Okamoto. Proxy signatures for delegating signing operation. In Proc. of the 3rd Conf. on Computer and Comm. Security (CCS), pages 48–57. ACM, 1996.

[29] S. Micali, M. O. Rabin, and S. P. Vadhan. Verifiable random functions. In Proc. of the 40th Annual Symposium on Foundations of Computer Science (FOCS), pages 120–130. IEEE, 1999.

[30] D. Molnar, A. Soppera, and D. Wagner. A scalable, delegatable pseudonym protocol enabling ownership transfer of RFID tags. In Proc. of the 12th Int. Workshop on Selected Areas in Cryptography (SAC), pages 276–290. Springer, 2006.

[31] D. Naor, M. Naor, and J. Lotspiech. Revocation and tracing schemes for stateless receivers. In Proc. of the 21st Annual Conf. on Advances in Cryptology (CRYPTO), pages 41–62. Springer, 2001.

[32] M. Naor and B. Pinkas. Efficient trace and revoke schemes. Int. J. Inf. Sec., 9(6):411–424, 2010.

[33] B. Parno, M. Raykova, and V. Vaikuntanathan. How to delegate and verify in public: verifiable computation from attribute-based encryption. In Proc. of the 9th Theory of Cryptography Conference (TCC), pages 422–439. Springer, 2012.

APPENDIX

Proof of Lemma 4

By Lemma 3, it suffices to consider the case λ^γ < 2^{n−1}. This is the interesting case, since 2^n is normally superpolynomial in λ. Let A be a prefix-only adversary against BRC that makes at most q(λ) queries (including the challenge), and let d be the minimum integer such that λ^γ < 2^d. We construct a PPT PRF distinguisher B that makes oracle queries of fixed size n′ = n − d ≥ 1. On input 1^λ, B flips a coin b, invokes A, and initializes a security game, itself playing the challenger. By the bound on the size of the ranges, all queries of A have length greater than n′. Thus, for every query x_{n−1}···x_t, we have that t < d, and B responds by making a query x_{n−1}···x_d, which is of length n′, receiving a value y and answering to A with G_{x_t}(···(G_{x_{d−1}}(y))). When A submits a challenge, B acts as a normal challenger according to the coin flip b, utilizing its oracle to determine the value up to level d as above. Finally, B returns 1 iff A returns b. Clearly, when B's oracle is a PRF f_k on inputs of length n′, B returns 1 iff A wins, i.e., Pr[G^A_SEC(1^λ) = 1] = Pr[B^{f_k(·)}(1^λ) = 1]. Since f_k is a PRF, we have that for some negligible function ε(·),

|Pr[G^A_SEC(1^λ) = 1] − Pr[B^{R(·)}(1^λ) = 1]| ≤ ε(λ) . (1)

Consequently, we can construct a prefix-only adversary Ā against a BRC with depth d and maximum range size λ^γ as follows. Ā invokes A and chooses a random index j ∈ [q(λ)] and q(λ) − 1 random values k_1, ..., k_{j−1}, k_{j+1}, ..., k_{q(λ)} ∈ {0, 1}^λ. Index j reflects Ā's attempt to guess which of the possible distinct prefixes of length n′ appearing in A's queries will be the prefix of a prospective challenge query. Then Ā keeps count of the distinct prefixes of length n′ that gradually appear and reacts to a query x_{n−1}···x_t according to the following checks:

(i). x_{n−1}···x_d is the i-th prefix and i ≠ j: In this case, Ā responds with G_{x_t}(···(G_{x_{d−1}}(k_i))).

(ii). x_{n−1}···x_d is the j-th prefix: If all of the queries made by A that have the j-th prefix, together with x_{n−1}···x_t, cover the whole subtree T_j with root prefix x_{n−1}···x_d, then Ā terminates the game with A and chooses a leaf z_{d−1}···z_0 of T_j that has not been covered by A's previous queries. It submits z_{d−1}···z_0 as its challenge and returns a random bit. Otherwise, Ā makes query x_{d−1}···x_t and responds with the received value y.

(iii). t = 0 and x_{n−1}···x_0 is A's challenge: If x_{n−1}···x_d is not the j-th prefix, then Ā terminates the game with A, chooses a leaf z_{d−1}···z_0 of T_j not yet covered by A's queries, submits z_{d−1}···z_0 as its challenge, and returns a random bit. Otherwise, it submits challenge x_{d−1}···x_0, receives value y*, and responds with y*.

If Ā does not terminate the security game with A, then it returns A's guess. By the choice of d, 2^{d−1} ≤ λ^γ < 2^d, hence Lemma 3 implies that Ā has negligible distinguishing advantage. Thus for some negligible function δ(·),

Pr[G^Ā_SEC(1^λ) = 1] ≤ 1/2 + δ(λ) . (2)


By the description of Ā, the interaction between A and B in the case that B's oracle is random is fully simulated by Ā when the latter's guess for the prefix of A's challenge is correct. Formally, let E be the event that Ā guesses A's challenge. Then it holds that

Pr[B^{R(·)}(1^λ) = 1] = Pr[G^Ā_SEC(1^λ) = 1 | E] . (3)

By the description of Ā and (3), we have that

Pr[G^Ā_SEC(1^λ) = 1 ∧ ¬E] = 1/2 · (1 − 1/q(λ)) and

Pr[G^Ā_SEC(1^λ) = 1 ∧ E] = 1/q(λ) · Pr[B^{R(·)}(1^λ) = 1] ,

where we used that Pr[E] = 1/q(λ). Therefore,

Pr[G^Ā_SEC(1^λ) = 1] = 1/2 + 1/q(λ) · (Pr[B^{R(·)}(1^λ) = 1] − 1/2) ,

so by applying (2) we get

Pr[B^{R(·)}(1^λ) = 1] ≤ 1/2 + q(λ) · δ(λ) . (4)

Finally, by (1) and (4) we conclude that

Pr[G^A_SEC(1^λ) = 1] ≤ 1/2 + q(λ) · δ(λ) + ε(λ) = 1/2 + negl(λ) .

Proof of Theorem 4

Let A be a PPT adversary that wins the union policy-privacy game with non-negligible distinguishing advantage α(·). Let P_0, P_1 be the two challenge ranges and b the chosen random bit; hence A receives the challenge trapdoor τ_b for P_b. Denote each element of a decomposition D as (x, L) or (x, R) if it is integer x and belongs to the leftmost or rightmost sequence of D, respectively. Define the ordering <_D over the elements of D as follows: (x, L) <_D (y, R) or (x, R) <_D (y, L) if x < y, and (x, L) <_D (x, R). We define a sequence of hybrid games G^A_0(1^λ), ..., G^A_N(1^λ), where N is the maximum size of a trapdoor output by T_URC. As shown in Section 4.2, N = 2⌈log(λ^γ + 2)⌉ − 1. The game G_i is executed as the original game G^A_UPP(1^λ) during the pre-challenge and the post-challenge phase. Assume the pairs of the challenge trapdoor τ_b are arranged according to the ordering determined by the decomposition formed by the depths. The only modification in G^A_i(1^λ) is that in the first i pairs of τ_b, the partial PRF values are the same as τ_b's, while all the other elements are replaced by random values in {0, 1}^λ. In G^A_0(1^λ), the challenge trapdoor consists of a number of pairs of random values attached to certain integers, independently of the choice of b. Therefore, Pr[G^A_0(1^λ) = 1] = 1/2. Since G^A_N(1^λ) is the UPP game, it holds that

Pr[G^A_N(1^λ) = 1] − Pr[G^A_0(1^λ) = 1] ≥ α(λ) .

Let E_r be the event that the size of the challenge ranges |A_{P_0}|, |A_{P_1}| that A submits is r. Then for some challenge bit b ∈ {0, 1} and r ∈ [λ^γ]:

Pr[G^A_N(1^λ) = 1 ∧ b ∧ E_r] − Pr[G^A_0(1^λ) = 1 ∧ b ∧ E_r] ≥ α(λ)/(2λ^γ) ,

which implies that there exists an i ∈ [N] such that

Pr[G^A_i(1^λ) = 1 ∧ b ∧ E_r] − Pr[G^A_{i−1}(1^λ) = 1 ∧ b ∧ E_r] ≥ α(λ)/(2Nλ^γ) . (5)

We will show that for these fixed b, r, i, we can construct an adversary A_i for the security game of a BRC construction that has non-negligible winning advantage. The main idea is that A_i invokes A and simulates either G_i or G_{i−1} on selected challenge P_b of size r, depending on the value of the challenge bit b_i for the security game G^{A_i}_SEC(1^λ).

On input 1^λ, A_i computes the uniform decomposition U_r of r and arranges its elements according to <_{U_r}. Let u_i be the integer that appears in the i-th element of U_r. The BRC that A_i attacks has depth n − u_i. Specifically, A_i invokes A and answers all of its pre-challenge queries as follows: for each query x_{n−1}···x_t, if t ≥ u_i, it just transfers the query, receives value y, and responds with y. Otherwise, it makes query x_{n−1}···x_{u_i}, receives value y, and responds with G_{x_t}(···(G_{x_{u_i−1}}(y))). In the challenge phase, if |A_{P_b}| ≠ r, then A_i terminates the game with A, chooses a valid random challenge, and returns a random bit. Otherwise, it makes i − 1 queries and computes the <_{U_r}-first i − 1 partial delegation keys y_1, ..., y_{i−1} of τ_b as in the pre-challenge phase, and sets the string x*_{n−1}···x*_{u_i} that corresponds to the i-th partial key as its challenge, receiving y*. Note that since A is restricted from making queries within A_{P_0} ∪ A_{P_1}, it makes no queries with prefix x*_{n−1}···x*_{u_i}, thus A_i's challenge is valid. It arranges the values y_1, ..., y_{i−1}, y* according to the order that τ_b imposes and "fills" the |U_r| − i remaining positions of an array-like trapdoor τ_b with |U_r| − i pairs consisting of random values from {0, 1}^λ along with the corresponding depths. It returns τ_b to A and answers A's post-challenge queries as in the pre-challenge phase. If A returns b, then A_i returns 1; otherwise it returns a random bit.

The probability that A_i wins the security game is

Pr[G^{A_i}_SEC(1^λ) = 1] = Pr[G^{A_i}_SEC(1^λ) = 1 ∧ ¬E_r]
  + Pr[G^{A_i}_SEC(1^λ) = 1 ∧ E_r ∧ b_i = 1]
  + Pr[G^{A_i}_SEC(1^λ) = 1 ∧ E_r ∧ b_i = 0] . (6)

It holds that Pr[G^{A_i}_SEC(1^λ) = 1 ∧ ¬E_r] = 1/2 · (1 − Pr[E_r]). For the other two terms on the right side of (6), we observe that when b_i = 1, A_i simulates G_i when b and E_r occur, whereas when b_i = 0, A_i simulates G_{i−1} when b and E_r occur. Therefore, by the description of A_i we have that

Pr[G^{A_i}_SEC(1^λ) = 1 ∧ E_r ∧ b_i = 1]
  = 1 · Pr[G^A_i(1^λ) = 1 ∧ b ∧ E_r] + 1/2 · Pr[G^A_i(1^λ) ≠ 1 ∧ b ∧ E_r]
  = 1/2 · Pr[b ∧ E_r] + 1/2 · Pr[G^A_i(1^λ) = 1 ∧ b ∧ E_r]

and

Pr[G^{A_i}_SEC(1^λ) = 1 ∧ E_r ∧ b_i = 0]
  = 0 · Pr[G^A_{i−1}(1^λ) = 1 ∧ b ∧ E_r] + 1/2 · Pr[G^A_{i−1}(1^λ) ≠ 1 ∧ b ∧ E_r]
  = 1/2 · Pr[b ∧ E_r] − 1/2 · Pr[G^A_{i−1}(1^λ) = 1 ∧ b ∧ E_r] .

By the independence of b and E_r, we have that Pr[b ∧ E_r] = 1/2 · Pr[E_r]. Thus, we evaluate Pr[G^{A_i}_SEC(1^λ) = 1] according to (6) as

Pr[G^{A_i}_SEC(1^λ) = 1] = 1/2 + 1/2 · (Pr[G^A_i(1^λ) = 1 ∧ b ∧ E_r] − Pr[G^A_{i−1}(1^λ) = 1 ∧ b ∧ E_r]) .

Therefore, by (5), Pr[G^{A_i}_SEC(1^λ) = 1] ≥ 1/2 + α(λ)/(4Nλ^γ), which contradicts Theorem 3.