Search to Decision Reductions for Knapsacks and LWE
October 3, 2011
Daniele Micciancio, Petros Mol
UCSD Theory Seminar

Page 1

Search to Decision Reductions for Knapsacks and LWE

October 3, 2011
Daniele Micciancio, Petros Mol
UCSD Theory Seminar

Page 2

Number Theoretic Cryptography

• Number theory: standard source of hard problems
  – Factoring: given N = pq, find p (or q) (p, q: large primes)
  – Discrete Log: given g, g^x (mod p), find x

• Very influential over the years
• Extensively studied (algorithms & constructions)

• Two major concerns
  i. How do we know that a "random" N is hard to factor?
  ii. Algorithmic progress
     - subexponential (classical) algorithms
     - polynomial quantum algorithms
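As a concrete reference point for the Discrete Log problem above, here is a minimal brute-force sketch. The helper name `dlog_brute` and the toy modulus are illustrative choices, not part of the talk; real instances use moduli far too large for this exponential-time loop.

```python
def dlog_brute(g, y, p):
    """Exponential-time discrete log: find x with g^x = y (mod p)."""
    acc = 1
    for x in range(p - 1):
        if acc == y:
            return x
        acc = acc * g % p  # maintain acc = g^x mod p
    return None

# 2 generates the multiplicative group mod 11; recover x = 7 from (g, g^x).
p, g = 11, 2
y = pow(g, 7, p)
assert dlog_brute(g, y, p) == 7
```

The point of the slide is precisely that no algorithm fundamentally better than subexponential is known classically, while quantum algorithms solve both problems in polynomial time.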

Page 3

Cryptography from Learning Problems

secret s; integers n, q (public)

Given noiseless samples (a, <a, s> mod q), finding s is easy: just need ~ n samples (Gaussian elimination).

Page 4

New Problem: Learning With Errors (LWE)

secret s; integers n, q (public)
e: small error from a known distribution

Samples: (a, b = <a, s> + e (mod q)), a random

Finding s is possibly much harder because of the noise.

Compactly: b = A s + e (mod q), where A is a random m x n matrix, s is the secret, and e is a small error vector.
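A minimal sketch of LWE sample generation. The rounded-Gaussian error and the toy parameters are our illustrative stand-ins for the "known distribution" the slide refers to, not the exact distribution from the talk.

```python
import numpy as np

def lwe_samples(n, m, q, sigma, rng):
    """Return (A, b, s, e) with b = A s + e (mod q).

    Error: rounded Gaussian of width sigma, an illustrative stand-in
    for the known error distribution on the slide."""
    A = rng.integers(0, q, size=(m, n))                           # public, random
    s = rng.integers(0, q, size=n)                                # secret
    e = np.rint(rng.normal(0.0, sigma, m)).astype(np.int64) % q   # small noise
    b = (A @ s + e) % q
    return A, b, s, e

rng = np.random.default_rng(0)
A, b, s, e = lwe_samples(n=8, m=16, q=97, sigma=1.0, rng=rng)
# Without e, s would follow from ~n samples by Gaussian elimination;
# the noise is what makes finding s (conjecturally) hard.
assert np.array_equal(b, (A @ s + e) % 97)
```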

Page 5

The many interpretations of LWE

• Algebraic: Solving noisy random linear equations

• Geometric: Bounded Distance Decoding in lattices

• Learning theory: Learning linear functions over Z_q under random "classification" noise

• Coding theory: Decoding random linear q-ary codes

Page 6

LWE Background

• Introduced by Regev [R05]
• q = 2, Bernoulli noise -> Learning Parity with Noise (LPN)

• Extremely successful in Cryptography
  • IND-CPA Public Key Encryption [Regev05]
  • Injective Trapdoor Functions / IND-CCA encryption [PW08]
  • Strongly Unforgeable Signatures [GPV08, CHKP10]
  • (Hierarchical) Identity Based Encryption [GPV08, CHKP10, ABB10]
  • Circular-Secure Encryption [ACPS09]
  • Leakage-Resilient Cryptography [AGV09, DGK+10, GKPV10]
  • (Fully) Homomorphic Encryption [GHV10, BV11]

Page 7

Why LWE?

• Worst-case/average-case reduction
  • Solving LWE* (an average-case problem) is at least as hard as solving certain hard lattice problems in the worst case
    *for a specific error distribution

• High resistance to algorithmic attacks
  • no quantum attacks known
  • no subexponential algorithms known

• Rich and intuitive structure
  • Linear operations over fields (or rings)
  • Certain homomorphic properties

• Ability to embed a trapdoor
• Wealth of cryptographic applications

Page 8

LWE: Search & Decision

Public parameters: n: size of the secret, m: # samples, q: modulus, and the error distribution

Search: Given (A, b = As + e), find s (or e).

Decision: Given (A, b), decide whether b = As + e or b is uniform.

Page 9

S-to-D: Worst-Case vs Average Case

• Clique (search): Given graph G and integer k, find a clique of size k in G, or say none exists.

• Clique (decision): Given graph G and integer k, output YES if G has a clique of size k and NO otherwise.

Theorem: Search <= Decision

Proof: Assume a decider D.
  for v in V:
    if D(G \ v, k) = YES then G <- G \ v
  return G

Crucial: D is a worst-case (perfect) decider.
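The reduction on this slide can be run end to end. The brute-force `has_k_clique` plays the role of the perfect decider D; the function names and the toy graph are our illustration.

```python
from itertools import combinations

def has_k_clique(G, k):
    """Perfect (worst-case) decider D: does G contain a k-clique?"""
    return any(all(u in G[v] for u, v in combinations(C, 2))
               for C in combinations(list(G), k))

def find_k_clique(G, k):
    """Search <= Decision: drop vertices while D still says YES;
    the vertices that survive form exactly a k-clique."""
    if not has_k_clique(G, k):
        return None  # "say none exists"
    for v in list(G):
        H = {u: G[u] - {v} for u in G if u != v}  # G \ v
        if has_k_clique(H, k):
            G = H
    return set(G)

# Triangle {0,1,2} plus a pendant vertex 3.
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
assert find_k_clique(G, 3) == {0, 1, 2}
```

At the end of the loop every remaining vertex is essential to every k-clique, so the surviving graph is the clique itself; this is why a perfect decider is crucial.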

Page 10

S-to-D: Worst-Case vs Average Case

This talk: D is an average-case (imperfect) decider:

Pr[ D(A, As + e) = YES ] - Pr[ D(A, t) = YES ] = δ > 1/poly(n)

probability taken over sampling the instance (t uniform) and D's randomness
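To make the advantage δ concrete, here is a Monte-Carlo estimate for a toy brute-force distinguisher on tiny parameters. Everything here (the decider `small_residual`, the parameters) is our illustration, not a real attack; it only shows what "advantage over uniform" means operationally.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n, m, q = 2, 8, 11

def small_residual(A, b, bound=1):
    """Toy decider D: say YES iff some secret s makes every entry of
    b - A s small mod q. Brute force over all q^n secrets, so only
    feasible for these tiny parameters."""
    for s in product(range(q), repeat=n):
        r = (b - A @ np.array(s)) % q
        if np.all(np.minimum(r, q - r) <= bound):
            return True
    return False

def advantage(trials=200):
    """Estimate Pr[D(A, As+e)=YES] - Pr[D(A, t)=YES] empirically."""
    yes_lwe = yes_uni = 0
    for _ in range(trials):
        A = rng.integers(0, q, (m, n))
        s = rng.integers(0, q, n)
        e = rng.integers(-1, 2, m)               # noise in {-1, 0, 1}
        yes_lwe += small_residual(A, (A @ s + e) % q)
        yes_uni += small_residual(A, rng.integers(0, q, m))
    return yes_lwe / trials - yes_uni / trials

assert advantage() > 0.5   # large for these toy parameters
```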

Page 11

Search-to-Decision reductions (S-to-D): search problems vs decision problems

Why do we care?
- all LWE-based constructions rely on decisional LWE
- strong indistinguishability flavor of security definitions
- their hardness is better understood

Page 12

Search-to-Decision reductions (S-to-D)

Why do we care?
- all LWE-based constructions rely on decisional LWE
- strong indistinguishability flavor of security definitions
- their hardness is better understood

S-to-D reductions: "Primitive Π is ABC-secure assuming search problem P is hard"

Page 13

Our results

• Powerful and usable criteria to establish Search-to-Decision equivalence for general classes of knapsack functions

• Use known techniques from Fourier analysis in a new context. Ideas potentially useful elsewhere

• Toolset for studying Search-to-Decision reductions for LWE with polynomially bounded noise
  - Subsume and extend previously known ones
  - Reductions are in addition sample-preserving

Page 15

Bounded knapsack functions over groups

Parameters:
- integer m
- finite abelian group G
- set S = {0,…, s - 1} of integers, s: poly(m)

(Random) Knapsack family
Sampling: pick g = (g_1, …, g_m) with each g_i uniform in G
Evaluation: f_g(x) = x_1 g_1 + … + x_m g_m in G, for x in S^m

Example: (random) modular subset sum: G = Z_N, S = {0, 1}
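The definitions above, instantiated for G = Z_q, can be sketched directly. The function names and toy parameters are our choices; s = 2 recovers the classic modular subset-sum example.

```python
import numpy as np

def knapsack_family(m, q, rng):
    """Random knapsack over G = Z_q: sample public weights g in G^m;
    f_g(x) = sum_i x_i g_i (mod q) for x in S^m, S = {0, ..., s-1}."""
    g = rng.integers(0, q, m)
    return g, lambda x: int(np.dot(g, x)) % q

rng = np.random.default_rng(4)
m, q, s = 10, 1009, 2                 # s = 2: classic modular subset sum
g, f = knapsack_family(m, q, rng)
x = rng.integers(0, s, m)             # secret input from S^m
y = f(x)                              # public value
# Inverting (g, y) is the search problem on the next slide; telling
# (g, y) from (g, uniform) is the decision problem.
assert y == int(np.dot(g, x)) % q
```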

Page 16

Knapsack functions: Computational problems

Invert (search)
  Input: g, g.x  (g public, x drawn from a distribution over S^m)
  Goal: Find x

Distinguish (decision)
  Input: samples from either (g, g.x) or (g, uniform over G)
  Goal: Label the samples

Glossary: If the decision problem is hard, the function is pseudorandom (PsR). If the search problem is hard, the function is One-Way.

Notation: a family of knapsacks over G with a given input distribution.

Page 17

Search-to-Decision: Known results

Decision as hard as search when…

[Fischer, Stern 96]: syndrome decoding
  G: vector group Z_2^k; x uniform over all m-bit vectors with Hamming weight w

[Impagliazzo, Naor 89]: (random) modular subset sum
  G: cyclic group Z_N; x uniform over {0, 1}^m

Page 18

Our contribution: S-to-D for general knapsack

Knapsack family with range G and input distribution over S^m, s: poly(m):

One-Way + PsR (auxiliary condition) => PsR

Page 19

Our contribution: S-to-D for general knapsack

Knapsack family with range G and input distribution over S^m, s: poly(m)

Main Theorem: One-Way + PsR (auxiliary condition) => PsR

Page 20

Our contribution: S-to-D for general knapsack

Main Theorem: One-Way + PsR (auxiliary condition) => PsR

The auxiliary PsR condition is much less restrictive than it seems:
✔ In most interesting cases it holds in a strong information-theoretic sense

Page 21

S-to-D for general knapsack: Examples

One-Way => PsR for:
• Any group G and any distribution over
• Any group G with prime exponent and any distribution

Subsumes [IN89, FS96] and more.

And many more… using known information-theoretic tools (LHL, entropy bounds, etc.)

Page 22

Proof overview

Reminder:
  Inverter:      Input: g, g.x.  Goal: Find x
  Distinguisher: Input: (g, g.x) or (g, uniform).  Goal: Distinguish

Page 23

Proof overview

• Approach not (entirely) new [GL89, IN89]

Inverter <= Predictor <= Distinguisher
  Inverter:      Input: g, g.x.     Goal: Find x
  Predictor:     Input: g, g.x, r.  Goal: find x.r (mod t)
  Distinguisher: Input: (g, g.x) or (g, uniform).  Goal: Distinguish

• Making it work in a general setting requires more advanced machinery and new technical ideas

Page 24

Proof Sketch: Step 1

Inverter <= Predictor
  Inverter:  Input: g, g.x.     Goal: Find x
  Predictor: Input: g, g.x, r.  Goal: find x.r (mod t)

Page 25

Proof Sketch: Step 1

Inverter <= Predictor
  Inverter:  Input: f, f(x).     Goal: Find x
  Predictor: Input: f, f(x), r.  Goal: find x.r (mod t)

This step holds for any function with domain S^m.

General conditions for inverting given noisy predictions of x.r (mod t), for possibly composite t:
- Goldreich–Levin: t = 2
- Goldreich, Rubinfeld, Sudan: t prime

Page 26

Digression: Fourier Transform

Functions h: Z_t^m -> C.

Basis functions: χ_a(x) = ω^(a.x), where ω = e^(2πi/t) and a ranges over Z_t^m

Fourier Transform: ĥ(a) = E_x[ h(x) · conj(χ_a(x)) ], x uniform in Z_t^m

Fourier Representation: h(x) = Σ_a ĥ(a) χ_a(x)
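The three definitions above can be checked with a naive implementation that is exponential in m; the learner on the next slide exists precisely to avoid this brute force. Code and names are our sketch.

```python
import numpy as np
from itertools import product

def fourier(h, t, m):
    """All Fourier coefficients of h : Z_t^m -> C,
    hhat(a) = E_x[ h(x) * conj(chi_a(x)) ],  chi_a(x) = omega^(a.x),
    omega = exp(2*pi*i/t). Brute force over all t^m points."""
    omega = np.exp(2j * np.pi / t)
    xs = list(product(range(t), repeat=m))
    return {a: sum(h(x) * omega ** (-sum(ai * xi for ai, xi in zip(a, x)))
                   for x in xs) / len(xs)
            for a in xs}

# Sanity check: h = chi_alpha has all of its energy on alpha.
t, m = 3, 2
alpha = (1, 2)
omega = np.exp(2j * np.pi / t)
h = lambda x: omega ** (x[0] * alpha[0] + x[1] * alpha[1])
coeffs = fourier(h, t, m)
assert abs(coeffs[alpha] - 1) < 1e-9
assert all(abs(c) < 1e-9 for a, c in coeffs.items() if a != alpha)
```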

Page 27

Digression: Learning heavy Fourier coefficients [AGS03]

Energy of a coefficient a: |ĥ(a)|^2

LHFC gets query access to h (pairs x_i, h(x_i)) and a threshold τ. It outputs a list L s.t.:
- if |ĥ(a)|^2 >= τ, then L will most likely include a
- LHFC runs in poly(m, 1/τ)

Page 28

Inverting <= Predicting (predictor)

Inverter:  Input: f, f(x).     Goal: Find x
Predictor: Input: f, f(x), r.  Goal: guess x.r (mod t)

Quality of predictor: perfect / imperfect (but useful) / imperfect (and useless)

Error distribution: e = guess - x.r (mod t)
bias: |E[ω^e]|, where ω = e^(2πi/t)

Page 29

Inverting <= Predicting (main idea)

Theorem: From a predictor with t >= s, running time T_P and bias ε, we get an inverter with running time poly(m, 1/ε)·T_P and success probability c·ε.

Proof Idea: On input f, f(x), the inverter runs LHFC, answering its queries by combining a small reduction (Red.) with the predictor (Pred.).

Page 30

Inverting <= Predicting (main idea)

Theorem: From a predictor with t >= s, running time T_P and bias ε, we get an inverter with running time poly(m, 1/ε)·T_P and success probability c·ε.

Proof Idea: When the predictor's bias is ε, the unknown x has energy >= ε^2 in the function h induced by the predictor's answers, so LHFC recovers x.

Page 31

Proof Sketch: Step 2

Predictor <= Distinguisher
  Predictor:     Input: g, g.x, r.  Goal: find x.r (mod t)
  Distinguisher: Input: (g, g.x) or (g, uniform).  Goal: Distinguish

• Reduction specific to knapsack functions
• Proof follows the outline of [IN89], but the technical details are very different.

Page 32

Predicting <= Distinguishing

Predictor:     Input: g, g.x, r.  Goal: Guess x.r (mod t)
Distinguisher: Input: (g, g.x) or (g, uniform).  Goal: Distinguish

Proof Idea
• Predictor makes an initial guess for x.r (mod t)
• Uses the guess to create an input to the distinguisher
• If the distinguisher outputs "knapsack", the predictor outputs the guess
• Otherwise, it revises the guess

Key Property
• Correct guess: the crafted input looks like a knapsack sample
• Incorrect guess: the crafted input is "closer" to uniform

• [IN89] used the same idea for subset sum
• For arbitrary groups the proof is significantly more involved

Page 33

Our results

• Powerful and usable criteria to establish Search-to-Decision equivalence for general classes of knapsack functions

• Use known techniques from Fourier analysis in a new context. Ideas potentially useful elsewhere

• Toolset for studying Search-to-Decision reductions for LWE with polynomially bounded noise
  - Subsume and extend previously known ones
  - Reductions are in addition sample-preserving

Page 34

What about LWE?

Given LWE samples (A, b = As + e) with A a random m x n matrix, let G be the parity-check matrix for the code generated by A: GA = 0 (mod q), so Gb = G(As + e) = Ge.

• If A is "random", G is also "random"
• The error e from LWE becomes the unknown input of the knapsack with weights g_1, g_2, …, g_m (the columns of G)
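The cancellation GA = 0, and hence G(As + e) = Ge, is easy to verify numerically. For simplicity we take A in systematic form [I_n ; A2] (a random A can be brought to this form when q is prime); the toy parameters are our choice.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, q = 3, 7, 11

# A in systematic form [I_n ; A2]; G = [-A2 | I_{m-n}] is then a
# parity-check matrix: G A = -A2 + A2 = 0 (mod q).
A2 = rng.integers(0, q, (m - n, n))
A = np.vstack([np.eye(n, dtype=np.int64), A2]) % q
G = np.hstack([(-A2) % q, np.eye(m - n, dtype=np.int64)])
assert np.all((G @ A) % q == 0)

# Multiplying an LWE sample by G cancels the secret s entirely:
s = rng.integers(0, q, n)
e = rng.integers(-1, 2, m)                 # small, unknown error
b = (A @ s + e) % q
assert np.array_equal((G @ b) % q, (G @ e) % q)  # (G, Gb): knapsack in e
```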

Page 35

What about LWE?

The transformation works in the other direction as well.

Putting all the pieces together…

(A, As + e)  <=  (G, Ge)  <=  (G', G'e)  <=  (A', A's' + e)
   Search        Search       Decision       Decision

The middle step is the S-to-D reduction for knapsacks; the outer steps are the LWE <-> knapsack transformations.

Page 36

LWE Implications

LWE reductions follow from knapsack reductions over vector groups.

All known Search-to-Decision results for LWE/LPN with bounded error [BFKL93, R05, ACPS09, KSS10] follow as a direct corollary.

Search-to-Decision for new instantiations of LWE.

Page 37

LWE: Sample Preserving S-to-D

If we can distinguish m LWE samples from uniform with 1/poly(n) advantage, we can find s with 1/poly(n) probability given m LWE samples.

Caveat: the inverting probability goes down (seems unavoidable).

All reductions are in addition sample-preserving: previous reductions turn a decision instance (A, b) with m samples into a search instance (A', b') with poly(m) >= m samples; ours keep the number of samples unchanged.

Page 38

Why care about #samples?

LWE-based schemes often expose a certain number of samples, say m.

With sample-preserving S-to-D we can base their security on the hardness of search LWE with m samples.

Concrete algorithmic attacks against LWE [MR09, AG11] are sensitive to the number of exposed samples:
• for some parameters, LWE is completely broken by [AG11] if the number of given samples is above a certain threshold

Page 39

Open problems

Sample-preserving reductions for:

1. LWE with unbounded noise
   - Used in various settings [Pei09, GKPV10, BV11b, BPR11]
   - reductions are known [Pei09] but are not sample-preserving

2. ring-LWE
   - Samples (a, a·s + e) where a, s, e are drawn from R = Z_q[x]/<f(x)>
   - non-sample-preserving reductions are known [LPR10]
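For concreteness, one ring-LWE sample can be sketched in a few lines. The choice f(x) = x^n + 1 (with n a power of 2), the toy parameters, and the helper `ring_mul` are our illustrative assumptions, not the general setting of [LPR10].

```python
import numpy as np

n, q = 8, 97   # toy parameters; f(x) = x^n + 1

def ring_mul(a, b):
    """Multiply coefficient vectors in Z_q[x]/<x^n + 1>: ordinary
    convolution, with degree >= n terms wrapped around negated
    (since x^n = -1)."""
    c = np.convolve(a, b) % q          # product of degree <= 2n-2
    lo, hi = c[:n].copy(), c[n:]
    lo[:hi.size] = (lo[:hi.size] - hi) % q
    return lo

rng = np.random.default_rng(3)
a = rng.integers(0, q, n)              # public ring element
s = rng.integers(0, q, n)              # secret ring element
e = rng.integers(-1, 2, n) % q         # small error
b = (ring_mul(a, s) + e) % q           # one ring-LWE sample (a, b)

# Wrap-around check: x^(n-1) * x = x^n = -1 in R.
u = np.zeros(n, dtype=np.int64); u[n - 1] = 1
v = np.zeros(n, dtype=np.int64); v[1] = 1
assert list(ring_mul(u, v)) == [q - 1] + [0] * (n - 1)
```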