Differentially Private Distributed Convex Optimization
via Functional Perturbation
Erfan Nozari
Department of Mechanical and Aerospace Engineering
University of California, San Diego
http://carmenere.ucsd.edu/erfan
July 6, 2016
Joint work with Pavankumar Tallapragada and Jorge Cortes
Distributed Coordination
What if local information is sensitive?
Erfan Nozari (UCSD) Differentially Private Distributed Optimization 2
Motivating Scenario: Optimal EV Charging [Han et al., 2014]
Central aggregator solves:

    minimize_{r_1, ..., r_n}  U(∑_{i=1}^n r_i)
    subject to  r_i ∈ C_i,  i ∈ {1, ..., n}

• U = energy cost function
• r_i = r_i(t) = charging rate of vehicle i
• C_i = local constraints
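The aggregator's problem can be sketched numerically. A minimal toy instance, assuming an illustrative quadratic cost U(s) = (s − s_target)² and box constraints C_i = [0, cap_i] (neither choice comes from the slides); projected gradient descent enforces the constraints by clipping:

```python
import numpy as np

# Toy sketch of the aggregator's problem (assumed cost and constraints):
# minimize U(sum_i r_i) with U(s) = (s - s_target)**2, subject to r_i in [0, cap_i].
def solve_charging(caps, s_target, alpha=0.01, iters=2000):
    """Projected gradient descent on the aggregate charging cost."""
    r = np.zeros_like(caps, dtype=float)
    for _ in range(iters):
        grad = 2.0 * (r.sum() - s_target)         # dU/dr_i is the same for every i
        r = np.clip(r - alpha * grad, 0.0, caps)  # project onto C_i = [0, cap_i]
    return r

caps = np.array([3.0, 2.0, 4.0])
rates = solve_charging(caps, s_target=6.0)
```

With this cost any feasible profile whose rates sum to s_target is optimal; the iterates converge to one such profile.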
Myth: Aggregation Preserves Privacy
• Fact: NOT in the presence of side-information
• Toy example: the database holds d_1 = 100, d_2 = 120, ..., d_n = 90 and releases
  only Average = 110; an adversary who already knows d_2, ..., d_n (side information)
  recovers d_1 = 100 exactly
• Real example: A. Narayanan and V. Shmatikov successfully de-anonymized the
  Netflix Prize dataset (2007)
  Side information: IMDb databases!
Outline
1 DP Distributed Optimization
Problem Formulation
Impossibility Result
2 Functional Perturbation
Perturbation Design
3 DP Distributed Optimization via Functional Perturbation
Regularization
Algorithm Design and Analysis
Problem Formulation: Optimization
Standard additive convex optimization problem:

    minimize_{x ∈ D}  f(x) := ∑_{i=1}^n f_i(x)
    subject to  G(x) ≤ 0,  Ax = b

which, collecting all constraints into the feasible set X, reads

    minimize_{x ∈ X}  f(x) := ∑_{i=1}^n f_i(x)

• A non-private solution [Nedic et al., 2010]:

    x_i(k+1) = proj_X(z_i(k) − α_k ∇f_i(z_i(k)))
    z_i(k) = ∑_{j=1}^n w_ij x_j(k)
    ∑_k α_k = ∞,  ∑_k α_k² < ∞

(figure: communication network of 8 agents)

Assumptions:
• D is compact
• f_i's are strongly convex and C²
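The non-private update above can be sketched in a few lines. This toy instance assumes a ring network with a doubly stochastic mixing matrix W, quadratic local objectives f_i(x) = (x − c_i)², and X = [−10, 10]; all of these are illustrative choices, not from the slides:

```python
import numpy as np

# Minimal sketch of the non-private update of [Nedic et al., 2010] for
# scalar states, with assumed ring topology and quadratic local objectives.
def distributed_subgradient(c, iters=3000):
    n = len(c)
    W = np.zeros((n, n))
    for i in range(n):                      # ring: average with the two neighbours
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    x = np.zeros(n)
    for k in range(iters):
        alpha = 1.0 / (k + 1) ** 0.75       # sum alpha_k = inf, sum alpha_k^2 < inf
        z = W @ x                           # z_i(k) = sum_j w_ij x_j(k)
        grad = 2.0 * (z - c)                # gradient of f_i at z_i
        x = np.clip(z - alpha * grad, -10.0, 10.0)  # proj_X onto [-10, 10]
    return x

c = np.array([1.0, 3.0, 5.0, 7.0])
x_final = distributed_subgradient(c)
```

With these objectives the network-wide optimizer is the mean of the c_i (here 4.0), and all agents' iterates approach it while reaching consensus.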
Problem Formulation: Privacy
• "Information": F = (f_i)_{i=1}^n ∈ F^n
• Given (V, ‖·‖_V) with V ⊆ F:

Adjacency: F, F′ ∈ F^n are V-adjacent if there exists i_0 ∈ {1, ..., n} such that
f_i = f′_i for i ≠ i_0 and f_{i_0} − f′_{i_0} ∈ V

• For a random map M : F^n × Ω → X and ε ∈ R^n_{>0}:

Differential Privacy (DP): M is ε-DP if, for all V-adjacent F, F′ ∈ F^n and all O ⊆ X,

    P{M(F′, ω) ∈ O} ≤ e^{ε_{i_0} ‖f_{i_0} − f′_{i_0}‖_V} P{M(F, ω) ∈ O}
Case Study: Linear Classification with Logistic Loss Function
• Training records: (a_j, b_j)_{j=1}^N where a_j ∈ [0, 1]² and b_j ∈ {−1, 1}
• Goal: find the best separating hyperplane xᵀa = 0

(figure: training data in the unit square, separated by the line xᵀa = 0)

Convex Optimization Problem:

    x* = argmin_{x ∈ X}  ∑_{i=1}^n ∑_{j=1}^{N_i} ( ℓ(x; a_{i,j}, b_{i,j}) + (λ/2)|x|² )

• Logistic loss: ℓ(x; a, b) = ln(1 + e^{−b aᵀx})

(figure: communication network of 8 agents)
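The logistic loss from the slide, and the gradient a gradient-based solver would use, can be written directly; the example record below is an arbitrary choice with the case-study shapes (a ∈ [0,1]², b ∈ {−1, 1}):

```python
import numpy as np

# Logistic loss from the slide, l(x; a, b) = ln(1 + exp(-b * a^T x)),
# together with its gradient in x.
def logistic_loss(x, a, b):
    return np.log1p(np.exp(-b * (a @ x)))

def logistic_grad(x, a, b):
    s = -b * (a @ x)
    sigma = 1.0 / (1.0 + np.exp(-s))   # derivative of ln(1 + e^s) w.r.t. s
    return sigma * (-b * a)            # chain rule: ds/dx = -b a

# One illustrative record
x = np.array([0.5, -0.3])
loss = logistic_loss(x, np.array([0.2, 0.9]), 1.0)
```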
Message Perturbation vs. Objective Perturbation
A generic distributed optimization algorithm:

    Message passing: agent i exchanges states with each neighbor j
    Local state update: x_i⁺ = h_i(x_i, x_{−i}; f_i)

(figure: network of agents exchanging messages)

• Message perturbation: noise is added to the messages exchanged over the network
  at every iteration
• Objective perturbation: noise is added to the local objective f_i itself, before
  the algorithm runs
Impossibility Result
Generic message-perturbing algorithm:

    x(k+1) = a_I(x(k), ξ(k))
    ξ(k) = x(k) + η(k)

Theorem: If
• the η → x dynamics is locally asymptotically stable at 0 (0-LAS),
• η_i(k) ∼ Lap(b_i(k)) or η_i(k) ∼ N(0, b_i(k)), and
• b_i(k) is O(1/k^p) for some p > 0,
then the algorithm cannot preserve ε-DP of the information set I for any ε > 0.
Impossibility Result: An Example
Algorithm proposed in [Huang et al., 2015]:

    x_i(k+1) = proj_X(z_i(k) − α_k ∇f_i(z_i(k)))
    z_i(k) = ∑_{j=1}^n w_ij ξ_j(k)
    ξ_j(k) = x_j(k) + η_j(k)

• η_j(k) ∼ Lap(∝ p^k)
• α_k ∝ q^k with 0 < q < p < 1, so ∑_k α_k < ∞ (a finite sum)
Impossibility Result: An Example (cont'd)
• Simulation results for a linear classification problem:

(figure: log-log plot of the error |x − x*| against ε, showing empirical data, a
linear fit on log scales, and the theoretical upper bound on |E[x̂*] − x*|)
State of the Art
• [Chaudhuri et al., 2011]
  • First proposed "objective perturbation" by adding linear random functions
  • Extended by [Kifer et al., 2012] to constrained and non-differentiable problems
  • Preserves DP of objective function parameters
• [Zhang et al., 2012]
  • Proposed objective perturbation by adding a sample path of a Gaussian
    stochastic process
  • Preserves DP of objective function parameters
• [Hall et al., 2013]
  • Proposed objective perturbation by adding quadratic random functions
  • Preserves DP of objective function parameters
Prelim: Hilbert Spaces
• Hilbert space H = complete inner-product space
• Orthonormal basis {e_k}_{k∈I} ⊂ H
• If H is separable:

    h = ∑_{k=1}^∞ ⟨h, e_k⟩ e_k,  with coefficients δ_k = ⟨h, e_k⟩

• For D ⊆ R^d, L²(D) is a separable Hilbert space ⇒ F = L²(D)
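The expansion h = ∑_k ⟨h, e_k⟩ e_k can be checked numerically. This sketch assumes the orthonormal cosine basis of L²([0,1]) and an arbitrary test function h(t) = t², with inner products approximated by Riemann sums:

```python
import numpy as np

# Numerical illustration of h = sum_k <h, e_k> e_k on L^2([0,1]) using the
# orthonormal cosine basis: e_0(t) = 1, e_k(t) = sqrt(2) cos(k*pi*t).
t = np.linspace(0.0, 1.0, 20001)
dt = t[1] - t[0]
h = t**2                                     # arbitrary test function

def e(k):
    return np.ones_like(t) if k == 0 else np.sqrt(2.0) * np.cos(k * np.pi * t)

def partial_sum(K):
    """Truncated expansion sum_{k<K} delta_k e_k with delta_k = <h, e_k>."""
    out = np.zeros_like(t)
    for k in range(K):
        delta_k = np.sum(h * e(k)) * dt      # Riemann approximation of <h, e_k>
        out += delta_k * e(k)
    return out

def l2_error(K):
    return np.sqrt(np.sum((h - partial_sum(K))**2) * dt)

err5, err50 = l2_error(5), l2_error(50)      # truncation error shrinks with K
```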
Functional Perturbation via Laplace Noise
• Φ : coefficient sequence δ ↦ function h = ∑_{k=1}^∞ δ_k e_k
• Adjacency space:

    V_q = { Φ(δ) | ∑_{k=1}^∞ (k^q δ_k)² < ∞ }

• Random map (functional perturbation):

    M(f, η) = Φ(Φ⁻¹(f) + η) = f + Φ(η)

Theorem: For η_k ∼ Lap(γ/k^p), q > 1, and p ∈ (1/2, q − 1/2), M guarantees ε-DP with

    ε = (1/γ) √(ζ(2(q − p)))
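A sketch of the noise design in the theorem: draw η_k ∼ Lap(γ/k^p) for a truncated coefficient sequence and evaluate ε = (1/γ)√ζ(2(q − p)), with the Riemann zeta value approximated by a partial sum. The truncation lengths and the values of γ, p, q are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Coefficient noise eta_k ~ Lap(gamma / k^p), with q > 1 and p in (1/2, q - 1/2)
gamma, p, q, K = 0.5, 1.0, 2.0, 1000
eta = rng.laplace(scale=gamma / np.arange(1, K + 1) ** p)  # truncated to K terms

# epsilon = (1/gamma) * sqrt(zeta(2(q - p))); zeta approximated by a partial sum
zeta_approx = np.sum(1.0 / np.arange(1, 200001) ** (2 * (q - p)))
epsilon = np.sqrt(zeta_approx) / gamma
```

Here 2(q − p) = 2, so the zeta value is ζ(2) = π²/6 and ε ≈ 2.57 for γ = 0.5; larger γ (more noise) yields smaller ε (stronger privacy).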
Resilience to Post-processing
Algorithm sketch:
1. Each agent perturbs its own objective function (offline)
2. Agents participate in an arbitrary distributed optimization algorithm with the
   perturbed functions (online)

• M : L²(D)^n × Ω → L²(D)^n
• F : L²(D)^n → X, where (X, Σ_X) is an arbitrary measurable space

Corollary (special case of [Le Ny & Pappas 2014, Theorem 1]): If M is ε-DP, then
F ∘ M : L²(D)^n × Ω → X is ε-DP.
Ensuring Regularity of Perturbed Functions
• The perturbed function f̂_i = M(f_i, η_i) may be discontinuous/non-convex/...
• S = regular functions ⊂ C²(D) ⊂ L²(D)
• Ensuring smoothness: C²(D) is dense in L²(D), so
  for any ε_i > 0 one can pick f_i^s ∈ C²(D) such that ‖f̂_i − f_i^s‖ < ε_i
• Ensuring regularity: f̃_i = proj_S(f_i^s)

Proposition: S is convex and closed relative to C²(D)
Algorithm
Offline steps:
1. Each agent perturbs its function:
   f̂_i = M(f_i, η_i) = f_i + Φ(η_i),  η_{i,k} ∼ Lap(b_{i,k}),  b_{i,k} = γ_i / k^{p_i}
2. Each agent selects f_i^s ∈ S_0 such that ‖f̂_i − f_i^s‖ < ε_i
3. Each agent projects f_i^s onto S:  f̃_i = proj_S(f_i^s)
Online step:
4. Agents participate in any distributed optimization algorithm with (f̃_i)_{i=1}^n
Accuracy Analysis
• Set of "regular" functions:

    S = { h ∈ C²(D) | α I_d ⪯ ∇²h(x) ⪯ β I_d and |∇h(x)| ≤ u }

Lemma (K-Lipschitzness of argmin): For f, g ∈ S,

    | argmin_{x∈X} f − argmin_{x∈X} g | ≤ κ_{α,β}(‖f − g‖)

• Define

    x* = argmin_{x∈X} ∑_{i=1}^n f_i   and   x̃* = argmin_{x∈X} ∑_{i=1}^n f̃_i

Theorem (Accuracy):

    E|x̃* − x*| ≤ ∑_{i=1}^n [ κ_n(ζ(q_i)/ε_i) + κ_n(ε_i) ]
Simulation Results: Linear Classification with Logistic Loss Function

(figure: two log-log plots of the error |x̃* − x*| against ε; left: theoretical
bound, empirical data, and a piecewise linear fit; right: theoretical bound with
2nd-, 6th-, and 14th-order approximations)
Conclusions and Future Work
In this talk, we
• Proposed a definition of DP for functions
• Illustrated a fundamental limitation of message-perturbing strategies
• Proposed the method of functional perturbation
• Discussed how functional perturbation can be applied to distributed
convex optimization
Future work includes
• relaxation of the smoothness, convexity, and compactness assumptions
• comparing the numerical efficiency of different bases for L2
• characterizing the expected sub-optimality gap of the algorithm and
the optimal privacy-accuracy trade-off curve
• further understanding the appropriate scales of privacy parameters for
particular applications
Questions and Comments
Full results of this talk are available in:
E. Nozari, P. Tallapragada, J. Cortes, "Differentially Private Distributed Convex
Optimization via Functional Perturbation," IEEE Transactions on Control of Network
Systems, provisionally accepted. http://arxiv.org/abs/1512.00369
Formal Definition in Original Context [Dwork et al., 2006]
Context:
• D ∈ D: a database of records (D is the set of databases)
• Adjacency: D_1, D_2 ∈ D are adjacent if they differ in at most one record
• (Ω, Σ, P): probability space
• q : D → X: (honest) query function
• M : D × Ω → X: randomized/sanitized query function
• ε > 0: level of privacy

Definition: M is ε-DP if for all adjacent D_1, D_2 ∈ D and all O ⊆ X,

    P{M(D_1) ∈ O} ≤ e^ε P{M(D_2) ∈ O}

• Adjacency is symmetric ⇒ both

    P{M(D_1) ∈ O} ≤ e^ε P{M(D_2) ∈ O}  and  P{M(D_2) ∈ O} ≤ e^ε P{M(D_1) ∈ O}
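The definition can be instantiated with the classic Laplace mechanism (a standard construction in the DP literature, not specific to these slides): for a counting query, whose sensitivity is 1, adding Lap(1/ε) noise yields ε-DP. The database values below echo the toy example from earlier:

```python
import numpy as np

rng = np.random.default_rng(1)

# Laplace mechanism for a counting query q(D) = #{records above a threshold}.
# A single record changes the count by at most 1 (sensitivity 1), so noise
# with scale 1/eps gives eps-DP.
def sanitized_count(db, threshold, eps):
    true_count = np.sum(db > threshold)
    return true_count + rng.laplace(scale=1.0 / eps)

db = np.array([100.0, 120.0, 90.0, 110.0])
noisy = sanitized_count(db, threshold=95.0, eps=1.0)
```

The released value is unbiased: averaged over the randomness it equals the true count (3 here), while any single draw hides whether one record crossed the threshold.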
Formal Definition: Geometric Interpretation (in original context)
Definition: M is ε-DP if for all adjacent D_1, D_2 ∈ D and all O ⊆ X,

    P{M(D_1) ∈ O} ≤ e^ε P{M(D_2) ∈ O}

(figure: adjacent databases D_1, D_2 mapped by q to nearby query values q(D_1),
q(D_2); the randomized outputs M(D_1, ω), M(D_2, ω) place comparable probability
mass on any output set O)
Operational Meaning of DP: A Binary Decision Example [Geng & Viswanath, 2013]
• Adversary's decision = TRUE if M(D, ω) ∈ O, FALSE if M(D, ω) ∈ O^c
• Missed detection: MD = {M(D_1, ω) ∈ O^c}
• False alarm: FA = {M(D_2, ω) ∈ O}

(figure: adjacent databases D_1, D_2, their query values, and the decision region O)

• If M is ε-DP, then

    P{M(D_1, ω) ∈ O} ≤ e^ε P{M(D_2, ω) ∈ O}
    P{M(D_2, ω) ∈ O^c} ≤ e^ε P{M(D_1, ω) ∈ O^c}

  ⇒  1 − p_MD ≤ e^ε p_FA  and  1 − p_FA ≤ e^ε p_MD
  ⇒  p_MD, p_FA ≥ (e^ε − 1)/(e^{2ε} − 1)
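The error floor above is easy to evaluate; note that (e^ε − 1)/(e^{2ε} − 1) simplifies to 1/(e^ε + 1), and np.expm1 keeps the ratio numerically stable for small ε:

```python
import numpy as np

# Lower bound forced on any adversary by an eps-DP mechanism:
# p_MD, p_FA >= (e^eps - 1) / (e^(2*eps) - 1)  (= 1 / (e^eps + 1)).
def dp_error_floor(eps):
    # expm1(x) = e^x - 1 without cancellation for small x
    return np.expm1(eps) / np.expm1(2.0 * eps)

floor_small = dp_error_floor(1e-9)   # near-perfect privacy: floor close to 1/2
floor_one = dp_error_floor(1.0)      # weaker privacy: smaller floor
```

As ε → 0 the floor approaches 1/2 (the adversary can do no better than a coin flip), and as ε grows it decays toward 0.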
Generalizing the Definition Using Metrics [Chatzikokolakis et al., 2013]
• If D_1, D_2 differ in N elements, then

    P{M(D_1, ω) ∈ O} ≤ e^{Nε} P{M(D_2, ω) ∈ O}

• d : D × D → [0, ∞): a metric on D

Definition (revisited): M gives/preserves ε-differential privacy if for all
D_1, D_2 ∈ D and all O ⊆ X,

    P{M(D_1, ω) ∈ O} ≤ e^{ε d(D_1, D_2)} P{M(D_2, ω) ∈ O}