Model Reduction for Inverse Network Models
Christian Himpe ([email protected]), Mario Ohlberger ([email protected])
WWU Münster, Institute for Computational and Applied Mathematics
06.12.2013

Description: held at Mathematical Technology of Networks, ZiF Bielefeld, December 2013


Page 1: Model Reduction for Inverse Network Models

Model Reduction for Inverse Network Models

Christian Himpe ([email protected]), Mario Ohlberger ([email protected])

WWU Münster, Institute for Computational and Applied Mathematics

06.12.2013

Page 2: Model Reduction for Inverse Network Models

Motivation

Intracranial EEG and tuned model output1:

[Figure: measured intracranial EEG vs. tuned model output]

1[Himpe'11]

Page 3: Model Reduction for Inverse Network Models

Application

Effective Connectivity
Causal Connectivity
Learning / Unlearning

Experiments (EEG/MEG, fMRI/fNIRS)

Deduce Connectivity

Network Model
Dynamical System
Control System

Large-Scale Inverse Problem

Page 5: Model Reduction for Inverse Network Models

Control Systems

Linear Dynamical System

ẋ(t) = A x(t),    x ∈ R^N, A ∈ R^{N×N}   (1)
x(0) = x0

Linear Control System

ẋ(t) = A x(t) + B u(t),    u ∈ R^M, B ∈ R^{N×M}
y(t) = C x(t),             x ∈ R^N, A ∈ R^{N×N}
x(0) = x0,                 y ∈ R^O, C ∈ R^{O×N}

(1) A is the adjacency matrix of a weighted directed graph.

Page 7: Model Reduction for Inverse Network Models

Model Reduction

Context:
N = dim(x) ≫ 1
M = dim(u) ≪ N
O = dim(y) ≪ N

Reduced Order Model:
M ≪ N
‖Y − y‖ ≪ 1

Page 9: Model Reduction for Inverse Network Models

Linear Model Reduction

Aim:
1. Identify important and less important states.
2. Compute a projection sorting states by importance.
3. Truncate negligible states.

Input-to-Output map:  u ⟼ x ⟼ y

Input-to-State map:   u ⟼ x   (Controllability)

State-to-Output map:  x ⟼ y   (Observability)


Page 11: Model Reduction for Inverse Network Models

Controllability & Observability

Impulse Response:

G(t) = C e^{At} B,  t > 0

Hankel Operator:

H = ∫₀^∞ C e^{At} B dt

Controllability Operator:

C = ∫₀^∞ e^{At} B dt

Controllability Gramian:

WC := C C*

Observability Operator:

O = ∫₀^∞ C e^{At} dt

Observability Gramian:

WO := O* O

Hankel Singular Values:

σ_i := λ_i(H) = √(λ_i(WO WC))

Page 12: Model Reduction for Inverse Network Models

Balanced Truncation2

Controllability Gramian WC:

Lyapunov equation: A WC + WC A^T = −B B^T

If Re(λ(A)) < 0:

WC = ∫₀^∞ e^{At} B B^T e^{A^T t} dt

Observability Gramian WO:

Lyapunov equation: A^T WO + WO A = −C^T C

If Re(λ(A)) < 0:

WO = ∫₀^∞ e^{A^T t} C^T C e^{At} dt

Balancing: sort by least controllable AND least observable states.

∃ U, V: U V = 1,  V WC V^T = U^T WO U = diag(σ₁, …, σₙ)

2[Moore'81]
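The square-root form of balanced truncation can be sketched in a few lines of Python. This is a minimal NumPy/SciPy sketch, not the talk's implementation; the random test system below is purely illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, n):
    """Square-root balanced truncation of a stable system (A, B, C) to order n."""
    # Gramians from the two Lyapunov equations on the slide
    WC = solve_continuous_lyapunov(A, -B @ B.T)      # A WC + WC A^T = -B B^T
    WO = solve_continuous_lyapunov(A.T, -C.T @ C)    # A^T WO + WO A = -C^T C
    LC = cholesky(0.5 * (WC + WC.T), lower=True)     # WC = LC LC^T
    LO = cholesky(0.5 * (WO + WO.T), lower=True)     # WO = LO LO^T
    U, s, Vt = svd(LO.T @ LC)                        # s = Hankel singular values
    S = np.diag(s[:n] ** -0.5)
    T = LC @ Vt[:n].T @ S                            # right projection ("U" on the slide)
    Ti = S @ U[:, :n].T @ LO.T                       # left projection ("V" on the slide)
    return Ti @ A @ T, Ti @ B, C @ T, s

# illustrative stable random system
rng = np.random.default_rng(0)
N = 10
A = rng.standard_normal((N, N))
A -= (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(N)  # enforce Re(lambda(A)) < 0
B = rng.standard_normal((N, 2))
C = rng.standard_normal((2, N))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 4)
```

The SVD of the Cholesky-factor product yields the Hankel singular values directly, so the balancing and truncation step is a single decomposition.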

Page 13: Model Reduction for Inverse Network Models

Direct Truncation4 (Approximate Balancing)

Cross Gramian3 WX:

Sylvester equation: A WX + WX A = −B C

If Re(λ(A)) < 0:

WX = ∫₀^∞ e^{At} B C e^{At} dt

NO balancing required!

WO WC = WX² ⇒ σ_i = |λ_i(WX)| ⇒ WX = U D V ≈ U diag(σ₁, …, σₙ) V

3 Assume Σ = {A, B, C} symmetric ⇐ C A⁻¹ B symmetric!
4 Review in [Antoulas'05]
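For a state-space symmetric system the cross Gramian follows from a single Sylvester solve. A small NumPy/SciPy sketch, using A = A^T and C = B^T to guarantee symmetry (an illustrative construction, not the talk's setup):

```python
import numpy as np
from scipy.linalg import solve_sylvester, solve_continuous_lyapunov

rng = np.random.default_rng(0)
N = 8
G = rng.standard_normal((N, N))
A = -(G @ G.T + np.eye(N))      # symmetric and stable
B = rng.standard_normal((N, 1))
C = B.T                          # C = B^T makes this SISO system symmetric

# Sylvester equation from the slide: A WX + WX A = -B C
WX = solve_sylvester(A, A, -B @ C)
sigma = np.sort(np.abs(np.linalg.eigvals(WX).real))[::-1]  # |lambda(WX)| = Hankel s.v.
```

For such systems WO WC = WX² holds, so the Hankel singular values come from one eigendecomposition with no balancing step.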

Page 14: Model Reduction for Inverse Network Models

Truncation

1. Full system:

ẋ = A x + B u
y = C x
x(0) = x0

2. Project with U, V:

ẋ = V A U x + V B u
y = C U x
x(0) = V x0

3. Relabeled (A ← VAU, B ← VB, C ← CU), the projected system has the original form:

ẋ = A x + B u
y = C x
x(0) = x0

4. Partition:

ẋ = (A11 A12; A21 A22) (x1; x2) + (B1; B2) u
y = (C1 C2) (x1; x2)
x(0) = (x0,1; x0,2)

5. Truncate:

ẋ1 = A11 x1 + B1 u
y = C1 x1
x1(0) = x0,1
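The five steps above amount to a Petrov-Galerkin projection followed by taking leading blocks. A minimal Python sketch; the trivial U, V here are placeholders for a balancing transformation:

```python
import numpy as np

def truncate(A, B, C, x0, U, V, n):
    """Steps 2-5: project with biorthogonal U, V (V @ U = I), keep the leading n block."""
    Ap, Bp, Cp, x0p = V @ A @ U, V @ B, C @ U, V @ x0    # step 2: projected system
    return Ap[:n, :n], Bp[:n], Cp[:, :n], x0p[:n]        # steps 4/5: partition + truncate

N, n = 6, 3
A, B, C, x0 = -np.eye(N), np.ones((N, 1)), np.ones((1, N)), np.zeros(N)
U = V = np.eye(N)                                        # placeholder projection
A1, B1, C1, x01 = truncate(A, B, C, x0, U, V, n)
```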


Page 19: Model Reduction for Inverse Network Models

Nonlinear Control Systems

Linear Control System

ẋ(t) = A x(t) + B u(t),    u ∈ R^M, B ∈ R^{N×M}
y(t) = C x(t),             x ∈ R^N, A ∈ R^{N×N}
x(0) = x0,                 y ∈ R^O, C ∈ R^{O×N}

General Control System

ẋ(t) = f(x(t), u(t)),      u ∈ R^M
y(t) = g(x(t), u(t)),      x ∈ R^N, f: R^N × R^M → R^N
x(0) = x0,                 y ∈ R^O, g: R^N × R^M → R^O

What now?

Page 20: Model Reduction for Inverse Network Models

Empirical Gramians5

Empirical Controllability Gramian: WC = ⟨∫₀^∞ x_U(t) x_U*(t) dt⟩_U

Empirical Observability Gramian: WO = ⟨∫₀^∞ ρ(y_X*(t) y_X(t)) dt⟩_X

Empirical Cross Gramian: WX = ⟨∫₀^∞ φ(x_U(t), y_X(t)) dt⟩_{U×X}

with perturbation spaces

1. U for perturbing the input u,
2. X for perturbing the initial state x0,

assembled from

rotations (orthogonal matrices),
scalings (real numbers),
for each input / state (unit normal vectors),

determined by the operating range of the underlying control system.

5[Lall'99], [Hahn'02], [Streif'06], [Himpe'13a]
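As a concrete illustration: with unit impulses along each input channel as the perturbation set, the empirical controllability Gramian of a linear system reproduces the classic one. A minimal forward-Euler sketch in Python (an assumption-laden toy, not emgr's implementation; the impulse perturbation set and quadrature are simplifications):

```python
import numpy as np

def emp_controllability_gramian(f, N, M, dt=0.01, T=10.0):
    """Sum state-snapshot covariances over impulse perturbations of each input."""
    WC = np.zeros((N, N))
    for m in range(M):
        x = np.zeros(N)
        impulse = np.zeros(M)
        impulse[m] = 1.0 / dt                                      # discrete unit impulse
        for k in range(int(T / dt)):
            x = x + dt * f(x, impulse if k == 0 else np.zeros(M))  # forward Euler step
            WC += dt * np.outer(x, x)                              # quadrature of x x^T
    return WC

# linear vector field f(x, u) = A x + B u with A = -I, B = I;
# the classic controllability Gramian is then 0.5 * I
A, B = -np.eye(2), np.eye(2)
WC = emp_controllability_gramian(lambda x, u: A @ x + B @ u, N=2, M=2)
```

Only simulations and outer products are needed, which is why the empirical approach carries over to nonlinear f unchanged.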

Page 21: Model Reduction for Inverse Network Models

Empirical Gramians II

Note:
For linear systems, the "empirical" equal the "classic" Gramians.
Computation requires only basic matrix and vector operations.

General Projection Framework:

ẋ(t) = V f(U x(t), u(t))
y(t) = g(U x(t), u(t))
x(0) = V x0

Thus, Nonlinear Model Reduction can also use:
Balanced Truncation,
Direct Truncation.

The same tools as for linear model reduction.

Page 22: Model Reduction for Inverse Network Models

Parameter Reduction

Parametrized Control System

ẋ(t) = f(x(t), u(t), θ),   u ∈ R^M, θ ∈ R^P
y(t) = g(x(t), u(t), θ),   x ∈ R^N, f: R^N × R^M × R^P → R^N
x(0) = x0,                 y ∈ R^O, g: R^N × R^M × R^P → R^O

Aim:
1. Identify important and less important parameters.
2. Compute a projection sorting parameters by importance.
3. Truncate negligible parameters.

Handle parameters as:
constant inputs,
constant states.


Page 25: Model Reduction for Inverse Network Models

More Empirical Gramians6

Empirical Sensitivity Gramian (Controllability-Based) WS:

ũ = (u; θ) → ẋ = f̃(x, ũ) = f(x, u) + Σ_{k=1}^P f(x, θ_k) → (WS)_{i,j} = δ_{i,j} trace(WC,i)

Empirical Identifiability Gramian (Observability-Based) WI:

x̃ = (x; θ) → x̃̇ = f̃(x̃, u) = (f(x, u, θ); 0),  x̃(0) = (x0; θ) ⇒ WI = S(WO)

Empirical Joint Gramian (Cross-Gramian-Based) WJ:

x̃ = (x; θ) → x̃̇ = f̃(x̃, u) = (f(x, u, θ); 0),  x̃(0) = (x0; θ) ⇒ WI = S(WJ := WX)

6[Sun'06], [Geffen'08], [Himpe'13a]

Page 26: Model Reduction for Inverse Network Models

Combined Reduction

Controllability-Based:
1. Compute WS → WC
2. Decompose WS
3. Truncate θ
4. Compute WO
5. Balance WC, WO
6. Decompose WCO
7. Truncate x

Observability-Based:
1. Compute WI → WO
2. Decompose WI
3. Truncate θ
4. Compute WC
5. Balance WC, WO
6. Decompose WCO
7. Truncate x

Cross-Gramian-Based:
1. Compute WI → WX
2. Decompose WI
3. Truncate θ
4. Decompose WX
5. Truncate x

Page 27: Model Reduction for Inverse Network Models

emgr - Empirical Gramian Framework7

Gramians:
Empirical Controllability Gramian
Empirical Observability Gramian
Empirical Cross Gramian
Empirical Sensitivity Gramian
Empirical Identifiability Gramian
Empirical Joint Gramian

Features:
Uniform Interface
Compatible with MATLAB & OCTAVE
Vectorized & Parallelizable
Open-Source licensed

More info at: http://gramian.de

7 see [Himpe'13]

Page 28: Model Reduction for Inverse Network Models

Numerical Experiment

Hyperbolic Network Model8:

ẋ(t) = A(θ) tanh(K x(t)) + B u(t)
y(t) = C x(t)
x(0) = x0

Dimensions:
dim(u) = {4, 5, 6, 7, 8}
dim(y) = dim(u)
dim(x) = {16, 25, 36, 49, 64} = dim(u)²
dim(θ) = dim(x)²

8[Quan'01]
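The model above is easy to prototype. A forward-Euler sketch in Python with random illustrative weights; K, the crude stability shift, and the impulse input are assumptions of this sketch, not the talk's actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 16                      # dim(u) = 4, dim(x) = 16: the smallest configuration
A = rng.standard_normal((N, N))
A -= (np.abs(np.linalg.eigvals(A)).max() + 0.5) * np.eye(N)  # crude stability shift
K = np.eye(N)                     # assumed gain matrix
B = rng.standard_normal((N, M))
C = rng.standard_normal((M, N))

x = np.zeros(N)
dt, outputs = 0.01, []
for k in range(1000):
    u = np.ones(M) if k == 0 else np.zeros(M)       # impulse input
    x = x + dt * (A @ np.tanh(K @ x) + B @ u)       # x' = A(theta) tanh(K x) + B u
    outputs.append(C @ x)                            # y = C x
```

Since tanh has slope at most one, the shifted A keeps the trajectory bounded and decaying, which is what the experiment's stability requirement on A(θ) ensures.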

Page 29: Model Reduction for Inverse Network Models

Setup

Synthetic Data:
1. Generate a random network (θ) with ensured stability of A(θ).
2. Integrate the system to obtain the system output.
3. Add Gaussian noise to the output.

Inverse Problem9:
1. Offline phase: compute the reduced order model using combined reduction.
2. Online phase: optimize the reduced model using least squares.

9 from here on θ is unknown

Page 30: Model Reduction for Inverse Network Models

Network (N=16)

[Figure: the N = 16 network]

Page 31: Model Reduction for Inverse Network Models

Numerical Results

[Figure: three log-scale plots over the state dimension (16 to 64): Offline Time (s), Online Time (s), and Relative Output Error, comparing Full Order, WS + WO, WC + WI, and WJ.]

Page 32: Model Reduction for Inverse Network Models

Reduction Effectivity

[Figure: Effectivity, normalized error vs. normalized total time (log scale), for WS + WO, WC + WI, and WJ.]

Page 33: Model Reduction for Inverse Network Models

tl;dl

Large-Scale Inverse Problems: Model Reduction.
Model Reduction for Control Systems: Gramian-Based.
Nonlinear Model Order Reduction: Empirical Gramians.
Combined Reduction: Reduction of States and Parameters.
(Empirical) Cross Gramian: very efficient!

Get the Source Code: http://j.mp/zifmtn13

Thanks!