
IHP-EU Network Workshop / Winter School

Breaking Complexity: Nonlinear / Adaptive Approximation in High Dimensions

Bad Honnef, December 14-18, 2005

IHP-EU Network Workshop / Winter School
"Breaking Complexity: Nonlinear / Adaptive Approximation in High Dimensions"

PROGRAM

Tuesday, Dec 13
16:00  Coffee
16:30  Arrival
18:30  Dinner

Wednesday, Dec 14
08:00  Breakfast
09:00  Welcome and Opening
09:05-10:05  Michael Griebel: Sparse grids for higher dimensional partial differential equations
10:00  Coffee Break
10:30  Andrej Nitsche: Approximation Theory for Tensor Product Methods
12:30  Lunch Break
15:00  Markus Hegland: Solving the Stochastic Master Equations for Gene Regulatory Networks with Sparse Grids
15:30  Stefan Engblom: Numerical Solution Methods for the Master Equation
16:00  Coffee Break
16:30  Rob Stevenson: Adaptive Tensor Product Wavelet Methods - Avoiding the Curse of Dimensionality
18:30  Dinner

Thursday, Dec 15
08:00  Breakfast
09:00  Wolfgang Hackbusch: Structured Data-Sparse Representation of Multi-Dimensional Nonlocal Operators
10:00  Coffee Break
10:30  Boris Khoromskij: Structured Tensor Decomposition of Multi-Dimensional Operators
12:30  Lunch Break
15:00  Harry Yserentant: The Hyperbolic Cross Space Approximation of Electronic Wavefunctions
16:00  Coffee Break
16:30  Stefan Goedecker: Daubechies Wavelets for Electronic Structure Calculations
18:30  Conference Dinner

Friday, Dec 16
08:00  Breakfast
09:00  Reinhold Schneider: Wave Function Methods for the Numerical Solution of the Electronic Schrödinger Equation
10:00  Coffee Break
10:30  Holger Rauhut: Random Sampling of Sparse Trigonometric Polynomials
12:30  Lunch Break
15:00  Excursion
18:30  Dinner

Saturday, Dec 17
08:00  Breakfast
09:00  Justin Romberg: Recovery of Sparse Signals from Incomplete and Inaccurate Measurements
10:00  Coffee Break
10:30  Wolfgang Dahmen: Universal Algorithms for Learning Theory
12:30  Lunch Break
15:00  Vladimir Temlyakov: The Volume Estimates and the Multivariate Approximation
16:00  Coffee Break
16:30  Christoph Schwab: Sparse Wavelet Methods for Operator Equations with Stochastic Data
18:30  Dinner

Sunday, Dec 18
08:00  Breakfast
09:00  Radu-Alexandru Todor: Sparse Perturbation Algorithms for Elliptic Problems with Stochastic Data
10:00  Coffee Break
10:30  Elisabeth Ullmann: On Solving Large Linear Systems Arising from the Stochastic Finite Element Method
12:30  Lunch
13:00  Departure

Further talks in the 11:00, 11:30 and 17:30 slots:
11:00  Heinz-Jürgen Flad: Tensor Product Approximations in Quantum Chemistry
11:00  Jens Keiner: Fast Summation of Radial Functions on the Sphere
11:00  Hermann Matthies: Adaptive Uncertainty Quantification with Stochastic Partial Differential Equations
11:30  Martin Mohlenkamp: Algorithms for Computing with Sums of Separable Functions / Operators
11:30  Olaf Kahrs: Incremental Identification of NARX Models by Sparse Grid Approximation
11:30  Shai Dekel: Image Coding with Anisotropic Wavelets
17:30  Jochen Garcke: Using the Optimized Combination Technique for Regression Problems
17:30  Norbert Hilber: Sparse Wavelet Methods for Option Pricing under Stochastic Volatility
17:30  Radek Kucera: A Fast Wavelet Solution of Separable PDEs by the Fictitious Domain Method

Participants

1 Marcel Bieri ETH Zuerich [email protected]

2 Dietrich Braess Ruhr-Universitaet Bochum [email protected]

3 Juergen Braun Universitaet Bonn [email protected]

4 Kolja Brix RWTH Aachen [email protected]

5 Carsten Burstedde Universitaet Bonn [email protected]

6 Claudio Canuto Politecnico di Torino [email protected]

7 Martin Campos Pinto RWTH Aachen [email protected]

8 Gabriela Constantinescu Universitaet Bonn [email protected]

9 Wolfgang Dahmen RWTH Aachen [email protected]

10 Shai Dekel IDX Systems [email protected]

11 Tammo Jan Dijkema University of Utrecht [email protected]

12 Stefan Engblom Uppsala University [email protected]

13 Mike Espig MPI Leipzig [email protected]

14 Konstantin Fackeldey Universitaet Bonn [email protected]

15 Abul K.M. Fahimuddin TU Braunschweig [email protected]

16 Christian Feuersaenger Universitaet Bonn [email protected]

17 Heinz-Jürgen Flad MPI Leipzig [email protected]

18 Benjamin Frowein Universitaet Bonn [email protected]

19 Tsogtgerel Gantumur University of Utrecht [email protected]

20 Jochen Garcke ANU [email protected]

21 Luigi Genovese LMC-IMAG Grenoble [email protected]

22 Stefan Goedecker Universitaet Basel [email protected]

23 Michael Griebel Universitaet Bonn [email protected]

24 Wolfgang Hackbusch MPI Leipzig [email protected]

25 Jan Hamaekers Universitaet Bonn [email protected]

26 Markus Hegland Australian National Univ. [email protected]

27 Norbert Hilber ETH Zuerich [email protected]

28 Holger Hoffmann Universitaet Bonn [email protected]

29 Julia Holtermann RWTH Aachen [email protected]

30 Markus Holtz Universitaet Bonn [email protected]

31 Isioro Tokunbo Jaboro York University [email protected]

32 Lukas Jager Universitaet Bonn [email protected]

33 Olaf Kahrs RWTH Aachen [email protected]

34 Hamid Reza Karimi University of Tehran [email protected]

35 Jens Keiner Universitaet zu Luebeck [email protected]

36 Abdul Khaliq Middle Tennessee State Univ. [email protected]

37 Boris Khoromskij MPI Leipzig [email protected]

38 Rolf Krause Universitaet Bonn [email protected]

39 Tristan Kremp RWTH Aachen [email protected]

40 Radek Kucera VSB-TU Ostrava [email protected]

41 Stefan Kunis Universitaet Wien [email protected]

42 Angela Kunoth Universitaet Bonn [email protected]

43 Jan Maes Katholieke Universiteit Leuven [email protected]

44 Hermann Matthies TU Braunschweig [email protected]

45 Borislav Minchev University of Wales [email protected]

46 Vinod Mishra SLIET [email protected]

47 Martin Mohlenkamp Ohio University [email protected]

48 Andrej Nitsche Australian National Univ. [email protected]

49 Roland Pabel Universitaet Bonn [email protected]

50 Holger Rauhut Universitaet Wien [email protected]


51 Nils Reich ETH Zuerich [email protected]

52 Gareth Wyn Roberts University of Wales [email protected]

53 Justin Romberg Caltech [email protected]

54 Christian Schneider Universitaet Bonn [email protected]

55 Reinhold Schneider CAU Kiel [email protected]

56 Christoph Schwab ETH Zuerich [email protected]

57 Rob Stevenson University of Utrecht [email protected]

58 Anita Tabacco Politecnico di Torino [email protected]

59 Vladimir Temlyakov University of South Carolina [email protected]

60 Radu-Alexandru Todor ETH Zuerich [email protected]

61 Elisabeth Ullmann TU Bergakademie Freiberg [email protected]

62 Gerrit Welper RWTH Aachen [email protected]

63 Gisela Widmer ETH Zuerich [email protected]

64 Ralf Wildenhues Universitaet Bonn [email protected]

65 Christoph Winter ETH Zuerich [email protected]

66 Dionisio Félix Yáñez Universidad Valencia [email protected]

67 Harry Yserentant TU Berlin [email protected]

Abstracts

Universal Algorithms for Learning Theory
Wolfgang Dahmen

(joint work with P. Binev, A. Cohen, R. DeVore)

This talk is concerned with estimating regression functions in supervised learning. The usual setting reads as follows. We suppose that ρ is an unknown measure on a product space Z := X × Y, where X is a bounded domain of R^d and Y = R. Given m independent random observations z_i = (x_i, y_i), i = 1, . . . , m, identically distributed according to ρ, we are interested in estimating the regression function fρ(x) defined as the conditional expectation of the random variable y at x:

fρ(x) := ∫_Y y dρ(y|x)

with ρ(y|x) the conditional probability measure on Y with respect to x. We shall use z = (z_1, . . . , z_m) ∈ Z^m to denote the set of observations.

One of the goals of learning is to provide estimates under minimal restrictions on the measure ρ since this measure is generally unknown. However, the typical working assumption is that this probability measure is supported on X × [−M, M], i.e. |y| ≤ M almost surely, which is the case in many applications.

Denoting by ρX the marginal probability measure on X defined by

ρX(S) := ρ(S × Y )

(which is assumed to be a Borel measure on X), one has dρ(x, y) = dρ(y|x) dρX(x). It is then easy to check that fρ is the minimizer of the risk functional

E(f) := ∫_Z (y − f(x))² dρ,   (0.1)

over f ∈ L2(X, ρX), where this space consists of all functions from X to Y which are square integrable with respect to ρX. In fact, one has

E(f) = E(fρ) + ‖f − fρ‖²,

where ‖·‖ := ‖·‖_{L2(X,ρX)}.

The objective is to find an estimator fz for fρ based on z such that the quantity ‖fz − fρ‖ is small either in probability or in expectation, i.e. the rate of decay of the quantities

P{‖fρ − fz‖ ≥ η}, η > 0, or E(‖fρ − fz‖²)

as the sample size m increases is as high as possible. Here both the expectation and the probability are taken with respect to the product measure ρ^m defined on Z^m. Estimates in probability are preferred since they give more information about the success of a particular algorithm, and they automatically yield an estimate in expectation by integrating with respect to η.


This type of regression problem is referred to as random design or distribution-free. A recent survey on distribution-free regression theory is provided in the book [2], which includes most existing approaches as well as the analysis of their rate of convergence in the expectation sense.

A common approach to this problem is to choose a hypothesis (or model) class H and then to define fz, in analogy to (0.1), as the minimizer of the empirical risk

fz := argmin_{f∈H} Ez(f), with Ez(f) := (1/m) Σ_{j=1}^{m} (y_j − f(x_j))².

In other words, fz is the best approximation to (y_j)_{j=1}^{m} from H in the empirical norm

‖g‖²_m := (1/m) Σ_{j=1}^{m} |g(x_j)|².

Typically, H = Hm depends on a finite number n = n(m) of parameters. Since the estimated deviations involve an approximation error as well as the data uncertainty, a good performance of the estimator in the above sense would be obtained when both error contributions are balanced. Since the approximation error depends on the regularity of the approximated function, in some algorithms the number n is chosen using an a priori assumption on fρ. Here the main issue is to construct estimators that are universal in the sense that such prior assumptions are avoided. It turns out that this can be achieved by adapting the number n to the data.

In this talk some recent developments on the construction of universal estimators based on thresholding concepts are reviewed. This offers computationally more economic alternatives to complexity regularization and (in contrast to complexity regularization) sometimes even provides optimal rates in probability, see e.g. [1]. It is indicated under which circumstances optimal rates can be obtained and which strategies can be used when the spatial dimension d of X is large.
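The following is a minimal sketch of the empirical risk minimization described above (an illustration only, not the universal estimator of the talk): H consists of piecewise constants on 2^j dyadic cells of X = [0, 1], the empirical minimizer is the vector of cell means, and the synthetic f and noise level are made up for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic regression data y_i = f(x_i) + noise with |y| <= M a.s.
    f = lambda x: np.sin(2 * np.pi * x)
    m = 500
    x = rng.uniform(0.0, 1.0, m)
    y = f(x) + rng.uniform(-0.3, 0.3, m)

    def fit_piecewise_constant(x, y, j):
        """Empirical risk minimizer over piecewise constants on 2^j dyadic cells."""
        n = 2 ** j
        cells = np.minimum((x * n).astype(int), n - 1)
        means = np.zeros(n)
        for k in range(n):
            mask = cells == k
            if mask.any():
                means[k] = y[mask].mean()   # least squares on a cell = cell mean
        return means

    def empirical_risk(x, y, means):
        n = len(means)
        cells = np.minimum((x * n).astype(int), n - 1)
        return np.mean((y - means[cells]) ** 2)

    # The empirical risk alone keeps decreasing as n = 2^j grows; balancing it
    # against the data uncertainty is what the adaptive choice of n is for.
    for j in range(7):
        print(j, empirical_risk(x, y, fit_piecewise_constant(x, y, j)))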

References

[1] Binev, P., A. Cohen, W. Dahmen, R. DeVore, and V. Temlyakov, Universal algorithms in learning theory - Part I: piecewise constant functions, Journal of Machine Learning Research (JMLR) 6 (2005), 1297–1321.

[2] Györfi, L., M. Kohler, A. Krzyżak, and H. Walk (2002), A distribution-free theory of nonparametric regression, Springer, Berlin.


Image coding with anisotropic wavelets

S. Dekel

(joint work with D. Alani and A. Averbuch)

We apply the theory of [DL] to low bit-rate image coding [A]. First, a (geometrically greedy) Binary Space Partition (BSP) of the image is performed. Then, anisotropic wavelets are constructed over the adaptive BSP tree. Finally, a quantized, sparse representation of the image is coded using tree nonlinear approximation [CDDD] and rate-distortion techniques. The algorithm seems to outperform some recently published anisotropic image coding algorithms [LM, SDDV] in the low bit-rate range.

Preprints can be downloaded from: http://shaidekel.tripod.com

References

[A] D. Alani, Image coding with anisotropic wavelets, M.Sc. thesis, Tel-Aviv University, Israel, 2005.

[CDDD] A. Cohen, W. Dahmen, I. Daubechies, R. DeVore, Tree approximation and encoding, ACHA 11 (2001), 192-226.

[DL] S. Dekel, D. Leviatan, Adaptive multivariate approximation using binary space partitions and geometric wavelets, SIAM J. Num. Anal. (to appear).

[LM] E. Le Pennec, S. Mallat, Sparse geometric image representations with bandelets, IEEE Trans. Image Proc. 14 (2005), 423-438.

[SDDV] R. Shukla, P. L. Dragotti, M. N. Do and M. Vetterli, Rate-distortion optimized tree structured compression algorithms for piecewise smooth images, IEEE Trans. Image Proc. 14 (2005), 343-359.


Numerical Solution Methods for the Master Equation

Stefan Engblom

Derived from the Markov character only, the master equation of chemical reactions is an accurate stochastic description of quite general systems in chemistry. For D reacting species this is a differential-difference equation in D dimensions governing the behavior of the probability distribution of the system's different states. As is to be expected, the equation is only rarely analytically solvable, and numerical methods for solving it are of interest both in research and practice.

The most frequently used approximate solution method is to write down the corresponding set of reaction rate equations. In many cases this approximation is not valid, or only partially so, as stochastic effects caused by the natural noise present in the full description of the problem are poorly captured.

We will discuss some properties of the master equation along with some possible numerical approaches for solving it.
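As a minimal sketch of what a direct numerical solution looks like (a made-up single-species birth-death system with illustrative rates, not an example from the talk), one can truncate the state space and integrate dp/dt = Ap with a matrix exponential:

    import numpy as np
    from scipy.linalg import expm

    # Master equation dp/dt = A p for a birth-death process on states n = 0..N,
    # truncated to a finite state space (rates k_birth, k_death are illustrative).
    N = 50
    k_birth, k_death = 10.0, 0.5

    A = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n < N:
            A[n + 1, n] += k_birth          # birth n -> n+1
            A[n, n] -= k_birth
        if n > 0:
            A[n - 1, n] += k_death * n      # death n -> n-1
            A[n, n] -= k_death * n

    p0 = np.zeros(N + 1)
    p0[0] = 1.0                             # start in state n = 0
    p = expm(A * 2.0) @ p0                  # probability distribution at t = 2
    print("mass:", p.sum(), "mean copy number:", p @ np.arange(N + 1))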


Tensor Product Approximations in Quantum Chemistry

Heinz-Jürgen Flad

(joint work with S. R. Chinnamsetty, W. Hackbusch, B. N. Khoromskij, R. Schneider, and S. Schwinger)

The representation of many-electron wavefunctions in terms of tensor products of one-electron wavefunctions, so-called orbitals, provides the standard framework for computational methods in quantum chemistry [1]. Furthermore, Gaussian type orbitals (GTO) proved to be a convenient basis for the approximation of orbitals. This basis introduces an additional tensor product structure which turned out to be beneficial from a computational point of view. We have studied an alternative approach based on best N-term approximations for anisotropic wavelet tensor products [2, 3]. Special emphasis has been laid on the singular behaviour near electron-nuclear and electron-electron cusps. A possible generalization of this approach, which encompasses traditional GTO bases as well, are "optimal" tensor product decompositions with lowest possible Kronecker rank. We present some preliminary results for orbitals and related quantities like electron densities. Together with an appropriate tensor product decomposition of the Coulomb potential, this enables data-sparse representations of Hartree potentials and exchange operators. Possible applications are within a recently proposed multiresolution method for electron correlations [4].

References

[1] T. Helgaker, P. Jørgensen and J. Olsen, Molecular Electronic-Structure Theory (Wiley, New York, 1999).

[2] H.-J. Flad, W. Hackbusch and R. Schneider, Best N-term approximation in electronic structure calculations. I. One-electron reduced density matrix, MPI-MIS preprint 60 (2005), to appear in M2AN.

[3] H.-J. Flad, W. Hackbusch and R. Schneider, Best N-term approximation in electronic structure calculations. II. Jastrow factors, MPI-MIS preprint 80 (2005), submitted to M2AN.

[4] H.-J. Flad, W. Hackbusch, H. Luo and D. Kolb, Diagrammatic multiresolution analysis for electron correlations, Phys. Rev. B 71 (2005), 125115, 18 p.


Using the optimized combination technique for regression problems

Jochen Garcke

(joint work with Markus Hegland)

We present a generalisation of the sparse grid combination technique for regression in moderately high dimensions d ≤ 15. In contrast to the original combination technique, the coefficients in the combination formula do not depend only on the partial grids used, but instead on the function to be reconstructed, i.e., on the given data. The coefficients are computed to fulfill a certain optimality condition in a projection sense, first proposed in [He]. With this modified ansatz we are able to address instability issues of the normal combination technique, which were observed for some real-life data sets in [Ga]. We will present results for a range of benchmark data sets for regression showing the feasibility of this new ansatz in comparison to the normal combination technique as well as other standard algorithms for regression.

Furthermore, we will report on theoretical studies of the instability of the combination technique and will relate it to angles between the employed discrete spaces. Besides giving results which compare the error of the normal combination technique with the optimized combination technique using the (potentially large) angles between spaces, we will show in experiments under what conditions these angles can be large for regression problems.
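For contrast, here is a minimal sketch of the classical (non-optimized) combination technique in d = 2, with the fixed coefficients +1 on the level diagonal |l|_1 = n and -1 on |l|_1 = n - 1; the test function and grids are made up, and the optimized variant of the talk would instead compute the coefficients from the data.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Classical combination technique in d = 2: combine bilinear interpolants
    # on anisotropic grids of levels (l1, l2) with coefficients +1 on the
    # diagonal l1 + l2 = n and -1 on l1 + l2 = n - 1.
    f = lambda x, y: np.exp(-x * y) * np.sin(np.pi * x)

    def interpolant(l1, l2):
        gx, gy = np.linspace(0, 1, 2**l1 + 1), np.linspace(0, 1, 2**l2 + 1)
        vals = f(gx[:, None], gy[None, :])
        return RegularGridInterpolator((gx, gy), vals)

    n = 6
    pts = np.random.default_rng(1).uniform(0, 1, (1000, 2))
    combined = np.zeros(len(pts))
    for l1 in range(1, n):
        combined += interpolant(l1, n - l1)(pts)        # coefficient +1
    for l1 in range(1, n - 1):
        combined -= interpolant(l1, n - 1 - l1)(pts)    # coefficient -1
    print("max error:", np.abs(combined - f(pts[:, 0], pts[:, 1])).max())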

References

[Ga] J. Garcke. Maschinelles Lernen durch Funktionsrekonstruktion mit verallgemeinerten dünnen Gittern. Doktorarbeit, Institut für Numerische Simulation, Universität Bonn, 2004.

[He] Markus Hegland. Additive sparse grid fitting. In Proceedings of the Fifth International Conference on Curves and Surfaces, Saint-Malo, France 2002, pages 209–218. Nashboro Press, 2003.


Daubechies wavelets for electronic structure calculations

Stefan Goedecker

Orthogonal basis functions are desirable for electronic structure calculations. Difficulties in the calculation of the potential energy term have up to now prevented the use of orthogonal Daubechies wavelets. I will present the method that we have developed to calculate the potential energy rapidly and accurately for a Daubechies basis set. In addition I will describe some of the key characteristics of the density functional electronic structure program that we are developing.


Sparse grids for higher dimensional partial differential equations

Michael Griebel

The numerical treatment of high(er) dimensional problems suffers in general from the so-called curse of dimensionality. In special cases, i.e. for special function classes, this dependence O(n^{−r/d}) of the achieved accuracy on the invested work n can be substantially reduced. Here, r denotes smoothness and d dimensionality. This is e.g. the case for spaces of functions with bounded mixed derivatives. The associated numerical schemes involve a series expansion in a multiscale basis for the one-dimensional problem. Then, a product construction and a proper truncation of the resulting d-dimensional expansion result in a so-called sparse grid approximation, which is closely related to hyperbolic crosses. Here, depending on the respective problem and the 1-dimensional multiscale basis used, a variety of algorithms for higher dimensional problems result which allow to break the curse of dimensionality, at least to some extent, and result in complexities of the order O(n^{−r} log(n)^{α(d)}). In special cases even α(d) = 0 can be achieved. This is for example possible if the error is measured in the H1-seminorm or if the different dimensions as well as their interactions are not equally important and dimension-adaptive strategies are used. The constant in these order estimates, however, is still dependent on d. It also reflects subtle details of the implementation of the respective numerical scheme. In general, the order constant grows exponentially with d. In some cases, however, it can be shown that it decays exponentially with d. This allows one to treat quite high dimensional problems in moderate computing time. We discuss such sparse grid algorithms for the numerical treatment of partial differential equations (like the Fokker-Planck and Schrödinger equations) in higher dimensions.
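A small sketch of the bookkeeping behind these complexity statements, assuming the standard sparse grid truncation |l|_1 ≤ n + d - 1 of the hierarchical multilevel expansion (this illustrates the counts only, not any particular PDE solver):

    from itertools import product

    # Degrees of freedom of a regular sparse grid vs. a full grid in d
    # dimensions: hierarchical level l contributes 2^(l-1) basis functions
    # per direction, and the sparse grid keeps only the multilevels with
    # |l|_1 <= n + d - 1.
    def full_grid_dof(n, d):
        return (2**n - 1) ** d

    def sparse_grid_dof(n, d):
        dof = 0
        for l in product(range(1, n + 1), repeat=d):
            if sum(l) <= n + d - 1:
                m = 1
                for li in l:
                    m *= 2 ** (li - 1)
                dof += m
        return dof

    for d in (2, 3, 4):
        print(d, sparse_grid_dof(8, d), full_grid_dof(8, d))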


Structured Data-Sparse Representation of Multi-Dimensional Nonlocal Operators

B. N. Khoromskij

(joint work with W. Hackbusch)

Coupling the hierarchical and tensor-product formats offers an opportunity for efficient data-sparse representation of integral and more general nonlocal operators in higher dimensions (cf. [HKT], [GHK], [HK], [Kh], [Kh1]). Examples of such nonlocal mappings are solution operators of elliptic, parabolic and hyperbolic boundary value problems, Lyapunov and Riccati solution operators in control theory, spectral projection operators associated with the matrix sign function for solving the Hartree-Fock equation, collision integrals in the deterministic Boltzmann equation, as well as the convolution integrals in the Ornstein-Zernike equation. We discuss how the H-matrix techniques combined with the Kronecker tensor-product approximation allow one to represent a function F(A) of a discrete elliptic operator A in a hypercube (0, 1)^d ⊂ R^d in the case of a high spatial dimension d. Along with integral operators, we focus on the functions A^{−1} and sign(A). The asymptotic complexity of our approximations can be estimated by O(N^{p/d} log^q N), p = 1, 2, where N is the discrete problem size. Numerical results for model problems will be addressed.
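A much simplified illustration of why Kronecker tensor-product structure pays off (a rank-d Kronecker form of the discrete Laplacian, far more elementary than the H-matrix/Kronecker formats of the talk): the operator is applied through its 1D factors, and the n^d × n^d matrix is never assembled.

    import numpy as np

    # Apply the d-dimensional discrete Laplacian, written as the sum of d
    # Kronecker products  A = sum_k I ⊗ ... ⊗ A1 ⊗ ... ⊗ I,  to a vector.
    n, d = 20, 4
    A1 = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
          - np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2    # 1D stiffness matrix

    def apply_laplacian(x):
        X = x.reshape((n,) * d)
        out = np.zeros_like(X)
        for k in range(d):
            # contract A1 with the k-th tensor direction, then restore the axis
            out += np.moveaxis(np.tensordot(A1, X, axes=([1], [k])), 0, k)
        return out.reshape(-1)

    x = np.random.default_rng(0).standard_normal(n ** d)
    y = apply_laplacian(x)              # O(d * n^(d+1)) work, not O(n^(2d))
    print(y.shape, np.linalg.norm(y))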

References

[HK] W. Hackbusch, B.N. Khoromskij: Low-Rank Kronecker-Product Approximation to Multi-Dimensional Nonlocal Operators. Parts I, II. Preprints 29/30, MPI MIS, Leipzig 2004 (Computing, to appear).

[HKT] W. Hackbusch, B.N. Khoromskij, and E. Tyrtyshnikov: Hierarchical Kronecker tensor-product approximation. Preprint 35, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig, 2003; J. Numer. Math. 13 (2005), 119-156.

[GHK] I. P. Gavrilyuk, W. Hackbusch and B. N. Khoromskij: Tensor-Product Approximation to Elliptic and Parabolic Solution Operators in Higher Dimensions. Computing 74 (2005), 131-157.

[Kh] B.N. Khoromskij: Structured data-sparse approximation to high order tensors arising from the deterministic Boltzmann equation. Preprint 4, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig 2005 (submitted).

[Kh1] B.N. Khoromskij: An Introduction to Structured Tensor-Product Representation of Discrete Nonlocal Operators. Lecture Notes 27, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig 2005 (http://www.mis.mpg.de/scicomp/Fulltext/Khoromskij).


Solving the Stochastic Master Equations for Gene Regulatory Networks with Sparse Grids

Markus Hegland

(joint work with C. Burden, L. Santoso, H. Booth and S. McNamara)

An important driver of the dynamics of gene regulatory networks is noise, which is generated by the random interactions of genes with their products and regulators. As relatively small numbers of molecules of each substrate are involved, such systems are best described by stochastic models, i.e., the stochastic master equations, which provide probability distributions over the state space. Here the state is a discrete vector of the form (n_1, . . . , n_k) where the n_i include the amounts of the different proteins, RNA, dimers and polymers, and the state of the DNA, etc. As in some cases hundreds of components can be involved in a gene regulatory network, the approximation of the probability distribution has to address the curse of dimensionality. The traditional approach uses stochastic simulation techniques which effectively address the curse. However, many (thousands of) repeated simulations are required to provide precise information about stationary points, bifurcation phenomena and other properties of the stochastic processes, due to the O(N^{−1/2}) sampling error, where N is the number of simulations. An alternative way to address the curse of dimensionality is provided by sparse grid approximations and the direct solution of the master equations. The sparse grid methodology is applied and the application demonstrated to work efficiently for up to 10 proteins, and we are currently developing techniques which can deal with 100s of proteins. The sparse grid methodology is generalised to the case of integer variables n_i, and a multiresolution technique for this case is presented. Error bounds are provided which confirm the effectiveness of sparse grid approximations for "smooth" high-dimensional probability distributions.



Sparse wavelet methods for option pricing under stochastic volatility

Norbert Hilber

(joint work with Ana-Maria Matache and Christoph Schwab)

Prices of European plain vanilla options on one risky asset in a Black-Scholes market with stochastic volatility are expressed as solutions of degenerate parabolic partial differential equations in two spatial variables: the spot price S and the volatility process variable y. We present and analyze a pricing algorithm based on sparse wavelet space discretizations of order p ≥ 1 in (S, y) and on hp-discontinuous Galerkin time-stepping with geometric step size reduction towards maturity T. Wavelet preconditioners adapted to the volatility models for a GMRES solver allow one to price contracts at all maturities 0 < t ≤ T and all spot prices for a given strike K in essentially O(N) work with accuracy of essentially O(N^{−p}), a performance comparable to that of the best FFT-based pricing methods for constant volatility models (where "essentially" means up to powers of log N and |log h|, respectively).



Incremental identification of NARX models by sparse grid approximation

Olaf Kahrs

(joint work with Wolfgang Marquardt)

Nonlinear empirical models are used in various applications in the field of chemical engineering, such as process monitoring, control, and optimization. During model development, five major steps have to be carried out: model structure selection, determination of suitable input variables, model complexity adjustment, parameter estimation, and model validation. Frequently, neural networks (e.g., multilayer perceptrons and radial basis function networks) are used for this purpose. However, the associated nonlinear parameter estimation problems are difficult to solve and require tailored identification algorithms [B]. Often, the model development steps have to be repeated until a satisfactory model is found, which can be very time consuming and may require user interaction.

In this talk, we present an algorithm based on sparse grid function approximation that systematically integrates the five model development steps in an efficient way [KBM]. Suitable model input variables are selected automatically from a set of candidates. Model complexity is adjusted by incrementally refining the discretization of the input variables and adjusting regularization. The algorithm is illustrated in a case study on the identification of a nonlinear, auto-regressive model with exogenous inputs (NARX model).
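A minimal sketch of the NARX regression setup only, with a made-up system: fixed tensor-product polynomial features and ridge regularization stand in for the incrementally refined sparse grid and automatic input selection of the talk.

    import numpy as np

    # Fit y_t ~ f(y_{t-1}, u_{t-1}) by regularized least squares on a small
    # tensor-product polynomial basis (illustrative system and noise level).
    rng = np.random.default_rng(0)
    T = 2000
    u = rng.uniform(-1, 1, T)
    y = np.zeros(T)
    for t in range(1, T):                       # "true" nonlinear system
        y[t] = 0.6 * y[t-1] + np.tanh(u[t-1]) + 0.01 * rng.standard_normal()

    inputs = np.column_stack([y[:-1], u[:-1]])  # regressors y_{t-1}, u_{t-1}
    target = y[1:]
    # tensor-product polynomial features up to degree 3 in each input
    feats = np.column_stack([inputs[:, 0]**i * inputs[:, 1]**j
                             for i in range(4) for j in range(4)])
    lam = 1e-6                                  # ridge regularization
    w = np.linalg.solve(feats.T @ feats + lam * np.eye(feats.shape[1]),
                        feats.T @ target)
    pred = feats @ w
    print("training RMSE:", np.sqrt(np.mean((pred - target)**2)))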

References

[B] C. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.

[KBM] O. Kahrs, M. Brendel, W. Marquardt, Incremental identification of NARX models by sparse grid approximation, In: Proceedings of the 16th IFAC World Congress, Prague, Czech Republic, July 2005.


Fast summation of radial functions on the sphere

Jens Keiner

(joint work with Stefan Kunis and Daniel Potts)

Radial functions are a powerful tool in many areas of multi-dimensional approximation, especially when dealing with scattered data. We present a fast approximate algorithm for the evaluation of linear combinations of radial functions on the sphere S². The approach is based on a particular rank approximation of the corresponding Gram matrix and fast algorithms for spherical Fourier transforms. The proposed method takes O(L) arithmetic operations for L arbitrarily distributed nodes on the sphere and under mild assumptions on the radial function. In contrast to the panel clustering method on the sphere, which can be seen as an approximation in the spatial domain and which is well-suited for functions with good localization, our approach can be interpreted as an approximation in the frequency domain, rendering the method particularly useful for moderately localized functions with large overlap. As an advantage, we do not require the nodes to be sorted or pre-processed in any way, thus quickly adapting to different node distributions. The pre-computation effort only depends on the desired accuracy. We establish explicit error bounds for a range of radial functions and provide numerical examples covering approximation quality, speed measurements, and a comparison of our particular matrix approximation with a truncated SVD approximation.
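A small numerical illustration of the underlying low-rank observation, with a made-up kernel and random nodes; note that the dense SVD used here for clarity is exactly the O(L^3) computation that the proposed fast algorithm avoids:

    import numpy as np

    # A moderately localized radial kernel on the sphere has a numerically
    # low-rank Gram matrix; a rank-r factorization then evaluates the sums
    # sum_j c_j K(<x, x_j>) cheaply.
    rng = np.random.default_rng(0)
    L = 400
    X = rng.standard_normal((L, 3))
    X /= np.linalg.norm(X, axis=1, keepdims=True)     # nodes on S^2

    K = np.exp(4.0 * (X @ X.T - 1.0))                 # spherical Gaussian kernel
    U, S, Vt = np.linalg.svd(K)
    r = int((S > 1e-8 * S[0]).sum())                  # numerical rank at 1e-8
    print("numerical rank:", r, "of", L)

    # rank-r evaluation of K @ c
    c = rng.standard_normal(L)
    approx = U[:, :r] @ (S[:r] * (Vt[:r] @ c))
    print("relative error:",
          np.linalg.norm(approx - K @ c) / np.linalg.norm(K @ c))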


Structured Tensor Decomposition of Multi-Dimensional Operators

B. N. Khoromskij

Coupling the hierarchical and tensor-product formats offers an opportunity for efficient data-sparse representation of the general class of nonlocal operators in higher dimensions (cf. [HKT], [GHK], [HK], [Kh1], [Kh2]). Examples of such nonlocal mappings are integral operators with Green's kernels, solution operators of elliptic, parabolic and hyperbolic boundary value problems, Lyapunov and Riccati solution operators in control theory, spectral projections associated with the matrix sign function for solving the Hartree-Fock equation, collision integrals from the deterministic Boltzmann equation, as well as the 3D convolution integrals in the Ornstein-Zernike equation. We discuss how the H-matrix techniques combined with the Kronecker tensor-product decomposition allow one to represent a function F(A) of a discrete elliptic operator A in a hypercube (0, 1)^d ⊂ R^d in the case of a high spatial dimension d. Along with integral operators, we focus on the functions A^{−1} and sign(A). The asymptotic complexity of our tensor decompositions can be estimated by O(N^{p/d} log^q N), p = 1, 2, where N is the discrete problem size. Numerical results for model problems will be addressed.

References

[HK] W. Hackbusch, B.N. Khoromskij: Low-Rank Kronecker-Product Approximation to Multi-Dimensional Nonlocal Operators. Parts I, II. Preprints 29/30, MPI MIS, Leipzig 2004 (Computing, to appear).

[HKT] W. Hackbusch, B.N. Khoromskij, and E. Tyrtyshnikov: Hierarchical Kronecker tensor-product approximation. Preprint 35, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig, 2003; J. Numer. Math. 13 (2005), 119-156.

[GHK] I. P. Gavrilyuk, W. Hackbusch and B. N. Khoromskij: Tensor-Product Approximation to Elliptic and Parabolic Solution Operators in Higher Dimensions. Computing 74 (2005), 131-157.

[Kh1] B.N. Khoromskij: Structured data-sparse approximation to high order tensors arising from the deterministic Boltzmann equation. Preprint 4, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig 2005 (Math. Comp., to appear).

[Kh2] B.N. Khoromskij: An Introduction to Structured Tensor-Product Representation of Discrete Nonlocal Operators. Lecture Notes 27, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig 2005 (http://www.mis.mpg.de/scicomp/Fulltext/Khoromskij).


A fast wavelet solution of separable PDEs by the fictitious domain method

Radek Kucera

The contribution deals with the fast solution of large saddle-point systems arising in wavelet-Galerkin discretizations of separable elliptic PDEs. Periodized orthonormal compactly supported wavelets of tensor product type are used together with the fictitious domain method. A special structure of the matrices makes it possible to utilize the fast Fourier transform and the splitting technique based on the Kronecker product, so that the resulting algorithms are highly efficient. Numerical experiments confirm the theoretical results [1].
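A minimal sketch of the Kronecker splitting idea in its most elementary form (dense eigendecompositions instead of the FFT/wavelet machinery of the contribution): a separable system (A ⊗ I + I ⊗ B) vec(U) = vec(F) is solved entirely through its 1D factors.

    import numpy as np

    # Solve (A ⊗ I + I ⊗ B) vec(U) = vec(F) via eigendecompositions of the
    # symmetric 1D factors A and B, never assembling the n^2 x n^2 system.
    rng = np.random.default_rng(0)
    n = 64

    def spd(n):
        M = rng.standard_normal((n, n))
        return M @ M.T + n * np.eye(n)

    A, B = spd(n), spd(n)
    F = rng.standard_normal((n, n))

    la, Qa = np.linalg.eigh(A)
    lb, Qb = np.linalg.eigh(B)
    G = Qa.T @ F @ Qb                        # transform the right-hand side
    U = Qa @ (G / (la[:, None] + lb[None, :])) @ Qb.T

    # verify against the assembled Kronecker system (row-major vec)
    big = np.kron(A, np.eye(n)) + np.kron(np.eye(n), B)
    print(np.allclose(big @ U.reshape(-1), F.reshape(-1)))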

References

[1] Kucera, R.: Complexity of an algorithm for solving saddle-point systems with singular blocks arising in wavelet-Galerkin discretizations. Appl. Math. 50 (2005), No. 3, pp. 291-308.


Adaptive Uncertainty Quantification with Stochastic Partial Differential Equations

Hermann G. Matthies

Uncertainty enters into many mathematical models of physical reality. Often such models are partial differential equations (PDEs), and one way to quantify uncertainty is via stochastic models, resulting in toto in a stochastic PDE (SPDE). Galerkin discretisations of such linear and non-linear SPDEs result in huge and complex systems of linear or non-linear equations. Often the desired result is a functional of the solution, or the probability of an event - also a functional of the solution. Using duality techniques and weighted residuals, an adaptive and error-controlled computation of such a functional may be established. This uses a posteriori estimates of the error in the functional to control the approximating spaces.


Algorithms for Computing with Sums of Separable Functions/Operators

Martin J. Mohlenkamp

(joint work with Gregory Beylkin)

One approach for working with functions/operators in high dimensions is to approximate them by sums of separable functions/operators. Besides the theoretical question of how good such approximations are, there is the important practical issue of how to use them. In this talk I will focus on the techniques for using them within numerical analysis algorithms. I will describe how to

(1) apply a matrix to a vector;
(2) approximate one vector with another using fewer terms;
(3) control the condition number;
(4) solve a linear system;
(5) work within an antisymmetric pseudonorm; and
(6) represent the action of nonseparable electron-interaction potentials.

Some of this work has appeared in [BM02, BM05].
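A minimal sketch of item (1), with made-up sizes and random factors: applying an operator in separated form term by term multiplies the number of terms, which is exactly why step (2), re-approximation with fewer terms, is essential.

    import numpy as np

    # An operator given as a sum of r_A Kronecker products applied to a vector
    # given as a sum of r_x elementary tensors yields r_A * r_x terms.
    rng = np.random.default_rng(0)
    n, d, rA, rx = 8, 3, 2, 3

    A = [[rng.standard_normal((n, n)) for _ in range(d)] for _ in range(rA)]
    x = [[rng.standard_normal(n) for _ in range(d)] for _ in range(rx)]

    # separated product: (sum_s ⊗_k A_s^(k)) (sum_t ⊗_k x_t^(k))
    y = [[A[s][k] @ x[t][k] for k in range(d)]
         for s in range(rA) for t in range(rx)]
    print("terms before reduction:", len(y))          # rA * rx = 6

    # check against the dense computation for this small example
    def dense(vec_list):
        out = 0.0
        for fac in vec_list:
            t = fac[0]
            for k in range(1, d):
                t = np.kron(t, fac[k])
            out = out + t
        return out

    Adense = sum(np.kron(np.kron(A[s][0], A[s][1]), A[s][2]) for s in range(rA))
    print(np.allclose(Adense @ dense(x), dense(y)))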

References

[BM02] G. Beylkin and M. J. Mohlenkamp. Numerical operator calculus in higher dimensions. Proc. Natl. Acad. Sci. USA, 99(16):10246–10251, August 2002. http://www.pnas.org/cgi/content/abstract/112329799v1

[BM05] Gregory Beylkin and Martin J. Mohlenkamp. Algorithms for numerical analysis in high dimensions. SIAM J. Sci. Comput., 26(6):2133–2159, July 2005. http://www.math.ohiou.edu/~mjm/research/BEY-MOH2004P.pdf


Approximation Theory for Tensor Product Methods

Andrej Nitsche

We present approximation theory for tensor product methods in both the linear and the nonlinear/adaptive setting. In the linear sparse grid case, the relevant regularity is an L2-based Sobolev regularity of mixed highest derivatives. In the nonlinear/adaptive case, it is an Lp-based Besov regularity of mixed highest derivatives, p < 2.

In the linear case, Sobolev-mix regularity is a stronger requirement than (isotropic) Sobolev regularity. However, in the nonlinear case the relevant Besov-mix and (isotropic) Besov spaces are based on different Lp-norms and are not contained in each other. If Besov regularity is low but Besov-mix regularity is high, adaptive tensor product methods have an advantage over methods based on isotropically supported bases. This seems to be the case for solutions of elliptic PDEs.



Random Sampling of Sparse Trigonometric Polynomials

Holger Rauhut

We study the problem of reconstructing a sparse multivariate trigonometric polynomial from few random samples. By "sparse" we mean that only a small number of Fourier coefficients of the polynomial do not vanish. However, one does not know a priori which coefficients are non-zero. The random samples are taken from the uniform distribution on the unit cube.

Inspired by recent work of E. Candes, J. Romberg and T. Tao [1, 2], we propose to recover the polynomial by Basis Pursuit. That is, we minimize the ℓ1-norm of the Fourier coefficients subject to the condition that the corresponding trigonometric polynomial matches the original one on the sampling points. Numerical experiments show that in many cases the trigonometric polynomial can be recovered exactly. Of course, to this end the number of samples N has to be high enough compared to the "sparsity", i.e., the number of non-vanishing coefficients. However, N can be chosen small compared to the dimension D of the space of trigonometric polynomials that one is working with, i.e., D = (2q+1)^d, where q is the maximal degree of the trigonometric polynomial and d the underlying space dimension. So in some sense this scheme overcomes the Nyquist rate.

We will present a theoretical result that explains this observation to some extent. Unexpectedly, it connects this problem to an interesting combinatorial problem concerning set partitions, which seemingly has not been considered before.

References

[1] E. Candes, J. Romberg, T. Tao, Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information, Preprint, 2004.

[2] E. Candes, T. Tao, Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?, Preprint, 2004.


Recovery of Sparse Signals from Incomplete and Inaccurate Measurements

Justin Romberg

(joint work with Emmanuel Candes and Terence Tao)

We will discuss a recent series of results concerning a fundamental problem in the applied sciences: When can an object of interest be recovered from a small number of linear measurements?

As a concrete example, consider the following problem. Suppose that a discrete signal x ∈ R^N is sparse in the frequency domain in that its Discrete Fourier transform x̂ is nonzero only on an unknown and arbitrary set of size B. Instead of observing all N points of x directly, we take K ≪ N samples in the time domain. We will show that for an overwhelming percentage of sample sets of size

K ≥ Const · B · log N,

the signal x can be recovered perfectly by solving the convex program

min_z ‖ẑ‖₁ := Σ_ω |ẑ(ω)| subject to z(t) = x(t), t ∈ T

where ẑ is the Fourier transform of z, and T is the set of sampling locations, |T| = K.

This result can be thought of as a nonlinear sampling theorem. The classical Shannon sampling theorem states that a signal whose Fourier transform is supported on a known and connected set of size B around the origin can be reconstructed from B equally spaced samples in the time domain. The above result says that we can reconstruct a signal whose Fourier transform is supported on an arbitrary and unknown set of size B from ∼ B log N samples. Moreover, the choice of sample locations is not at all critical.

The recovery theorem can be extended to signals which are sparse in a general orthobasis Ψ and measured using a series of test functions ψ_k. More precisely, given K linear measurements

y_k = ⟨ψ_k, y⟩, k = 1, . . . , K,

we can recover a signal which is sparse in the Ψ-domain by solving

min_z ‖Ψz‖₁ subject to ⟨ψ_k, z⟩ = y_k, k = 1, . . . , K

(Ψz is the set of Ψ-domain coefficients of z). The recovery condition now depends on the basis Ψ and the set of measurement vectors ψ_k obeying an uncertainty principle.

Finally, we will show that the recovery procedure can be made wonderfully robust against the presence of noise and other measurement error. We will end by discussing ongoing research on applications of these ideas to medical imaging and data compression.


Wave Function Methods for the Numerical Solution of the Electronic Schrödinger Equation

Reinhold Schneider

The electronic Schrödinger equation describes a nonrelativistic quantum mechanical system of N electrons in the presence of an electrostatic potential given by a configuration of M nuclei, resp. atoms. It is the basic equation for modelling problems in chemistry, solid state physics, etc. We will present an overview of the numerical approximation of the ground state solution. All these methods are based on anti-symmetric tensor product approximation, namely Slater determinants. We will emphasize the treatment of the Coupled Cluster Method. In particular, we would like to present an analysis of the Projected Coupled Cluster Method.


Sparse Wavelet Methods for Operator Equations with Stochastic Data

Christoph Schwab

(joint work with Tobias von Petersdorff)

Strongly elliptic operator equations with stochastic data in a domain D ⊂ R^d are considered. We discretize the mean field problem using a FEM with hierarchical basis and N degrees of freedom.

To compute the k-th moment M^k u of the random solution u, both stochastic and deterministic methods are analyzed. The key tool in both types of algorithms are k-fold sparse tensor products of the FEM in D.

Monte-Carlo FEM with M samples (i.e., deterministic solves) is proved to yield approximations to M^k u converging with optimal rate in work O(M N (log N)^{k−1}).

Direct deterministic FEM approximation of M^k u is based on equations for M^k u in D^k ⊂ R^{kd}. These equations are derived, and their strong ellipticity and regularity in scales of anisotropic Sobolev spaces is established. Their Galerkin approximation using sparse tensor products of the FE spaces in D allows approximation of M^k u with N (log N)^{k−1} degrees of freedom converging at an optimal rate.

Multilevel preconditioning and, for nonlocal operators, wavelet compression of the stiffness matrices is shown to yield deterministic approximations to M^k u in complexity N (log N)^c for some c > 0 proportional to k.


Adaptive tensor product wavelet methods - avoiding the curse of dimensionality

Rob Stevenson

Best N-term approximations with (isotropic) wavelets of order d in n space dimensions converge, in the L2-norm, for s < d with a rate N^{−s/n} iff the function to be approximated is in the Besov space B^s_{τ,τ}, whereas a rate beyond N^{−d/n} can never be expected. As shown in [N], using tensor product wavelets, for s < d one obtains a rate N^{−s} iff u ∈ B^s_{τ,τ} ⊗_τ · · · ⊗_τ B^s_{τ,τ}. So not only do wavelets of lower order suffice to obtain the same rate, but also, since B^s_{τ,τ} ⊗_τ · · · ⊗_τ B^s_{τ,τ} is much larger than B^s_{τ,τ}, one gets this rate under much milder regularity assumptions.

When u is implicitly given as the solution of an operator equation, these rates can be realized with adaptive wavelet methods as introduced by Cohen, Dahmen and DeVore, assuming that the operator in wavelet coordinates, i.e., the (infinite) stiffness matrix, is, dependent on s, sufficiently close to a computable sparse matrix. Due to the higher rates of N-term approximations, for tensor product wavelets the requirements concerning near-sparsity are much stronger. Nevertheless, we will show that these requirements are satisfied for partial differential operators with variable coefficients when the mixed derivatives of sufficiently large order of these coefficients are bounded.

References

[N] Andrej Nitsche, Sparse Tensor Product Approximation of Elliptic Problems, PhD thesis, ETH Zürich, 2004.


The volume estimates and the multivariate approximation

V.N. Temlyakov

(joint work with B.S. Kashin)

I will discuss new estimates for the entropy numbers of classes of multivariate functions with bounded mixed derivative. It is known that the investigation of these classes requires the development of new techniques compared to the univariate classes. In a recent joint paper with B. Kashin we continued to develop the technique based on estimates of volumes of sets of the Fourier coefficients of trigonometric polynomials with frequencies in special regions. We obtained new volume estimates and used them to get the right orders of decay of the entropy numbers of classes of functions of two variables with a mixed derivative bounded in the L1-norm. This is the first such result for these classes. This result essentially completes the investigation of the orders of decay of the entropy numbers of classes of functions of two variables with bounded mixed derivative. The case of similar classes of functions of more than two variables is still open.


Sparse Perturbation Algorithms for Elliptic Problems with Stochastic Data

Radu Alexandru Todor

We consider the moment problem for stochastic diffusion in a bounded physical domain D ⊂ R^d,

(1) −div(a(·, ω)∇u(·, ω)) = f(·, ω) in H^{−1}(D), P-a.e. ω ∈ Ω,

with homogeneous boundary conditions. Here (Ω, Σ, P) is a probability space modelling the data uncertainty. For k ∈ N+, the moment of order k of the solution u to (1) is defined on the kd-dimensional domain D^k := D × D × · · · × D (k times) by

(2) M^k(u)(x_1, x_2, · · · , x_k) = ∫_Ω u(x_1, ω) u(x_2, ω) · · · u(x_k, ω) dP(ω).

For the moment computation of the stochastic solution u to (1) we develop perturbation algorithms of nearly optimal complexity, under a spatial regularity assumption on the random fluctuation of the data. We employ sparse tensor product spaces, wavelet preconditioning and best N-term approximation techniques to efficiently represent the higher order moments M^k(u) for k ∈ N+.



On solving large linear systems arising from the Stochastic Finite Element Method

Elisabeth Ullmann

The discretization of elliptic boundary value problems with random data by the stochastic finite element method (SFEM) [1] requires the solution of a linear system in many unknowns. Using appropriate stochastic shape functions, it is possible to break up this system into a sequence of independent lower dimensional problems.

Recently, de Sturler et al. [2] proposed two algorithms, modifications of GCROT and GCRO-DR, that recycle Krylov subspaces when solving a sequence of linear systems in order to reduce the total cost of the solution process. We present the results of numerical experiments when solving such decoupled systems by means of Krylov subspace recycling.

References

[1] R.G. Ghanem, P.D. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, New York, 1991.

[2] M.L. Parks, E. de Sturler, G. Mackey, D.D. Johnson, S. Maiti, Recycling Krylov Subspaces for Sequences of Linear Systems, Tech. Report UIUCDCS-R-2004-2421(CS), University of Illinois, March 2004.


The hyperbolic cross space approximation of electronic wavefunctions

Harry Yserentant

The electronic Schrödinger equation describes the motion of electrons under Coulomb interaction forces in the field of clamped nuclei and forms the basis of quantum chemistry. It was shown in [Y1] that the solutions of this equation, the electronic wavefunctions, possess square integrable high-order mixed weak derivatives, as they are needed to underpin sparse grid or hyperbolic cross space approximation techniques theoretically. The talk is concerned with a quantitative version of this result. Explicit and essentially sharp estimates for the norms of these derivatives are given that underline the enormous potential of hyperbolic cross space approximation methods in quantum theory.

References

[Y1] H. Yserentant, On the regularity of the electronic Schrödinger equation in Hilbert spaces of mixed derivatives, Numer. Math. 98, 731–759 (2004)

[Y2] H. Yserentant, Sparse grid spaces for the numerical solution of the electronic Schrödinger equation, Numer. Math. 101, 381–389 (2005)

[Y3] H. Yserentant, The hyperbolic cross space approximation of electronic wavefunctions, in preparation
