MATH 590: Meshfree Methods
Chapter 33: Adaptive Iteration

Greg Fasshauer
Department of Applied Mathematics, Illinois Institute of Technology
Fall 2010
[email protected]


Outline

1 A Greedy Adaptive Algorithm

2 The Faul-Powell Algorithm

The two adaptive algorithms discussed in this chapter both yield an approximate solution to the RBF interpolation problem.

The algorithms have some similarity with some of the omitted material from Chapters 21, 31 and 32.

The contents of this chapter are based mostly on the papers [Faul and Powell (1999), Faul and Powell (2000), Schaback and Wendland (2000a), Schaback and Wendland (2000b)] and the book [Wendland (2005a)].

As always, we concentrate on systems for strictly positive definite kernels (variations for strictly conditionally positive definite kernels also exist).

A Greedy Adaptive Algorithm

One of the central ingredients is the use of the native space inner product discussed in Chapter 13.

As always, we assume that our data sites are X = {x_1, ..., x_N}. We also consider a second set Y ⊆ X.

Let P_Y f be the interpolant to f on Y ⊆ X.

Then the first orthogonality lemma from Chapter 18 (with g = f) yields

    ⟨f − P_Y f, P_Y f⟩_{N_K(Ω)} = 0.

This leads to the energy split (see Chapter 18)

    ‖f‖²_{N_K(Ω)} = ‖f − P_Y f‖²_{N_K(Ω)} + ‖P_Y f‖²_{N_K(Ω)}.

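As a quick sanity check, here is a minimal MATLAB sketch (not from the original slides) that verifies the energy split numerically. It assumes f itself lies in the span of the kernel translates on X, so that all native space norms reduce to quadratic forms with the Gram matrix; the Gaussian kernel, coefficients, and point sets are arbitrary choices for illustration.

    % Verify ||f||^2 = ||f - P_Y f||^2 + ||P_Y f||^2 in the native space norm
    rbf = @(e,r) exp(-(e*r).^2); ep = 5.5;
    X = linspace(0,1,20)'; c = sin(2*pi*X);  % f = sum_j c_j K(.,x_j)
    KXX = rbf(ep,abs(X-X'));                 % Gram matrix on X
    Y = X(1:4:end); KYY = rbf(ep,abs(Y-Y'));
    fY = rbf(ep,abs(Y-X'))*c;                % values f(y_i) on Y
    b = KYY\fY;                              % coefficients of P_Y f
    nf2 = c'*KXX*c;                          % ||f||^2
    nPf2 = b'*KYY*b;                         % ||P_Y f||^2
    nr2 = nf2 - 2*(b'*fY) + nPf2;            % ||f - P_Y f||^2
    fprintf('energy split error: %g\n', nf2-(nr2+nPf2))

The printed error is zero up to rounding, since ⟨f, P_Y f⟩ = b'*fY = b'*KYY*b by the reproducing property and the interpolation conditions on Y.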

We now consider an iteration on residuals.

We pretend to start with our desired interpolant r_0 = P_X f on the entire set X.

We also pick an appropriate sequence of sets Y_k ⊆ X, k = 0,1,... (we will discuss some possible heuristics for choosing these sets later).

Then we iteratively define the residual functions

    r_{k+1} = r_k − P_{Y_k} r_k,   k = 0,1,....   (1)

Remark
In the actual algorithm below we will only deal with discrete vectors. Thus the vector r_0 will be given by the data values (since Pf is supposed to interpolate f on X).

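The following minimal MATLAB sketch (not part of the original slides) illustrates iteration (1) on discrete residual vectors; it is a 1D analogue of the one-point version implemented in RBFGreedyOnePoint2D.m below, with the test function and parameters chosen only for illustration. The printed maximum also illustrates the residual decay established later.

    % One-point residual iteration r_{k+1} = r_k - P_{Y_k} r_k in 1D
    rbf = @(e,r) exp(-(e*r).^2); ep = 5.5;
    tol = 1e-3; kmax = 500;
    X = linspace(0,1,200)'; res = sin(2*pi*X).*exp(-X);  % r_0 = data values
    u = zeros(size(X)); k = 0;
    while max(abs(res)) > tol && k < kmax
        [~,idx] = max(abs(res));           % greedy one-point set Y_k
        beta = res(idx)/rbf(ep,0);         % interpolation condition at y_k
        PYr = beta*rbf(ep,abs(X-X(idx)));  % P_{Y_k} r_k evaluated on X
        res = res - PYr; u = u + PYr;      % iteration (1); u accumulates the approximation
        k = k + 1;
    end
    fprintf('k = %d, max residual = %g\n', k, max(abs(res)))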

Now, the energy splitting identity with f = r_k gives us

    ‖r_k‖²_{N_K(Ω)} = ‖r_k − P_{Y_k} r_k‖²_{N_K(Ω)} + ‖P_{Y_k} r_k‖²_{N_K(Ω)}   (2)

or, using the iteration formula (1),

    ‖r_k‖²_{N_K(Ω)} = ‖r_{k+1}‖²_{N_K(Ω)} + ‖r_k − r_{k+1}‖²_{N_K(Ω)}.   (3)


We have the following telescoping sum for the partial sums of the norms of the residual updates P_{Y_k} r_k:

    ∑_{k=0}^{M} ‖P_{Y_k} r_k‖²_{N_K(Ω)} = ∑_{k=0}^{M} ‖r_k − r_{k+1}‖²_{N_K(Ω)}                    (by (1))
                                        = ∑_{k=0}^{M} ( ‖r_k‖²_{N_K(Ω)} − ‖r_{k+1}‖²_{N_K(Ω)} )    (by (3))
                                        = ‖r_0‖²_{N_K(Ω)} − ‖r_{M+1}‖²_{N_K(Ω)} ≤ ‖r_0‖²_{N_K(Ω)}.

Remark
This estimate shows that the sequence of partial sums is monotone increasing and bounded, and therefore convergent, even for a poor choice of the sets Y_k.


If we can show that the residuals r_k converge to zero, then we would have that the iteratively computed approximation

    u_{M+1} = ∑_{k=0}^{M} P_{Y_k} r_k = ∑_{k=0}^{M} (r_k − r_{k+1}) = r_0 − r_{M+1}   (4)

converges to the original interpolant r_0 = P_X f.

Remark
The omitted chapters contain iterative methods by which we approximate the interpolant by iterating an approximation method on the full data set. Here we are approximating the interpolant by iterating an interpolation method on nested (increasing) adaptively chosen subsets of the data.


Remark
The present method also has some similarities with the (omitted) multilevel algorithms of Chapter 32. However:

here: we compute the interpolant P_X f on the set X based on a single kernel K;
Chapter 32: the final interpolant is given as the result of using the spaces ⋃_{k=1}^{M} N_{K_k}(Ω), where K_k is an appropriately scaled version of the kernel K.

Moreover, the goal in Chapter 32 is to approximate f, not Pf.


To prove convergence of the residual iteration, we assume that we can find sets of points Y_k such that at step k at least some fixed percentage of the energy of the residual is picked up by its interpolant, i.e.,

    ‖P_{Y_k} r_k‖²_{N_K(Ω)} ≥ γ‖r_k‖²_{N_K(Ω)}   (5)

with some fixed γ ∈ (0,1].

Then (3) and the iteration formula (1) imply

    ‖r_{k+1}‖²_{N_K(Ω)} = ‖r_k‖²_{N_K(Ω)} − ‖P_{Y_k} r_k‖²_{N_K(Ω)},

and therefore

    ‖r_{k+1}‖²_{N_K(Ω)} ≤ ‖r_k‖²_{N_K(Ω)} − γ‖r_k‖²_{N_K(Ω)} = (1 − γ)‖r_k‖²_{N_K(Ω)}.


Applying the bound

    ‖r_{k+1}‖²_{N_K(Ω)} ≤ (1 − γ)‖r_k‖²_{N_K(Ω)}

recursively yields

Theorem
If the choice of sets Y_k satisfies ‖P_{Y_k} r_k‖²_{N_K(Ω)} ≥ γ‖r_k‖²_{N_K(Ω)}, then the residual iteration (see (4))

    u_M = ∑_{k=0}^{M−1} P_{Y_k} r_k = r_0 − r_M,   r_{k+1} = r_k − P_{Y_k} r_k,   k = 0,1,...,

converges linearly in the native space norm. After M steps of iterative refinement there is an error bound

    ‖P_X f − u_M‖²_{N_K(Ω)} = ‖r_0 − u_M‖²_{N_K(Ω)} = ‖r_M‖²_{N_K(Ω)} ≤ (1 − γ)^M ‖r_0‖²_{N_K(Ω)}.


Remark
This theorem has various limitations:

The norm involves the kernel K, which makes it difficult to find sets Y_k that satisfy (5).
The native space norm of the initial residual r_0 is not known.

A way around these problems is to use an equivalent discrete norm on the set X.


Schaback and Wendland establish an estimate of the form

    ‖r_0 − u_M‖²_X ≤ (C/c) (1 − δ c²/C²)^{M/2} ‖r_0‖²_X,

where c and C are constants denoting the norm equivalence, i.e.,

    c‖u‖_X ≤ ‖u‖_{N_K(Ω)} ≤ C‖u‖_X

for any u ∈ N_K(Ω), and where δ is a constant analogous to γ (but based on use of the discrete norm ‖·‖_X in (5)).

In fact, any discrete ℓ_p norm on X can be used.

In the implementation below we will use the maximum norm.

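For intuition (this sketch is not from the original slides), the equivalence constants can be computed explicitly for the discrete ℓ_2 norm on X when u is restricted to the span of the kernel translates K(·,x_j): writing u = ∑_j c_j K(·,x_j), we have ‖u‖²_{N_K(Ω)} = c'Ac and ‖u‖²_X = ‖Ac‖² with A the Gram matrix, so the extreme ratios follow from the eigenvalues of A. A minimal MATLAB sketch, with kernel, shape parameter, and point set chosen only for illustration:

    % Discrete l2 norm-equivalence constants on a finite set X
    rbf = @(e,r) exp(-(e*r).^2); ep = 5.5;
    X = linspace(0,1,10)';
    A = rbf(ep,abs(X-X'));      % Gram matrix K(x_i,x_j)
    lam = eig((A+A')/2);        % symmetrize against rounding
    c_eq = 1/sqrt(max(lam));    % c*||u||_X <= ||u||_{N_K}
    C_eq = 1/sqrt(min(lam));    % ||u||_{N_K} <= C*||u||_X
    fprintf('c = %g, C = %g\n', c_eq, C_eq)

Note that C blows up as the Gram matrix becomes ill-conditioned, which is consistent with the slow convergence observed below for small shape parameters.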

In [Schaback and Wendland (2000b)] a basic version of this algorithm, where the sets Y_k consist of a single point, is described and tested.

The resulting approximation yields the best M-term approximation to the interpolant.

Remark
This idea is related to the concepts of greedy approximation algorithms (see, e.g., [Temlyakov (1998)]) and sparse approximation (see, e.g., [Girosi (1998)]).


If the set Y_k consists of only a single point y_k, then the partial interpolant P_{Y_k} r_k is particularly simple:

    P_{Y_k} r_k = β K(·,y_k)   with   β = r_k(y_k) / K(y_k,y_k).

This follows immediately from the usual RBF expansion (which consists of only one term here) and the interpolation condition P_{Y_k} r_k(y_k) = r_k(y_k).


The point y_k is picked to be the point in X where the residual is largest, i.e.,

    |r_k(y_k)| = ‖r_k‖_∞.

This choice of "set" Y_k certainly satisfies the constraint (5):

    ‖P_{Y_k} r_k‖²_{N_K(Ω)} = ‖β K(·,y_k)‖²_{N_K(Ω)} = ‖ (r_k(y_k)/K(y_k,y_k)) K(·,y_k) ‖²_{N_K(Ω)} ≥ γ‖r_k‖²_{N_K(Ω)},   0 < γ ≤ 1.

Here we require K(·,y_k) ≤ K(y_k,y_k), which is certainly true for positive definite translation invariant kernels (cf. Chapter 3). However, in general we only know that |K(x,y)|² ≤ K(x,x) K(y,y) (see [Berlinet and Thomas-Agnan (2004)]).

The interpolation problem is (approximately) solved without having to invert any linear systems.


Algorithm (Greedy one-point)

Input: data locations X, associated data values f of f, tolerance tol > 0
Set initial residual r_0 = P_X f|_X = f, initialize u_0 = 0, e = ∞, k = 0
Choose starting point y_k ∈ X
While e > tol do
    Set β = r_k(y_k) / K(y_k,y_k)
    For 1 ≤ i ≤ N do
        r_{k+1}(x_i) = r_k(x_i) − β K(x_i, y_k)
        u_{k+1}(x_i) = u_k(x_i) + β K(x_i, y_k)
    end
    Find e = max_X |r_{k+1}| and the point y_{k+1} where it occurs
    Increment k = k + 1
end


Remark
It is important to realize that in our MATLAB implementation we never actually compute the initial residual r_0 = P_X f.

All we require are the values of r_0 on the grid X of data sites.

However, since P_X f|_X = f|_X the values r_0(x_i) are given by the interpolation data f(x_i) (see line 5 of the code).

Moreover, since the sets Y_k are subsets of X, the value r_k(y_k) required to determine β is actually one of the current residual values (see line 10 of the code).


Remark
We use DistanceMatrix together with rbf to compute both

K(y_k,y_k) (on lines 9 and 10), and
K(x_i,y_k), needed for the updates of the residual r_{k+1} and the approximation u_{k+1} on lines 11-14.

Note that the "matrices" DM_data, IM, DM_res, RM, DM_eval, EM are only column vectors since only one center, y_k, is involved.


Remark
The algorithm demands that we compute the residuals r_k on the data sites.

The partial approximants u_k to the interpolant can be evaluated anywhere.

If we do this also at the data sites, then we are required to use a plotting routine that differs from our usual one (such as trisurf built on a triangulation of the data sites obtained with the help of delaunayn). We instead follow the same procedure as in all of our other programs, i.e., we evaluate u_k on a 40 × 40 grid of equally spaced points. This has been implemented on lines 11-15 of the program.

Note that the updating procedure has been vectorized in MATLAB, allowing us to avoid the for-loop over i in the algorithm.


Program (RBFGreedyOnePoint2D.m)
  1 rbf = @(e,r) exp(-(e*r).^2); ep = 5.5;
  2 N = 16641; dsites = CreatePoints(N,2,'h');
  3 neval = 40; epoints = CreatePoints(neval^2,2,'u');
  4 tol = 1e-5; kmax = 1000;
  5 res = testfunctionsD(dsites); u = 0;
  6 k = 1; maxres(k) = 999999;
  7 ykidx = (N+1)/2; yk(k,:) = dsites(ykidx,:);
  8 while (maxres(k) > tol && k < kmax)
  9   DM_data = DistanceMatrix(yk(k,:),yk(k,:));
 10   IM = rbf(ep,DM_data); beta = res(ykidx)/IM;
 11   DM_res = DistanceMatrix(dsites,yk(k,:));
 12   RM = rbf(ep,DM_res);
 13   DM_eval = DistanceMatrix(epoints,yk(k,:));
 14   EM = rbf(ep,DM_eval);
 15   res = res - beta*RM; u = u + beta*EM;
 16   [maxres(k+1), ykidx] = max(abs(res));
 17   yk(k+1,:) = dsites(ykidx,:); k = k + 1;
 18 end
 19 exact = testfunctionsD(epoints);
 20 rms_err = norm(u-exact)/neval


To illustrate the greedy one-point algorithm we perform two experiments.

Both tests use data obtained by sampling Franke's function at 16641 Halton points in [0,1]².

Test 1 is based on Gaussians; Test 2 uses inverse multiquadrics.

For both tests we use the same shape parameter ε = 5.5.


Figure: 1000 selected points and residual for the greedy one-point algorithm with Gaussian RBFs and N = 16641 data points.


Figure: Fits of Franke's function for the greedy one-point algorithm with Gaussian RBFs and N = 16641 data points. Top left to bottom right: 1 point, 2 points, 4 points, final fit with 1000 points.


Remark
In order to obtain our approximate interpolants we used a tolerance of 10⁻⁵ along with an additional upper limit of kmax = 1000 on the number of iterations.

For both tests the algorithm uses up all 1000 iterations. The final maximum residual is maxres = 0.0075 for Gaussians, and maxres = 0.0035 for inverse MQs.

In both cases there occurred several multiple point selections. Contrary to interpolation problems based on the solution of a linear system, multiple point selections do not pose a problem here.


Figure: 1000 selected points and residual for the greedy one-point algorithm with IMQ RBFs and N = 16641 data points.


Figure: Fits of Franke's function for the greedy one-point algorithm with IMQ RBFs and N = 16641 data points. Top left to bottom right: 1 point, 2 points, 4 points, final fit with 1000 points.


Remark
We note that the inverse multiquadrics have a more global influence than the Gaussians (for the same shape parameter). This effect is clearly evident in the first few approximations to the interpolants in the figures.

From the last figure we see that the greedy algorithm enforces interpolation of the data only on the most recent set Y_k (i.e., for the one-point algorithm studied here, only at a single point).

If one wants to maintain the interpolation achieved in previous iterations, then the sets Y_k should be nested. This, however, would have a significant effect on the execution time of the algorithm since the matrices at each step would increase in size.


Remark
One advantage of this very simple algorithm is that no linear systems need to be solved.

This allows us to approximate the interpolants for large data sets even for globally supported kernels, and also with small values of ε (and therefore an associated ill-conditioned interpolation matrix).

One should not expect too much in this case, however, as the results in the following figure show, where we used a value of ε = 0.1 for the shape parameter. A lot of smoothing occurs, so the convergence to the RBF interpolant is very slow.


Figure: 1000 selected points (only 20 of them distinct) and fit of Franke's function for the greedy one-point algorithm with flat Gaussian RBFs (ε = 0.1) and N = 16641 data points.


Remark
In the pseudo-code of the algorithm matrix-vector multiplications are not required. However, MATLAB allows for a vectorization of the for-loop, which does result in two matrix-vector multiplications.

For practical situations, e.g., for smooth kernels and densely distributed points in X, the convergence can be rather slow.

The simple greedy algorithm described above is extended in [Schaback and Wendland (2000b)] to a version that adaptively uses kernels of varying scales.

[email protected] MATH 590 – Chapter 33 32

The Faul-Powell Algorithm

Outline

1 A Greedy Adaptive Algorithm

2 The Faul-Powell Algorithm

Another iterative algorithm was suggested in [Faul and Powell (1999), Faul and Powell (2000)].

From our earlier discussions we know that it is possible to express a kernel interpolant in terms of cardinal functions u_j^*, j = 1, ..., N, i.e.,

    Pf(x) = Σ_{j=1}^{N} f(x_j) u_j^*(x).

The basic idea of the Faul-Powell algorithm is to use approximate cardinal functions Ψ_j instead.

Of course, this will only give an approximate value for the interpolant, and therefore an iteration on the residuals is suggested to improve the accuracy of this approximation.

The approximate cardinal functions Ψ_j, j = 1, ..., N, are determined as linear combinations of the basis functions K(·, x_ℓ) for the interpolant, i.e.,

    Ψ_j = Σ_{ℓ∈L_j} b_{jℓ} K(·, x_ℓ),    (6)

where L_j is an index set consisting of n (n ≈ 50) indices that are used to determine the approximate cardinal function.

Example
The n nearest neighbors of x_j will usually do (see the sketch below).

Remark
The basic philosophy of this algorithm is very similar to that of the omitted fixed level iteration of Chapter 31, where approximate MLS generating functions were used as approximate cardinal functions. The Faul-Powell algorithm can be interpreted as a Krylov subspace method.
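As an illustration, such index sets could be built with a brute-force nearest-neighbor search; a minimal MATLAB sketch (dsites as in the earlier sketch, all names hypothetical, assuming N > n):

    % Build index sets Lj of the n nearest neighbors of each x_j (sketch).
    n = 50;  N = size(dsites,1);
    L = zeros(N, n);                        % L(j,:) will hold the index set Lj
    for j = 1:N
        d = sum((dsites - dsites(j,:)).^2, 2);  % squared distances to x_j
        [~, idx] = sort(d);                     % ascending; idx(1) is j itself
        L(j,:) = idx(1:n).';                    % keep the n closest (includes j)
    end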

Remark
In general, the choice of index sets allows much freedom, and this is the reason why we include the algorithm in this chapter on adaptive iterative methods.

As pointed out at the end of this section, there is a certain duality between the Faul-Powell algorithm and the greedy algorithm of the previous section.

For every j = 1, ..., N, the coefficients b_{jℓ} are found as the solution of the (relatively small) n × n linear system

    Ψ_j(x_i) = δ_{ji},  i ∈ L_j.    (7)

These approximate cardinal functions are computed in a pre-processing step.

In its simplest form the residual iteration can be formulated as

    u^{(0)}(x) = Σ_{j=1}^{N} f(x_j) Ψ_j(x)

    u^{(k+1)}(x) = u^{(k)}(x) + Σ_{j=1}^{N} [f(x_j) − u^{(k)}(x_j)] Ψ_j(x),  k = 0, 1, ....
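At the data sites this simple residual iteration amounts to repeated matrix-vector products. A MATLAB sketch, assuming the Ψ_j have already been assembled into an N-by-N matrix Psi with Psi(i,j) = Ψ_j(x_i), which is sparse in practice since each Ψ_j involves only the n points in L_j (rhs, tol, maxit as before):

    % Simple residual iteration with approximate cardinal functions (sketch).
    u = Psi * rhs;                    % u^(0) evaluated at the data sites
    for k = 1:maxit
        r = rhs - u;                  % residuals f(x_j) - u^(k)(x_j)
        if max(abs(r)) <= tol, break, end
        u = u + Psi * r;              % u^(k+1) = u^(k) + sum_j r_j Psi_j
    end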

Instead of adding the contributions of all approximate cardinal functions at the same time, this is done in a three-step process in the Faul-Powell algorithm.

To this end, we choose index sets L_j, j = 1, ..., N − n, such that

    L_j ⊆ {j, j + 1, ..., N},

while making sure that j ∈ L_j.

Remark
If one wants to use this algorithm to approximate the interpolant based on conditionally positive definite kernels of order m, then one needs to ensure that the corresponding centers form an (m − 1)-unisolvent set and append a polynomial to the local expansion (6).

Step 1

We define u_0^{(k)} = u^{(k)}, and then iterate

    u_j^{(k)} = u_{j−1}^{(k)} + θ_j^{(k)} Ψ_j,  j = 1, ..., N − n,    (8)

with

    θ_j^{(k)} = ⟨Pf − u_{j−1}^{(k)}, Ψ_j⟩_{N_K(Ω)} / ⟨Ψ_j, Ψ_j⟩_{N_K(Ω)}.    (9)

Remark
The stepsize θ_j^{(k)} is chosen so that the native space best approximation to the residual Pf − u_{j−1}^{(k)} from the space spanned by the approximate cardinal function Ψ_j is added.
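The best approximation property of (9) is stated without proof; it is the standard one-dimensional Hilbert space projection. Expanding the squared native space norm of the updated residual,

\[
\bigl\| Pf - u_{j-1}^{(k)} - \theta\,\Psi_j \bigr\|_{\mathcal{N}_K(\Omega)}^2
= \bigl\| Pf - u_{j-1}^{(k)} \bigr\|_{\mathcal{N}_K(\Omega)}^2
- 2\theta \bigl\langle Pf - u_{j-1}^{(k)}, \Psi_j \bigr\rangle_{\mathcal{N}_K(\Omega)}
+ \theta^2 \bigl\langle \Psi_j, \Psi_j \bigr\rangle_{\mathcal{N}_K(\Omega)},
\]

we obtain a quadratic in θ whose minimizer is exactly the θ_j^{(k)} of (9).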

Step 1 (cont.)

Using the representation

    Ψ_j = Σ_{ℓ∈L_j} b_{jℓ} K(·, x_ℓ),

the reproducing kernel property of K, and the (local) cardinality property Ψ_j(x_i) = δ_{ji}, i ∈ L_j, we can calculate the denominator of (9) as

    ⟨Ψ_j, Ψ_j⟩_{N_K(Ω)} = ⟨Ψ_j, Σ_{ℓ∈L_j} b_{jℓ} K(·, x_ℓ)⟩_{N_K(Ω)}
                        = Σ_{ℓ∈L_j} b_{jℓ} ⟨Ψ_j, K(·, x_ℓ)⟩_{N_K(Ω)}
                        = Σ_{ℓ∈L_j} b_{jℓ} Ψ_j(x_ℓ) = b_{jj},

since we have j ∈ L_j by construction of the index set L_j.

Step 1 (cont.)

Similarly, we get for the numerator

    ⟨Pf − u_{j−1}^{(k)}, Ψ_j⟩_{N_K(Ω)} = ⟨Pf − u_{j−1}^{(k)}, Σ_{ℓ∈L_j} b_{jℓ} K(·, x_ℓ)⟩_{N_K(Ω)}
                                       = Σ_{ℓ∈L_j} b_{jℓ} ⟨Pf − u_{j−1}^{(k)}, K(·, x_ℓ)⟩_{N_K(Ω)}
                                       = Σ_{ℓ∈L_j} b_{jℓ} (Pf − u_{j−1}^{(k)})(x_ℓ)
                                       = Σ_{ℓ∈L_j} b_{jℓ} (f(x_ℓ) − u_{j−1}^{(k)}(x_ℓ)).

Therefore (8) and (9) can be written as

    u_j^{(k)} = u_{j−1}^{(k)} + (Ψ_j / b_{jj}) Σ_{ℓ∈L_j} b_{jℓ} (f(x_ℓ) − u_{j−1}^{(k)}(x_ℓ)),  j = 1, ..., N − n.

Step 2

Next we interpolate the residual on the remaining n points (collected via the index set L*).

Thus, we find a function v^{(k)} in span{K(·, x_j) : j ∈ L*} such that

    v^{(k)}(x_i) = f(x_i) − u_{N−n}^{(k)}(x_i),  i ∈ L*,

and the approximation is updated, i.e.,

    u^{(k+1)} = u_{N−n}^{(k)} + v^{(k)}.

Step 3

Finally, the residuals are updated, i.e.,

    r_i^{(k+1)} = f(x_i) − u^{(k+1)}(x_i),  i = 1, ..., N.    (10)

Remark
The outer iteration (on k) is now repeated until the largest of these residuals is small enough.

Algorithm (Pre-processing step)
    Choose n
    For 1 ≤ j ≤ N − n do
        Determine the index set L_j
        Find the coefficients b_{jℓ} of the approximate cardinal function Ψ_j by solving
            Ψ_j(x_i) = δ_{ji},  i ∈ L_j
    end
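A possible MATLAB realization of this pre-processing step, reusing dsites and rbf from the earlier sketches; here the index sets respect the constraint L_j ⊆ {j, ..., N} with j ∈ L_j by taking the n nearest neighbors among the remaining points (one admissible choice among many):

    % Pre-processing step (sketch): compute the coefficients b_jl of each
    % approximate cardinal function by solving a local n-by-n system.
    N = size(dsites,1);  n = 50;
    D = sqrt(max(sum(dsites.^2,2) + sum(dsites.^2,2).' ...
                 - 2*(dsites*dsites.'), 0));   % pairwise distance matrix
    K = rbf(D);                                % kernel matrix K(x_i, x_l)
    Lsets = cell(N-n,1);  b = cell(N-n,1);
    for j = 1:N-n
        cand = j:N;                            % enforce Lj subset of {j,...,N}
        [~, idx] = sort(D(j,cand));
        Lj = cand(idx(1:n));                   % n nearest among them; j in Lj
        Lsets{j} = Lj;
        e = double(Lj(:) == j);                % right-hand side delta_ji
        b{j} = K(Lj,Lj) \ e;                   % cardinality conditions (7)
    end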

Algorithm (Faul-Powell)
    Input: data locations X, associated values of f, tolerance tol > 0
    Perform pre-processing step
    Initialize: k = 0, u_0^{(k)} = 0, r_i^{(k)} = f(x_i), i = 1, ..., N,
        e = max_{i=1,...,N} |r_i^{(k)}|
    While e > tol do
        Update
            u_j^{(k)} = u_{j−1}^{(k)} + (Ψ_j / b_{jj}) Σ_{ℓ∈L_j} b_{jℓ} (f(x_ℓ) − u_{j−1}^{(k)}(x_ℓ)),  1 ≤ j ≤ N − n
        Solve the interpolation problem
            v^{(k)}(x_i) = f(x_i) − u_{N−n}^{(k)}(x_i),  i ∈ L*
        Update the approximation
            u_0^{(k+1)} = u_{N−n}^{(k)} + v^{(k)}
        Compute new residuals r_i^{(k+1)} = f(x_i) − u_0^{(k+1)}(x_i), i = 1, ..., N
        Set new value for e = max_{i=1,...,N} |r_i^{(k+1)}|
        Increment k = k + 1
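Putting the pieces together, a minimal and unoptimized MATLAB sketch of the main loop, reusing Lsets, b, K, rhs, tol and n from the sketches above; it mirrors the pseudo-code only and is not the authors' implementation. In particular, the choice L* = {N−n+1, ..., N} is one possibility, and the residuals are evaluated here by dense matrix-vector products rather than by the fast methods mentioned below.

    % Faul-Powell iteration (sketch); uses Lsets, b, K, rhs from above.
    Lstar = N-n+1:N;                     % the remaining n points (L*)
    u = zeros(N,1);                      % current approximation at all sites
    e = max(abs(rhs - u));  k = 0;
    while e > tol
        % Step 1: sweep through the approximate cardinal functions
        for j = 1:N-n
            Lj  = Lsets{j};  bj = b{j};
            bjj = bj(Lj == j);
            rloc = rhs(Lj(:)) - u(Lj(:));     % f(x_l) - u(x_l), l in Lj
            s = bj.' * rloc;                  % sum_l b_jl (f(x_l) - u(x_l))
            u = u + (s/bjj) * (K(:,Lj)*bj);   % add (s/b_jj) Psi_j at all sites
        end
        % Step 2: interpolate the residual on the remaining n points
        c = K(Lstar,Lstar) \ (rhs(Lstar) - u(Lstar));
        u = u + K(:,Lstar) * c;              % add v^(k) at all sites
        % Step 3: update the residuals and the error indicator
        e = max(abs(rhs - u));
        k = k + 1;
    end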

Remark
Faul and Powell prove that this algorithm converges to the solution of the original interpolation problem.

One needs to make sure that the residuals are evaluated efficiently by using, e.g.,

  - a fast multipole expansion,
  - a fast Fourier transform, or
  - compactly supported kernels.

Remark
In its most basic form the Krylov subspace algorithm of Faul and Powell can also be explained as a dual approach to the greedy residual iteration algorithm of Schaback and Wendland.

Instead of defining appropriate sets of points Y_k, in the Faul-Powell algorithm one picks certain subspaces U_k of the native space. In particular, if U_k is the one-dimensional space U_k = span{Ψ_k} (where Ψ_k is a local approximation to the cardinal function), we get the Schaback-Wendland algorithm described above.

For more details see [Schaback and Wendland (2000b)].

Implementation of this algorithm is omitted.

Appendix: References

Berlinet, A. and Thomas-Agnan, C. (2004).
Reproducing Kernel Hilbert Spaces in Probability and Statistics.
Kluwer, Dordrecht.

Buhmann, M. D. (2003).
Radial Basis Functions: Theory and Implementations.
Cambridge University Press.

Fasshauer, G. E. (2007).
Meshfree Approximation Methods with MATLAB.
World Scientific Publishers.

Higham, D. J. and Higham, N. J. (2005).
MATLAB Guide.
SIAM (2nd ed.), Philadelphia.

Iske, A. (2004).
Multiresolution Methods in Scattered Data Modelling.
Lecture Notes in Computational Science and Engineering 37, Springer Verlag (Berlin).

Wahba, G. (1990).
Spline Models for Observational Data.
CBMS-NSF Regional Conference Series in Applied Mathematics 59, SIAM (Philadelphia).

Wendland, H. (2005a).
Scattered Data Approximation.
Cambridge University Press (Cambridge).

Faul, A. C. and Powell, M. J. D. (1999).
Proof of convergence of an iterative technique for thin plate spline interpolation in two dimensions.
Adv. Comput. Math. 11, pp. 183–192.

Faul, A. C. and Powell, M. J. D. (2000).
Krylov subspace methods for radial basis function interpolation.
in Numerical Analysis 1999 (Dundee), Chapman & Hall/CRC (Boca Raton, FL), pp. 115–141.

Girosi, F. (1998).
An equivalence between sparse approximation and support vector machines.
Neural Computation 10, pp. 1455–1480.

Schaback, R. and Wendland, H. (2000a).
Numerical techniques based on radial basis functions.
in Curve and Surface Fitting: Saint-Malo 1999, A. Cohen, C. Rabut, and L. L. Schumaker (eds.), Vanderbilt University Press (Nashville, TN), pp. 359–374.

Schaback, R. and Wendland, H. (2000b).
Adaptive greedy techniques for approximate solution of large RBF systems.
Numer. Algorithms 24, pp. 239–254.

Temlyakov, V. N. (1998).
The best m-term approximation and greedy algorithms.
Adv. in Comp. Math. 8, pp. 249–265.