MATH 590: Meshfree Methods
Chapter 6: Scattered Data Interpolation with Polynomial Precision

Greg Fasshauer

Department of Applied Mathematics
Illinois Institute of Technology

Fall 2010
[email protected] MATH 590 – Chapter 6 1
Outline
1 Interpolation with Multivariate Polynomials
2 Example: Reproduction of Linear Functions using Gaussian RBFs
3 Scattered Data Fitting w/ More General Polynomial Precision
4 Almost Negative Definite Matrices and Reproduction of Constants
Interpolation with Multivariate Polynomials
It is not easy to use polynomials for multivariate scattered data interpolation. Only if the data sites are in certain special locations can we guarantee well-posedness of multivariate polynomial interpolation.
Definition
We call a set of points X = {x1, . . . , xN} ⊂ R^s m-unisolvent if the only polynomial of total degree at most m interpolating zero data on X is the zero polynomial.

Remark
The definition guarantees a unique solution for interpolation to given data at any subset of cardinality M = (m+s choose m) of the points x1, . . . , xN by a polynomial of degree m.
M: dimension of the linear space Π^s_m of polynomials of total degree less than or equal to m in s variables.
For polynomial interpolation at N distinct data sites in R^s to be a well-posed problem, the polynomial degree needs to be chosen accordingly, i.e., we need M = N, and the data sites need to form an m-unisolvent set.
This is rather restrictive.
Example
It is not clear how to perform polynomial interpolation at N = 7 points in R^2 in a unique way. We could consider using bivariate quadratic polynomials (for which M = 6), or bivariate cubic polynomials (with M = 10). There exists no natural space of bivariate polynomials for which M = 7.
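The dimension count M = dim Π^s_m = (m+s choose m) behind this example can be checked directly. A small Python sketch (Python is used here for illustration alongside the course's MATLAB code):

```python
from math import comb

def dim_poly_space(m, s):
    """Dimension M of the space of polynomials of total degree <= m in s variables."""
    return comb(m + s, m)

# Bivariate case (s = 2): quadratics give M = 6, cubics give M = 10.
dims = [dim_poly_space(m, 2) for m in range(5)]  # m = 0, 1, 2, 3, 4
# No bivariate total-degree space has dimension 7.
```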
Remark
We will see in the next chapter that m-unisolvent sets play an important role in the context of conditionally positive definite functions.
There, however, even though we will be interested in interpolating N pieces of data, the polynomial degree will be small (usually m = 1, 2, 3), and the restrictions imposed on the locations of the data sites by the unisolvency conditions will be rather mild.
A sufficient condition (see [Chui (1988), Ch. 9], but already known to [Berzolari (1914)] and [Radon (1948)]) on the points x1, . . . , xN to form an m-unisolvent set in R^2 is the following.
Theorem
Suppose {L0, . . . , Lm} is a set of m + 1 distinct lines in R^2, and that U = {u1, . . . , uM} is a set of M = (m + 1)(m + 2)/2 distinct points such that the first point lies on L0, the next two points lie on L1 but not on L0, and so on, so that the last m + 1 points lie on Lm but not on any of the previous lines L0, . . . , Lm−1. Then there exists a unique interpolation polynomial of total degree at most m to arbitrary data given at the points in U.
Furthermore, if the data sites {x1, . . . , xN} contain U as a subset, then they form an m-unisolvent set in R^2.
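As an illustration of the theorem, the following Python sketch builds such a point set using the horizontal lines Li: y = i (a hypothetical choice of lines, not from the slides) and verifies unisolvency numerically by checking that the polynomial interpolation matrix has full rank:

```python
import numpy as np

def berzolari_radon_points(m):
    """i + 1 points on the line L_i: y = i; none of them lies on the
    previous lines y = 0, ..., i - 1, as the theorem requires."""
    return np.array([(x, i) for i in range(m + 1) for x in range(i + 1)], float)

def poly_matrix(pts, m):
    """Interpolation matrix for the monomial basis {x^a y^b : a + b <= m}."""
    return np.array([[p[0]**a * p[1]**b
                      for a in range(m + 1) for b in range(m + 1 - a)]
                     for p in pts])

m = 3
U = berzolari_radon_points(m)      # M = (m + 1)(m + 2)/2 = 10 points
R = poly_matrix(U, m)              # 10 x 10 matrix
unisolvent = np.linalg.matrix_rank(R) == len(U)   # full rank <=> unisolvent
```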
Proof
We use induction on m. For m = 0 the result is trivial.
Take R to be the matrix arising from polynomial interpolation at the points uj ∈ U, i.e.,

    Rjk = pk(uj), j, k = 1, . . . , M,

where the pk form a basis of Π^2_m.
We want to show that the only possible solution to Rc = 0 is c = 0. This is equivalent to showing that if p ∈ Π^2_m satisfies

    p(ui) = 0, i = 1, . . . , M,

then p is the zero polynomial.
For each i = 1, . . . , m, let the equation of the line Li be

    αi x + βi y = γi,

where x = (x, y) ∈ R^2.
Suppose now that p interpolates zero data at all the points ui as stated above. Since p reduces to a univariate polynomial of degree m on Lm which vanishes at m + 1 distinct points on Lm, it follows that p vanishes identically on Lm, and so

    p(x, y) = (αm x + βm y − γm) q(x, y),

where q is a polynomial of degree m − 1.
But now q satisfies the hypothesis of the theorem with m replaced by m − 1 and U replaced by the subset consisting of the first (m+1 choose 2) points of U. By induction, therefore q ≡ 0, and thus p ≡ 0. This establishes the uniqueness of the interpolation polynomial.
The last statement of the theorem is obvious. □
Remark
A similar theorem was already proved in [Chung and Yao (1977)]. Chui's theorem can be generalized to R^s by using hyperplanes. The proof is constructed with the help of an additional induction on s.
Remark
For later reference we note that (m − 1)-unisolvency of the points x1, . . . , xN is equivalent to the fact that the matrix P with

    Pjl = pl(xj), j = 1, . . . , N, l = 1, . . . , M,

has full (column-)rank. For N = M = (m−1+s choose m−1) this is the polynomial interpolation matrix.
Example
Three collinear points in R^2 are not 1-unisolvent, since a linear interpolant, i.e., a plane through three arbitrary heights at these three collinear points, is not uniquely determined. This can easily be verified.
On the other hand, if a set of points in R^2 contains three non-collinear points, then it is 1-unisolvent.
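This check amounts to a rank computation for the matrix P with columns 1, x, y; a brief Python sketch:

```python
import numpy as np

def is_1_unisolvent(pts):
    """1-unisolvency <=> the matrix with columns [1, x, y] has full column rank."""
    P = np.column_stack([np.ones(len(pts)), pts])
    return np.linalg.matrix_rank(P) == 3

collinear = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])   # all on the line y = x
general   = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # non-collinear
```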
We used the difficulties associated with multivariate polynomial interpolation as one of the motivations for the use of radial basis functions.
However, sometimes it is desirable to have an interpolant that exactly reproduces certain types of functions. For example, if the data are constant, or come from a linear function, then it would be nice if our interpolant were also constant or linear, respectively.
Unfortunately, the methods presented so far do not reproduce these simple polynomial functions.
Remark
Later on we will be interested in applying our interpolation methods to the numerical solution of partial differential equations, and practitioners (especially of finite element methods) often judge an interpolation method by its ability to pass the so-called patch test. An interpolation method passes the standard patch test if it can reproduce linear functions. In engineering applications this translates into exact calculation of constant stress and strain.
We will see later that in order to prove error estimates for meshfree approximation methods it is not necessary to be able to reproduce polynomials globally (but local polynomial reproduction is an essential ingredient). Thus, if we are only concerned with the approximation power of a numerical method there is really no need for the standard patch test to hold.
Example: Reproduction of Linear Functions using Gaussian RBFs
We can try to fit the bivariate linear function f(x, y) = (x + y)/2 with Gaussians (with ε = 6).
Figure: Gaussian interpolant to the bivariate linear function with N = 1089 (left) and associated absolute error (right).

Remark
Clearly the interpolant is not completely planar, not even to machine precision.
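The effect can be reproduced with a few lines of Python/NumPy (a smaller 5 × 5 grid than in the figure, and a random evaluation set, are used here purely for illustration):

```python
import numpy as np

ep = 6.0
g = np.linspace(0, 1, 5)
X, Y = np.meshgrid(g, g)
sites = np.column_stack([X.ravel(), Y.ravel()])      # N = 25 data sites
f = lambda p: (p[:, 0] + p[:, 1]) / 2                # bivariate linear test function

r2 = ((sites[:, None, :] - sites[None, :, :])**2).sum(-1)
A = np.exp(-ep**2 * r2)                              # Gaussian interpolation matrix
c = np.linalg.solve(A, f(sites))

# Evaluate the interpolant away from the data sites and compare with f.
ev = np.random.default_rng(0).random((200, 2))
r2e = ((ev[:, None, :] - sites[None, :, :])**2).sum(-1)
Pf = np.exp(-ep**2 * r2e) @ c
err = np.abs(Pf - f(ev)).max()      # clearly nonzero: no linear reproduction
```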
There is a simple remedy for this problem. Just add the polynomial functions

    x ↦ 1, x ↦ x, x ↦ y

to the Gaussian basis

    {e^(−ε²‖·−x1‖²), . . . , e^(−ε²‖·−xN‖²)}.

Problem
Now we have N + 3 unknowns, namely the coefficients ck, k = 1, . . . , N + 3, in the expansion

    Pf(x) = Σ_{k=1}^N ck e^(−ε²‖x−xk‖²) + cN+1 + cN+2 x + cN+3 y,   x = (x, y) ∈ R²,

but we have only N conditions to determine them, namely the interpolation conditions

    Pf(xj) = f(xj) = (xj + yj)/2, j = 1, . . . , N.
What can we do to obtain a (non-singular) square system?
As we will see below, we can add the following three conditions:

    Σ_{k=1}^N ck = 0,   Σ_{k=1}^N ck xk = 0,   Σ_{k=1}^N ck yk = 0.
How do we have to modify our existing MATLAB program for scattered data interpolation to incorporate these modifications?
Earlier we solved

    A c = y,

with Ajk = e^(−ε²‖xj−xk‖²), j, k = 1, . . . , N, c = [c1, . . . , cN]^T, and y = [f(x1), . . . , f(xN)]^T.

Now we have to solve the augmented system

    [ A    P ] [ c ]   [ y ]
    [ P^T  O ] [ d ] = [ 0 ],        (1)

where A, c, and y are as before, and Pjl = pl(xj), j = 1, . . . , N, l = 1, . . . , 3, with p1(x) = 1, p2(x) = x, and p3(x) = y. Moreover, 0 is a zero vector of length 3, and O is a zero matrix of size 3 × 3.
Program (RBFInterpolation2Dlinear.m)
 1  rbf = @(e,r) exp(-(e*r).^2); ep = 6;
 2  testfunction = @(x,y) (x+y)/2;
 3  N = 9; gridtype = 'u';
 4  dsites = CreatePoints(N,2,gridtype);
 5  ctrs = dsites;
 6  neval = 40; M = neval^2;
 7  epoints = CreatePoints(M,2,'u');
 8  rhs = testfunction(dsites(:,1),dsites(:,2));
 9  rhs = [rhs; zeros(3,1)];
10  DM_data = DistanceMatrix(dsites,ctrs);
11  IM = rbf(ep,DM_data);
12  PM = [ones(N,1) dsites];
13  IM = [IM PM; [PM' zeros(3,3)]];
14  DM_eval = DistanceMatrix(epoints,ctrs);
15  EM = rbf(ep,DM_eval);
16  PM = [ones(M,1) epoints]; EM = [EM PM];
17  Pf = EM * (IM\rhs);
18  exact = testfunction(epoints(:,1),epoints(:,2));
19  maxerr = norm(Pf-exact,inf)
20  rms_err = norm(Pf-exact)/neval
Remark
Except for lines 9, 12, 13, and 16, which were added to deal with the augmented problem, RBFInterpolation2Dlinear.m is the same as RBFInterpolation2D.m.

Figure: Interpolant based on linearly augmented Gaussians to the bivariate linear function with N = 9 (left) and associated absolute error (right).

The error is now on the level of machine accuracy.
Scattered Data Fitting w/ More General Polynomial Precision
Outline
1 Interpolation with Multivariate Polynomials
2 Example: Reproduction of Linear Functions using Gaussian RBFs
3 Scattered Data Fitting w/ More General Polynomial Precision
4 Almost Negative Definite Matrices and Reproduction of Constants
Scattered Data Fitting w/ More General Polynomial Precision
Motivated by the example above with linear polynomials, we modify the assumed form of the solution to the scattered data interpolation problem by adding certain polynomials to the expansion, i.e., Pf is now assumed to be of the form

\[
P_f(x) = \sum_{k=1}^{N} c_k \varphi(\|x - x_k\|) + \sum_{l=1}^{M} d_l p_l(x), \quad x \in \mathbb{R}^s, \qquad (2)
\]

where \(p_1, \ldots, p_M\) form a basis for the \(M = \binom{m-1+s}{m-1}\)-dimensional linear space \(\Pi_{m-1}^s\) of polynomials of total degree less than or equal to \(m-1\) in \(s\) variables.

Remark
We use polynomials in \(\Pi_{m-1}^s\) instead of degree-\(m\) polynomials since this will work better with the conditionally positive definite functions discussed in the next chapter.
Scattered Data Fitting w/ More General Polynomial Precision
Since enforcing the interpolation conditions \(P_f(x_j) = f(x_j)\), \(j = 1, \ldots, N\), leads to a system of \(N\) linear equations in the \(N + M\) unknowns \(c_k\) and \(d_l\), one usually adds the \(M\) additional conditions

\[
\sum_{k=1}^{N} c_k p_l(x_k) = 0, \quad l = 1, \ldots, M,
\]

to ensure a unique solution.
ExampleThe example in the previous section represents the particular cases = m = 2.
Scattered Data Fitting w/ More General Polynomial Precision
Remark
The use of polynomials is somewhat arbitrary (any other set of M linearly independent functions could also be used).
The addition of polynomials of total degree at most m − 1 guarantees polynomial precision provided the points in X form an (m − 1)-unisolvent set.
If the data come from a polynomial of total degree less than or equal to m − 1, then they are fitted exactly by the augmented expansion.
Scattered Data Fitting w/ More General Polynomial Precision
In general, solving the interpolation problem based on the extended expansion (2) now amounts to solving a system of linear equations of the form

\[
\begin{bmatrix} A & P \\ P^T & O \end{bmatrix}
\begin{bmatrix} c \\ d \end{bmatrix}
=
\begin{bmatrix} y \\ 0 \end{bmatrix}, \qquad (3)
\]

where the pieces are given by \(A_{jk} = \varphi(\|x_j - x_k\|)\), \(j, k = 1, \ldots, N\); \(P_{jl} = p_l(x_j)\), \(j = 1, \ldots, N\), \(l = 1, \ldots, M\); \(c = [c_1, \ldots, c_N]^T\); \(d = [d_1, \ldots, d_M]^T\); \(y = [y_1, \ldots, y_N]^T\); \(0\) is a zero vector of length \(M\); and \(O\) is an \(M \times M\) zero matrix.
Remark
We will study the invertibility of this matrix in two steps: first for the case m = 1, and then for general m.
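As a numerical sanity check of system (3), here is a NumPy sketch (a translation of the idea, not the course's MATLAB code) for s = 2, m = 2: a Gaussian kernel block A, the linear polynomial block P, and exact reproduction of a linear test function:

```python
import numpy as np

ep = 3.0                                       # Gaussian shape parameter
g = np.linspace(0, 1, 3)
dsites = np.array([[a, b] for a in g for b in g])   # 3x3 grid, N = 9
N = len(dsites)
f = lambda p: 1.0 + 2.0 * p[:, 0] - p[:, 1]    # a linear test function

def kernel(X, Y):
    # Gaussian RBF matrix exp(-(ep*r)^2) for all pairwise distances
    r = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return np.exp(-(ep * r) ** 2)

A = kernel(dsites, dsites)
P = np.hstack([np.ones((N, 1)), dsites])       # N x 3 block: 1, x, y

# Assemble and solve the block system [A P; P^T O][c; d] = [y; 0]
K = np.block([[A, P], [P.T, np.zeros((3, 3))]])
rhs = np.concatenate([f(dsites), np.zeros(3)])
coef = np.linalg.solve(K, rhs)

# Evaluate the interpolant on a finer grid and compare with f
ge = np.linspace(0, 1, 11)
epoints = np.array([[a, b] for a in ge for b in ge])
EM = np.hstack([kernel(epoints, dsites), np.ones((len(epoints), 1)), epoints])
maxerr = np.max(np.abs(EM @ coef - f(epoints)))
print(maxerr)   # near machine precision: linear data are reproduced
```

The moment conditions in the bottom block force the RBF coefficients c to vanish for linear data, so the polynomial part alone carries the interpolant.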
Scattered Data Fitting w/ More General Polynomial Precision
We can easily modify the MATLAB program listed above to deal withreproduction of polynomials of other degrees.
ExampleIf we want to reproduce constants then we need to replace lines 9, 12,13, and 16 by
9  rhs = [rhs; 0];
12 PM = ones(N,1);
13 IM = [IM PM; [PM' 0]];
16 PM = ones(M,1); EM = [EM PM];
and for reproduction of bivariate quadratic polynomials we can use
9   rhs = [rhs; zeros(6,1)];
12a PM = [ones(N,1) dsites dsites(:,1).^2 ...
12b      dsites(:,2).^2 dsites(:,1).*dsites(:,2)];
13  IM = [IM PM; [PM' zeros(6,6)]];
16a PM = [ones(M,1) epoints epoints(:,1).^2 ...
16b      epoints(:,2).^2 epoints(:,1).*epoints(:,2)];
16c EM = [EM PM];
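The hard-coded monomial columns above generalize to any dimension s and any degree. A Python sketch of a hypothetical helper `poly_matrix` (an illustration, not part of the course code) that builds all monomials of total degree at most `deg`:

```python
from itertools import combinations_with_replacement
import numpy as np

def poly_matrix(points, deg):
    """Matrix whose columns are all monomials of total degree <= deg
    in the s coordinates of the given points (one row per point)."""
    points = np.asarray(points)
    n, s = points.shape
    cols = []
    for d in range(deg + 1):
        # each multiset of coordinate indices gives one monomial
        for idx in combinations_with_replacement(range(s), d):
            col = np.ones(n)
            for i in idx:
                col = col * points[:, i]
            cols.append(col)
    return np.column_stack(cols)

pts = np.random.rand(5, 2)
PM = poly_matrix(pts, 2)
print(PM.shape)   # (5, 6): columns 1, x, y, x^2, xy, y^2
```

With `deg = 1` this reproduces the linear block `[ones(N,1) dsites]` used earlier; with `deg = 2` it matches the six quadratic columns above (up to column order).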
Scattered Data Fitting w/ More General Polynomial Precision
Remark
These specific examples work only for the case s = 2. The generalization to higher dimensions is obvious but more cumbersome.
Almost Negative Definite Matrices and Reproduction of Constants
Outline
1 Interpolation with Multivariate Polynomials
2 Example: Reproduction of Linear Functions using Gaussian RBFs
3 Scattered Data Fitting w/ More General Polynomial Precision
4 Almost Negative Definite Matrices and Reproduction of Constants
Almost Negative Definite Matrices and Reproduction of Constants
We now need to investigate whether the augmented system matrix in(3) is non-singular.
The special case m = 1 (in any space dimension s), i.e., reproductionof constants, is covered by standard results from linear algebra, andwe discuss it first.
Almost Negative Definite Matrices and Reproduction of Constants
Definition
A real symmetric matrix \(A\) is called conditionally positive semi-definite of order one if its associated quadratic form is non-negative, i.e.,

\[
\sum_{j=1}^{N} \sum_{k=1}^{N} c_j c_k A_{jk} \ge 0 \qquad (4)
\]

for all \(c = [c_1, \ldots, c_N]^T \in \mathbb{R}^N\) that satisfy

\[
\sum_{j=1}^{N} c_j = 0.
\]

If \(c \neq 0\) implies strict inequality in (4), then \(A\) is called conditionally positive definite of order one.
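The definition can be probed numerically. A Python sketch (an illustration, not from the slides) using the classical fact that the negative of a Euclidean distance matrix satisfies (4) strictly on zero-sum vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((8, 2))                      # 8 distinct points in the plane
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
A = -D                                      # -distance matrix: CPD of order one

ok = True
for _ in range(1000):
    c = rng.standard_normal(8)
    c -= c.mean()                           # enforce sum(c) = 0
    ok = ok and (c @ A @ c > 0)             # quadratic form (4), strictly
print(ok)
```

Note that A = -D itself is not positive definite (its diagonal is zero); positivity of the quadratic form holds only on the subspace of zero-sum vectors, which is exactly what order-one conditional positive definiteness asserts.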
Almost Negative Definite Matrices and Reproduction of Constants
Remark
In the linear algebra literature the definition is usually formulated with “≤” in (4), and then A is referred to as conditionally (or almost) negative definite.
Obviously, conditionally positive definite matrices of order one exist only for N > 1.
We can interpret a matrix A that is conditionally positive definite of order one as one that is positive definite on the space of vectors c such that

\[
\sum_{j=1}^{N} c_j = 0.
\]

In this sense, A is positive definite on the space of vectors c “perpendicular” to constant functions.
Almost Negative Definite Matrices and Reproduction of Constants
Now we are ready to formulate and prove
Theorem
Let \(A\) be a real symmetric \(N \times N\) matrix that is conditionally positive definite of order one, and let \(P = [1, \ldots, 1]^T\) be an \(N \times 1\) matrix (column vector). Then the system of linear equations

\[
\begin{bmatrix} A & P \\ P^T & 0 \end{bmatrix}
\begin{bmatrix} c \\ d \end{bmatrix}
=
\begin{bmatrix} y \\ 0 \end{bmatrix}
\]

is uniquely solvable.
Almost Negative Definite Matrices and Reproduction of Constants
Proof
Assume [c,d ]T is a solution of the homogeneous linear system, i.e.,with y = 0.
We show that [c,d ]T = 0T is the only possible solution.
Multiplication of the top block of the (homogeneous) linear system by \(c^T\) yields

\[
c^T A c + d\, c^T P = 0.
\]

From the bottom block of the system we know \(P^T c = c^T P = 0\), and therefore

\[
c^T A c = 0.
\]
Almost Negative Definite Matrices and Reproduction of Constants
Since the matrix \(A\) is conditionally positive definite of order one by assumption, the equation

\[
c^T A c = 0
\]

implies that \(c = 0\).
Finally, the top block of the homogeneous linear system states that
Ac + dP = 0,
so that c = 0 and the fact that P is a vector of ones imply d = 0. �
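The theorem can be illustrated numerically for Gaussians, which are positive definite and hence in particular conditionally positive definite of order one. A hedged NumPy sketch (the parameter values are illustrative):

```python
import numpy as np

ep = 6.0                                     # shape parameter (illustrative)
x = np.linspace(0, 1, 7)[:, None]            # 7 data sites in [0, 1]
A = np.exp(-(ep * (x - x.T)) ** 2)           # Gaussian kernel matrix
ones = np.ones((7, 1))
K = np.block([[A, ones], [ones.T, np.zeros((1, 1))]])   # [A P; P^T 0]

y = 5.0 * np.ones(7)                         # constant data f(x) = 5
sol = np.linalg.solve(K, np.concatenate([y, [0.0]]))
c, d = sol[:7], sol[7]
print(np.max(np.abs(c)), d)                  # expect c ~ 0 and d ~ 5
```

The solve succeeds (K is nonsingular, as the theorem guarantees), and the constant is reproduced exactly: the RBF coefficients vanish and d recovers the constant, up to round-off.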
Almost Negative Definite Matrices and Reproduction of Constants
RemarkSince Gaussians (and any other strictly positive definite radial function)give rise to positive definite matrices, and since positive definitematrices are also conditionally positive definite of order one, thetheorem above establishes the nonsingularity of the (augmented)radial basis function interpolation matrix for constant reproduction.
In order to cover radial basis function interpolation with reproduction ofhigher-order polynomials we will introduce (strictly) conditionallypositive definite functions of order m in the next chapter.