Performance Evaluation for Scattered Data Interpolation


My IGARSS 2008 slides, presented on 10th July. A brief description of some interpolation methods and of ways of analysing their performance.


Matthew P. Foster & Adrian N. Evans
m.p.foster@bath.ac.uk

University of Bath

Summary

Basics

• Scattered data

• Interpolation

• Basics

• Methods

Performance

• Performance Evaluation

• Simulation-validation

• Cross-validation

Outputs

• Output Evaluation

• Error distributions

• Differences & artefacts

Scattered Data & Interpolation

• 2-D (+ height) in this case:

• x, y, z triplets

• Or matrix projections

• Very common

• Common examples:

• Nearest Neighbour

• Linear, Cubic

• Kriging
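The nearest neighbour, linear and cubic methods listed above are available off the shelf; a minimal sketch using SciPy's griddata on illustrative data (not data from the slides):

```python
# Minimal sketch: nearest / linear / cubic interpolation of scattered
# (x, y, z) triplets onto a regular grid. Data are illustrative.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)   # scattered sample locations
z = np.sin(4 * x) * np.cos(4 * y)                       # heights at those locations

# Regular output grid
gx, gy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))

for method in ("nearest", "linear", "cubic"):
    zi = griddata((x, y), z, (gx, gy), method=method)
    err = zi - np.sin(4 * gx) * np.cos(4 * gy)
    print(method, "mean abs error:", np.nanmean(np.abs(err)))
```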

Interpolation Methods

• All techniques fit into two classes:

• Local – points in neighbourhood

• Global – all points

• Both use weighted combinations of input points; the weightings can be based on:

• Geometry – ‘where?’

• Input characteristics – ‘what?’

Point Geometry

• Delaunay Triangulation / Voronoi diagram

• Arguably most fundamental

• Distance metric

• Or scale-space version
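A minimal sketch of building these geometric structures with SciPy (illustrative points only):

```python
# Minimal sketch: the Delaunay triangulation and Voronoi diagram of a
# scattered point set, the structures most local methods build on.
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(50, 2))

tri = Delaunay(pts)      # triangles used by linear / cubic / natural neighbour
vor = Voronoi(pts)       # cells used by nearest / natural neighbour weighting

print("Delaunay triangles:", len(tri.simplices))
print("Voronoi regions:", len(vor.regions))

# Locate the triangle containing a query point (the basis for local weighting)
query = np.array([[0.5, 0.5]])
print("simplex containing query point:", tri.find_simplex(query))
```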

Image Characteristics

• Correlation

• E.g. Semivariogram

• Local image information

• Energy

• Orientation

• Anisotropy

• Other methods…
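As a concrete example of the correlation route, a minimal sketch of an isotropic empirical semivariogram estimate, gamma(h) = 0.5 * mean[(z_i - z_j)^2] over point pairs at separation h; the binning choices and test data are my assumptions:

```python
# Minimal sketch: isotropic empirical semivariogram from scattered samples.
import numpy as np
from scipy.spatial.distance import pdist

def empirical_semivariogram(xy, z, n_bins=20):
    d = pdist(xy)                                   # pairwise distances
    dz2 = pdist(z[:, None], metric="sqeuclidean")   # pairwise (z_i - z_j)^2
    bins = np.linspace(0, d.max(), n_bins + 1)
    idx = np.clip(np.digitize(d, bins) - 1, 0, n_bins - 1)
    gamma = np.array([0.5 * dz2[idx == k].mean() if np.any(idx == k) else np.nan
                      for k in range(n_bins)])
    lags = 0.5 * (bins[:-1] + bins[1:])             # bin centres
    return lags, gamma

rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(300, 2))
z = np.sin(xy[:, 0] / 15) + 0.1 * rng.standard_normal(300)
lags, gamma = empirical_semivariogram(xy, z)
print(np.round(gamma[:5], 4))
```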


[Figure: local orientation example]

Methods

Method              Locality  Weighting                Fitting method
ANC                 Local     Applicability function   Steered filters
Kriging             Global    Basis function           Build model, then fit
Linear / Cubic      Local     Triangulation            Surface fitting
Natural Neighbour   Local     Triangulation            Area weighting
RBF                 Global    Basis function           Linear fitting
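To make the RBF row concrete, a minimal sketch of a global radial basis function interpolant whose weights are found by a linear fit, assuming SciPy's RBFInterpolator with a thin-plate spline kernel and illustrative data:

```python
# Minimal sketch: global RBF interpolation (thin-plate spline kernel).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
xy = rng.uniform(0, 1, size=(200, 2))             # scattered sample locations
z = np.sin(4 * xy[:, 0]) * np.cos(4 * xy[:, 1])   # heights

# Solving for the basis-function weights is a (global) linear fit
rbf = RBFInterpolator(xy, z, kernel="thin_plate_spline")

gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
zi = rbf(grid).reshape(gx.shape)
print("reconstructed grid:", zi.shape)
```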

Performance Evaluation

Simulation-Validation

• Workflow

• Generate

• Sample

• Interpolate

• Subtract

• Repeat
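A minimal sketch of that generate / sample / interpolate / subtract / repeat loop, using a synthetic surface and SciPy's griddata; the surface and the proportional-RMSE normalisation (RMSE divided by the data range) are my assumptions, not the slides':

```python
# Minimal sketch of the simulation-validation loop.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(4)
gx, gy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))

def generate_surface():
    # Stand-in for a fractal / synthetic test surface
    return np.sin(6 * gx) * np.cos(6 * gy)

for sparsity in (0.95, 0.96, 0.97, 0.98, 0.99):
    scores = []
    for _ in range(10):                                          # repeat
        truth = generate_surface()                               # generate
        keep = rng.uniform(size=truth.shape) > sparsity          # sample
        pts = np.column_stack([gx[keep], gy[keep]])
        zi = griddata(pts, truth[keep], (gx, gy), method="linear")  # interpolate
        err = zi - truth                                         # subtract
        rmse = np.sqrt(np.nanmean(err ** 2))
        scores.append(rmse / (truth.max() - truth.min()))        # assumed normalisation
    print(f"sparsity {sparsity:.2f}: proportional RMSE {np.mean(scores):.4f}")
```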

[Figure: proportional RMSE vs. sparsity (0.95-1.0) for ANC, Cubic, Kriging and Natural Neighbour]

Simulation-Validation

• Gives a good ‘feel’ for performance

• Detailed analysis possible

• Rarely mirrors actual data

Cross-Validation

• Computer vision / classification technique

• Allows performance analysis using real data

• Partition into 2 classes

• Reconstruction

• Validation

Example: TEC Data

• Data from GPS satellites

• During the Halloween Storm, Oct. 2003

• Fairly sparse relative to the field size of 100 x 120 samples (0.5° resolution)

[Figure: map of TEC data coverage, 180°W to 0° longitude, 10°S to 80°N latitude]


Process

• For each time interval:

• Split the data into 10 random blocks

• Reconstruct using 1-9 blocks

• Validate with the remaining blocks

• Repeat as necessary

[Figure: example Cubic reconstruction]
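A minimal sketch of that block cross-validation loop on illustrative data, with SciPy's griddata standing in for the interpolator and RMSE as the validation metric:

```python
# Minimal sketch: block cross-validation of a scattered-data interpolator.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(5)
xy = rng.uniform(0, 1, size=(2000, 2))             # observation locations
z = np.sin(5 * xy[:, 0]) * np.cos(5 * xy[:, 1])    # observed values

blocks = rng.integers(0, 10, size=len(z))          # assign each point to 1 of 10 blocks

for k in range(1, 10):                             # reconstruct with 1..9 blocks
    train = blocks < k
    test = ~train
    zi = griddata(xy[train], z[train], xy[test], method="cubic")
    rmse = np.sqrt(np.nanmean((zi - z[test]) ** 2))
    print(f"{k} block(s) for reconstruction: RMSE {rmse:.4f}")
```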

Results

[Figure: proportional RMSE vs. sparsity (0.982-0.998) for ANC, Cubic, Kriging and Natural Neighbour]

• Noisier than simulations

• Some similarities

• Kriging peak

• General method performance

See: M. P. Foster and A. N. Evans, “An Evaluation of Interpolation Methods for Ionospheric TEC Mapping”, IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 7, pp. 2153-2164, 2008.

Output Evaluation

Error Distributions

• When everything works, outputs look nice

• Histogram is approximately Gaussian

[Figures: fractal surface, Kriging reconstruction, and error histogram (count vs. error value, -0.6 to 0.6)]
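One way to check the “approximately Gaussian” claim for a given error field is to summarise the residuals and apply a normality test; a minimal sketch with simulated residuals standing in for a real error field:

```python
# Minimal sketch: inspect an error distribution and test it for normality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
errors = 0.1 * rng.standard_normal(10000)     # stand-in residuals (reconstruction - truth)

counts, edges = np.histogram(errors, bins=50)  # the histogram shown on the slide
print(f"mean {errors.mean():.4f}, std {errors.std():.4f}, "
      f"skew {stats.skew(errors):.3f}, kurtosis {stats.kurtosis(errors):.3f}")

# D'Agostino-Pearson normality test: a low p-value suggests non-Gaussian errors
stat, p = stats.normaltest(errors)
print("normaltest p-value:", p)
```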

[Figure: semivariogram with fitted spherical model, semivariance γ(h) vs. lag h]


Error Distributions

• When it doesn’t work well:

• Examining the histogram can show problems

• Which you can then look into:

• e.g. bad model fitting (due to an odd image!)

[Figures: image data, Kriging reconstruction, and error histogram (count vs. error value, -100 to 100)]
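The “bad model fitting” above refers to fitting a variogram model (here spherical, as in the figure titles) to the empirical semivariogram; a minimal least-squares sketch with illustrative values, where a visibly poor fit would flag the problem:

```python
# Minimal sketch: fit a spherical variogram model to empirical semivariogram
# estimates (lags, gamma). The example numbers and starting values are guesses.
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, a):
    """Spherical model: rises to the sill at range a, flat beyond it."""
    h = np.asarray(h, dtype=float)
    inside = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h <= a, inside, sill)

# Example empirical semivariogram (illustrative numbers only)
lags = np.arange(5, 90, 5, dtype=float)
gamma = (0.01 + 0.06 * (1 - np.exp(-lags / 25))
         + 0.003 * np.random.default_rng(7).standard_normal(len(lags)))

p0 = [gamma[0], gamma[-1], lags[-1] / 2]          # nugget, sill, range guesses
(nugget, sill, a), _ = curve_fit(spherical, lags, gamma, p0=p0)
print(f"nugget={nugget:.4f}, sill={sill:.4f}, range={a:.1f}")
```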

[Figure: semivariogram of the image data with fitted spherical model, semivariance γ(h) vs. lag h]


Artefacts

[Figures: Shuttle Radar Topography Mission data reconstructed from ~1% of samples using Linear, Linear RBF, TPS RBF, Natural Neighbour and ANC interpolation]

Artefacts

[Figure: close-up comparison of Natural Neighbour, TPS (Cubic), Cubic and Linear reconstructions; annotations highlight pointy artefacts (Natural Neighbour), overshoot and triangulation edges]

Conclusions

• Quantitative methodologies are useful for analysing performance

• Results from real data can be very different from simulations

• But they don’t yield information about the spatial error distribution, or about artefacts produced by different methods

• Error distributions can be used for more detailed qualitative analysis, provided enough data are available

• The best method depends on the application
