
Advanced Image Processing

Image Relaxation – Restoration and Feature Extraction

02/02/10

Homework discussion

• On edge detectors
  – Double edge
    • Does Sobel or Prewitt present a double edge?
    • Does Prewitt behave the same in all directions?
  – Closed contour by Marr-Hildreth?
  – Does Sobel provide better noise-suppression characteristics than Prewitt?
  – Is zero-crossing more accurate than gradient?
  – Which one is gradient-based? Which one uses zero-crossing (or the 2nd derivative)?
  – Does a large sigma increase neighboring pixels' weight?
• Misc
  – Provide parameter-selection info
  – Observations should be based on your results
  – Use LaTeX
  – Provide references

Derivatives

http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/OWENS/LECT6/node2.html

Outline

• Image restoration as a relaxation method
  – MAP (maximum a-posteriori probability)
  – MFA (mean field annealing)
  – VCD (variable conductance diffusion, or anisotropic diffusion)
• Image restoration as a blind source separation problem
  – Lidan's CVPR paper
• Non-iterative image restoration

Relaxation method

• Relaxation – a multistep algorithm with the properties that
  • the output of a single step is of the same form as the input, so that it can be applied iteratively;
  • it converges to a bounded result;
  • the operation on any element depends only on its neighbors.
• Restoration as a relaxation algorithm (a minimal sketch follows)
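To make these properties concrete, here is a minimal sketch (mine, not from the slides) of a relaxation scheme in Python: a neighborhood-averaging step whose output has the same form as its input, converges to a bounded result, and touches only each pixel's 4-connected neighbors.

```python
import numpy as np

def relaxation_step(f):
    """One relaxation step: each output pixel depends only on its
    4-connected neighbors, and the output has the same form as the input."""
    return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                   + np.roll(f, 1, 1) + np.roll(f, -1, 1))

def relax(f, tol=1e-4, max_iter=500):
    """Apply the step iteratively until it converges to a bounded result."""
    for _ in range(max_iter):
        f_new = relaxation_step(f)
        if np.abs(f_new - f).max() < tol:
            break
        f = f_new
    return f_new
```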

Image restoration

• The degradation model
• An ill-posed inverse problem
  – Well-posed vs. ill-posed
  – Ill-conditioning
    • Condition number
• Regularization theory
  – Maximum A-Posteriori probability (MAP)
  – Mean Field Annealing (MFA)

Degradation model

$$g(x, y) = H\left[ f(x, y) \right] + \eta(x, y)$$

[Block diagram: f(x, y) → H → ⊕ → g(x, y), with the noise η(x, y) entering at the adder]
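As an illustration (mine, not the slides'), the degradation model can be simulated by convolving f with a blur kernel h, i.e., a linear, shift-invariant H, and adding Gaussian noise:

```python
import numpy as np
from scipy.ndimage import convolve

def degrade(f, h, sigma=5.0, seed=0):
    """Simulate g = H[f] + eta: a shift-invariant H implemented as
    convolution with the point-spread function h (periodic boundaries),
    plus zero-mean Gaussian noise eta."""
    blurred = convolve(f, h, mode='wrap')
    eta = np.random.default_rng(seed).normal(0.0, sigma, f.shape)
    return blurred + eta

h = np.ones((5, 5)) / 25.0   # e.g., a 5x5 uniform blur as the PSF
```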

Regularization theory

• Generally speaking, any regularization method tries to analyze a related well-posed problem whose solution approximates that of the original ill-posed problem.
• Well-posedness is achieved by implementing one or more of the following basic ideas:
  – restriction of the data;
  – change of the space and/or the topologies;
  – modification of the operator itself;
  – the concept of regularization operators; and
  – well-posed stochastic extensions of ill-posed problems.

Image restoration – An ill-posed problem

• Degradation model (in the frequency domain)

$$G(u, v) = H(u, v) F(u, v) + N(u, v)$$

• H is ill-conditioned, which makes the image restoration problem an ill-posed problem
  – The solution is not stable:

$$\hat{F}(u, v) = \frac{G(u, v)}{H(u, v)} = F(u, v) + \frac{N(u, v)}{H(u, v)}$$
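A toy 1-D sketch (numbers mine) makes the instability concrete: wherever |H| is small, the N/H term dominates the estimate.

```python
import numpy as np

# F_hat = G/H = F + N/H on a toy 1-D spectrum
H = np.exp(-np.linspace(0, 8, 256))   # |H| decays toward zero at high freq.
F = np.ones(256)                      # flat "true" spectrum
N = 1e-3 * np.ones(256)               # tiny, constant noise spectrum
F_hat = (H * F + N) / H               # exact algebra: F + N/H
print(F_hat[0])    # ~1.001 (low frequency: noise is harmless)
print(F_hat[-1])   # ~3.98  (high frequency: noise amplified ~3000x)
```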

Ill-conditioning

$$A\mathbf{x} = \mathbf{b}, \qquad A = \begin{bmatrix} 0.780 & 0.563 \\ 0.913 & 0.659 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} 0.217 \\ 0.254 \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

$$\operatorname{cond}(A) = 2.1932 \times 10^{6}$$

• A is ill-conditioned: perturbing it by the tiny matrix

$$E = \begin{bmatrix} 0.001 & 0.001 \\ -0.002 & -0.001 \end{bmatrix}$$

changes the solution drastically:

$$(A + E)\,\tilde{\mathbf{x}} = \mathbf{b} \quad \Rightarrow \quad \tilde{\mathbf{x}} = \begin{bmatrix} -5 \\ 7.3085 \end{bmatrix}$$
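These numbers can be verified directly (a quick NumPy check, not part of the slides):

```python
import numpy as np

A = np.array([[0.780, 0.563],
              [0.913, 0.659]])
b = np.array([0.217, 0.254])
E = np.array([[ 0.001,  0.001],
              [-0.002, -0.001]])

print(np.linalg.cond(A))          # ~2.1932e+06: A is badly ill-conditioned
print(np.linalg.solve(A, b))      # [ 1. -1.]: the exact solution
print(np.linalg.solve(A + E, b))  # [-5.  7.3085]: tiny change in A, huge change in x
```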

Example

[Figure: restoration examples – (a) noise-free, exact H; (b) sinusoidal noise, exact H; (c) noise-free, inexact H]

MAP

• Bayes' rule
• The noise term
  – The noise probability distribution
• The prior term
  – MRF and the Gibbs distribution
  – Different models of smoothness for modeling the prior energy
    • Piece-wise constant (flat surface)
    • Piece-wise planar surface
    • Quadratic surface
• Gradient descent

Solution formulation

• For g = Hf + η, the regularization method constructs the solution as

$$\min_{\mathbf{f}} \left[ u(\mathbf{f}, \mathbf{g}) + \beta\, v(\mathbf{f}) \right]$$

• u(f, g) describes how the real image data are related to the degraded data. In other words, this term models the characteristics of the imaging system.
• v(f) is the regularization term, with the regularization operator v operating on the original image f and the regularization parameter β used to tune the weight of the regularization term.
• By adding the regularization term, the original ill-posed problem turns into a well-posed one; that is, the insertion of the regularization operator puts constraints on what f might be, which makes the solution more stable (a concrete instance is sketched below).
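As a concrete instance (my example, not the slides'): choosing u(f, g) = ||Hf − g||² and v(f) = ||f||² gives Tikhonov regularization, whose minimizer has the closed form f̂ = (HᵀH + βI)⁻¹Hᵀg; the βI term is exactly the constraint that stabilizes the inversion.

```python
import numpy as np

def tikhonov_restore(H, g, beta):
    """Minimize ||H f - g||^2 + beta * ||f||^2.
    Closed form: f_hat = (H^T H + beta I)^(-1) H^T g. The beta*I term
    lifts H's tiny singular values away from zero, so the solve is stable."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + beta * np.eye(n), H.T @ g)
```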

MAP (maximum a-posteriori probability)

• Formulating the solution from a statistical point of view: the MAP approach tries to find an estimate of the image f that maximizes the a-posteriori probability p(f|g):

$$\hat{\mathbf{f}} = \arg\max_{\mathbf{f}} \; p(\mathbf{f}\,|\,\mathbf{g})$$

• According to Bayes' rule,

$$p(\mathbf{f}\,|\,\mathbf{g}) = \frac{p(\mathbf{g}\,|\,\mathbf{f})\, P(\mathbf{f})}{P(\mathbf{g})}$$

  – P(f) is the a-priori probability of the unknown image f. We call it the prior model.
  – P(g) is the probability of g, which is a constant when g is given.
  – p(g|f) is the conditional probability density function (pdf) of g. We call it the sensor model; it is a description of the noisy or stochastic processes that relate the original unknown image f to the measured image g.

MAP - Derivation

• Bayes interpretation of regularization theory: since P(g) is a constant,

$$\hat{\mathbf{f}} = \arg\max_{\mathbf{f}} \; p(\mathbf{f}\,|\,\mathbf{g}) = \arg\max_{\mathbf{f}} \; p(\mathbf{g}\,|\,\mathbf{f})\, P(\mathbf{f})$$

• Let

$$\Omega = -\ln \left[ p(\mathbf{g}\,|\,\mathbf{f})\, P(\mathbf{f}) \right] = \underbrace{-\ln p(\mathbf{g}\,|\,\mathbf{f})}_{\text{noise term}} \;\; \underbrace{-\, \ln P(\mathbf{f})}_{\text{prior term}}$$

• Maximizing the posterior is then equivalent to minimizing Ω = Ω_n + Ω_p.

Noise Term

• Assume Gaussian noise of zero mean and standard deviation σ.
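The slide's formula did not survive extraction; the standard form consistent with the objective function given later in this lecture (h is the point-spread function of H) would be:

$$\Omega_n = -\ln p(\mathbf{g}\,|\,\mathbf{f}) = \frac{1}{2\sigma^2} \sum_{i=0}^{MN-1} \left( (\mathbf{f} \otimes h)_i - g_i \right)^2 + \text{const.}$$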

Prior Model

• The a-priori probability of an image is given by a Gibbs distribution, defined as

$$P(\mathbf{f}) = \frac{1}{Z} \exp\left( -\frac{U(\mathbf{f})}{T} \right)$$

  – U(f) is the energy function
  – T is the temperature of the model
  – Z is a normalization constant

• The prior term then becomes

$$\Omega_p = -\ln P(\mathbf{f}) = -\ln \left[ \frac{1}{Z} \exp\left( -\frac{U(\mathbf{f})}{T} \right) \right] = \frac{U(\mathbf{f})}{T} + \ln Z$$

Prior Model (cont’)

• U(f), the prior energy function, is usually formulated based on the smoothness property of the original image. Therefore, U(f) should measure the extent to which the smoothness is violated.

[Plot: the prior energy as a "punishment" that grows with the difference between neighboring pixels]

Prior Model (cont’)

• τ is the parameter that adjusts how smooth the image goes.
• The k-th derivative models the difference between neighboring pixels. It can also be approximated by convolution with the right kernel r:

$$\frac{U(\mathbf{f})}{T} = -\frac{\beta}{T \tau \sqrt{2\pi}} \sum_{i} \exp\left( -\frac{\left( \nabla^{k} \mathbf{f} \right)_i^2}{2\tau^2} \right) = -\frac{\beta}{T \tau \sqrt{2\pi}} \sum_{i=0}^{MN-1} \exp\left( -\frac{(\mathbf{f} \otimes r)_i^2}{2\tau^2} \right)$$

Prior Model – Kernel r

• Laplacian kernel

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}, \qquad r = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}$$
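A minimal sketch (mine) of evaluating the prior term with this kernel; np.roll gives a self-contained periodic convolution with r, and the sign of r is immaterial here since the result is squared:

```python
import numpy as np

def prior_term(f, T=1.0, tau=1.0, beta=1.0):
    """Omega_p = -(beta / (T tau sqrt(2 pi))) * sum_i exp(-(f (x) r)_i^2 / (2 tau^2)),
    where (f (x) r) is the convolution of f with the Laplacian kernel r."""
    d = 4.0 * f - (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                   + np.roll(f, 1, 1) + np.roll(f, -1, 1))   # f (x) r
    return -(beta / (T * tau * np.sqrt(2.0 * np.pi))) \
        * np.sum(np.exp(-d**2 / (2.0 * tau**2)))
```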

The Objective Function

• The complete objective combines the noise and prior terms:

$$\Omega = \Omega_n + \Omega_p = \frac{1}{2\sigma^2} \sum_{i=0}^{MN-1} \left( \mathbf{f} \otimes h - \mathbf{g} \right)_i^2 - \frac{\beta}{T \tau \sqrt{2\pi}} \sum_{i=0}^{MN-1} \exp\left( -\frac{(\mathbf{f} \otimes r)_i^2}{2\tau^2} \right)$$

• Use gradient descent to solve for f:

$$\mathbf{f}^{k+1} = \mathbf{f}^{k} - \alpha \frac{\partial \Omega}{\partial \mathbf{f}}$$
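A hedged end-to-end sketch of this update (the analytic gradients and the parameter values are my own working-out under the assumption of symmetric h and r, not taken from the slides):

```python
import numpy as np
from scipy.ndimage import convolve

def map_restore(g, h, r, sigma=1.0, beta=1.0, T=1.0, tau=1.0,
                alpha=0.1, n_iter=200):
    """Gradient descent f^{k+1} = f^k - alpha * dOmega/df for
    Omega = sum((f(x)h - g)^2) / (2 sigma^2)
            - beta/(T tau sqrt(2 pi)) * sum(exp(-(f(x)r)^2 / (2 tau^2))).
    h and r are assumed symmetric, so convolution equals correlation and
    the gradients keep a simple convolutional form."""
    c = beta / (T * tau * np.sqrt(2.0 * np.pi))
    f = g.astype(float).copy()                    # start from the degraded image
    for _ in range(n_iter):
        resid = convolve(f, h, mode='wrap') - g   # f (x) h - g
        grad_n = convolve(resid, h, mode='wrap') / sigma**2
        d = convolve(f, r, mode='wrap')           # f (x) r
        grad_p = (c / tau**2) * convolve(d * np.exp(-d**2 / (2 * tau**2)),
                                         r, mode='wrap')
        f -= alpha * (grad_n + grad_p)            # f^{k+1} = f^k - alpha dOmega/df
    return f

# Laplacian kernel r from the previous slide
r = np.array([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
```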

MFA

• Compared to SA (simulated annealing)
• An optimization scheme
• Annealing combined with gradient descent
• Avoids local minima

• In the prior term, the temperature T is annealed (T → 0⁺)
