Image Restoration: Image Deblurring
Uwe Schmidt

Image Restoration - Heidelberg University


Page 1:

Image Restoration: Image Deblurring

Uwe Schmidt

Page 2:

Image Restoration

Image Blur

• Sources of image blur
  • camera motion
  • camera out-of-focus
  • object motion

• Why remove?
  • restoring digital photographs
  • coping with adverse imaging conditions (e.g., microscopy, astronomy)
  • part of a system (e.g., face recognition)

• Why difficult?
  • loss of information, especially high frequencies
  • mathematically ill-posed

2

Flickr: Daveybot

Köhler et al., ECCV ‘12

Page 3:

Image Restoration

Uniform Blur Assumption

• Model: camera translation aligned with the image plane; whole scene at the same distance from the camera
  • typically unrealistic, but deblurring results are often good

• Blurred image = convolution of the original image x with the blur kernel / point spread function (PSF) k
  • deblurring is here also called deconvolution

3

Figure 8. Ground truth data: 4 images and 8 blur kernels, resulting in 32 test images

[Plot: cumulative histogram; x-axis: error ratios (1.5, 2, 2.5, 3, 3.5, above 4); y-axis: percentage (0–100); legend: Fergus, Shan, Shan + sparse deconv, MAPx,k, Gaussian prior]

Figure 9. Evaluation results: Cumulative histogram of the deconvolution error ratio across test examples.

which obeys their assumption. To capture images with spatially invariant blur we placed the camera on a tripod, locking the Z-axis rotation handle of the tripod but loosening the X and Y handles. We calibrated the blur of 8 such images and cropped 4 255×255 windows from each, leading to 32 test images displayed in Fig. 8 and available online³.

³ www.wisdom.weizmann.ac.il/~levina/papers/LevinEtalCVPR09Data.zip

We used an 85mm lens and a 0.3 second exposure. The kernels' support varied from 10 to 25 pixels.

We can measure the SSD error between a deconvolved output and the ground truth. However, wider kernels result in larger deconvolution error even with the true kernel. To normalize this effect, we measure the ratio between the deconvolution error with the estimated kernel and the deconvolution error with the true kernel. In Fig. 9 we plot the cumulative histogram of error ratios (e.g., bin r = 3 counts the percentage of test examples achieving an error ratio below 3). Empirically, we noticed that error ratios above 2 are already visually implausible. One test image is presented in Fig. 10; all others are included in [13].

We have evaluated the algorithms of Fergus et al. [4] and Shan et al. [19] (each using the authors' implementation), as well as MAPk estimation using a Gaussian prior [13], and a simplified MAPx,k approach constraining Σᵢ kᵢ = 1 (we used coordinate descent, iterating between holding x constant and solving for k, and then holding k constant and solving for x). The algorithms of [14, 7, 3] were not tested because the first was designed for 1D motion only and the others focus on smaller blur kernels.

We made our best attempt to adjust the parameters of Shan et al. [19], but ran all test images with equal parameters. Fergus et al. [4] used Richardson–Lucy non-blind deconvolution in their code. Since this algorithm is a source of ringing artifacts, we improved the results by using the kernel estimated by the authors' code with the (non-blind) sparse deconvolution of [12]. Similarly, we used sparse deconvolution with the kernel estimated by Shan et al.

The bars in Fig. 9 and the visual results in [13] suggest that Fergus et al.'s algorithm [4] significantly outperforms all other alternatives. Many of the artifacts in the results of [4] can be attributed to the Richardson–Lucy artifacts, or to non-uniform blur in their test images. Our comparison also suggests that applying sparse deconvolution using the kernels output by Shan et al. [19] improves their results. As expected, the naive MAPx,k approach outputs small kernels approaching the delta solution.

5. Discussion

This paper analyzes the major building blocks of recent blind deconvolution algorithms. We illustrate the limitation of the simple MAPx,k approach, favoring the no-blur (delta kernel) explanation. One class of solutions involves explicit edge detection. A more principled strategy exploits the dimensionality asymmetry and estimates MAPk while marginalizing over x. While the computational aspects involved with this marginalization are more challenging, existing approximations are powerful.

We have collected motion blur data with ground truth and quantitatively compared existing algorithms. Our comparison suggests that the variational Bayes approximation [4] significantly outperforms all existing alternatives.

The conclusions from our analysis are useful for directing future blind deconvolution research. In particular, we


Images from Levin et al., CVPR ’09

Page 4:

Image Restoration

Non-uniform Blur Assumption

• Model: other camera motions beyond translation, e.g. 3D camera rotation
  • more realistic, but leads to more complicated models and optimization problems
  • also called spatially-varying blur

4

[Figure: Computation with Efficient Filter Flow; labels: Camera Motion Constraint, Efficient Filter Flow, ∗ =; panels: Motion Density Function, Point Spread Function Basis]

Figure 1: The values of the Motion Density Function (bottom left, plotted with plot_nonuni_kernel.m from Oliver Whyte; exemplarily, only rotation around the optical axis (roll) and in-plane translations are depicted) correspond to the time spent in each camera pose. Linearly combined with the blur kernel basis (bottom right), it yields a non-uniform PSF (top middle) which parametrises the EFF transformation, allowing fast computation. By construction, our forward model permits only physically plausible camera motions. The blur kernel basis has to be computed only once and allows a memory-saving sparse representation. The dimensionality and size of the blur kernel basis depend on the motion considered. For translational motion only, the model reduces naturally to the uniform blur model. In this case the Motion Density Function equals the invariant PSF.

basis (with Eq. (2)) and then run the fast implementation of EFF detailed in Hirsch et al. [7]. Similarly we can obtain fast implementations of the MVMs with Mᵀ and Aᵀ.

The homography calculations on the point grid p are precomputed, and are required neither after updating the blur parameters μθ nor after updating the estimate of the sharp image. This fact is essential for our method's fast runtime. Fig. 2 compares the run-time of our forward model, as a function of both the image and blur size for camera shake, to Whyte et al. [24]. There, the computation of a forward model consists of applying d homographies to an image with n pixels, which means a complexity of O(n·d). Since our model uses the EFF, the complexity is O(n·log q), with q the number of pixels in a patch [6], which depends on the image and PSF sizes. The disadvantage in log q is easily outweighed even for a small number of homographies. Furthermore, Fig. 3 shows that our fast forward model can approximate the non-stationary blur of Whyte and Gupta almost perfectly with as few as 16×12 kernels for an image of size 1600×1200 pixels. We mention in passing that the blur kernel basis can be represented as sparse matrices which require less memory than storing large transformation matrices as done by Gupta et al. [5].

4. Deconvolution of non-stationary blurs

Starting with a photograph g that has been blurred by camera shake, we recover the unknown sharp image f in two phases: (i) a blur estimation phase for non-stationary PSFs, and (ii) sharp image recovery using a non-blind deconvolution procedure tailored to non-stationary blurs. In the following, we describe both phases in detail and, where appropriate, we explicitly include the values of the hyper-parameters that determine the weighting between the terms involved. These values were fixed during all experiments.

4.1. Blur estimation phase

In the first phase of the algorithm, we try to recover the motion undertaken by the camera during exposure, given only the blurry photo. To this end, we iterate the following three steps: (i) a prediction step to reduce blur and enhance image quality by a combination of shock and bilateral filtering

Image from Hirsch et al., ICCV ’11

Page 5:

Image Restoration

Single Image Deblurring

• Only the blurred image is available; called blind deblurring

• Common approach
  • 1st step: blur estimation
  • 2nd step: non-blind deblurring using the blur estimate

5


Page 6:

Image Restoration

Blur Estimation

• Theoretical PSF model (e.g. microscope)

• Night images: estimation from point light sources

• Iterative estimation from single image

6

Page 7:

Image Restoration

Non-blind Deconvolution I

• Blur model:
  • observed blurred image y, known PSF k
  • want to estimate original image x

7

y = k ⊗ x = F⁻¹( F(k) · F(x) )   (convolution theorem; F: Fourier transform)
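The convolution theorem above can be checked numerically. A minimal NumPy sketch (1D for brevity; the 2D case uses fft2/ifft2; circular boundary handling is implied by the DFT):

```python
import numpy as np

# Sharp "image" (1D for brevity) and a normalized 3-tap blur kernel,
# zero-padded to the signal length (circular-convolution convention)
x = np.array([0.0, 1.0, 4.0, 2.0, 0.0, 3.0, 1.0, 0.0])
k = np.zeros_like(x)
k[:3] = [0.25, 0.5, 0.25]

# Convolution theorem: y = k ⊗ x = F⁻¹(F(k) · F(x))
y_fourier = np.fft.ifft(np.fft.fft(k) * np.fft.fft(x)).real

# Direct circular convolution for comparison
y_direct = np.array([sum(k[j] * x[(i - j) % len(x)] for j in range(len(x)))
                     for i in range(len(x))])

assert np.allclose(y_fourier, y_direct)
```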

Page 8:

8

x, k

Page 9:

9

y

Page 10:

Image Restoration

Direct Inversion I

10

F(y) = F(k) · F(x)   ⇒   x = F⁻¹( F(y) / F(k) )
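Direct inversion is exact in this idealized noise-free setting, provided F(k) has no zeros. A small NumPy sketch (the kernel values are illustrative, chosen so that F(k) stays bounded away from zero):

```python
import numpy as np

np.random.seed(0)
x = np.random.rand(8, 8)                       # "sharp image"
k = np.zeros((8, 8))
k[0, 0], k[0, 1], k[1, 0] = 0.6, 0.25, 0.15    # small blur kernel, |F(k)| >= 0.2

# Noise-free blurred image via the convolution theorem (circular boundary)
y = np.fft.ifft2(np.fft.fft2(k) * np.fft.fft2(x)).real

# Direct inversion: x = F⁻¹(F(y) / F(k)) — exact when there is no noise
x_rec = np.fft.ifft2(np.fft.fft2(y) / np.fft.fft2(k)).real

assert np.allclose(x_rec, x)
```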

Page 11:

11

Direct Inversion I

Page 12:

Image Restoration

Non-blind Deconvolution II

• Blur model:
  • observed blurred image y, known PSF k
  • Gaussian noise n with standard deviation σ
  • want to estimate original image x

• Likelihood:

12

y = k ⊗ x + n,   n ∼ N(0, σ²I)

p(y|x) = N(y; k ⊗ x, σ²I) = N(y; Kx, σ²I)   (K: blur matrix)

Page 13:

13

k⊗x

Page 14:

14

n

Page 15:

15

y

Page 16:

Image Restoration

Direct Inversion II

16

F(y) = F(k) · F(x) + F(n)

⇒ x = F⁻¹( (F(y) − F(n)) / F(k) ) ≈ F⁻¹( F(y) / F(k) )   (don't know n)
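Dropping the unknown noise term is what makes this approximation fail in practice: dividing by small values of F(k) amplifies the noise. A NumPy sketch illustrating this (box-blur kernel and noise level are illustrative):

```python
import numpy as np

np.random.seed(0)
n = 64
x = np.cumsum(np.random.randn(n))    # smooth-ish 1D "image"
k = np.zeros(n); k[:5] = 0.2         # box blur; F(k) has near-zero entries
sigma = 0.01

y_clean = np.fft.ifft(np.fft.fft(k) * np.fft.fft(x)).real
y_noisy = y_clean + sigma * np.random.randn(n)

invert = lambda y: np.fft.ifft(np.fft.fft(y) / np.fft.fft(k)).real
err_clean = np.sqrt(np.mean((invert(y_clean) - x)**2))
err_noisy = np.sqrt(np.mean((invert(y_noisy) - x)**2))

# Noise-free inversion is essentially exact; with noise, the reconstruction
# error grows far beyond the noise level sigma itself.
assert err_clean < 1e-8
assert err_noisy > 2 * sigma
```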

Page 17:

17

Direct Inversion II

Page 18:

Image Restoration

Wiener Filter for Deconvolution

• Takes noise into account

18

x̂ = F⁻¹( F(k)* / (|F(k)|² + N/S) · F(y) )
  = F⁻¹( F(k)* / (|F(k)|² + N/S) ) ⊗ y      (the first factor is the Wiener filter)

N/S = 0  →  direct inversion

Power spectra:  S = E[|F(x)|²],   N = E[|F(n)|²] ≡ σ²
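As a sketch, Wiener deconvolution with the formula above in NumPy. For illustration only, the signal power spectrum S is taken from the ground truth (in practice it must be estimated or assumed); with NumPy's unnormalized FFT of length n, the noise power spectrum is E[|F(n)|²] = n·σ².

```python
import numpy as np

np.random.seed(1)
n = 64
x = np.cumsum(np.random.randn(n))     # smooth-ish 1D "image"
k = np.zeros(n); k[:5] = 0.2          # box blur (circular), F(k) near zero at some freqs
sigma = 0.1
y = np.fft.ifft(np.fft.fft(k) * np.fft.fft(x)).real + sigma * np.random.randn(n)

Fk, Fy = np.fft.fft(k), np.fft.fft(y)
S = np.abs(np.fft.fft(x))**2          # signal power spectrum (oracle, for illustration)
N = n * sigma**2                      # noise power spectrum: E[|F(n)|²] = n·σ² here

# Wiener filter: x̂ = F⁻¹( F(k)* / (|F(k)|² + N/S) · F(y) )
x_wiener = np.fft.ifft(np.conj(Fk) / (np.abs(Fk)**2 + N / S) * Fy).real
x_direct = np.fft.ifft(Fy / Fk).real  # direct inversion, for comparison

# The Wiener estimate avoids the noise blow-up of direct inversion
assert np.mean((x_wiener - x)**2) < np.mean((x_direct - x)**2)
```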

Page 19:

19

Page 20:

20

Wiener Deconvolution

Page 21:

21

Wiener Filter

Page 22:

22

x

Page 23:

23

|F(x)|²

Page 24:

24

S = E[|F(x)|²]

Page 25:

Image Restoration

Wiener Filter: Probabilistic Perspective

• Likelihood

• Prior

• Posterior

25

p(y|x) = N(y; k ⊗ x, σ²I) = N(y; Kx, σ²I)

p(x) = N(x; 0, U⁻¹ D_S U)

p(x|y) ∝ p(y|x) · p(x) ∝ N(y; Kx, σ²I) · N(x; 0, U⁻¹ D_S U)

F(a) = U a,   F⁻¹(a) = U⁻¹ a   (U: DFT matrix, unitary: U⁻¹ = U*)
A* = Āᵀ   (conjugate transpose, for any matrix A)

k ⊗ x = F⁻¹( F(k) · F(x) ) = U⁻¹ D_k U x = K x
with D_k ≡ diag(F(k)), and D_S the diagonal matrix from the power spectrum S
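The factorization K = U⁻¹ D_k U can be verified numerically for a small circular blur matrix (a sketch; the unitary DFT matrix U is built here by applying the FFT to the identity):

```python
import numpy as np

n = 6
k = np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0])   # blur kernel, length n

# Circular-convolution (circulant) blur matrix: K[i, j] = k[(i - j) mod n]
K = np.array([[k[(i - j) % n] for j in range(n)] for i in range(n)])

# Unitary DFT matrix (scaled by 1/sqrt(n) so that U⁻¹ = U*)
U = np.fft.fft(np.eye(n)) / np.sqrt(n)
Dk = np.diag(np.fft.fft(k))                     # D_k = diag(F(k))

# K = U⁻¹ D_k U, with U⁻¹ = U* (conjugate transpose)
K_factored = U.conj().T @ Dk @ U

assert np.allclose(K, K_factored)
```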

Page 26:

Image Restoration

Wiener Filter: Posterior & Inference

26

p(x|y) ∝ exp( −½ (y − Kx)ᵀ (σ²I)⁻¹ (y − Kx) − ½ xᵀ (U⁻¹ D_S U)⁻¹ x )

       ∝ exp( −½ xᵀ Ω x + xᵀ η ),   with Ω ≡ (1/σ²) KᵀK + U⁻¹ D_S⁻¹ U   and   η ≡ (1/σ²) Kᵀ y

       ∝ N(x; Ω⁻¹ η, Ω⁻¹)

x̂ = E[x|y] = argmaxₓ p(x|y) = Ω⁻¹ η

  = ( (1/σ²) (U⁻¹ D_k U)ᵀ (U⁻¹ D_k U) + U⁻¹ D_S⁻¹ U )⁻¹ (1/σ²) (U⁻¹ D_k U)ᵀ y

  = [ U⁻¹ ( (1/σ²) |D_k|² + D_S⁻¹ ) U ]⁻¹ U⁻¹ (1/σ²) D_k* U y

  = U⁻¹ ( (1/σ²) |D_k|² + D_S⁻¹ )⁻¹ (1/σ²) D_k* U y

  = U⁻¹ ( D_k* / (|D_k|² + σ²/D_S) ) U y  ≡  F⁻¹( F(k)* / (|F(k)|² + σ²/S) · F(y) )

Page 27:

Image Restoration

Probabilistic Image Deblurring

• Can use a similar approach as for image denoising
  • likelihood somewhat more complicated (matrix K)
  • same image priors and penalty functions
  • MAP estimation via energy minimization

27

p(x|y) ∝ N(y; Kx, σ²I) · ∏_{(i,j)∈E} exp( −λ · ρ(xᵢ − xⱼ) )

x̂ = argmaxₓ p(x|y) = argminₓ  (1/(2σ²)) ‖Kx − y‖² + λ · Σ_{(i,j)∈E} ρ(xᵢ − xⱼ)
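A minimal sketch of this MAP estimation by energy minimization, using plain gradient descent with a quadratic ρ on a 1D signal (step size, λ, and the test signal are illustrative; Huber or Lorentzian penalties only change ρ and its derivative):

```python
import numpy as np

np.random.seed(2)
n = 32
x_true = np.repeat([0.0, 1.0, 0.2, 0.8], 8)   # piecewise-constant 1D signal
k = np.zeros(n); k[:3] = [0.25, 0.5, 0.25]    # blur kernel (circular convolution)
sigma, lam = 0.05, 0.1

conv = lambda a, b: np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
y = conv(k, x_true) + sigma * np.random.randn(n)

# Energy: 1/(2σ²)‖Kx − y‖² + λ Σ ρ(x_i − x_j), quadratic ρ(u) = u²,
# with E the circularly adjacent pixel pairs
def energy(x):
    d = x - np.roll(x, -1)
    return np.sum((conv(k, x) - y)**2) / (2 * sigma**2) + lam * np.sum(d**2)

# K is a circular convolution, so Kᵀ applies the flipped kernel
k_flip = np.roll(k[::-1], 1)
def grad(x):
    r = conv(k, x) - y
    d = x - np.roll(x, -1)
    return conv(k_flip, r) / sigma**2 + lam * 2 * (d - np.roll(d, 1))

x = y.copy()                      # initialize with the blurred observation
energies = [energy(x)]
for _ in range(200):              # plain gradient descent with a small step
    x -= 1e-3 * grad(x)
    energies.append(energy(x))

assert energies[-1] < energies[0]   # the energy decreases
```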

Page 28:

28

Page 29:

29

Quadratic ρ, λ = 0.0001

Page 30:

30

Quadratic ρ, λ = 0.001

Page 31:

31

Quadratic ρ, λ = 0.01

Page 32:

32

Huber ρ, λ = 0.03, δ = 1

Page 33:

33

Huber ρ, λ = 0.05, δ = 1

Page 34:

34

Huber ρ, λ = 0.1, δ = 1

Page 35:

35

Lorentzian ρ, λ = 0.5, σ = 255

Page 36:

36

Lorentzian ρ, λ = 1.0, σ = 255

Page 37:

37

Lorentzian ρ, λ = 1.5, σ = 255

Page 38:

Image Restoration

Recall: Boundary Handling

38

Zero
Clamp / Replicate
Wrap / Circular / Periodic
Mirror / Symmetric / Reflect
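These boundary schemes correspond to padding modes in common libraries; a sketch with np.pad (note that NumPy's 'reflect' excludes the edge sample, while 'symmetric' matches the edge-including mirror scheme):

```python
import numpy as np

row = np.array([1, 2, 3])

# The boundary-handling families under their np.pad names
zero   = np.pad(row, 1, mode='constant')   # zero
clamp  = np.pad(row, 1, mode='edge')       # clamp / replicate
wrap   = np.pad(row, 1, mode='wrap')       # wrap / circular / periodic
mirror = np.pad(row, 1, mode='symmetric')  # mirror / symmetric / reflect

assert list(zero)   == [0, 1, 2, 3, 0]
assert list(clamp)  == [1, 1, 2, 3, 3]
assert list(wrap)   == [3, 1, 2, 3, 1]
assert list(mirror) == [1, 1, 2, 3, 3]
```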

Page 39:

Image Restoration

Blur Boundary Assumption

• Assumed: circular boundary handling of the blur kernel k
  • blurred and deblurred image have the same size
  • mathematically convenient, but not realistic
  • allows matrix factorizations via the Fourier transform

• Better: convolution does not go beyond the boundary
  • blurred image is smaller than the deblurred image
  • expensive optimization, but more realistic

39

Page 40:

40

Page 41:

Page 42:

42

Quadratic ρ, λ = 0.001

Page 43:

43

Huber ρ, λ = 0.05, δ = 1

Page 44:

44

Lorentzian ρ, λ = 1.0, σ = 255

Page 45:

Image Restoration

Clipping

• Image sensor has intensity range [Imin, Imax]
  • values below or above will be clipped to this range
  • clipped values are quantized, e.g. to {0, 1, …, 255}

• Simple additive Gaussian noise model
  • doesn't account for clipping or quantization
  • restored image can be outside the sensible range (in dark and bright image regions)

45
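A sketch of the clipping and quantization steps of the sensor model (the irradiance range [0, 1] scaled to {0, …, 255} is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Ideal sensor irradiance, including out-of-range values, plus Gaussian noise
x = np.linspace(-0.2, 1.2, 8)
noisy = x + 0.05 * rng.standard_normal(8)

# Sensor: clip to the intensity range, then quantize to {0, 1, ..., 255}
observed = np.clip(noisy, 0.0, 1.0)
observed = np.round(observed * 255).astype(np.uint8)

# Out-of-range values are lost — the additive Gaussian model ignores this
assert observed.min() >= 0 and observed.max() <= 255
assert observed[0] == 0 and observed[-1] == 255
```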

Page 46:

Image Restoration

Poisson Noise Model

• Recall: shot noise, modeled with a Poisson distribution

• Deblurring likelihood:

• models clipping: pixels of x must be non-negative
• can be used for probabilistic image restoration as before, but optimization is more difficult

46

p(X = k) = λᵏ e^(−λ) / k!,   λ > 0,   k = 0, 1, 2, …

p(y|x) = ∏ᵢ [Kx]ᵢ^(yᵢ) e^(−[Kx]ᵢ) / yᵢ!
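The classical maximum-likelihood estimator for this Poisson likelihood is the Richardson–Lucy algorithm, with the multiplicative update x ← x · Kᵀ(y / Kx). A minimal 1D sketch with circular convolution (signal and kernel are illustrative; since the kernel is normalized, the usual Kᵀ1 normalization term equals one and drops out):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 32
x_true = np.zeros(n); x_true[[8, 20]] = 50.0; x_true += 2.0   # non-negative "image"
k = np.zeros(n); k[:5] = 0.2                                  # normalized blur kernel

conv = lambda a, b: np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
y = rng.poisson(conv(k, x_true)).astype(float)   # Poisson-distributed observation

# Richardson–Lucy: x ← x · Kᵀ(y / Kx); Kᵀ applies the flipped kernel here
k_flip = np.roll(k[::-1], 1)
x = np.full(n, y.mean())                         # positive initialization
for _ in range(50):
    x *= conv(k_flip, y / np.maximum(conv(k, x), 1e-12))

assert np.all(x >= 0)                            # iterates stay non-negative
assert np.isclose(x.sum(), y.sum(), rtol=1e-3)   # RL preserves total intensity
```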