DESIGN OF NON-LINEAR KERNEL DICTIONARIES FOR OBJECT RECOGNITION
Murad Megjhani, MATH 6397

Page 1:

DESIGN OF NON-LINEAR KERNEL DICTIONARIES FOR

OBJECT RECOGNITION

Murad Megjhani

MATH 6397

Page 2:

Agenda

• Sparse Coding
• Dictionary Learning
• Problem Formulation (Kernel)
• Results and Discussions

Page 3:

Motivation

Given a 16×16 (or n×n) image patch x, we can represent it using 256 real numbers (pixels).

Problem: Can we find or learn a better representation for this?

Given a set of images, learn a better way to represent them than raw pixels.

Page 4:

What is a Sparse Linear Model?

Let x ∈ R^n be a signal.

Let D = [d_1, …, d_K] ∈ R^{n×K} be a set of normalized "basis vectors". Let's call it a Dictionary.

D is "adapted" to x if it can represent it with a few basis vectors; that is, there exists a sparse vector γ ∈ R^K such that x ≈ Dγ. We call γ the sparse code.

Page 5:

Sparse Coding Illustration

[Figure: natural images and the learned bases d_1, …, d_64]

Test example:

x ≈ 0.8 · d_36 + 0.3 · d_42 + 0.5 · d_63

[0, 0, …, 0, 0.8, 0, …, 0, 0.3, 0, …, 0, 0.5, …] = [γ_1, …, γ_64] (feature representation)

Compact & easily interpretable
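A toy numpy rendering of this example (our own illustration; the dictionary is random and the atom indices arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 64))
D /= np.linalg.norm(D, axis=0)          # 64 normalized atoms d_1 ... d_64

gamma = np.zeros(64)
gamma[[35, 41, 62]] = [0.8, 0.3, 0.5]   # sparse code: 3 nonzeros out of 64
x = D @ gamma                           # x = 0.8*d36 + 0.3*d42 + 0.5*d63
```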

Page 6:

Notation

x ∈ R^n : n-dimensional signal vector
X = [x_1, …, x_N] ∈ R^{n×N} : matrix of N signal vectors of dimension n
d ∈ R^n : atom, an elementary signal template
D = [d_1, …, d_K] ∈ R^{n×K} : over-complete basis (dictionary) of size K (K >> n)
γ ∈ R^K : sparse representation of an input signal x
Γ = [γ_1, …, γ_N] ∈ R^{K×N} : matrix of sparse vectors

X ≈ D Γ

Page 7:

Sparse Coding Problem

min_{γ_j} ||x_j − D γ_j||_2^2 + λ ψ(γ_j)
          (data fitting)      (sparsity-inducing regularization)

ψ induces sparsity in γ_j:

• The l0 "pseudo-norm": ||γ||_0 ≡ #{i s.t. γ[i] ≠ 0} (NP-hard).
• The l1 norm: ||γ||_1 ≡ Σ_i |γ[i]| (convex).

This is a selection problem.

When ψ is the l1 norm, the problem is called LASSO [1] or Basis Pursuit [2].

When ψ is the l0 norm, the problem is typically solved greedily with Matching Pursuit [3] or Orthogonal Matching Pursuit [4].
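A minimal sketch of solving the l1 (LASSO) variant with ISTA, iterative soft-thresholding (our own illustration, not an algorithm from the talk; lam and n_iter are arbitrary defaults):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Approximately solve min_g 0.5*||x - D g||_2^2 + lam*||g||_1."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    g = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ g - x)         # gradient of the data-fitting term
        g = soft_threshold(g - grad / L, lam / L)
    return g
```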

Page 8:

Dictionary Learning Problem

min_{D,Γ} ||X − DΓ||_F^2 + λ Σ_{j=1}^N ψ(γ_j)
          (data fitting)  (sparsity-inducing regularization)

Designed dictionary: D can be designed analytically, as with Haar wavelets [5] or curvelets [6], …

Learned dictionary: D can be learned from data, as in sparse coding with an over-complete basis set [7] and K-SVD [8].

We will study two of these algorithms today and see how they can be kernelized [9].

Page 9:

Sparse Coding Algorithms: Matching Pursuit

• MP is a greedy algorithm that finds one atom at a time [4].
• Step 1: find the one atom that best matches the signal.
• Next steps: given the previously found atoms, find the next one that best fits the residual.
• Orthogonal MP (OMP) is an improved version that re-evaluates the coefficients by least squares after each round.

Page 10:

Orthogonal Matching Pursuit

min_γ ||x − Dγ||_2^2   subject to   ||γ||_0 ≤ T

Input: x, dictionary D, sparsity threshold T
1. Initialize: active set S = ∅, coefficients γ = 0, residual r = x
2. for iter = 1…T do
3.   Select the atom which most reduces the objective: i ← argmax_{i ∈ S^C} |⟨d_i, r⟩|
4.   Update the active set: S ← S ∪ {i}
5.   Update the coefficients:
       MP:  γ[i] ← d_i^T r
       OMP: γ_S ← (D_S^T D_S)^{-1} D_S^T x
6.   Update the residual: r ← x − Dγ
7. end for

MP updates only the coefficient of the selected atom; OMP re-computes the coefficients of all atoms in the active set.
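Below is a minimal numpy sketch of this loop (our own illustration, not code from the talk); it assumes the columns of D are normalized, as on the earlier slide:

```python
import numpy as np

def omp(D, x, T):
    """Greedy (O)MP: pick T atoms; OMP re-fits all active coefficients
    by least squares after each selection (step 5, OMP variant)."""
    n, K = D.shape
    S = []                                    # active set (step 1)
    gamma = np.zeros(K)
    r = x.copy()                              # residual, initially x
    for _ in range(T):                        # step 2
        i = int(np.argmax(np.abs(D.T @ r)))   # step 3: best-matching atom
        S.append(i)                           # step 4
        sol, *_ = np.linalg.lstsq(D[:, S], x, rcond=None)  # step 5 (OMP)
        gamma[:] = 0.0
        gamma[S] = sol
        r = x - D @ gamma                     # step 6
    return gamma
```

For plain MP, step 5 would instead update only the selected coefficient, gamma[i] += D[:, i] @ r, before recomputing the residual.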

Page 11:

X ≈ D Γ

Initialize D → Sparse Coding → Dictionary Update → (repeat)

Dictionary Learning Algorithms:

K-SVD

Page 12:

The K-SVD algorithm: train an explicit dictionary from examples.

Input: a set of examples X.  Output: a dictionary D and sparse codes Γ.

• The examples are linear combinations of the atoms.
• Each representation uses at most T atoms.

The target function to minimize:

min_{D,Γ} ||X − DΓ||_F^2   s.t.   ||γ_i||_0 ≤ T  ∀i

Dictionary Learning Algorithms:

K-SVD

Page 13:

The Sparse Coding Stage:

Fix D and solve

min_Γ ||X − DΓ||_F^2   s.t.   ||γ_i||_0 ≤ T  ∀i

The problem decouples across examples; for the j-th example:

min_{γ_j} ||x_j − Dγ_j||_2^2   s.t.   ||γ_j||_0 ≤ T

For sparse coding, use Batch-OMP or any other sparse coding algorithm.

Dictionary Learning Algorithms:

K-SVD

Page 14:

Dictionary Update Stage:

min_D ||X − DΓ||_F^2   s.t.   ||γ_i||_0 ≤ T  ∀i

Writing DΓ = Σ_{j=1}^K d_j γ_j^T, where γ_j^T is the j-th row of Γ:

min_D ||X − Σ_{j=1}^K d_j γ_j^T||_F^2 = min_{d_k} ||(X − Σ_{j≠k} d_j γ_j^T) − d_k γ_k^T||_F^2

For the k-th atom, define the residual E_k = X − Σ_{j≠k} d_j γ_j^T and solve

min_{d_k} ||E_k − d_k γ_k^T||_F^2   s.t.   ||γ_i||_0 ≤ T  ∀i

Dictionary Learning Algorithms:

K-SVD

Page 15:

We can do better!

Dictionary Update Stage:

min_{d_k} ||E_k − d_k γ_k^T||_F^2   →   min_{d_k, γ_k} ||E_k − d_k γ_k^T||_F^2

Updating d_k and γ_k^T jointly gives a better fit, but what about sparsity? An unconstrained update of γ_k^T could make it fully dense.

Dictionary Learning Algorithms:

K-SVD

Page 16:

We want to solve:

min_{d_k, γ̃_k} ||Ẽ_k − d_k γ̃_k^T||_F^2

Only some of the examples use atom d_k. When updating γ_k^T, re-compute the coefficients only for those examples: restrict E_k and γ_k^T to the columns of the examples that use d_k, obtaining Ẽ_k and γ̃_k^T. This preserves sparsity. Solve with the SVD of Ẽ_k.

Dictionary Update Stage:

Dictionary Learning Algorithms:

K-SVD

Page 17:

Summary:

min_{D,Γ} Σ_{i=1…N} ||x_i − Dγ_i||_2^2   subject to   ||γ_i||_0 ≤ T, i = 1…N

Input: X, sparsity threshold T
Initialization: set a random normalized dictionary matrix D(0) ∈ R^{n×K}. Set J = 1.
Repeat until convergence (or for a fixed number of iterations):
  Sparse Coding Stage: use any pursuit algorithm to compute γ_i for i = 1…N:
    min_{γ_i} ||x_i − Dγ_i||_2^2   s.t.   ||γ_i||_0 ≤ T
  Codebook Update Stage: for k = 1…K
  • Define the group of examples that use d_k: ω_k = {i | 1 ≤ i ≤ N, γ_i(k) ≠ 0}
  • Compute E_k = X − Σ_{j≠k} d_j γ_j^T
  • Restrict E_k by choosing only the columns corresponding to the examples in ω_k, obtaining E_k^R
  • Apply the SVD decomposition E_k^R = U Δ V^T
  • Update: d_k = u_1, γ_k^R = Δ(1,1) · v_1
  Set J = J + 1.
Output: D, Γ

Dictionary Learning Algorithms:

K-SVD
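As a compact sketch of this summary (our own illustration, reusing the omp() sketch from the OMP slide; data shapes are arbitrary):

```python
import numpy as np

def ksvd(X, K, T, n_iter=80):
    """Alternate sparse coding (OMP) and per-atom rank-1 SVD updates."""
    n, N = X.shape
    D = np.random.randn(n, K)
    D /= np.linalg.norm(D, axis=0)        # random normalized D(0)
    for _ in range(n_iter):
        # Sparse Coding Stage: one OMP problem per example
        G = np.column_stack([omp(D, X[:, j], T) for j in range(N)])
        # Codebook Update Stage
        for k in range(K):
            omega = np.nonzero(G[k, :])[0]                # examples that use d_k
            if omega.size == 0:
                continue                                  # unused atom: skip
            E_k = X - D @ G + np.outer(D[:, k], G[k, :])  # X - sum_{j!=k} d_j g_j^T
            E_kR = E_k[:, omega]                          # restrict: keeps sparsity
            U, s, Vt = np.linalg.svd(E_kR, full_matrices=False)
            D[:, k] = U[:, 0]                             # d_k = u_1
            G[k, omega] = s[0] * Vt[0, :]                 # gamma_k^R = Delta(1,1) v_1
    return D, G
```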

Page 18:

Non-Linear Dictionary Learning Problem Formulation

Let Φ: R^n → H be a mapping into a feature space H, and let Φ(X) denote the mapped data X ∈ R^{n×N}. D is the sought dictionary in H.

The goal is to learn the non-linear dictionary in the feature space H by solving

min_{D,Γ} ||Φ(X) − DΓ||_F^2   s.t.   ||γ_i||_0 ≤ T  ∀i        (1)

Proposition 1**: There exists an optimal solution D* to (1) that has the following form:

D* = Φ(X)A,   for some A ∈ R^{N×K}

Equation (1) can now be written as

min_{A,Γ} ||Φ(X) − Φ(X)AΓ||_F^2   s.t.   ||γ_i||_0 ≤ T  ∀i

**For the proof, refer to Appendix VI of H. Van Nguyen, V. M. Patel, N. M. Nasrabadi, and R. Chellappa, "Design of non-linear kernel dictionaries for object recognition," IEEE Trans. Image Process., vol. 22, no. 12, pp. 5123–5135, Dec. 2013.

Page 19:

Non-Linear Dictionary Learning Problem Formulation

||Φ(X) − Φ(X)AΓ||_F^2 = ||Φ(X)(I − AΓ)||_F^2
  = tr( (I − AΓ)^T Φ(X)^T Φ(X) (I − AΓ) )
  = tr( (I − AΓ)^T K(X, X) (I − AΓ) )

where K(X, X) = Φ(X)^T Φ(X) is a kernel matrix.

This motivates formulating the dictionary learning in the feature space using the kernel trick.
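As a small numerical illustration of this identity (our own sketch; poly_kernel and kernel_objective are hypothetical helper names, with the polynomial kernel chosen to match the later experiments), the feature-space objective can be evaluated from the kernel matrix alone, without ever forming Φ(X):

```python
import numpy as np

def poly_kernel(X, Y, degree=4, c=1.0):
    """Polynomial kernel K(x, y) = (x^T y + c)^degree; samples are columns."""
    return (X.T @ Y + c) ** degree

def kernel_objective(KXX, A, G):
    """||Phi(X) - Phi(X) A Gamma||_F^2 = tr((I - A G)^T K(X,X) (I - A G))."""
    M = np.eye(KXX.shape[0]) - A @ G
    return float(np.trace(M.T @ KXX @ M))

# Usage: KXX = poly_kernel(X, X) for training data X (n x N, columns = samples)
```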

Page 20:

Kernel-Orthogonal Matching Pursuit

min_γ ||Φ(z) − Φ(X)Aγ||_2^2   subject to   ||γ||_0 ≤ T

Input: z, kernel K, sparsity threshold T, coefficient matrix A
1. Initialize: active set S = ∅, coefficients γ = 0, residual r = Φ(z)
2. for iter = 1…T do
3.   Select the atom which most reduces the objective. In Hilbert space d_i = Φ(X)a_i and r = Φ(z) − Φ(X)Aγ, so
       ⟨d_i, r⟩ = ( K(z, X) − γ^T A^T K(X, X) ) a_i
       i ← argmax_{i ∈ S^C} |( K(z, X) − γ^T A^T K(X, X) ) a_i|
4.   Update the active set: S ← S ∪ {i}
5.   Update the coefficients. With D_S = Φ(X)A_S,
       γ_S ← (D_S^T D_S)^{-1} D_S^T Φ(z) = ( A_S^T K(X, X) A_S )^{-1} A_S^T K(X, z)
     (the residual is never formed explicitly; it enters only through these kernel expressions)
6. end for
7. Output: γ
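A numpy sketch of KOMP following the updates above (our own illustration): KzX holds the length-N vector of kernel values K(x_j, z), and KXX is the N×N kernel matrix K(X, X):

```python
import numpy as np

def komp(KzX, KXX, A, T):
    """Kernel OMP: sparse-code Phi(z) over the dictionary Phi(X)A using
    only kernel evaluations (the kernel trick)."""
    n_atoms = A.shape[1]
    S = []
    gamma = np.zeros(n_atoms)
    for _ in range(T):
        # Step 3: correlations <d_i, r> = (K(z,X) - gamma^T A^T K(X,X)) a_i
        tau = (KzX - (A @ gamma) @ KXX) @ A
        i = int(np.argmax(np.abs(tau)))   # already-selected atoms have ~0 correlation
        S.append(i)                       # step 4
        A_S = A[:, S]
        # Step 5: gamma_S = (A_S^T K(X,X) A_S)^{-1} A_S^T K(X,z)
        gamma[:] = 0.0
        gamma[S] = np.linalg.solve(A_S.T @ KXX @ A_S, A_S.T @ KzX)
    return gamma
```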

Page 21:

Kernel Dictionary Learning Algorithms: Kernel K-SVD

Summary:

min_{A,Γ} Σ_{i=1…N} ||Φ(x_i) − Φ(X)Aγ_i||_2^2   subject to   ||γ_i||_0 ≤ T, i = 1…N

Input: X, sparsity threshold T, kernel function K
Initialization: set a random normalized coefficient matrix A(0) ∈ R^{N×K}. Set J = 1.
Repeat until convergence (or for a fixed number of iterations):
  Sparse Coding Stage: use KOMP to compute γ_i for i = 1…N, given A(J−1).
  Codebook Update Stage: for each column a_k^{(J−1)} in A^{(J−1)}, k = 1…K
  • Define the group of examples that use a_k: ω_k = {i | 1 ≤ i ≤ N, γ_i(k) ≠ 0}
  • Compute the error. Φ(X) − Φ(X)AΓ can be written as
      ||Φ(X) − Φ(X) Σ_j a_j γ_j^T||_F^2
        = ||Φ(X)(I − Σ_{j≠k} a_j γ_j^T) − Φ(X) a_k γ_k^T||_F^2
        = ||Φ(X)E_k − Φ(X)M_k||_F^2
    where E_k = (I − Σ_{j≠k} a_j γ_j^T) and M_k = a_k γ_k^T.
  • Restrict E_k by choosing only the columns corresponding to the examples that use a_k, obtaining E_k^R.
  • Apply the SVD through the kernel matrix: (E_k^R)^T K(X, X) E_k^R = V Δ V^T, and choose the updated atom
      a_k^{(J)} = σ_1^{-1} E_k^R v_1
    where v_1 is the first column of V, corresponding to the largest singular value σ_1 (σ_1^2 in Δ).
  Set J = J + 1.
Output: A, Γ
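A sketch of a single codebook-update step (our own illustration; the coefficient update γ_k^R = σ_1 v_1^T mirrors the linear K-SVD slide and is an assumption here). The SVD of Φ(X)E_k^R is recovered from the eigendecomposition of (E_k^R)^T K(X,X) E_k^R, so Φ(X) is never formed:

```python
import numpy as np

def kernel_atom_update(KXX, A, G, k):
    """Update column a_k of A and row gamma_k of G in place."""
    omega = np.nonzero(G[k, :])[0]        # examples that use atom k
    if omega.size == 0:
        return A, G                       # unused atom: skip
    N = KXX.shape[0]
    E_k = np.eye(N) - A @ G + np.outer(A[:, k], G[k, :])  # I - sum_{j!=k} a_j g_j^T
    E_kR = E_k[:, omega]
    M = E_kR.T @ KXX @ E_kR               # = V Sigma^2 V^T
    w, V = np.linalg.eigh(M)              # ascending eigenvalues
    sigma1, v1 = np.sqrt(max(w[-1], 0.0)), V[:, -1]
    A[:, k] = E_kR @ v1 / sigma1          # a_k = sigma_1^{-1} E_k^R v_1
    G[k, :] = 0.0
    G[k, omega] = sigma1 * v1             # gamma_k^R = sigma_1 v_1^T (assumed)
    return A, G
```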

Page 22:

Results and Discussion

• Synthetic data (not discussed)
• Kernel Sparse Representation
• Digit Recognition
• Caltech-101 and Caltech-256 Object Recognition

Kernel Sparse Representation

Compares the mean-squared error (MSE) of an image from the USPS dataset when approximated using the first m dominant kernel PCA components versus m kernel dictionary atoms, for m = 1, …, 20 (i.e., T0 = m).

“It is clearly seen that the MSE decays much faster for kernel KSVD than kernel PCA with respect to the number of selected bases. This observation implies that the image is nonlinearly sparse and learning a dictionary in the high dimensional feature space can provide a better representation of data.”

Page 23:

Results and Discussion

• Kernel Sparse Representation
• Digit Recognition
• Caltech-101 and Caltech-256 Object Recognition

Digit Recognition

Dataset
• USPS handwritten digit database.
• Image size 16×16, making the vector dimension 256; number of classes = 10 (one per digit).
• Ntrain = 500 training samples and Ntest = 200 test samples per class.

Parameter Selection
• Parameters are selected through 5-fold cross-validation:
  o K (dictionary size) = 300 atoms
  o T0 (sparsity constraint) = 5
  o Maximum number of training iterations = 80
  o Kernel type: polynomial kernel of degree 4

Page 24:

Results and Discussion: Digit Recognition

Approach 1: Distributive Approach

Training: a separate dictionary is learned for each class using Kernel K-SVD:
  X_1 = [x_{1,1}, …, x_{1,N}] ∈ R^{256×500}  →  A_1 ∈ R^{500×300}
  …
  X_10 = [x_{10,1}, …, x_{10,N}] ∈ R^{256×500}  →  A_10 ∈ R^{500×300}

Testing: given a test image z ∈ R^256, perform KOMP with each dictionary to obtain γ_1 ∈ R^300 (using A_1), …, γ_10 ∈ R^300 (using A_10), and compute the residuals

  r_i(z)^2 = ||Φ(z) − Φ(X_i)A_i γ_i||_2^2 = K(z,z) − 2 K(z,X_i) A_i γ_i + γ_i^T A_i^T K(X_i,X_i) A_i γ_i,   i ∈ [1, …, 10]

  class(z) = argmin_{i=1…10} r_i(z)

Pre-images of learned atoms: since the dictionary is learned in the kernel space, the atoms in the dictionary need to be converted back to the Euclidean space to be viewed**.

** J. T.-Y. Kwok and I. W.-H. Tsang, “The pre-image problem in kernel methods,” IEEE Trans. Neural Netw., vol. 15, no. 6, pp. 1517–1525, Nov. 2004.
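A sketch of this distributive rule (our own illustration, reusing the poly_kernel and komp sketches from earlier slides; names and defaults are ours):

```python
import numpy as np

def class_residual_sq(z, Xi, Ai, T=5, degree=4):
    """r_i(z)^2 = K(z,z) - 2 K(z,X_i) A_i g + g^T A_i^T K(X_i,X_i) A_i g."""
    Kzz = poly_kernel(z[:, None], z[:, None], degree)[0, 0]
    KzX = poly_kernel(Xi, z[:, None], degree)[:, 0]   # K(X_i, z), length N
    KXX = poly_kernel(Xi, Xi, degree)
    g = komp(KzX, KXX, Ai, T)
    Ag = Ai @ g
    return Kzz - 2.0 * (KzX @ Ag) + Ag @ KXX @ Ag

def classify(z, Xs, As):
    """class(z) = argmin_i r_i(z); Xs and As are per-class lists."""
    return int(np.argmin([class_residual_sq(z, Xi, Ai) for Xi, Ai in zip(Xs, As)]))
```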

Page 25:

Results and Discussion

Digit Recognition

Approach 2: Collective Approach

Training: as in the distributive approach, a separate dictionary A_i ∈ R^{500×300} is learned for each class X_i = [x_{i,1}, …, x_{i,N}] ∈ R^{256×500}, i = 1…10, using Kernel K-SVD.

Testing: given a test image z ∈ R^256, perform KOMP on the joined dictionary [A_1, …, A_10] with sparsity constraint T = 10 (in this case), obtaining γ ∈ R^3000. Compute the per-class residuals r_1, …, r_10 (with γ_i the part of γ associated with A_i):

  r_i(z)^2 = ||Φ(z) − Φ(X_i)A_i γ_i||_2^2 = K(z,z) − 2 K(z,X_i) A_i γ_i + γ_i^T A_i^T K(X_i,X_i) A_i γ_i,   i ∈ [1, …, 10]

  class(z) = argmin_{i=1…10} r_i(z)

Page 26:

Results and Discussion: Digit Recognition

Performance comparison of KSVD, kernel PCA, kernel K-SVD, and kernel MOD under different scenarios: missing pixels and varying noise.

[Figure] Comparison of digit recognition accuracies for the different methods in the presence of Gaussian noise and missing-pixel effects. Red and orange represent the distributive and collective classification approaches for kernel KSVD, respectively. To avoid clutter, only the distributive approach is reported for kernel MOD, which gives the better performance on this dataset. (a) Missing pixels. (b) Gaussian noise.

The second set of experiments examines the effect of parameter choices on overall recognition performance. The figure shows the classification accuracy of kernel KSVD on the USPS dataset as the degree of the polynomial kernel is varied; the best error rate of 1.6% is achieved with a polynomial degree of 4.

Page 27:

Results and Discussion: Caltech-101 and Caltech-256 Object Recognition

Dataset
• Caltech-101 database.
• 101 object classes and 1 background class, collected randomly from the Internet.
• Each category contains 31-800 images.
• Average image size = 300×300 pixels.
• Diverse and challenging dataset: it includes objects such as buildings, musical instruments, animals, and natural scenes.

Parameter Selection
• Parameters are selected through 5-fold cross-validation:
  o K (dictionary size) = 300 atoms
  o T0 (sparsity constraint) = 5
  o Maximum number of training iterations = 80
  o Kernel type: polynomial kernel of degree 4

Page 28:

Results and Discussion: Caltech-101 and Caltech-256 Object Recognition

Confusion matrix of kernel KSVD recognition performance on the Caltech-101 dataset. The rows and columns correspond to true and predicted labels, respectively. The dictionary is learned from 3030 images, where each class contributes 30 images. The sparsity is set to 30 for both training and testing. Although the confusion matrix contains all classes, only a subset of class labels is displayed for legibility.

• Train on N images per class, where N = {5, 10, 15, 20, 25, 30}, and test on the rest.
• Some categories are very small, so they may end up with just a single test image.
• To compensate for the variations in class size, the recognition results are normalized by the number of test images to get per-class accuracies.
• The final result is obtained by averaging per-class accuracies across 102 categories.

Page 29:

Results and Discussion: Caltech-101 and Caltech-256 Object Recognition

Performance Comparison on the Caltech-101 Dataset (recognition accuracy, %)

#train samples   |    5 |   10 |   15 |   20 |   25 |   30
Griffin [11]     | 44.2 | 54.5 | 59.0 | 63.3 | 65.8 | 67.6
Gemert [12]      |    - |    - |    - |    - |    - | 64.16
Yang [13]        |    - |    - | 67.0 |    - |    - | 73.2
KSVD [8]         | 49.8 | 59.8 | 65.2 | 68.7 | 71.0 | 73.2
LC-KSVD [14]     | 54.0 | 63.1 | 67.7 | 70.5 | 72.3 | 73.6
D-Kernel (MOD)   | 53.8 | 64.0 | 70.2 | 73.8 | 76.4 | 78.1
C-Kernel (MOD)   | 56.2 | 67.7 | 72.4 | 75.6 | 77.5 | 80.0
D-Kernel (KSVD)  | 54.2 | 64.5 | 70.2 | 74.0 | 76.5 | 78.5
C-Kernel (KSVD)  | 56.5 | 67.2 | 72.5 | 75.8 | 77.6 | 80.1

Performance Comparison on the Caltech-256 Dataset (recognition accuracy, %)

#train samples   |   15 |   30
Griffin [11]     | 28.3 | 34.1
Gemert [12]      |    - | 27.2
Yang [13]        | 34.4 | 41.2
D-Kernel (MOD)   | 34.2 | 41.2
C-Kernel (MOD)   | 34.6 | 42.7
D-Kernel (KSVD)  | 34.5 | 41.4
C-Kernel (KSVD)  | 34.8 | 42.5

Page 30:

References

1. R. Tibshirani, "Regression Shrinkage and Selection via the Lasso," J. R. Stat. Soc. Ser. B, vol. 58, no. 1, pp. 267–288, 1996.

2. S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic Decomposition by Basis Pursuit,” SIAM J. Sci. Comput., vol. 20, no. 1, pp. 33–61, Jan. 1998.

3. S. G. Mallat, “Matching pursuits with time-frequency dictionaries,” IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397–3415, 1993.

4. Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, “Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition,” in Proceedings of 27th Asilomar Conference on Signals, Systems and Computers, 1993, pp. 40–44.

5. A. Haar, “Zur Theorie der orthogonalen Funktionensysteme,” Math. Ann., vol. 71, no. 1, pp. 38–53, Mar. 1911.

6. E. J. Candes and D. L. Donoho, “Curvelets: A surprisingly effective nonadaptive representation for objects with edges,” 2000.

7. B. A. Olshausen and D. J. Field, "Sparse coding with an overcomplete basis set: a strategy employed by V1?," Vision Res., vol. 37, no. 23, pp. 3311–3325, Dec. 1997.

8. M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation,” IEEE Trans. Signal Process., vol. 54, no. 11, pp. 4311–4322, Nov. 2006.

9. H. Van Nguyen, V. M. Patel, N. M. Nasrabadi, and R. Chellappa, "Design of non-linear kernel dictionaries for object recognition," IEEE Trans. Image Process., vol. 22, no. 12, pp. 5123–5135, Dec. 2013.

10. J. T. Kwok and I. W. Tsang, "The pre-image problem in kernel methods," IEEE Trans. Neural Netw., vol. 15, no. 6, pp. 1517–1525, Nov. 2004.

11. G. Griffin, A. Holub, and P. Perona, “Caltech-256 Object Category Dataset.” California Institute of Technology, 10-Mar-2007.

12. J. C. van Gemert, J.-M. Geusebroek, C. J. Veenman, and A. W. M. Smeulders, “Kernel codebooks for scene categorization,” in Computer Vision--ECCV 2008, Springer, 2008, pp. 696–709.

13. J. Yang, K. Yu, Y. Gong, and T. Huang, "Linear spatial pyramid matching using sparse coding for image classification," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 1794–1801.

14. Z. Jiang, Z. Lin, and L. S. Davis, “Learning a discriminative dictionary for sparse coding via label consistent K-SVD,” in CVPR 2011, 2011, pp. 1697–1704.

Page 31:

Thank You

Q&A (let Q be sparse)