How to come up with new research ideas


DESCRIPTION

Computer vision has been studied for more than 40 years. Due to the increasingly diverse and rapidly developing topics in vision and related fields (e.g., machine learning, signal processing, cognitive science), coming up with new research ideas is often daunting for junior graduate students in this field. In this talk, I present five methods for coming up with new research ideas. For each method, I give several examples (i.e., existing works in the literature) to illustrate how the method works in practice. This is a common-sense talk and will not have complicated math equations and theories. Note: The content of this talk is inspired by the "Raskar Idea Hexagon" - Prof. Ramesh Raskar's talk on "How to come up with new Ideas". To download the presentation slides with videos, please visit http://jbhuang0604.blogspot.com/2010/05/how-to-come-up-with-new-research-ideas.html For the video lecture (in Chinese), please visit http://jbhuang0604.blogspot.com/2010/06/blog-post_14.html

Citation preview

How to Come Up With New Research Ideas?

Jia-Bin Huang (jbhuang0604@gmail.com)

Taiwan, May 2010

1 / 94

What is this talk about?
- Five approaches to come up with new ideas in computer vision.
- Extensive case studies (more than one hundred papers).
- A common-sense talk: no complicated theories or equations.

I wish someone told me this before.

Reference: The content of this talk is greatly inspired by "Raskar Idea Hexagon".

2 / 94


Outline

1 Introduction

2 Five ways to come up with new ideas
  - Seek different dimensions: neXt = X^d
  - Combine two or more topics: neXt = X + Y
  - Re-think the research directions: neXt = X̄
  - Use powerful tools, find suitable problems: neXt = X ↑
  - Add an appropriate adjective: neXt = Adj + X

3 What is a bad idea?

3 / 94


Active Topics in Computer Vision [Szeliski, Computer Vision: Algorithms and Applications, 2010]

- Digital image processing, Blocks world / line labeling
- Generalized cylinders, Pictorial structures
- Stereo correspondence, Intrinsic images
- Optical flow, Structure from motion
- Image pyramids, Scale-space processing
- Shape from X, Physically-based modeling
- Regularization, Markov Random Fields
- Kalman filters, 3D range data processing
- Projective invariants, Factorization
- Physics-based vision, Graph cuts
- Particle filtering, Energy-based segmentation
- Face recognition and detection, Subspace methods
- Image-based modeling/rendering, Texture synthesis/inpainting
- Computational photography, Feature-based recognition
- MRF inference algorithms, Learning

5 / 94

What can we learn from the past?

The topics are diverse and evolve over time.

The ways to come up with new ideas are similar. There are patterns to follow.

6 / 94


Seek different dimensions: neXt = X^d

"The only difference between a rut and a grave is their dimensions." - Ellen Glasgow

9 / 94

Seek different dimensions: neXt = X^d

Idea: Can we increase/replace/transform the dimensions of the original problem to get new problems/solutions?

What kind of dimensions can we work on?
1 Concrete dimensions (e.g., space, time, frequency)
2 Abstract dimensions (e.g., properties)

10 / 94

EX 1-1. Content-Aware Media Resizing [Avidan et al. SIGGRAPH 07] [Rubinstein et al. SIGGRAPH 08]

Ideas:
- Extend dimensions from 2D images to 3D video: image re-targeting ⇒ video re-targeting
- Other dimensions? E.g., 4D light field, infrared images, range images.

11 / 94
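Content-aware resizing works by repeatedly removing low-energy seams. A minimal sketch of the dynamic program at the core of seam carving (a toy illustration, not the authors' implementation; the energy map is assumed precomputed, e.g., from gradient magnitudes):

```python
import numpy as np

def find_vertical_seam(energy):
    """Dynamic programming: the minimum-energy vertical seam (one pixel per row)."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for i in range(1, h):
        # each pixel may continue from the upper-left, upper, or upper-right neighbor
        left = np.concatenate(([np.inf], cost[i - 1, :-1]))
        right = np.concatenate((cost[i - 1, 1:], [np.inf]))
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    # backtrack from the cheapest bottom cell
    seam = np.empty(h, dtype=int)
    seam[-1] = int(cost[-1].argmin())
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam[i] = lo + int(cost[i, lo:hi].argmin())
    return seam
```

Removing the returned seam yields an image one column narrower; iterating re-targets the width. Extending the same idea along the time axis is what turns image re-targeting into video re-targeting.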

EX 1-2. Video Stitching [Rav-Acha et al. CVPR 05]

(Figure: input video, dynamic panorama)

Ideas:
- Extend dimensions from image to video: image panorama ⇒ video mosaics with non-chronological time
- Increase the time dimension in both input and output.

12 / 94

EX 1-3. Multi-Image Fusion [Agarwala et al. SIGGRAPH 04]

Ideas:
- Extend from a single input image to multiple input images ⇒ Digital Photomontage
- Increase the dimension in the input only.

13 / 94

EX 1-4. Computational Photography (Coded Photography) [Raskar et al. SIGGRAPH 04, 06, 08] [Levin et al. SIGGRAPH 07]

Ideas:
- Coded photography: reversibly encode information about the scene in a single photograph
- Coding in time (exposure), coded illumination, coding in space (aperture), and coded wavelength
- Replace the dimension used to code information of the light field

14 / 94

EX 1-1. Photography in Low Light Conditions

(Figure: flash, blurred, noisy)

What can we do?
- Flash → changes the overall scene appearance (cold and gray)
- Long exposure time (hand shake) → blurred image
- Short exposure time (insufficient light) → noisy image

15 / 94

EX 1-1-1. Flash/No-Flash Photography [Petschnigg et al. SIGGRAPH 2004]

(Figure: flash, no flash, detail transfer with denoising)

Ideas:
- The original problem (taking a good photo in a low-light environment from a single image) is difficult.
- Increasing the dimension of the input (a flash/no-flash image pair) makes the problem much easier.

16 / 94

EX 1-1-2. Image Deblurring with Blurred/Noisy Image Pairs [Yuan et al. SIGGRAPH 2007]

(Figure: blurred, noisy, enhanced noisy, deblurred result)

Ideas:
- The original problem (taking a good photo in a low-light, flash-prohibited environment from a single image) is difficult.
- Increasing the dimension of the input (a blurred/noisy image pair) makes the problem much easier.

17 / 94

EX 1-1-3. Robust Flash Deblurring [Zhou et al. CVPR 2010]

Ideas:
- The original problem (taking a good photo in a low-light environment from a single image) is difficult.
- Increasing the dimension of the input (a blurred/flash image pair) makes the problem much easier.

18 / 94

EX 1-1-4. Dark Flash Photography [Krishnan et al. SIGGRAPH 2009]

Ideas:
- The original problem (taking a good photo in a low-light environment from a single image) is difficult.
- Increasing the dimension of the input (a dark-flash/noisy image pair) makes the problem much easier.

19 / 94

EX 1-2. Brute-Force Vision [Hays and Efros SIGGRAPH 07] [Dale et al. ICCV 09] [Agarwal et al. ICCV 09] [Furukawa et al. ICCV 09]

Idea: Utilize a large collection of photos.

20 / 94

EX 2-1. X Alignment/Registration (pixel, object, scene) [Liu et al. CVPR 08, ECCV 08] [Berg et al. CVPR 05]

21 / 94

EX 2-2. Shape from X (shading, texture, specular) [Lobay and Forsyth IJCV 06] [Fleming et al. JOV 04] [Adato et al. ICCV 07]

(Figure: shading, specular, texture, specular flow)

22 / 94

EX 2-3. Depth from X (stereo, (de-)focus, coded aperture, diffusion, occlusion, semantic label) [Levin et al. SIGGRAPH 07] [Hoiem et al. ICCV 07] [Liu et al. CVPR 10] [Zhou et al. CVPR 10]

(Figure: coded aperture, semantic labels, occlusion, diffusion)

23 / 94

EX 2-4. Infer X from a single image (geometric, geography, illumination) [Hoiem et al. ICCV 05] [Hays and Efros CVPR 08] [Lalonde et al. ICCV 09]

(Figure: geometric, geography, illumination)

24 / 94


Combine two or more topics neXt = X + Y

"To steal ideas from one person is plagiarism. To steal from many is research." - Wilson Mizner

26 / 94

Combine two or more topics neXt = X + Y

Idea: Can we combine two or more topics to get new problems or solutions?

What kind of topics can we combine?
1 X, Y are methods
2 X, Y are problems
3 X, Y are areas

27 / 94

EX 1-1. Viola-Jones Object Detection Framework [Viola and Jones CVPR 2001]

(Figure: simple features, integral image, boosting, cascade structure)

Ideas:
- Paper title: Rapid Object Detection using a Boosted Cascade of Simple Features
- Viola-Jones object detection framework = integral images / simple features (1984) + AdaBoost (1997) + cascade architecture (known for a long time)

28 / 94
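The "integral image" ingredient is simple enough to sketch: a summed-area table makes any rectangle sum, and hence any Haar-like feature, a constant number of array lookups (a sketch of the general technique, not the paper's code):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = img[:y, :x].sum()."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom, left:right] from just four lookups."""
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]

# A two-rectangle Haar-like feature is then a difference of two box sums, e.g.:
# feature = box_sum(ii, t, mid, b, r) - box_sum(ii, t, l, b, mid)
```

Since every feature costs O(1) regardless of its size, the detector can evaluate huge pools of features fast, which is what makes the boosted cascade "rapid".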

EX 1-2. SIFT Flow = SIFT + Optical Flow [Liu et al. ECCV 08, CVPR 09]

(Figure: motion hallucination, label transfer)

Idea: dense sampling in time : optical flow :: dense sampling in the space of world images : SIFT flow

29 / 94

EX 1-3. Visual Tracking with Online Multiple Instance Boosting [Babenko et al. CVPR 09]

Idea: MILTrack = Multiple Instance Boosting (2005) + Online Boosting Tracking (2006)

30 / 94

EX 2-1. High Dynamic Range Image Reconstruction from Hand-held Cameras [Lu et al. CVPR 2009]

Idea: HDR from hand-held cameras = high dynamic range image reconstruction + image deblurring

31 / 94

EX 2-2. Human Body Understanding [Guan et al. ICCV 09]

Idea: human body understanding = shape reconstruction + pose estimation

32 / 94

EX 2-3. Image Understanding (detection, tracking, recognition, segmentation, reconstruction, scene classification, event recognition)

33 / 94

EX 2-3-1. Detection + Tracking [Andriluka et al. CVPR 08]

Ideas:
- People detection and people tracking are highly correlated problems.
- Combining the two problems can potentially improve performance on each individual task.

34 / 94

EX 2-3-2. Object Attributes + Recognition [Farhadi et al. CVPR 09] [Lampert et al. CVPR 09]

Ideas:
- Describe images by their attributes
- Enable knowledge transfer to recognize classes with no visual examples

35 / 94

EX 2-3-2. Object Recognition + Detection [Yeh et al. CVPR 09]

Idea: concurrent object localization and recognition

36 / 94

EX 2-3-3. Image Segmentation + Object Recognition + Event Recognition [Li et al. CVPR 09]

Ideas:
- Combine scene classification, image segmentation, and image annotation
- All three tasks are mutually beneficial

37 / 94

EX 3-1. SixthSense - A Wearable Gestural Interface [Mistry and Maes TED 2009]

Idea: SixthSense = computer vision (e.g., tracking, recognition) + Internet

38 / 94

EX 3-2. Sikuli: Picture-Driven Computing [Yeh et al. UIST 09] [Chang et al. CHI 10]

Ideas: 1. readability/usability; 2. GUI serialization; 3. computer vision on computer-generated figures

39 / 94


Re-think the research directions neXt = X̄

"If at first the idea is not absurd, then there is no hope for it." - Albert Einstein

41 / 94

Re-think the research directions neXt = X̄

Ideas: Do the current research directions really make sense? What is the key problem?

What could we do?
1 Re-formulate the original problem.
2 Analyze and compare existing approaches; provide insight into the problems.

42 / 94

EX 1-1. Beyond Sliding Windows [Lampert et al. CVPR 08]

(Figure: rectangle set, branch-and-bound search)

Ideas:
- Sliding-window search ⇔ branch-and-bound search
- Represent a set of rectangles with 4 intervals
- Use branch-and-bound to find the optimal rectangle (object localization) efficiently

43 / 94

EX 1-2. Beyond Categories [Malisiewicz and Efros CVPR 08, NIPS 09]

Ideas:
- Explicit categorization ⇔ implicit categorization
- Ask "what is this like?" (association) instead of "what is it?" (categorization)

44 / 94

EX 1-3. Motion-Invariant Photography [Levin et al. SIGGRAPH 08] [Cho et al. ICCP 10]

Ideas:
- Still camera ⇔ moving camera (parabolic exposures)
- Enables spatially-invariant blur kernel estimation

45 / 94

EX 1-4. Super-Resolution from a Single Image [Glasner et al. ICCV 09]

Idea: classical multi-image SR / example-based SR ⇔ a single unified SR framework

46 / 94

EX 2-1. In Defense of ... [Boiman et al. CVPR 08] [Hartley PAMI 97]

Nearest-neighbor based image classification:
- Avoid the practices that degrade NN classifiers: quantization of local image descriptors (used to generate "bags-of-words" codebooks), and computing "image-to-image" distance instead of "image-to-class" distance.
- The resulting performance ranks among the top leading learning-based image classifiers.

The 8-point algorithm for the fundamental matrix:
- Normalization, normalization, normalization!
- Performs almost as well as the best iterative algorithms.

47 / 94

EX 2-2. Understanding Blind Deconvolution [Levin et al. CVPR 2009]

Ideas:
- Blind deconvolution: recover the sharp image x from the blurred observation y = k ⊗ x + n.
- MAP_{x,k} estimation often favors no-blur explanations.
- MAP_k can be accurately estimated, since the kernel size is much smaller than the image size.
- Blind deconvolution should be addressed as MAP_k kernel estimation followed by non-blind deconvolution.

48 / 94
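The degradation model is easy to make concrete. A toy 1D illustration of y = k ⊗ x + n (the box signal, kernel, and noise level are made up for illustration); blind deconvolution is the inverse problem of recovering x, and k, from y alone:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.zeros(64)
x[20:40] = 1.0                          # sharp "image": a 1D box signal
k = np.array([0.25, 0.5, 0.25])         # blur kernel, normalized to sum to 1
n = 0.01 * rng.standard_normal(64)      # sensor noise
y = np.convolve(x, k, mode="same") + n  # observation: y = k conv x + n
```

Note the asymmetry the paper exploits: k has 3 unknowns here while x has 64, which is why estimating k alone (MAP_k) is far better conditioned than estimating x and k jointly.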

EX 2-3. Understanding Camera Trade-offs [Levin et al. ECCV 08]

Ideas:
- Traditional optics evaluation: 2D image sharpness (e.g., the Modulation Transfer Function)
- Modern camera evaluation: how well does the recorded data allow us to estimate the visual world, i.e., the light field?

49 / 94

EX 2-4. What Is a Good Image Segment? [Bagon et al. ECCV 08]

Idea: A good image segment is one that can be easily composed from its own pieces, but is difficult to compose from pieces of other parts of the image.

50 / 94

EX 2-5. Lambertian Reflectance and Linear Subspaces [Basri and Jacobs PAMI 03]

Ideas:
- The set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace.
- Explains prior empirical results using linear subspace methods.

51 / 94


Use powerful tools, find suitable problems neXt = X ↑

"If the only tool you have is a hammer, you tend to see every problem as a nail." - Abraham Maslow

53 / 94

Use powerful tools, find suitable problems neXt = X ↑

What kinds of tools should we understand?
- Calculus of variations
- Dimensionality reduction
- Spectral methods (specifically, spectral clustering)
- Probabilistic graphical models
- Structured prediction
- Bilateral filtering
- Sparse representation
- and more: spectral theory, information theory, (convex) optimization, etc.

54 / 94

EX 1. Calculus of Variations (1/2)

From Calculus to Calculus of Variations:

- Calculus works with functions f: R^n → R; calculus of variations works with functionals (functions of functions) f: F → R, e.g., f(u) = ∫_{x1}^{x2} L(x, u(x), u'(x)) dx.
- Derivative: df(x)/dx = lim_{Δx→0} [f(x + Δx) − f(x)] / Δx. Variation: δf(u)/δu, via lim_{ε→0} [f(u + ε δu) − f(u)] / ε = ∂/∂ε f(u + ε δu)|_{ε=0}.
- Local extremum: df(x)/dx = 0 in calculus; the Euler-Lagrange equation in calculus of variations.

Total Variation (TV): TV(y) = ∫_{x0}^{x1} |y'| dx, the "oscillation strength" of y(x).

55 / 94

EX 1. Calculus of Variations (2/2)

Total Variation Denoising/Inpainting

Applications in computer vision:
- Optical flow [Horn and Schunck AI 81]
- Shape from shading [Horn and Brooks CVGIP 86]
- Edge detection [PAMI 87]
- Anisotropic diffusion [Perona and Malik PAMI 90]
- Active contours model [Kass et al. IJCV 98]
- Image segmentation [Morel and Solimini 95]
- Image restoration [Aubert and Vese SIAM Journal on NA 97]

56 / 94
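To make the variational viewpoint concrete, here is a toy 1D total-variation denoiser: plain gradient descent on ||x − y||^2 + λ Σ_i sqrt((x_{i+1} − x_i)^2 + ε), a smoothed TV objective (all parameter values are illustrative, not tuned; production solvers use e.g. primal-dual methods instead):

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, eps=1e-3, step=0.02, iters=1000):
    """Gradient descent on ||x - y||^2 + lam * sum sqrt((x_{i+1} - x_i)^2 + eps)."""
    x = y.astype(float).copy()
    for _ in range(iters):
        dx = np.diff(x)
        g = dx / np.sqrt(dx**2 + eps)   # derivative of each smoothed |dx| term
        # x_j appears in the terms for dx_{j-1} (with sign +) and dx_j (with sign -)
        grad_tv = np.concatenate(([0.0], g)) - np.concatenate((g, [0.0]))
        x -= step * (2.0 * (x - y) + lam * grad_tv)
    return x
```

On a noisy step signal this smooths the flat parts while largely keeping the jump, which is exactly the edge-preserving behavior that makes TV popular for denoising and inpainting.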


EX 2. Dimensionality Reduction (1/2)

Why do we need dimensionality reduction?
High-dimensional data is everywhere (e.g., images, human gene distributions, weather prediction), and we need dimensionality reduction for
1 processing data efficiently,
2 estimating the distributions of data accurately (curse of dimensionality),
3 finding meaningful representations of data.

Classification of dimensionality reduction methods:

            | Global structure preserved | Local structure preserved
Linear      | PCA, LDA                   | LPP, NPE
Nonlinear   | ISOMAP, Kernel PCA, DM     | LLE, LE, HE

57 / 94
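As a concrete instance of the linear, global-structure branch of the table, PCA fits in a few lines via the SVD of the centered data matrix (a generic sketch; the function name and return convention are mine):

```python
import numpy as np

def pca(X, d):
    """Project the rows of X (samples) onto the top-d principal components."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # right singular vectors of the centered data = eigenvectors of the covariance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:d]            # (d, n_features)
    scores = Xc @ components.T     # (n_samples, d): low-dimensional representation
    return scores, components, mu
```

Reconstruction is `scores @ components + mu`; the reconstruction error measures how much structure the discarded dimensions carried.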


EX 2. Dimensionality Reduction (2/2)

Applications in computer vision:
- Subspace as constraints: structure from motion [Tomasi and Kanade IJCV 92], optical flow [Irani IJCV 02], layer extraction [Ke and Kanade CVPR 01], face alignment [Saragih et al. ICCV 09]
- Face recognition: PCA [Turk and Pentland PAMI 91], LDA [Belhumeur et al. PAMI 97], LPP [He et al. PAMI 05], random projections [Wright et al. PAMI 09]
- Motion segmentation: subspace separation [Kanatani ICCV 01] [Yan and Pollefeys ECCV 06] [Rao et al. CVPR 08] [Lauer and Schnorr ICCV 09]
- Lighting: linear subspaces [Belhumeur and Kriegman IJCV 98] [Georghiades et al. PAMI 01] [Lee et al. PAMI 05] [Basri and Jacobs PAMI 02]
- Visual tracking: incremental subspace learning [Ross et al. IJCV 08] [Li et al. CVPR 08]

58 / 94


EX 3. Spectral Clustering (1/3)

Why is spectral clustering popular?
- It can be solved efficiently by standard linear algebra software.
- It very often outperforms traditional clustering algorithms.

Spectral clustering algorithm:
Input: a set of data points.
1 Construct a similarity graph, e.g., ε-neighborhood, k-nearest neighbor, or fully connected.
2 Construct a graph Laplacian, e.g., unnormalized or normalized (L, L_rw, L_sym).
3 Compute the first k eigenvectors of L (those with the smallest eigenvalues): v_1, ..., v_k.
4 Let V ∈ R^{n×k} be the matrix containing v_1, ..., v_k as columns.
5 Cluster the row vectors y_i of V with the k-means algorithm into clusters C_1, ..., C_k.
Output: clusters A_1, ..., A_k with A_i = {j | y_j ∈ C_i}.

59 / 94
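The five steps above can be sketched end-to-end in numpy, using a fully connected Gaussian similarity graph and the unnormalized Laplacian (a toy sketch; the deterministic farthest-first k-means is a stand-in for a proper k-means implementation):

```python
import numpy as np

def spectral_clustering(X, k, sigma=1.0, iters=50):
    # 1. similarity graph (fully connected, Gaussian weights)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    # 2. unnormalized graph Laplacian
    L = np.diag(W.sum(axis=1)) - W
    # 3.-4. the k eigenvectors with smallest eigenvalues, as columns of V
    _, vecs = np.linalg.eigh(L)          # eigh sorts eigenvalues ascending
    V = vecs[:, :k]
    # 5. k-means on the rows of V (tiny version, farthest-first initialization)
    C = np.empty((k, k))
    C[0] = V[0]
    for j in range(1, k):
        dists = ((V[:, None, :] - C[None, :j]) ** 2).sum(-1).min(axis=1)
        C[j] = V[dists.argmax()]
    for _ in range(iters):
        labels = ((V[:, None, :] - C[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                C[j] = V[labels == j].mean(axis=0)
    return labels
```

For well-separated clusters the Laplacian is nearly block-diagonal, so the rows of V are nearly cluster indicators and the final k-means step is trivial.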


EX 3. Spectral Clustering (2/3)

Why does it work?
- Graph cut point of view: construct a partition that minimizes the weight across the cut (the well-known mincut problem) while balancing the clusters (e.g., RatioCut, Normalized Cut).
- Random walk point of view: when minimizing Ncut, we actually look for a cut through the graph such that a random walk seldom transitions from one cluster to another.
- Perturbation theory point of view: the distance between the eigenvectors of the ideal and the nearly ideal graph Laplacian is bounded by a constant times a norm of the error matrix. If the perturbations are small enough, the k-means algorithm will still separate the groups from each other.

60 / 94


EX 3. Spectral Clustering (3/3) [Shi and Malik PAMI 02]

Eigenvectors carry contour information.

61 / 94

EX 4. Probabilistic Graphical Model (1/2)

What are probabilistic graphical models?
- A marriage between probability theory and graph theory.
- A natural tool for dealing with uncertainty and complexity.
- A way to view many probabilistic systems (e.g., mixture models, factor analysis, hidden Markov models, Kalman filters, and Ising models) as instances of a common underlying formalism.

62 / 94

EX 4. Probabilistic Graphical Model (2/2)

63 / 94

EX 5. Structured Prediction (1/2)

What is structured prediction?
- A framework for solving classification or regression problems in which the output variables are mutually dependent or constrained.
- Many examples: natural language parsing, machine translation, object segmentation, gene prediction, protein alignment, and numerous other tasks in computational linguistics, speech, vision, and biology.

64 / 94


EX 5. Structured Prediction (2/2): Applications [Lampert et al. ECCV 08] [Desai et al. ICCV 09]

65 / 94

EX 6. Bilateral Filtering (1/3)

What is bilateral filtering?
- A technique to smooth images while preserving edges.
- Ubiquitous in image processing and computational photography.

66 / 94
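A 1D sketch makes the "smooth but keep edges" behavior concrete: each output sample is a weighted average whose weights combine a spatial Gaussian with a range Gaussian on intensity differences (parameter names and default values are illustrative):

```python
import numpy as np

def bilateral_1d(signal, radius=3, sigma_s=2.0, sigma_r=0.2):
    """Edge-preserving smoothing: weight neighbors by distance AND intensity difference."""
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-offsets**2 / (2 * sigma_s**2))   # domain (spatial) weights
    padded = np.pad(signal, radius, mode="edge")
    out = np.empty(len(signal))
    for i in range(len(signal)):
        window = padded[i : i + 2 * radius + 1]
        range_w = np.exp(-((window - signal[i]) ** 2) / (2 * sigma_r**2))
        w = spatial * range_w
        out[i] = (w * window).sum() / w.sum()
    return out
```

Across an edge the range weights vanish, so the two sides are averaged separately; within flat regions the filter behaves like an ordinary Gaussian blur.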

EX 6. Bilateral Filtering (2/3) [Bennett and McMillan SIGGRAPH 05] [Eisemann and Durand SIGGRAPH 04] [Jones et al. SIGGRAPH 03] [Winnemöller et al. SIGGRAPH 06] [Bae et al. SIGGRAPH 02]

67 / 94

EX 6. Bilateral Filtering (3/3)

How does the bilateral filter relate to other methods?

Interpretations:
- The bilateral filter is equivalent to mode filtering in local histograms.
- The bilateral filter can be interpreted in terms of robust statistics, since it is related to a cost function.
- The bilateral filter is a discretization of a particular kind of PDE-based anisotropic diffusion.

68 / 94


EX 7. Sparse Representation (1/4)

Ideas:
- Natural signals (e.g., audio, images) usually admit a sparse representation, i.e., they can be well represented by a linear combination of a few atom signals.
- Successfully applied to various areas in signal/image processing, vision, and graphics.

69 / 94
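A standard way to compute such a representation is orthogonal matching pursuit: greedily pick the atom most correlated with the current residual, then re-fit the coefficients on the chosen support (a generic sketch, not tied to any paper above; it assumes the dictionary columns are normalized):

```python
import numpy as np

def omp(D, y, k):
    """Approximate y as a combination of k columns (atoms) of the dictionary D."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.abs(D.T @ residual).argmax())       # most correlated atom
        support.append(j)
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, y, rcond=None)  # least squares on the support
        residual = y - sub @ sol
    coef[support] = sol
    return coef
```

With an orthonormal dictionary the recovery is exact; dictionary-learning methods such as K-SVD alternate this sparse-coding step with updates of D itself.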

EX 7. Sparse Representation (2/4): Image Restoration [Aharon et al. TSP 06] [Julien et al. TIP 08]

(Figure: denoising, demosaicking, inpainting)

70 / 94

EX 7. Sparse Representation (3/4): Classification [Wright et al. PAMI 09] [Julien et al. CVPR, ECCV, NIPS 08]

(Figure: face recognition, texture classification, edge detection, pixel classification)

71 / 94

EX 7. Sparse Representation (4/4): Compressive Sensing [Donoho TIT 06] [Candes and Tao TIT 05, 06]

and more (e.g., low-rank matrix completion, robust PCA)

72 / 94


Add an appropriate adjective neXt = Adj + X

There is only one religion, though there are a hundred versions of it.

- George Bernard Shaw

74 / 94

Add an appropriate adjective neXt = Adj + X

What kinds of adjectives can we use?
- linear ⇔ non-linear
- generative / reconstructive ⇔ discriminative
- rule-based / hand-designed ⇔ learning-based
- single scale ⇔ multi-scale
- single step ⇔ progressive
- batch processing ⇔ incremental / online processing
- fixed ⇔ adaptive / dynamic to data
- parametric ⇔ non-parametric
- Z-invariant (Z = translation / scale / rotation / noise / facial expression / pose / lighting / occlusion)
- Z-aware (Z = motion / content / semantic / context / occlusion)

75 / 94


EX 1. Linear⇔ Non-linear

Hard to find a straight line to separate them into two clusters?

Ideas
- Linear methods may not capture the nonlinear structure in the original data representation
- Nonlinear methods:
  - Kernel tricks (e.g., kernel PCA, kernel LDA, kernel SVM)
  - Manifold learning (e.g., ISOMAP, LLE, Laplacian eigenmaps)

76 / 94
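The classic two-rings example makes the point in a few lines. The sketch below (toy data; the explicit feature map stands in for the implicit lifting a polynomial kernel performs) shows data that no straight line separates in 2-D becoming separable by a plane after a nonlinear lift:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two concentric rings: no straight line in 2-D separates them.
def ring(radius, n=100):
    angles = rng.uniform(0, 2 * np.pi, n)
    r = radius + 0.1 * rng.standard_normal(n)
    return np.column_stack([r * np.cos(angles), r * np.sin(angles)])

inner, outer = ring(1.0), ring(3.0)

# Explicit feature map x -> (x1, x2, ||x||^2): the kind of lifting
# that a polynomial kernel performs implicitly via the kernel trick.
def lift(X):
    return np.column_stack([X, (X**2).sum(axis=1)])

# In the lifted space a single threshold on the third coordinate
# (i.e., a plane) separates the two rings perfectly.
threshold = 2.0**2
sep_inner = lift(inner)[:, 2] < threshold
sep_outer = lift(outer)[:, 2] > threshold
```

Kernel methods never compute the lifted coordinates explicitly; they only evaluate inner products in that space, which is what makes very high-dimensional (even infinite-dimensional) lifts tractable.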


EX 2. Generative⇔ Discriminative

Classification task: X → Y

Generative classifiers estimate the class-conditional pdfs P(X|Y) and the prior probabilities P(Y)
- Naive Bayes, mixtures of Gaussians, mixtures of experts, hidden Markov models (HMM), sigmoidal belief networks, Bayesian networks, Markov random fields (MRF)

Discriminative classifiers estimate the posterior probabilities P(Y|X) directly
- Logistic regression, SVMs, traditional neural networks, nearest neighbor, conditional random fields (CRF)

Bayes' rule connects the two:

P(Y|X) = P(X|Y)P(Y) / P(X)

Two different perspectives on the same problem

77 / 94
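The generative route can be sketched in a few lines: fit the class-conditional densities and priors, then turn them into a posterior with Bayes' rule. The 1-D Gaussian toy data below is illustrative, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1-D classes with Gaussian class-conditional densities.
x0 = rng.normal(-2.0, 1.0, 500)   # samples from class 0
x1 = rng.normal(+2.0, 1.0, 500)   # samples from class 1

def gaussian_pdf(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Generative view: estimate P(X|Y) and P(Y), then apply Bayes' rule:
#   P(Y=1|x) = P(x|Y=1)P(Y=1) / (P(x|Y=0)P(Y=0) + P(x|Y=1)P(Y=1))
mu0, s0 = x0.mean(), x0.std()
mu1, s1 = x1.mean(), x1.std()
prior1 = 0.5                      # equal class priors assumed

def posterior_class1(x):
    p0 = gaussian_pdf(x, mu0, s0) * (1 - prior1)
    p1 = gaussian_pdf(x, mu1, s1) * prior1
    return p1 / (p0 + p1)
```

A discriminative method (e.g., logistic regression) would instead fit P(Y|X) directly, never modelling how the data x is generated.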


EX 3. Rule-based / Hand-designed ⇔ Learning-based

Hard to find rules to recognize digits?

Ideas
- It may be difficult to design a set of rules for certain tasks, such as handwritten digit recognition
- Turn to machine learning methods instead

78 / 94

EX 4. Single scale ⇔ Multi-scale
[Zelnik-Manor and Perona NIPS 04]

Ideas
- We live in a multi-scale world (atom ↔ universe)
- Image pyramids, scale-space theory, and wavelet representations all attempt to capture the multi-scale properties of signals/images.

79 / 94
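A Gaussian image pyramid, the simplest of these multi-scale representations, can be sketched as blur-then-downsample. The 5-tap binomial kernel below is one common approximation of a Gaussian, and the random test image is illustrative:

```python
import numpy as np

def gaussian_pyramid(image, levels=4):
    """Minimal Gaussian pyramid: blur with a small separable kernel,
    then downsample by 2, repeating once per level."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # binomial approx.
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        # separable blur: filter rows, then columns (same output size)
        blurred = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode="same"), 1, img)
        blurred = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
        pyramid.append(blurred[::2, ::2])                  # downsample by 2
    return pyramid

img = np.random.default_rng(0).random((64, 64))
pyr = gaussian_pyramid(img, levels=4)   # shapes 64, 32, 16, 8
```

Each level halves the resolution, so coarse levels expose structure at scales that a single-scale detector operating on the original image would miss.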

EX 5. Single step ⇔ Progressive
[Yuan et al. SIGGRAPH 08]

Ideas
- Some problems are difficult to solve in one step → solve them progressively

80 / 94

EX 6. Batch processing ⇔ Incremental / Online processing

Ideas
- Online methods can handle potentially infinite data samples and time-varying data

Examples
- PCA → incremental PCA (many variants)
- LDA → incremental LDA (many variants)
- SVM → incremental and decremental SVM [Cauwenberghs and Poggio NIPS 01]
- Dictionary learning (e.g., K-SVD) [Aharon and Elad TSP 06] → online dictionary learning [Mairal et al. ICML/JMLR 09]
- AdaBoost → online boosting [Grabner and Bischof CVPR 06]
- Multiple instance boosting → online multiple instance boosting [Babenko et al. CVPR 09]

81 / 94
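The flavour of these incremental variants can be shown with the simplest possible case: updating the mean and variance one sample at a time (Welford's algorithm) instead of in batch. This is a generic sketch of the batch-to-online pattern, not the algorithm of any cited paper:

```python
import numpy as np

class RunningStats:
    """Online mean/variance via Welford's algorithm: each new sample
    updates the statistics in O(d) without storing past data -- the
    same pattern that lets incremental PCA handle data streams."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)       # running sum of squared deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / self.n       # population variance

rng = np.random.default_rng(0)
stream = rng.standard_normal((1000, 3)) * np.array([1.0, 2.0, 3.0])

stats = RunningStats(dim=3)
for sample in stream:                 # one pass, one sample at a time
    stats.update(sample)
```

After the pass, the running statistics match the batch computation exactly, yet memory use never grew with the number of samples.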


EX 7. Fixed ⇔ Adaptive / Dynamic
[Elad and Aharon TIP 06]

Ideas
- Adaptive approaches usually outperform predefined/fixed ones.

82 / 94

EX 8. Parametric ⇔ Non-parametric

Probability density estimation

Parametric
- Assumes a specific functional form with parameter θ
- e.g., a Gaussian distribution with unknown mean and variance, a mixture of Gaussians
- Parameter estimation:
  - Estimative approach: p(x) = p(x|θ_best)
  - Bayesian approach: p(x) = ∫ p(x|θ) p(θ) dθ

Non-parametric
- Does not assume a specific form of the probability distribution
- e.g., histograms, kernel density estimation (the Parzen window method)

83 / 94
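The contrast can be made concrete in a small NumPy sketch (the data, bandwidth h, and sample size are illustrative choices): both estimators are built from samples drawn from a known Gaussian, so their outputs can be compared against the true density.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(5.0, 2.0, 2000)      # samples from N(5, 2^2)

# Parametric: assume a Gaussian form, estimate its two parameters.
mu, sigma = data.mean(), data.std()
def p_parametric(x):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Non-parametric: kernel density estimate (Parzen window) -- no
# assumed functional form, one Gaussian bump per sample.
h = 0.5                                # bandwidth (hand-picked here)
def p_kde(x):
    return np.mean(np.exp(-(x - data)**2 / (2 * h**2))) / (h * np.sqrt(2 * np.pi))
```

When the assumed form is right, the parametric fit is more data-efficient; when it is wrong (e.g., multimodal data), the KDE still tracks the true density while the single Gaussian cannot.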


EX 9. Z-invariant

Make your method robust to potential performance degradation:
- noise (e.g., Gaussian additive, impulse, or non-uniform noise), as in image restoration
- translation shift (e.g., near-duplicate image/video detection, image search)
- scale change (e.g., object detection, feature extraction)
- perspective distortion (e.g., feature extraction)
- deformation (e.g., non-rigid registration, part-based object detection)
- pose variation (e.g., human pose estimation)
- lighting variation (e.g., face recognition)
- partial occlusion (e.g., object detection and recognition)

84 / 94


EX 10. Z-aware
[Wang et al. SIGGRAPH Asia 09] [Wang et al. SIGGRAPH 10]

motion-aware video resizing

Make your method aware of potential failure cases:
- motion (e.g., video processing)
- content (e.g., image processing)
- semantics (e.g., image and video indexing/retrieval)
- context (e.g., image understanding)
- occlusion (e.g., detection/tracking)

85 / 94


Outline

1 Introduction

2 Five ways to come up with new ideas
- Seek different dimensions: neXt = X^d
- Combine two or more topics: neXt = X + Y
- Re-think the research directions: neXt = X̄
- Use powerful tools, find suitable problems: neXt = X ↑
- Add an appropriate adjective: neXt = Adj + X

3 What is a bad idea?

86 / 94

What is a bad idea?

Naive combination of two or more methods
- Avoid a pipeline-system paper

Blind application of tools
- Using feature X and classifier Y without motivation or justification

Following the hype
- Too many competitors

Doing it just because it can be done
- Do the right things, not just do things right

87 / 94


Thank you for your kind attention.
Questions?

For more complete materials, please visit my blog:
http://jbhuang0604.blogspot.com/

94 / 94
