
Page 1: Content-Based Image Retrieval

February 26, 2007

Content-Based Image Retrieval

Saint-Petersburg State University

Natalia [email protected]

Il’ya [email protected]

Alexander [email protected]

m

Page 2: Content-Based Image Retrieval


Our team

Natalia Vassilieva, Alexander Dolnik, Ilya Markov, Maria Teplyh, Maria Davydova, Dmitry Shubakov, Alexander Yaremchuk

Page 3: Content-Based Image Retrieval


General problems

Semantic gap between the system's and the human's mode of image analysis

Specifics of human visual perception

How to capture the semantics of an image

Signature calculation and response time

Combining different features and metrics

Page 4: Content-Based Image Retrieval


Image retrieval system

General goal: an image retrieval system that

is able to process natural-language queries

is able to search among both annotated and non-annotated images

takes into account human visual perception

processes various features (color, texture, shape)

uses relevance feedback for query refinement and adaptive search

How to minimize the “semantic gap”?

[Diagram: the semantic gap separates an image's semantics from its low-level features]

Page 5: Content-Based Image Retrieval


CBIR: Traditional approach

[Diagram: indexing takes an image through signature calculation into the database; retrieval takes a query through signature calculation and comparison against the database to produce the result]

Relevance feedback: query refinement

Fusion of results: independent search by different features

Color space partition according to human perception

Auto-annotation

Annotations refinement

Multidimensional indexing (vp-tree)

Page 6: Content-Based Image Retrieval


Research directions

Color space partition according to human visual perception

Correspondence between low-level features and semantics: auto-annotation

Fusion of retrieval result sets

Adaptive search: color and texture fusion

Using relevance feedback

Page 7: Content-Based Image Retrieval


Human visual perception: colors

Experiments with color partition: HSV space

(H=9; S=2; V=3) – 72%
(H=11; S=2; V=3) – 66%
(H=13; S=2; V=3) – 63%
(H=15; S=2; V=3) – 60%

Compare partitions of different spaces (RGB, HSV, Lab)
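To make the partition concrete: a minimal sketch (not the authors' code) of quantizing an image into a uniform HSV grid such as (H=9; S=2; V=3) and building the resulting 54-bin color histogram. The use of matplotlib's rgb_to_hsv and the L1 comparison at the end are illustrative choices.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def hsv_histogram(rgb_image, h_bins=9, s_bins=2, v_bins=3):
    """Quantize an RGB image (floats in [0, 1], shape HxWx3) into a
    normalized histogram with h_bins * s_bins * v_bins cells."""
    hsv = rgb_to_hsv(rgb_image.reshape(-1, 3))  # H, S, V each in [0, 1]
    bins = (h_bins, s_bins, v_bins)
    # Clip so the edge value 1.0 falls into the last bin of each channel.
    idx = [np.clip((hsv[:, c] * bins[c]).astype(int), 0, bins[c] - 1)
           for c in range(3)]
    flat = (idx[0] * s_bins + idx[1]) * v_bins + idx[2]  # flatten 3-D bin index
    hist = np.bincount(flat, minlength=h_bins * s_bins * v_bins)
    return hist / hist.sum()

# Comparing two images by the L1 distance between their histograms:
img1, img2 = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
d = np.abs(hsv_histogram(img1) - hsv_histogram(img2)).sum()
```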

Page 8: Content-Based Image Retrieval


Research directions

Color space partition according to human visual perception

Correspondence between low-level features and semantics: auto-annotation

Fusion of retrieval result sets

Adaptive search: color and texture fusion

Using relevance feedback

Page 9: Content-Based Image Retrieval


Auto-annotation

Natalia Vassilieva, Boris Novikov. Establishing a correspondence between low-level features and semantics of fixed images. In Proceedings of the Seventh National Russian Research Conference (RCDL'2005), Yaroslavl, October 4-6, 2005.

Training set selection

Color feature extraction for every image from the set

Similarity calculation for every pair of images from the set

Training set clustering

Basis color features calculation: one per cluster

Definition of basis lexical features

Correspondence between basis color features and basis lexical features
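A hedged sketch of the clustering core of this pipeline, under stated assumptions: images are represented by color-feature vectors (e.g., the histograms above), a plain k-means pass produces one basis color feature (centroid) per cluster, and a new image inherits the lexical annotation of its nearest centroid. The helper names are illustrative, not the paper's.

```python
import numpy as np

def cluster_centroids(features, k, iters=20, seed=0):
    """Plain k-means over color-feature vectors; returns k basis features."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k,
                                    replace=False)].astype(float)
    for _ in range(iters):
        # Assign each image to its nearest centroid...
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # ...and move each centroid to the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids

def annotate(query_feature, centroids, cluster_annotations):
    """A new image inherits the lexical annotation of its nearest centroid."""
    nearest = np.argmin(np.linalg.norm(centroids - query_feature, axis=1))
    return cluster_annotations[nearest]
```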

Page 10: Content-Based Image Retrieval


Examples

[Two example images with their automatically assigned annotations: “city, night, road, river” and “snow, winter, sky, mountain”]

Page 11: Content-Based Image Retrieval


Retrieve by textual query

N. Vassilieva, B. Novikov. A Similarity Retrieval Algorithm for Natural Images. In Proceedings of the Baltic DB&IS'2004, Scientific Papers, University of Latvia, Riga, Latvia, June 2004.

Image database is divided into clusters

Search for appropriate cluster by textual query using cluster’s annotations

Browse the images from the appropriate cluster

Use relevance feedback to refine the query

Use relevance feedback to reorganize the clusters and assign new annotations
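A minimal sketch of the first two steps, assuming the simplest possible matching (word overlap between the query and each cluster's annotation); the paper's actual matching and relevance-feedback machinery are not reproduced here.

```python
def best_cluster(query, clusters):
    """clusters: list of (annotation_words: set[str], image_ids: list[str]).
    Returns the cluster whose annotation shares the most words with the query."""
    query_words = set(query.lower().split())
    return max(clusters, key=lambda c: len(c[0] & query_words))

clusters = [
    ({"city", "night", "road", "river"}, ["img01", "img07"]),
    ({"snow", "winter", "sky", "mountain"}, ["img03", "img12"]),
]
annotation, images = best_cluster("night city lights", clusters)
print(images)  # ['img01', 'img07'] -- browse these, then refine via feedback
```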

Page 12: Content-Based Image Retrieval


Feature extraction: color

Color: histograms

Color: statistical approach. First moments of the color distribution for every channel, plus the covariances between channels.
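A hedged sketch of such a statistical color feature, assuming per-channel means plus the full 3×3 channel covariance matrix as the signature (the slide does not specify which moments beyond the first are used):

```python
import numpy as np

def color_moments(image):
    """image: HxWx3 array. Returns the 3 channel means followed by the
    flattened 3x3 covariance matrix between channels (12 values total)."""
    pixels = image.reshape(-1, 3).astype(float)
    means = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)  # channel-to-channel covariances
    return np.concatenate([means, cov.ravel()])
```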

Page 13: Content-Based Image Retrieval


Feature extraction: texture

Texture: use the independent-component filters that result from ICA

H. Borgne, A. Guerin-Dugue, A. Antoniadis. “Representation of images for classification with independent features”.

Each image is passed through the N ICA filters; let H1i and H2i denote the histograms of filter i's responses to images I1 and I2. The texture distance is the Kullback-Leibler divergence summed over the filters:

dist(I1, I2) = Σi=1..N KLH(H1i, H2i)
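A hedged sketch of this distance, with stated assumptions: grayscale images, a stand-in random filter bank in place of real ICA filters, histograms taken over a fixed shared range, and epsilon smoothing to keep the KL terms finite:

```python
import numpy as np
from scipy.signal import convolve2d

def response_histograms(image, filters, bins=32, lim=8.0):
    """Histogram each filter response over a fixed shared range so both
    images' histograms use the same bin edges (a simplifying assumption)."""
    hists = []
    for f in filters:
        resp = convolve2d(image, f, mode="valid")
        h, _ = np.histogram(resp, bins=bins, range=(-lim, lim))
        h = h.astype(float) + 1e-9  # smooth empty bins to keep KL finite
        hists.append(h / h.sum())
    return hists

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def texture_dist(img1, img2, filters):
    """dist(I1, I2) = sum over the N filters of KL(H1i, H2i)."""
    return sum(kl(h1, h2) for h1, h2 in
               zip(response_histograms(img1, filters),
                   response_histograms(img2, filters)))

# Stand-in "filter bank" (real ICA filters would be learned from patches):
rng = np.random.default_rng(0)
filters = [rng.standard_normal((8, 8)) / 8.0 for _ in range(10)]
d = texture_dist(rng.random((64, 64)), rng.random((64, 64)), filters)
```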

Page 14: Content-Based Image Retrieval


Research directions

Color space partition according to human visual perception

Correspondence between low-level features and semantics: auto-annotation

Fusion of retrieval result sets

Adaptive search: color and texture fusion

Using relevance feedback

Page 15: Content-Based Image Retrieval


Fusion of retrieval result sets

How to merge fairly? How to merge efficiently? How to merge effectively?

Fusion of weighted lists with ranked elements:

List 1, weight ω1: (x1_1, r1_1), (x1_2, r1_2), …, (x1_n, r1_n)
List 2, weight ω2: (x2_1, r2_1), (x2_2, r2_2), …, (x2_k, r2_k)
…
List m, weight ωm: (xm_1, rm_1), (xm_2, rm_2), …, (xm_l, rm_l)

How should these be merged into a single ranked list?
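A hedged sketch of the setting: each list contributes g(rank, weight) for every object it contains, and objects are re-ranked by their summed contributions. The concrete scoring function here is an illustrative stand-in, not the authors' fusion formula.

```python
from collections import defaultdict

def fuse(ranked_lists, weights, g=lambda rank, w: w / (1 + rank)):
    """ranked_lists: m lists of object ids, best first; weights: m floats.
    Each list adds g(rank, weight) to the scores of its objects."""
    scores = defaultdict(float)
    for objects, w in zip(ranked_lists, weights):
        for rank, obj in enumerate(objects):
            scores[obj] += g(rank, w)
    return sorted(scores, key=scores.get, reverse=True)

merged = fuse([["a", "b", "c"], ["b", "a", "d"]], weights=[0.7, 0.3])
print(merged)  # objects re-ranked by their combined weighted contributions
```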

Page 16: Content-Based Image Retrieval


Ranked lists fusion: application area

Supplement fusion: union of textual results (textual viewpoints)

Collage fusion: combining texture results (texture viewpoint) with color results (color viewpoint), or the results of different color methods (different color viewpoints)

Page 17: Content-Based Image Retrieval


Ranked lists fusion: application area

Search by textual query in a partly annotated image database

[Diagram: the textual query is first matched against the annotations, yielding a ranked textual result list (TextResult1 with rank tr1, TR2 with rank tr2, …); these results then drive content-based searches, and the resulting ranked lists are fused into the final result]

Page 18: Content-Based Image Retrieval


Three main native fusion properties

Commutative property

Associative property

The value of a result object's rank is independent of the other objects' ranks

Examples: the COMBSUM, COMBMIN and COMBMAX merge functions
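A minimal sketch of those three classical merge functions, under the simplifying assumption that each input run is a dict of scores and that objects missing from a run simply contribute nothing:

```python
def comb(score_lists, how="sum"):
    """score_lists: list of {object_id: score} dicts -> fused {id: score}.
    Sum, min and max are all commutative and associative, and an object's
    fused score never depends on other objects' scores."""
    fused = {}
    for scores in score_lists:
        for obj, s in scores.items():
            if obj not in fused:
                fused[obj] = s
            elif how == "sum":
                fused[obj] += s
            elif how == "min":
                fused[obj] = min(fused[obj], s)
            elif how == "max":
                fused[obj] = max(fused[obj], s)
    return fused

runs = [{"a": 0.9, "b": 0.4}, {"a": 0.2, "c": 0.8}]
print(comb(runs, "sum"))  # {'a': 1.1, 'b': 0.4, 'c': 0.8}
```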

Page 19: Content-Based Image Retrieval


Additional native fusion properties

Normalization & delimitation property

Conic property: the attraction of the current object toward the merged result depends on the value of a function g(rank, weight) ≥ 0

Snare condition: [formula lost in extraction]

Page 20: Content-Based Image Retrieval


Conic property: the function g

g monotonically decreases in rank with the weight parameter fixed

g monotonically increases in weight with the rank parameter fixed

g must satisfy the boundary conditions: g(0, w) > 0 if w ≠ 0; g(r, 0) = 0
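One function satisfying all of these conditions, offered purely as an illustration (the slides do not commit to a particular g):

```python
def g(rank, weight):
    """g(r, w) = w / (1 + r): decreasing in rank, increasing in weight,
    g(r, 0) = 0, and g(0, w) > 0 whenever w != 0 (weights assumed >= 0)."""
    return weight / (1.0 + rank)

assert g(3, 0.0) == 0.0        # g(r, 0) = 0
assert g(0, 0.5) > 0.0         # g(0, w) > 0 for w != 0
assert g(2, 0.5) < g(1, 0.5)   # decreasing in rank at fixed weight
assert g(2, 0.8) > g(2, 0.5)   # increasing in weight at fixed rank
```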

Page 21: Content-Based Image Retrieval


Ranked lists fusion: formulas

Fusion formula: [the formula and the definitions of its terms were rendered as images in the original slide and are lost in extraction]

Page 22: Content-Based Image Retrieval


Ranked lists fusion: algorithm

All lists are sorted by object id

The lists are merged step by step, advancing by object-id priority (a merge join)

If object_id1 ≠ object_id2, the smaller object is absent from one of the lists

[Diagram: two id-sorted lists with current-object pointers being merged into the result list]
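A hedged sketch of that merge step for two lists (the m-list case iterates the same idea); how scores of shared objects are combined is left as a parameter, since the slides' fusion formula is not recoverable:

```python
def merge_sorted(list1, list2, combine=lambda a, b: a + b):
    """list1, list2: [(object_id, score)] sorted by object_id."""
    out, i, j = [], 0, 0
    while i < len(list1) and j < len(list2):
        (id1, s1), (id2, s2) = list1[i], list2[j]
        if id1 == id2:            # object present in both lists
            out.append((id1, combine(s1, s2)))
            i, j = i + 1, j + 1
        elif id1 < id2:           # id1 is absent from list 2
            out.append((id1, s1))
            i += 1
        else:                     # id2 is absent from list 1
            out.append((id2, s2))
            j += 1
    out.extend(list1[i:])
    out.extend(list2[j:])
    return out

print(merge_sorted([(1, 0.9), (3, 0.4)], [(1, 0.2), (2, 0.8)]))
# [(1, 1.1), (2, 0.8), (3, 0.4)]
```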

Page 23: Content-Based Image Retrieval


Ranked lists fusion: experiments

Necessary conditions:

Each viewpoint should provide some “valuable” information: the retrieval system's performance should at least be better than a random system's.

Information is not fully duplicated: there should be partial disagreement among the viewpoints.

Page 24: Content-Based Image Retrieval


Ranked lists fusion: experiments

Parameters: Roverlap and Noverlap conditions

Intercomparison of methods:
– Classical methods: COMBSUM, COMBMIN, COMBMAX
– Probability methods: probFuse
– Random method: random values that satisfy the merge properties

Page 25: Content-Based Image Retrieval


Research directions

Color space partition according to human visual perception

Correspondence between low-level features and semantics: auto-annotation

Fusion of retrieval result sets

Adaptive search: color and texture fusion

Using relevance feedback

Page 26: Content-Based Image Retrieval


Adaptive merge: color and texture

Hypothesis:

The optimal α depends on the features of the query Q. It should be possible to identify common features among images that share the same “best” α.

Dist(I, Q) = α·C(I, Q) + (1 - α)·T(I, Q)

where C(I, Q) is the color distance between I and Q, T(I, Q) is the texture distance between I and Q, and 0 ≤ α ≤ 1.
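A minimal sketch of this combined distance; how α should be chosen per query is exactly the open question, so a caller-supplied value stands in here:

```python
def combined_dist(color_dist, texture_dist, alpha):
    """Dist(I, Q) = alpha * C(I, Q) + (1 - alpha) * T(I, Q), 0 <= alpha <= 1."""
    assert 0.0 <= alpha <= 1.0
    return alpha * color_dist + (1.0 - alpha) * texture_dist

# Ranking a collection for a query Q from precomputed per-image distances:
dists = {"img1": (0.2, 0.9), "img2": (0.6, 0.1)}  # (C(I, Q), T(I, Q)) pairs
ranked = sorted(dists, key=lambda i: combined_dist(*dists[i], alpha=0.7))
print(ranked)  # ['img1', 'img2'] -- color dominates at alpha = 0.7
```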

Page 27: Content-Based Image Retrieval


Adaptive merge: experiments

Page 28: Content-Based Image Retrieval


Estimation tool

Web application

Provides interfaces for developers of search methods

Uses common measures to estimate search methods: precision, pseudo-recall

Collects users' opinions to build a test database
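For concreteness, a sketch of the standard precision measure named above (the slides do not define pseudo-recall, so it is not reproduced): the fraction of retrieved images judged relevant.

```python
def precision(retrieved, relevant):
    """|retrieved ∩ relevant| / |retrieved|."""
    retrieved, relevant = list(retrieved), set(relevant)
    return sum(1 for r in retrieved if r in relevant) / len(retrieved)

print(precision(["a", "b", "c", "d"], {"a", "c"}))  # 0.5
```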

Page 29: Content-Based Image Retrieval


Datasets

Own photo collection (~2000 images)

Subset from own photo collection (150 images)

Flickr collection (subsets of ~15,000 and ~1.5 mln images)

Corel photoset (1100 images)

Page 30: Content-Based Image Retrieval


Research directions

Color space partition according to human visual perception

Correspondence between low-level features and semantics: auto-annotation

Fusion of retrieval result sets

Adaptive search: color and texture fusion

Using relevance feedback