
  • Nearest Neighbor. Paul Hsiung, March 16, 2004.

  • Quick Review of NN: given a set of points P, a query point q, and a distance metric d, find p in P such that d(p,q) ≤ d(p′,q) for all p′ in P.
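The definition above is exactly a brute-force scan: measure d(p,q) for every p in P and keep the minimum. A minimal sketch in Python, assuming Euclidean distance as the metric d (the function name is our own):

```python
import math

def nearest_neighbor(P, q):
    """Brute-force nearest neighbor: scan every p in P and keep the closest."""
    def d(p, q):
        # Euclidean distance as the metric d
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return min(P, key=lambda p: d(p, q))

P = [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0)]
print(nearest_neighbor(P, (0.9, 1.2)))  # → (1.0, 1.0)
```

This linear scan is the baseline that every structure in the rest of the talk tries to beat.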

  • NN Used In: image databases [Pentland et al.], color indexing [Swain et al.], recognizing 3D objects [Murase et al.], shapes [Mori et al.], drug testing, and DNA sequence matching [Buhler].

  • Tree-based Approaches: quadtrees split down the middle in all dimensions, recursing until zero or one point remains; kd-trees split in one dimension, picking the split point wisely; ball-trees pick two pivots and split between them; SR-trees combine rectangles and spheres (we have both, so why not combine them).
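To make the kd-tree idea concrete, here is a minimal sketch: build by splitting one dimension at a time on the median ("pick the middle wisely"), and query by descending toward q, backtracking only when the splitting plane could hide a closer point. All names are our own; this is an illustrative implementation, not Indyk's code:

```python
import math

def build_kdtree(points, depth=0):
    """Recursively split on one dimension, choosing the median as the pivot."""
    if not points:
        return None
    axis = depth % len(points[0])          # cycle through dimensions
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                 # median split
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def kdtree_nn(node, q, best=None):
    """Descend toward q; revisit the far side only if it could hold a closer point."""
    if node is None:
        return best
    if best is None or math.dist(node["point"], q) < math.dist(best, q):
        best = node["point"]
    axis = node["axis"]
    near, far = ((node["left"], node["right"])
                 if q[axis] < node["point"][axis]
                 else (node["right"], node["left"]))
    best = kdtree_nn(near, q, best)
    # Only cross the splitting plane if the current best ball intersects it.
    if abs(q[axis] - node["point"][axis]) < math.dist(best, q):
        best = kdtree_nn(far, q, best)
    return best
```

In low dimensions the plane test prunes most of the tree; in high dimensions it almost never prunes, which is exactly the complaint on the next slide.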

  • Indyk's Gripe: beyond 10 or 20 dimensions, tree-based structures end up examining many points, so they are no better than a brute-force linear search. He therefore proposed a hash-table approach: Locality-Sensitive Hashing (LSH). The rest of this talk covers his paper.

  • LSH

  • Interlude: Near Neighbor. Given a set of points P, a query point q, and a distance metric d, find p in P such that d(p,q) ≤ (1+ε)·d(P,q), where d(P,q) is the distance from q to its closest point in P.

  • Hash: pick a random subset I of the coordinates. The hash function h(p) returns a bucket ID: h(p) = the projection of p onto I.

  • Intuition: if two points are close, they hash to the same bucket with some probability p1; if they are far, they hash to the same bucket with a smaller probability p2 < p1.

  • Indyk's Hash: convert the coordinates of p to {0,1}^d and use Hamming distance: d(p,q) = the number of positions in which p and q differ. Example: p = (0,1,0,1,1,1,0,0,1,0), I = {2,5,7}; then h(p) = (1,1,0). Demo: http://web.mit.edu/ardonite/6.838/locality-hashing.htm
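The hash itself is just indexing. A sketch that reproduces the slide's example, assuming the positions in I are 1-indexed as written (so they become 1, 4, 6 in 0-indexed Python):

```python
import random

def make_hash(d, k, rng=random):
    """Sample a random subset I of k coordinate positions, fixed per hash function."""
    I = sorted(rng.sample(range(d), k))
    return lambda p: tuple(p[i] for i in I)   # h(p) = projection of p onto I

# The slide's example: I = {2, 5, 7}, 1-indexed.
h = lambda p: tuple(p[i] for i in (1, 4, 6))
p = (0, 1, 0, 1, 1, 1, 0, 0, 1, 0)
print(h(p))  # → (1, 1, 0)
```

The returned tuple is hashable, so it can key a Python dict directly as a bucket ID.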

  • Why Locality-Sensitive? Pr[h(p)=h(q)] = (1 − d(p,q)/D)^k, where D is the number of dimensions in the binary representation and k is the size of I. We can tune the probability by changing k.
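The formula can be sanity-checked numerically. The sketch below assumes the k coordinates are sampled independently with replacement, for which (1 − d/D)^k holds exactly (sampling a k-subset without replacement gives a close but not identical value for small k):

```python
import random

D, k = 100, 5
p = [0] * D
q = [0] * D
for i in range(20):        # make p and q differ in 20 of the 100 positions
    q[i] = 1               # so d(p,q) = 20 and 1 - d/D = 0.8

formula = (1 - 20 / D) ** k          # 0.8**5 = 0.32768

# Monte Carlo estimate of Pr[h(p)=h(q)] over random coordinate choices.
rng = random.Random(0)
trials = 20000
hits = sum(
    all(p[i] == q[i] for i in (rng.randrange(D) for _ in range(k)))
    for _ in range(trials)
)
print(formula, hits / trials)        # empirical rate ≈ 0.33
```

Raising k drives the collision probability down faster for far pairs than for near ones, which is the locality-sensitivity the slide is after.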

    [Plots: Pr[h(p)=h(q)] versus distance, for k=1 and k=2.]

  • Now to Use It (Training): generate l hash functions h1..hl, and store each point p in bucket hi(p) of the i-th hash array, for i = 1..l.

  • Now to Use It (Query): retrieve all points in buckets h1(q)..hl(q), and return the retrieved point closest to q. This solves the Near Neighbor problem.
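The training and query steps above fit in a small class. A sketch under the same assumptions (binary points, Hamming distance, one random coordinate subset per table); the class and method names are our own:

```python
import random
from collections import defaultdict

class LSHIndex:
    """l hash tables, each keyed by a random k-coordinate projection
    of the binary point, as in the training/query procedure above."""

    def __init__(self, d, k, l, seed=0):
        rng = random.Random(seed)
        self.subsets = [rng.sample(range(d), k) for _ in range(l)]
        self.tables = [defaultdict(list) for _ in range(l)]

    def _h(self, i, p):
        # h_i(p) = projection of p onto the i-th coordinate subset
        return tuple(p[j] for j in self.subsets[i])

    def add(self, p):
        # Training: store p in bucket h_i(p) of the i-th table.
        for i, table in enumerate(self.tables):
            table[self._h(i, p)].append(p)

    def query(self, q):
        # Query: scan the union of buckets h_1(q)..h_l(q),
        # return the candidate closest to q in Hamming distance.
        hamming = lambda p: sum(a != b for a, b in zip(p, q))
        candidates = {p for i, t in enumerate(self.tables)
                      for p in t[self._h(i, q)]}
        return min(candidates, key=hamming) if candidates else None
```

A query can return None when no bucket matches, which is exactly the "miss" counted in the experiments that follow.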

  • Indyk's Results: compared LSH with a tree-based algorithm on a color-histogram dataset from Corel Draw: 20,000 images, 64 dimensions. Used 1k, 2k, 5k, 10k, and 19k points for training and 1k points for queries. Computed the miss ratio: the fraction of queries with no hits.

  • Indyk's Results

  • Results II

  • Ugly Side: works best with Hamming distance, though it can be extended to the L1 and L2 norms. Requires parameter tweaking (the size of I and the number of hash buckets). Does not work well on uniform data.

  • Bibliography:
    A. Gionis, P. Indyk, R. Motwani. Similarity Search in High Dimensions via Hashing. In Proc. 25th VLDB, 1999.
    J. Buhler. Efficient Large-Scale Sequence Comparison by Locality-Sensitive Hashing. Bioinformatics 17(5):419-428, 2001.
    H. Murase, S. K. Nayar. Visual Learning and Recognition of 3D Objects from Appearance. IJCV 14(1):5-24, 1995.
    A. Pentland, R. W. Picard, S. Sclaroff. Photobook: Tools for Content-Based Manipulation of Image Databases. SPIE Vol. 2185, 34-47, 1994.
    M. J. Swain, D. H. Ballard. Color Indexing. IJCV 7(1):11-32, 1991.
    G. Mori, S. Belongie, J. Malik. Shape Contexts Enable Efficient Retrieval of Similar Shapes. CVPR 1:723-730, 2001.
    Slides: Algorithms for Nearest Neighbor Search, by Piotr Indyk.
    Slides: Approximate Nearest Neighbor in High Dimensions via Hashing, by Aris Gionis, Piotr Indyk, and Rajeev Motwani.