
Detection and Segmentation of Approximate Repetitive Patterns in Relief Images

Harshit Agrawal∗ ([email protected])
Anoop M. Namboodiri ([email protected])

Center for Visual Information Technology, IIIT-Hyderabad

ABSTRACT
Algorithms for detection of repeating patterns in images often assume that the repetition is regular and highly similar across the instances. Approximate repetitions are also of interest in many domains such as hand-carved sculptures, wall decorations, groups of natural objects, etc. Detection of such repetitive structures can help in applications such as image retrieval, image inpainting, 3D reconstruction, etc. In this work, we look at a specific class of approximate repetitions: those in images of hand-carved relief structures. We present a robust hierarchical method for detecting such repetitions. Given a single image with reliefs, our algorithm finds dense matches of local features across the image at various scales. The matching features are then grouped based on their geometric configuration to find repeating elements. We also propose a method to group the repeating elements to segment the repetitive patterns in an image. In relief images, foreground and background have nearly the same texture, and matching of a single feature would not provide reliable evidence of repetition. Our grouping algorithm integrates evidence of repetition to reliably find repeating patterns. The input image is processed on a scale-space pyramid to effectively detect all possible repetitions at different scales. Our method has been tested on images with a large variety of complex repetitive patterns, and the qualitative results show the robustness of our approach.

Keywords
Reliefs, Spatial Configuration, Repetitive Patterns, Scale-Space Pyramid, Patch Matching

1. INTRODUCTION
Repetitive patterns are present in various structures and shapes of the world at many different scales and in many forms. In man-made environments, structures with repetitive patterns or elements are often employed as they are aesthetically pleasing.

∗Corresponding author

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
ICVGIP '12, December 16-19, 2012, Mumbai, India
Copyright 2012 ACM 978-1-4503-1660-6/12/12 ...$15.00.

(a) Sanchi, India (b) Hampi, India

(c) Shri Senpaga Vinayagar Temple, Singapore

Figure 1: Example reliefs with approximate repetitive patterns.

In reliefs, structures often appear in different repetitive patterns. Repetitions in themselves have redundancy, as all the structures share common properties such as shape, style, and texture. The information extracted by detecting repetitions can be used as a prior in segmentation and reconstruction of elements of the structures. It can also provide valuable information in case of partial occlusions. Reliefs have been a common form of decoration and a medium for depicting incidents and stories in man-made structures since ancient times. With time, many of these structures get weathered down, and parts of the reliefs may get broken or damaged. Repetitive patterns can also assist in single-view reconstruction of the scene [19].

Robust representation of repetitive patterns can be used as an invariant descriptor to identify semantically relevant objects to match across images [5]. Many algorithms have been developed for detecting repetitions and translational symmetry [7, 8, 9, 10, 12], but a fully automatic method for detecting approximate repetitions in complex real images like reliefs is still a very challenging task. We propose a robust method to detect approximately repeating structures on reliefs. The hierarchical method proceeds from lower- to higher-level features, and thus robustly detects structures with partial repetitions.


We consider a generic notion of a repetitive pattern as the repetition of a structure or an object. A repetitive pattern can have either uniformly spaced or irregularly spaced repetitions. In urban facades, structural elements like windows and doors usually have regularity in their repetition. This regularity can be used as a reliable cue for detecting such repetitive patterns, and different algorithms [6, 12, 14] have been developed that exploit it. Unlike facades, reliefs have irregular repetitions with changes in appearance. Perspective views of facade images can be rectified using vanishing points to get the fronto-parallel view [12]. It is often difficult to extract robust vanishing points in reliefs due to the absence of linear structures (see Fig. 1(a)). In our approach, we do not constrain the input image to have a fronto-parallel view.

Humans inherently recognize symmetries and repetitive patterns in objects and images. For humans, detection of repetitions happens at multiple levels of detail. At a coarse level, we may use the overall texture of a scene or part of it to find possible repetitions. We then analyze the parts of objects within it and their arrangement to find objects that repeat. If those pieces or objects are found in a similar configuration elsewhere, we identify the object as a repetition. The motivation for the design of our algorithm is the same. We begin by finding reliable matches for individual components in an image. These individual matches are then verified and grouped together to get regions with possible repetitions. The grouped matches are then used to detect different repetitive patterns and elements.

We have collected images with various types of repetitive patterns from different sources. The collection includes images with identical and approximate repetitive patterns. We also test our algorithm on facade images and images from the PSU Near-Regular Texture database. The images in the collection were manually annotated to get the ground truth for quantitative evaluation of the algorithm.

1.1 Related Work
Repetition detection in images is a long-standing problem, and to date researchers in computer vision have made significant progress towards detection of symmetries and repetitions not only in images but also in 3D data. Leung and Malik [7] proposed a simple window matching scheme followed by grouping of patterns for finding repeating scene elements. Schaffalitzky and Zisserman [15] proposed a RANSAC-based grouping method for imaged scene elements that repeat on a plane. Park et al. [14] developed an algorithm using an efficient mean-shift belief propagation for 2D lattice detection on near-regular textures. These approaches give good results on grid-like repetitive patterns. However, grid-like repeating patterns are very rare in hand-carved reliefs, where repetitions are generally limited to one dimension, so we cannot use the above approaches for our problem. The proposed algorithm does not require the presence of a grid-like repetitive pattern and can detect repetitions even if an element is repeated only once at two arbitrary locations in the image.

Wenzel et al. [17] proposed a method for detecting repeated structures in facade images. They begin by detecting the dominant symmetries and then use clustering of feature pairs to detect the repeating structures in the image. Wu et al. [18] developed an approach to detect large repetitive structures with salient boundaries. They assume reflective symmetry in the architectural structures and use this to localize vertical boundaries between repeating elements. Their algorithm depends on accurate vanishing point detection for rectifying the image to a fronto-parallel view, and it does not work if the repetition interval is larger than the width of the repetitive structure. In reliefs, we cannot assume reflective symmetry, accurate vanishing point detection, or regularity in the intervals between repetitions. Our algorithm does not impose any of these constraints on the input image. However, our algorithm does not output complete repetitions under significantly large perspective distortions.

Zhao and Quan [20] describe a robust and efficient method for detecting translational symmetries in fronto-parallel views of facade images using joint spatial and transformation space detection. Their algorithm outputs incorrect results if the lattice is incomplete or the repetitions are non-uniformly spaced. Recently, Zhao et al. [21] developed a robust and dense per-pixel translational symmetry detection and segmentation method for facade images. Although their algorithm addresses most of the limitations of earlier approaches, it cannot be applied directly to detect repetitions in reliefs. They create translational maps in the horizontal and vertical directions only. Moreover, for segmentation of a repeating element, they use a learning-based approach to classify each pixel as either a wall pixel or a non-wall pixel. Such a classification-based segmentation is impractical in relief carvings due to the high textural similarity between foreground and background. Our approach is different from all the previous approaches, as we do not make any prior assumptions about the repetitive pattern or the type of repetitive element. We also compute pixel-level correspondences of the repetitive elements after merging the results from all the scales in the scale-space pyramid. Cai and Baciu [2] proposed a region-growing image segmentation algorithm for detecting and grouping higher-level repetitive patterns. Their algorithm begins with manual marking of a region of repetition and then iterates between growing and refinement steps.

We present a robust algorithm for repetition detection and segmentation in reliefs with fewer assumptions about the nature of the repetition compared to existing approaches. Our hierarchical pairwise matching approach is closely related to the work of Carneiro and Jepson [4], who developed semi-local image features to improve the robustness of feature matching between a pair of images with possible rigid or non-rigid deformations. The hierarchy of matching levels in our approach can also be compared to the levels of a deep learning network, where higher-level features and concepts are defined in terms of lower-level ones. We propose a robust pairwise matching approach that detects pairwise matches with high confidence scores. In image matching, interest point groups have been matched using pairwise spatial constraints [13]; we also use pairwise spatial constraints to enhance the confidence of a match. Barnes et al. [3] find dense pairwise matches using k-NN search and improve the correspondences using the offset values of neighboring pixels. In contrast, we start with confident pairwise matches, found independently, and then impose the neighborhood constraint while doing hierarchical grouping. The hierarchical approach reduces the search space considerably and is critical for establishing matches between approximate repetitions. Our approach is different from image retrieval in the sense that we are not using a bag-of-words representation of features.


Figure 2: (a) Dense SIFT features after removal of false matches. (b) Pairwise SIFT matching; the blue features are pairwise matches of the yellow one. (c) Pairwise patch matching; green patches are correct matches of the yellow patch and red patches are false matches. (d) Connectivity graphs at various scales in the scale-space pyramid. (e) Score images for the corresponding connectivity graphs in (d), with blue indicating lower scores and red higher scores. (f) Segmented image regions after merging all score images, with distinct labels for each region. (g) The labels are merged and grouped to produce the final output of our algorithm, with color-coded convex hulls of the repetitive patterns.

We would like to have strong matches between features; in a bag-of-words representation, many features would be denoted by a representative visual word, which would increase the number of false matches among the features.

2. OBSERVATIONS AND ASSUMPTIONS
Repetitive patterns in reliefs have certain interesting characteristics that are different from the typical patterns found in urban facades. This section lists some of the observations and assumptions that lead to the design of our repetition detection algorithm.

1. In general, repetitions in reliefs do not occur on a regular grid or at regular intervals. Repetitive patterns often appear non-uniformly in an image (see Fig. 1(b)). Our algorithm is independent of factors such as the number of repetitions, the repetition interval, and the repetition direction.

2. Occasionally, in reliefs it is difficult to define repetitive patterns in which each object is a single unit. Repeating elements generally show some variation in appearance from other repetitions (see Fig. 1(c)). Elements can have partial repetitions, and our algorithm achieves robust detections for such reliefs.

3. It is often difficult to apply traditional rectification techniques to reliefs, as detecting the vanishing line is not robust enough in images such as Fig. 1(a). Our algorithm does not require the input image to be fronto-parallel or rectified, and detection is robust to significant image skews.

3. PAIRWISE MATCHING
As a brief overview, our algorithm begins by building a scale-space pyramid of the given input image. Fig. 2 shows the complete pipeline of our algorithm. We process each scale independently. At each scale, we extract feature descriptors, and a list of matching features is found for each feature. We then consider the features within image patches and search for similar patches using our similarity score function. These pairwise patch matches give us a confidence score for matching image patches. The confidence scores are then used to identify the repeating regions using watershed segmentation of the confidence score image.


We begin by presenting our framework for finding pairwise patch correspondences, and then describe the detection and segmentation of repetitive patterns.

3.1 Pairwise Feature Matching
In order to cover the entire image, we extract SIFT descriptors at SIFT interest points as well as dense SIFT descriptors with a fixed step size (10 pixels in our experiments). We convert the SIFT descriptors to RootSIFT, which has been shown to work well on many computer vision tasks [1]. We denote the set of SIFT descriptors for the image at the current level n by Sn. For each si ∈ Sn, k nearest neighbors are found using an efficient kd-tree implementation. We consider sj a reliable match for si if their scales and orientations are very similar. For a pair of SIFT descriptors, we define a similarity score as:

SS(si, sj) = 1 − (sd(si, sj) + od(si, sj) + dd(si, sj)) / 3,    (1)

where sd(si, sj) is the absolute difference between the scales, od(si, sj) is the absolute difference between the orientations, and dd(si, sj) is the L1 norm of the difference vector of their descriptors, each of them normalized between 0 and 1. We consider the SIFT pair a strong match if SS(si, sj) is greater than a threshold sim_thresh (= 0.7 in our experiments). We discard the pair if any of sd, od, dd exceeds an individual threshold (nearly sim_thresh/3).
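As a minimal illustration of Eq. (1), the sketch below computes the similarity score for a pair of features, assuming each feature carries a RootSIFT descriptor and scale and orientation values already normalized to [0, 1]; the helper root_sift is our own naming of the standard L1-normalize-then-square-root conversion, not code from the paper.

```python
import numpy as np

def root_sift(desc, eps=1e-7):
    """Convert a SIFT descriptor to RootSIFT: L1-normalize, then take the square root."""
    return np.sqrt(desc / (np.abs(desc).sum() + eps))

def similarity_score(s_i, s_j):
    """Similarity of Eq. (1). Each feature is a dict with 'scale' and
    'orientation' normalized to [0, 1] and a RootSIFT 'descriptor' whose
    L1 difference is likewise assumed to be normalized to [0, 1]."""
    sd = abs(s_i['scale'] - s_j['scale'])
    od = abs(s_i['orientation'] - s_j['orientation'])
    dd = np.abs(s_i['descriptor'] - s_j['descriptor']).sum()
    return 1.0 - (sd + od + dd) / 3.0
```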

Remove false matches: Trivial repetitions in images, such as the background plane or repetitions with a very small interval, are considered insignificant. To remove false matches for a SIFT feature, we find its num_neighbor nearest neighbors in SIFT descriptor space and apply the following false match removal rules to filter the set of SIFT descriptors (a sketch of this filter is given below). We remove a feature si from Sn if:

• no nearest neighbor has a similarity score greater than sim_thresh; otherwise, we check the following conditions:

• at least two nearest neighbors have a spatial Euclidean distance less than the minimum allowed repetition interval, or

• at least one nearest neighbor is spatially very close to si.

The minimum repetition interval is chosen as the patch size (Section 4.2) of the current scale. This step removes a large number of insignificant features, which increases the efficiency of the algorithm. All the remaining features in Sn have strong and reliable matches along with their corresponding similarity scores. We denote the jth strong match of si as sift_match_ij and its score as sift_score_ij.
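The sketch below shows one way to implement this filter, reusing similarity_score from the previous sketch; the exact "spatially very close" threshold is not specified in the text, so the min_interval/2 value here is our assumption, as are the array names and shapes.

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_false_matches(features, descriptors, positions,
                         num_neighbor=5, sim_thresh=0.7, min_interval=15.0):
    """Apply the three removal rules above. `descriptors` is (N, 128),
    `positions` is (N, 2); both are our own conventions."""
    tree = cKDTree(descriptors)
    kept = []
    for i, feat in enumerate(features):
        _, nn = tree.query(descriptors[i], k=num_neighbor + 1)
        nn = [j for j in np.atleast_1d(nn) if j != i]           # drop the self-match
        scores = [similarity_score(feat, features[j]) for j in nn]
        dists = [np.linalg.norm(positions[i] - positions[j]) for j in nn]
        if max(scores) <= sim_thresh:
            continue                                            # rule 1: no strong match at all
        if sum(d < min_interval for d in dists) >= 2:
            continue                                            # rule 2: trivial, small-interval repetition
        if any(d < min_interval / 2 for d in dists):            # rule 3: a neighbor almost on top of s_i (threshold assumed)
            continue
        kept.append(i)
    return kept
```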

Images with reliefs are noisy due to their textures. Although strict thresholds are used for the similarity scores, we want to further increase the reliability of the matches. We therefore apply a higher-level matching after SIFT feature matching to remove the remaining false matches. In the next step of our algorithm, we introduce a higher-level feature matching technique that boosts the robustness and reliability of the feature matches found in this step.

3.2 Pairwise Patch Matching
Matching a set of features has proved to be more reliable than matching a single feature. The next higher level of SIFT matching is matching a set of SIFT features that are spatially close to each other in the image space. We consider a set of overlapping image patches P = {pc}, where each patch pc is of size τn × τn (τ1 ∼ 15) and is centered at a regular interval in the image. Let sc = {si} be the set of SIFT descriptors lying within the image patch pc. We now present our patch-match algorithm for finding matches for each patch in P.

For each image patch pc ∈ P, we find all the patches that can possibly match pc. To find the candidate patches, we use sift_match_ij for all si ∈ sc. Each candidate patch is centered such that the spatial positions of si and sj are the same in their corresponding patches. We define matching_score(pi, pj) using the spatial configuration of the matching SIFT features in pi and pj. To provide flexibility, we divide each patch into 2×2 cells and use soft-binning for each matching SIFT feature: we compute the distance of each matching SIFT feature in the patch to the centers of the 4 cells and give it a weight wi = 1/(1 + di), where di is the spatial Euclidean distance from the center of the ith cell. After considering all the matching SIFT features, we obtain a pair of vectors vi and vj of size 4 × 1. For a pair of patches, the matching distance is defined as the L1 norm of the difference vector of vi and vj, so matching_score = 1 − matching_distance. Two patches have a strong match if their matching_score is greater than a threshold (0.7 in our experiments).

Algorithm 1 Patch-Match Algorithm

1: patch_match ← φ
2: for all pc ∈ P do
3:   patch_match(pc) ← φ
4:   pos_match(pc) ← φ
5:   for all si ∈ sc do
6:     p ← patch corresponding to sift_match_ij
7:     pos_match(pc) ← pos_match(pc) + p
8:   end for
9:   for all pos_match(pi) ∈ pos_match(pc) do
10:    score(pc) ← matching_score(pc, pos_match(pi))
11:    if score(pc) ≥ threshold then
12:      patch_match(pc) ← patch_match(pc) + pos_match(pi)
13:    end if
14:  end for
15:  patch_match ← patch_match + patch_match(pc)
16: end for

After this step, patch_match(pc) stores the positions of all the matching patches of pc along with their matching scores. The patch-match algorithm proposed above further removes features that are not matched in groups: we remove from P all the patches that do not have any strongly matching patch. The reliability of the retained features is thus increased after the patch-match algorithm.

4. GROUPING PATCHES
The pairwise patch matches found in the previous section must be grouped together to accomplish our goal of identifying repetitive patterns. Before grouping patches, we want to remove overlapping patches that have strong matches to the same patch. Repetitions in reliefs are not identical, and hence there could be multiple positions at which the same matching patch is centered. To find a single patch out of multiple overlapping patches, we follow a variant of the non-maximum suppression technique.


For each patch match, we have a matching score that is the confidence of the match between the two patches. We say that two patches are overlapping if the Euclidean distance between their centers is less than the patch width τn, where n is the current level in the scale-space pyramid. Among all overlapping matching patches, we retain only the patch with the maximum matching score and discard the rest. We also experimented with taking the average position of all the overlapping patches, but the resulting patch often gets misplaced, as the average patch does not correspond to a valid patch. Hence we use the former approach.
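A minimal sketch of this score-based suppression is given below, assuming each candidate is represented by its center coordinates and its matching score; the greedy highest-score-first ordering is our choice of implementation.

```python
import numpy as np

def suppress_overlapping_patches(centers, scores, tau):
    """Keep, among mutually overlapping patches (center distance < tau),
    only the one with the highest matching score."""
    order = np.argsort(scores)[::-1]                  # strongest matches first
    centers = np.asarray(centers, dtype=float)
    kept = []
    for i in order:
        if all(np.linalg.norm(centers[i] - centers[j]) >= tau for j in kept):
            kept.append(i)
    return kept                                       # indices of the retained patches
```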

We consider grouping of patches as the next higher level of patch matching. In this section, we propose a grouping algorithm that tries to group neighboring patches based on the configuration and confidence of their individual matching patches. If a region repeats at two places, then its sub-regions should also repeat in the same spatial configuration at both places. Following this intuition, we design the grouping algorithm as follows.

We consider two patches to be neighboring patches if the Euclidean distance between their centers is less than α × τn (α = 1.2 in all experiments). This is similar to the 8-neighborhood in pixel space. For grouping patches, we find the neighboring patches neighbor(pc) for each patch pc ∈ P. From the patch-match algorithm (Algorithm 1), we know that both pc and a neighboring patch pi ∈ neighbor(pc) have some matching patches with strong matching scores. If a pair of matching patches of pc and pi has a neighborhood configuration similar to that of pc and pi, then the confidence of the matches is further increased. As the spatial distance between pc and pi is small, there is a high possibility that both patches belong to a single repetitive unit. We exploit this fact to design the patch grouping algorithm (Algorithm 2).

Algorithm 2 Patch Grouping Algorithm

// All considered patches ∈ P
1: for all pc ∈ P do
2:   neighbor(pc) ← neighboring patches of pc
3:   for all pi ∈ neighbor(pc) do
4:     for all pair ∈ (patch_match(pc), patch_match(pi)) do
5:       if config_matches({pc, pi}, {pair}) then
6:         join centers of pc and pi
7:         join centers of patches in pair
8:       end if
9:     end for
10:  end for
11:  if pc is not joined to any patch then
12:    patch_match ← patch_match − pc
13:  end if
14: end for

In reliefs, the individual repetitive units can have small variations in shape and structure, so the neighborhood property is defined to trade off robustness against accuracy. Consider two pairs of patches (pa, pb) and (pam, pbm), where pa and pb are neighboring patches and pam and pbm are their respective matching patches. We consider two criteria for the pairs of patches to have a matching configuration: first, the spatial distance between the neighboring patches, and second, the relative spatial arrangement of the neighboring patches in the image space. We keep a relaxed threshold for both criteria, which provides robustness to the grouping, whereas correctness has already been verified at each level before this step. If the absolute difference between dist(pa, pb) and dist(pam, pbm) (dist(·,·) is the Euclidean distance) is less than a threshold (∼ 5 pixels), then the pairs satisfy the first criterion. For the spatial orientation, we find the angle made by the line joining the centers of each pair with the horizontal axis. If the absolute difference between the angles is less than a threshold (∼ 35° to 45°), then the pairs satisfy the second criterion. We say that config_matches({pa, pb}, {pam, pbm}) is true if both criteria are satisfied. After joining the centers of the grouped patches, we get a connectivity graph (Figure 2(d)) in the image space, where the nodes are the patch centers and an edge denotes that two patches have a high chance of belonging to a single repetitive unit.

Merging Results of all Scales: A region of the graph with high connectivity among the neighboring patches corresponds to a strongly repeating region, but we cannot guarantee that a region with low or no connectivity is not a repeating unit. All the computation is done at each scale of the scale-space pyramid, and each scale can detect different patches. To ensure completeness, we need to merge the output of all the scales. We have outputs in the form of matching patches and connectivity graphs at each scale.

To merge the results from different scales, we create a score image from the connectivity graph at each scale. The score image has a score (between 0 and 1) at each pixel, and the score denotes the confidence of that pixel belonging to a repetitive pattern. While joining the patches in Algorithm 2, we removed all the patches that are not joined to any neighboring patch, so every patch in patch_match(pc) is joined to a valid neighboring patch. We create a score_patch(pc) of size τn × τn with each value set to the maximum score among all the matches patch_match(pc) of a patch pc. A Gaussian mask with σn = τn/3 is applied to give more weight to the center of the patch. The score_patch(pc) for pc is then added to the score image at the corresponding location, and we add the score patches for all pc ∈ P to the score image at the current level. After repeating this for all scales, we have a score image at every scale. Each image is then mapped to the image at the highest scale of the pyramid, and for each pixel we take the maximum score among all the score images. We also merge patch_match(pc) for all patches at all scales into the patch_match(pc) of the highest scale. After merging the results at all scales, we create a repetition field rep_field in the image space, where rep_field(x, y) stores the patch_match(pc) information along with the matching scores for the patch pc centered at (x, y).

5. DETECTION AND SEGMENTATION OF REPETITIVE PATTERNS

After grouping patches as described above, we have the following information for the given input image: a score image in which each pixel gives the confidence of belonging to a repetitive pattern, and, for each pixel in the image space, rep_field, which stores all the matching pixel positions and scores. To detect the repetitive patterns, we segment the score image using the watershed segmentation technique [11]. We have no prior knowledge about the number of repetitive elements in the image, hence we have to use an unsupervised approach. Watershed segmentation is an automatic segmentation that considers a topographic representation of the image intensity.


Figure 3: Example of an approximately repeating structure. (a) Original, (b) Ground truth, (c) Our output.

Intuitively, an infinite source of water is kept at the lowest basin, and watersheds or dams are then formed to prevent water from flowing from one catchment basin to another.

In our score image, matches with a high matching score have high intensity values, and the absence of a match corresponds to a zero intensity value. If we invert the score image, regions with high matching scores correspond to catchment basins, and hence the watershed algorithm can be applied effectively. However, the watershed algorithm often gives an over-segmented output: the inverted score image forms a large number of catchment basins as a consequence of the max operation used while merging the scales, and pre-processing the image with smoothing filters also led to over-segmentation. So we first convert the gray-scale score image into a binary image by keeping a low threshold (between 0.01 and 0.15) and then convert it back to a gray-scale image using the Euclidean distance transform of the inverted image. This reduces the over-segmentation to a large extent and outputs larger segments with unique labels corresponding to possible repetitive regions. We then use a label merging algorithm to get the correct repetitive regions. Each watershed pixel separates two regions with labels l1 and l2. Given the repetition field rep_field, we can say that if no pixel in l1 has a repeating pixel in l2, then both l1 and l2 must constitute the same label. If more than β (β = 10 in our experiments) watershed pixels satisfy this constraint, we merge l1 and l2 into one label.
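A sketch of this pre-processing and segmentation step is shown below, using SciPy's distance transform and scikit-image's watershed; the marker generation via connected components is our own choice, since the paper does not specify how seeds are obtained.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_score_image(score_image, thresh=0.05):
    """Binarize the merged score image with a low threshold (0.01-0.15 in
    the paper), rebuild a gray-scale relief with the Euclidean distance
    transform of the inverted binary image, and run watershed on it."""
    binary = score_image > thresh
    # repetitive regions become zero-valued basins; the background rises away from them
    relief = ndi.distance_transform_edt(~binary)
    markers, _ = ndi.label(binary)            # one seed per connected repetitive blob (assumption)
    labels = watershed(relief, markers)
    return labels
```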

After appropriate merging of the regions, we get possible repetitive regions associated with unique labels. To detect the repetitive pattern, we need to find correspondences between the repeating elements. For each pixel in a label, we find the labels corresponding to its matching pixels using rep_field. A large number of correspondences between two labels implies that both labels belong to the same repetitive pattern. After finding the pairwise correspondences, we group the regions to get the correct repetitive patterns along with their repetitive elements.
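This final grouping can be sketched with a simple union-find over label pairs, as below; rep_field is assumed to be callable and to return the matching pixel positions of a pixel, and the minimum number of correspondences needed to link two labels is a parameter we expose rather than a value taken from the paper.

```python
import numpy as np

def group_labels(labels, rep_field, min_correspondences=10):
    """Group segmented labels whose pixels repeat into one another."""
    parent = {int(l): int(l) for l in np.unique(labels) if l > 0}

    def find(l):                              # union-find with path compression
        while parent[l] != l:
            parent[l] = parent[parent[l]]
            l = parent[l]
        return l

    counts = {}
    h, w = labels.shape
    for y in range(h):
        for x in range(w):
            l1 = int(labels[y, x])
            if l1 <= 0:
                continue
            for mx, my in rep_field(x, y):    # matching pixel positions of (x, y)
                l2 = int(labels[my, mx])
                if l2 > 0 and l2 != l1:
                    key = (min(l1, l2), max(l1, l2))
                    counts[key] = counts.get(key, 0) + 1

    for (l1, l2), c in counts.items():
        if c >= min_correspondences:          # enough pixel-level correspondences
            parent[find(l1)] = find(l2)       # same repetitive pattern
    return {l: find(l) for l in parent}
```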

6. RESULTS AND DISCUSSIONS
We have tested our method on a PC with a 2 GHz CPU and 4 GB RAM. The Matlab implementation of our method has an average run time of 3.8 minutes for a typical 500×500 image. We tested our algorithm on a collection of various images, including images of repeating reliefs, urban facades, and normal/near-regular texture (NRT) images. The relief images include bas-reliefs (low reliefs), high reliefs, and sunken reliefs. We have used some of the facade images from the ZuBuD database [16]. The regular texture images are taken from the PSU normal/near-regular texture collection, and the collection also includes images from Flickr. We show results on representative images from the collection of reliefs, facades, and NRT images.

Our algorithm gives correct detection results for reliefs with highly similar repetitions in almost all the images.

Table 1: Repetition detection and segmentation performance of our algorithm

Image Type    #Images   Avg. Accuracy   Avg. Recall
Reliefs       53        89.66%          79.77%
Facades       22        85.3%           80.1%
Normal-NRT    13        88.1%           58.3%

When the repetition is approximate (see Fig. 3), the algorithm robustly detects the repetitive pattern. In reliefs, only particular parts of an object may repeat; in those cases, the algorithm properly segments the parts that belong to the repetitive pattern, such as in Figures 4.1 and 4.5, where the heads are approximately repeating. Figures 4.2 and 4.3 show the robustness of our approach for irregular repetitions. In Fig. 4.2, the partial elephant is grouped with the horse pattern due to its matching back, which has the same appearance as the horse's back. Fig. 4.4 shows detection of multiple irregular patterns; the red patterns are a result of matches between sky patches. In Fig. 4.6, our algorithm detects parts of the repetitive element as different patterns because of the absence of matching patches in those regions.

We prepared the ground truth for all the images by drawing an approximate border around each repetitive element. We evaluate our detection and segmentation algorithm using accuracy and recall measures: the segmentation performance is explained mainly by accuracy, and the detection performance by recall. We call a detected region a true positive (TP) if it belongs to the correct repetitive pattern. If a non-repeating region is assigned to a repetitive pattern, we call it a false positive (FP); similarly, a segmented region is a false negative (FN) if it does not belong to the correct repetitive pattern. Similar evaluation criteria are used by Baciu and Cai [2]. Table 1 shows the detection and segmentation performance of our algorithm on the collection of images, and Figure 7 shows results of our algorithm on facade and NRT images.

Limitations - Our algorithm has some limitations for robust repetition detection. It is not robust to occlusions or large projective skews because of insufficient pairwise matches. In Fig. 6(d), the top-left window is only partially detected due to occlusion by a tree. In Fig. 5, robust repetitions are detected despite significant skew, but for a relatively large projective skew like that in Fig. 6(a), our algorithm fails to find reliable matches. In images where the repetitive patterns are very close to each other, for example Fig. 6(b) and 6(c), our segmentation algorithm incorrectly merges multiple elements or splits one element into two parts, but we can still get satisfactory segmentations by tuning the threshold values.

7. CONCLUSION AND FUTURE WORK
We proposed a robust and efficient method for detecting approximate repetitions in relief images. Our algorithm outputs a labeled segmentation of the repetitive patterns by computing a convex hull of the repeating elements. We have evaluated our algorithm on images with various types of repetitions, and its robustness has also been tested on facade and near-regular-texture images. Our algorithm produces good results for repetitions with large texture variations, allows small changes in scale and shape to be matched within the same repetitive pattern, and works well for irregular and low-count repetitions.


(I) Original Images (II) Ground Truth Segmentations (III) Our Results

Figure 4: Each row shows the result of detection and segmentation on a variety of relief images. (1) shows detection in the case of approximate repetitions. (2) The elephant's back is wrongly detected as a repetition of the horse. (3) is an accurate detection. (4) The red pattern is a false positive due to the repeating sky. (5) Multiple repetitions with irregular intervals are detected. (6) As only parts of the statue are repeating, they are identified as different repeating elements.

In the future, we would like to exploit the pairwise correspondences detected by our algorithm. We believe that the detected repetitions can be used to improve 3D reconstruction from a single image. The repetitions can also help in reconstructing partially damaged structures using region growing and graphical model techniques. The detected objects could further be used to describe and retrieve similar objects from a large database of images. We would also like to develop a general repetition detection method for multiple types of images.

8. ACKNOWLEDGMENTS
This work was partly supported by the India Digital Heritage (IDH) project of the Department of Science & Technology, Govt. of India.


Figure 5: Example of the robustness of our detection and segmentation algorithm in the presence of multiple repetitive patterns and significant projective skews. Original image (left), annotated image (center), our result (right).


Figure 6: Example failure cases. (a) Large projective skews; (b), (c) close repetitive patterns; and (d) partial detection due to occlusion.

Figure 7: Examples of our detection performance on facade and NRT images.

9. REFERENCES
[1] R. Arandjelovic and A. Zisserman. Three things everyone should know to improve object retrieval. In CVPR, 2012.
[2] G. Baciu and Y. Cai. Higher level segmentation: Detecting and grouping of invariant repetitive patterns. In CVPR, 2012.
[3] C. Barnes, E. Shechtman, D. B. Goldman, and A. Finkelstein. The generalized PatchMatch correspondence algorithm. In ECCV, 2010.
[4] G. Carneiro and A. Jepson. Flexible spatial configuration of local image features. PAMI, 2007.
[5] P. Doubek, J. Matas, M. Perdoch, and O. Chum. Image matching and retrieval by repetitive patterns. In ICPR, 2010.
[6] T. Korah and C. Rasmussen. 2D lattice extraction from structured environments. In ICIP, 2007.
[7] T. K. Leung and J. Malik. Detecting, localizing and grouping repeated scene elements from an image. In ECCV, 1996.
[8] H.-C. Lin, L.-L. Wang, and S.-N. Yang. Extracting periodicity of a regular texture based on autocorrelation functions. Pattern Recognition Letters, 1997.
[9] Y. Liu, R. Collins, and Y. Tsin. A computational model for periodic pattern perception based on frieze and wallpaper groups. PAMI, 2004.
[10] G. Loy and J.-O. Eklundh. Detecting symmetry and symmetric constellations of features. In ECCV, 2006.
[11] F. Meyer. Topographic distance and watershed lines. Signal Processing, 1994.
[12] P. Muller, G. Zeng, P. Wonka, and L. Van Gool. Image-based procedural modeling of facades. ACM Trans. Graph., 2007.
[13] E. Ng and N. Kingsbury. Matching of interest point groups with pairwise spatial constraints. In ICIP, 2010.
[14] M. Park, K. Brocklehurst, R. T. Collins, and Y. Liu. Translation-symmetry-based perceptual grouping with applications to urban scenes. In ACCV, 2011.
[15] F. Schaffalitzky and A. Zisserman. Geometric grouping of repeated elements within images. In BMVC, 1998.

[16] H. Shao, T. Svoboda, and L. Van Gool. ZuBuD - Zurich buildings database for image based recognition. Technical report, Swiss Federal Institute of Technology, 2004.

[17] S. Wenzel, M. Drauschke, and W. Forstner. Detection and description of repeated structures in rectified facade images. Photogrammetrie, Fernerkundung, Geoinformation (PFG), 2007.
[18] C. Wu, J.-M. Frahm, and M. Pollefeys. Detecting large repetitive structures with salient boundaries. In ECCV, 2010.
[19] C. Wu, J.-M. Frahm, and M. Pollefeys. Repetition-based dense single-view reconstruction. In CVPR, 2011.
[20] P. Zhao and L. Quan. Translation symmetry detection in a fronto-parallel view. In CVPR, 2011.
[21] P. Zhao, L. Yang, H. Zhang, and L. Quan. Per-pixel translational symmetry detection, optimization, and segmentation. In CVPR, 2012.