Proceedings of the 2012 International Conference on Machine Learning and Cybernetics, Xian, 15-17 July, 2012
FAST EXEMPLAR-BASED IMAGE INPAINTING APPROACH
HUI-QIN WANG¹, QING CHEN¹, CHENG-HSIUNG HSIEH², PENG YU¹
¹Information & Control Engineering College, Xi'an University of Architecture & Technology, Xi'an 710055, China
²Dept. of Computer Science & Information Engineering, Chaoyang University of Technology, Taichung 41349, Taiwan
E-MAIL: [email protected], [email protected]
Abstract:
This paper presents a way to improve the computation efficiency of the exemplar-based inpainting approach in [4]. Note that the inpainting approach in [4] has computational redundancy in searching for optimal patches in the source region and in updating the fill front. A scheme to reduce the source region and a modified scheme to update the fill front are proposed. With the two schemes, better computation efficiency is expected. Several examples are given to justify the proposed fast inpainting approach and to compare it with the approach in [4]. The results indicate that the proposed approach has better computation efficiency than the approach in [4], as expected. Interestingly, better visual quality of the inpainted images is achieved by the proposed approach as well.
Keywords:
Image Inpainting; Exemplar-based Inpainting; Fill Front Update; Reduced Source Region; Inpainting Efficiency
1. Introduction
Image inpainting is the process of reconstructing lost or deteriorated parts of images. It can easily be extended to remove undesired objects. The main objective of image inpainting is to maintain reasonable visual perception in the inpainted image. Image inpainting has been widely applied to object removal, scratch or text removal in pictures, error concealment in video transmission, and so forth.
A pioneering work on image inpainting was reported in [1]. Based on the theory of partial differential equations, a diffusion-based inpainting approach was presented in [1] where the region to be filled was inpainted on a pixel-by-pixel basis. Another popular diffusion-based inpainting approach was shown in [2], which was based on total variation. A further work on total variation was given in [3], where a curvature-driven diffusion equation was developed. In light of these approaches, several related inpainting schemes followed. Though the diffusion-based inpainting approaches are good for low-activity regions and scratches, they generally suffer from over-smoothing the filled region when a large missing region is under consideration. To deal with this problem, the exemplar-based inpainting approach was motivated.
The corresponding author
978-1-4673-1487-9/12/$31.00 ©2012 IEEE
A pioneering exemplar-based inpainting approach was reported in [4-5], where missing regions were inpainted on a patch-by-patch basis. In [4-5], both structure and texture were considered through a confidence term and a data term in the calculation of patch priority. The inpainting approach in [4] gave impressive results, especially in the case of large missing regions. The exemplar-based inpainting approach has therefore drawn more and more attention since then, and many researchers have become involved in the field. However, one of the problems with the exemplar-based inpainting approach is that it is computationally intensive. In this paper, a fast exemplar-based image inpainting approach is proposed to improve the computation efficiency.
This paper is organized as follows: Section 2 briefly reviews the pioneering exemplar-based image inpainting approach reported in [4]. Section 3 presents an approach to improve the computation efficiency of [4]. The proposed approach is called the fast exemplar-based image inpainting (FEII) approach. Section 4 provides examples to justify the proposed FEII approach and compares it with the approach in [4]. Section 5 concludes the paper.
2. Review of the approach in [4]
In this section, the exemplar-based image inpainting approach in [4] is briefly reviewed. For details, one may consult [4]. For ease of understanding, we adopt the same notation used in [4]. Ω denotes the region to be filled, i.e., the target region, and δΩ denotes the contour of Ω, which moves inward as the inpainting algorithm proceeds. δΩ is called the "fill front" as well. The source region is denoted by Φ, which provides the patches used in the filling process. Suppose that a pixel p ∈ δΩ and that the square patch Ψ_p is centered at pixel p; the filling process in [4]
is summarized in the following steps.
Step 1. Calculate the priority for each pixel p ∈ δΩ as
    P(p) = C(p) · D(p)    (1)
where C(p) is the confidence term and D(p) is the data term. They are, respectively, defined as
    C(p) = Σ_{q ∈ Ψ_p ∩ Φ} C(q) / |Ψ_p|    (2)
and
    D(p) = |∇I_p^⊥ · n_p| / α    (3)
where |Ψ_p| is the area of Ψ_p, α is a normalization factor equal to 255 for an 8-bit grey-level image, ∇I_p^⊥ is the isophote, and n_p is the unit vector orthogonal to the front δΩ at pixel p.
Step 2. Find the patch Ψ_{p*} with the highest P(p), that is,
    Ψ_{p*} = arg max_{p ∈ δΩ} P(p)    (4)
Step 3. Search for the optimal exemplar
    Ψ_{q*} = arg min_{Ψ_q ∈ Φ} d(Ψ_{p*}, Ψ_q)    (5)
where d(·, ·) is the distance function that calculates the sum of squared differences between the two involved patches.
Step 4. Replace Ψ_{p*} with Ψ_{q*}.
Step 5. Update the confidence term C(p) for every p ∈ Ψ_{p*} ∩ Ω.
Step 6. Repeat the above steps until Ω is empty.
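The priority terms of Eqs. (1)-(3) above can be sketched in a few lines of NumPy. The following is a minimal illustration, not the authors' code; in particular, the front normal n_p is approximated here by the normalized gradient of the target mask, which is an assumption of this sketch rather than the construction in [4].

```python
import numpy as np

def patch_bounds(p, radius, shape):
    """Clip the square window Psi_p centred at p to the image bounds."""
    r, c = p
    return (max(r - radius, 0), min(r + radius + 1, shape[0]),
            max(c - radius, 0), min(c + radius + 1, shape[1]))

def confidence_term(conf, mask, p, radius=2):
    """Eq. (2): sum of C(q) over already-filled pixels of Psi_p, over |Psi_p|.

    `conf` holds the per-pixel confidence C(q); `mask` is True on Omega."""
    r0, r1, c0, c1 = patch_bounds(p, radius, mask.shape)
    filled = ~mask[r0:r1, c0:c1]              # pixels in the source region
    area = (2 * radius + 1) ** 2              # |Psi_p|
    return conf[r0:r1, c0:c1][filled].sum() / area

def data_term(img, mask, p, alpha=255.0):
    """Eq. (3): |grad(I_p)^perp . n_p| / alpha.

    The isophote is the image gradient rotated by 90 degrees; n_p is
    approximated by the normalized gradient of the (float) mask."""
    gy, gx = np.gradient(img.astype(float))
    my, mx = np.gradient(mask.astype(float))
    iso = np.array([-gx[p], gy[p]])           # rotate (gy, gx) by 90 degrees
    n = np.array([my[p], mx[p]])
    n = n / (np.linalg.norm(n) + 1e-12)
    return abs(float(iso @ n)) / alpha

def priority(img, conf, mask, p, radius=2):
    """Eq. (1): P(p) = C(p) * D(p)."""
    return confidence_term(conf, mask, p, radius) * data_term(img, mask, p)
```

For example, with a 9×9 image whose right four columns are the target region and whose source pixels carry confidence 1, a fill-front pixel with a 5×5 patch containing 10 source pixels gets C(p) = 10/25 = 0.4, matching Eq. (2).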
3. The proposed FEII approach
The filling process of [4] described in the previous section suggests computational redundancy in Step 3 and Step 5 as the fill front δΩ moves. In Step 3, the patches in the source region Φ may be similar. Similar distances will be found by Eq. (5) if similar patches are compared. This implies that the computation load can be reduced if some similar patches are skipped in Step 3. As for Step 5, note that only the patch just filled changes its confidence term and P(p). Consequently, there is no need to recalculate C(p) for those pixels that remain intact. In other words, the recalculation can be saved. Based on these observations, a fast exemplar-based image inpainting (FEII) approach is proposed in which two schemes are employed: one to reduce the source region and one to modify the fill front updating rule. In this way, an improvement on the computation efficiency of [4] is expected.
3.1. Reduced source region
As described previously, similar patches may be found in the source region Φ and may waste computation in Step 3. This section provides a way to identify similar patches. Take the 256 grey-level image Lena as an example, shown in Fig. 1, where the patch area is 5 × 5. The marked patches are similar. One may use one of them to represent the whole group of similar patches. Thus the source region can be reduced.
With the reduced source region, the computation to search for the optimal patch in Step 3 can be alleviated. One way to identify similar patches is to calculate the mean squared error (MSE) between patches. When the MSE defined in Eq. (6) is less than or equal to a given threshold, say δ, the two patches are considered similar. Otherwise, they are different.
    MSE = (1 / (N × N)) Σ_{m=1}^{N} Σ_{n=1}^{N} [Ψ_i(m, n) − Ψ_j(m, n)]²    (6)
where N × N is the patch area and Ψ_i, Ψ_j (i ≠ j) are patches in the source region Φ.
The MSE among the three patches on the left side of Fig. 1 is around 1.5, and about 3.7 for the patches on the right side. This suggests that using the MSE to identify similar patches is appropriate.
Based on the MSE, similar patches are identified and one patch is used to represent each group of similar patches. This reduces the number of source patches searched to find the optimal exemplar. The reduced source region is denoted as Φ_r.
Fig. 1. An example for similar patches
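The MSE-based reduction of Section 3.1 can be sketched as follows. This is an illustrative implementation under assumptions of this sketch: the grouping is greedy (a patch is dropped as soon as it is within δ of an already-kept representative), and the threshold value in the example is arbitrary; the paper does not specify either choice.

```python
import numpy as np

def patch_mse(pi, pj):
    """Eq. (6): mean squared error between two equal-sized N x N patches."""
    d = pi.astype(float) - pj.astype(float)
    return (d * d).mean()

def reduce_source_region(patches, delta=4.0):
    """Keep one representative per group of similar source patches.

    `patches` is a list of N x N arrays drawn from the source region Phi.
    Any patch within `delta` MSE of a kept representative is skipped; the
    survivors form the reduced source region Phi_r searched in Step 3."""
    representatives = []
    for p in patches:
        if all(patch_mse(p, r) > delta for r in representatives):
            representatives.append(p)
    return representatives
```

With δ = 4, a constant-0 patch and a constant-1 patch (MSE = 1) collapse into one representative, while a constant-10 patch (MSE = 100 against the first) is kept separately.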
3.2. Modified fill front updating scheme
In the filling process of [4], the fill front is updated when Ψ_{p*} is replaced with Ψ_{q*}, as described in Step 5. Then the priority is recalculated to find the next patch to be filled, and the replacement follows. Note that this recalculation wastes computation and is not required, since only a small part of the fill front is altered while the rest remains intact. To improve the computation efficiency, a modified fill front updating rule is presented in this section and described in the following.
Let the current fill front be denoted as δΩ_c. All patches Ψ_p related to δΩ_c are filled one by one according to P(p). During this filling pass, no recalculation of the confidence term is performed. When all Ψ_p have been filled, δΩ_c moves inwards. Then the priority P(p) is calculated for each pixel in the updated fill front δΩ_c. Consequently, the computation efficiency is improved.
Fig. 2 indicates the difference between the fill fronts of [4] and our modified scheme. Fig. 2(a) shows the original fill front. Fig. 2(b) is the fill front of [4] at some given filling iteration, and Fig. 2(c) is the fill front produced by the proposed scheme. Obviously, the fill front in Fig. 2(c) keeps a similar shape as δΩ_c moves inward, while the one in Fig. 2(b) does not.
Fig. 2. The difference of fill fronts: (a) original fill front; (b) fill front of [4]; (c) fill front of the proposed scheme
3.3. Implementation steps

With the reduced source region and the modified fill front updating scheme, the implementation steps for the proposed FEII approach are given as follows:
Step 1. Calculate the priority P(p) for each pixel p ∈ δΩ.
Step 2. Sort P(p) in descending order.
Step 3. By the sorted P(p), find the related patch Ψ_{p*} for each pixel p ∈ δΩ.
Step 4. With the reduced source region Φ_r, search for the optimal exemplar Ψ_{q*} for each Ψ_{p*}.
Step 5. Replace Ψ_{p*} with its corresponding Ψ_{q*} for each pixel p ∈ δΩ.
Step 6. Update the confidence term C(p) for every p ∈ Ψ_{p*} ∩ Ω.
Step 7. Repeat the above steps until Ω is empty.
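The outer loop implied by the steps above, i.e., computing and sorting priorities once per fill front and then filling every patch on that front before re-sorting, can be sketched as a skeleton. The hook names `priority` and `fill_patch` are hypothetical stand-ins for Eqs. (1)-(5); they are not from the paper.

```python
import numpy as np

def fill_front(mask):
    """Pixels of the target region Omega that touch the source region
    through a 4-neighbourhood; this set is the fill front delta-Omega."""
    front = np.zeros_like(mask)
    src = ~mask
    front[1:, :]  |= mask[1:, :]  & src[:-1, :]
    front[:-1, :] |= mask[:-1, :] & src[1:, :]
    front[:, 1:]  |= mask[:, 1:]  & src[:, :-1]
    front[:, :-1] |= mask[:, :-1] & src[:, 1:]
    return front

def feii_fill(img, mask, priority, fill_patch):
    """Skeleton of Section 3.3: one priority sort per fill front.

    `priority(img, mask, p)` scores a front pixel (Step 1) and
    `fill_patch(img, mask, p)` performs Steps 3-6 for one patch, clearing
    the mask around p.  Unlike [4], priorities are NOT recomputed after
    each patch replacement; the front is processed as a whole, then the
    next front is extracted.  Returns the number of front passes."""
    passes = 0
    while mask.any():
        front = sorted((tuple(p) for p in np.argwhere(fill_front(mask))),
                       key=lambda p: priority(img, mask, p), reverse=True)
        for p in front:
            if mask[p]:                   # may already be cleared by a patch
                fill_patch(img, mask, p)
        passes += 1
    return passes
```

With a trivial `fill_patch` that clears a single pixel, a 3 × 3 hole is finished in two passes: the 8-pixel ring first, then the centre, mirroring how the front in Fig. 2(c) keeps its shape as it moves inward.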
4. Results and Discussion
In this section, several examples are given to verify the proposed FEII approach, and a comparison is made with the approach reported in [4]. The algorithms used in the simulation are implemented with OpenCV (Open Source Computer Vision Library), version 2.3.1, released in August 2011, under Visual Studio 2010 [6].
First, the image "bungee jumper" of size 205×307 is used as an example to illustrate the difference between the filling processes of the approach in [4] and the proposed FEII approach. Eight snapshots of each approach are shown in Fig. 3 and Fig. 4, respectively. As indicated in Fig. 3 and Fig. 4, the fill front of the proposed FEII approach moves inward only when all patches Ψ_{p*} on it have been filled. Consequently, the fill front keeps a similar contour as it moves inward. The approach in [4], however, does not maintain the contour.
Besides, the proposed FEII approach takes about 2 seconds to finish filling the region Ω, while the approach in [4] uses about 24 seconds. Obviously, the proposed FEII approach outperforms the approach in [4] in terms of computation efficiency.
Fig. 3. The filling process of the approach in [4]
Fig. 4. The filling process of the proposed FEII approach
Second, a set of 512×512 Lena images with different data loss percentages is given to justify the proposed FEII approach. The set of Lena images is denoted as {Lena1 (2.16%), Lena2 (5.06%), Lena3 (7.69%), Lena4 (8.86%), Lena5 (13.27%), Lena6 (20.80%)}, where the percentage in parentheses is the data loss percentage. The set of Lena images is shown in the first column of Table 1. The inpainted images by the approach in [4] and by the proposed FEII approach are given in the second and third columns of Table 1, respectively.
From the inpainted images shown in Table 1, the visual quality favours the proposed FEII approach. Take Lena2 and Lena3 as examples. The inpainted Lena2 and Lena3 from the approach in [4] still have some parts replaced by inappropriate source patches. On the other hand, the proposed FEII approach inpaints the damaged parts of Lena2 and Lena3 in a better way. For the inpainted images Lena4, Lena5, and Lena6, similar results can be found. Consequently, we may say that better inpainting performance is achieved by the proposed approach when compared with the approach in [4] for the given examples.
Table 2 shows the time spent to finish filling for the set of Lena images with the approach in [4] and with the proposed FEII approach. In Table 2, the ratio shows how many times faster the proposed FEII approach is than the approach in [4]. The average ratio is about 3.5. This justifies that the proposed FEII approach is of better computation efficiency, as expected. Together with the results shown in Table 1, the proposed FEII approach has both better computation efficiency and better visual quality in comparison with the approach in [4].
TABLE 1. COMPARISON OF VISUAL QUALITY
(Rows Lena1-Lena6: damaged image, result of the approach in [4], result of the proposed FEII approach; images not reproduced here.)

TABLE 2. COMPARISON OF INPAINTING TIME

Image | Time spent (sec) by the approach in [4] | Time spent (sec) by the proposed FEII approach | Ratio
Lena1 | 30.563  | 8.142  | 3.75
Lena2 | 61.287  | 17.058 | 3.59
Lena3 | 105.241 | 29.785 | 3.53
Lena4 | 103.468 | 31.256 | 3.31
Lena5 | 171.354 | 51.945 | 3.30
Lena6 | 321.348 | 92.176 | 3.49

5. Conclusion
This paper presented a fast exemplar-based image inpainting (FEII) approach to improve the computation efficiency of the approach in [4]. The proposed FEII approach was motivated by two observations. First, a full search in the source region for optimal patches wastes computation on similar patches. Second, redundant recalculation is performed in updating the patch priority. To improve the computation efficiency, a reduced source region and a modified fill front updating scheme are employed in the proposed FEII approach. Several examples were given to justify the proposed FEII approach. The results showed that the proposed FEII approach was about 3.5 times faster than the approach in [4] in inpainting time for the given examples. Moreover, better visual quality was obtained by the proposed FEII approach as well. Consequently, the proposed FEII approach achieved the goal of efficient inpainting while better inpainted images were produced in the given examples.
Acknowledgement
This work was supported by the Research Improvement Support Program of the Chinese Ministry of Education for returned overseas researchers; the Science and Technology Research and Development Program of the Science and Technology Office of Shaanxi Province (2011K17-04-01); and the Key Great Science and Technology Innovation Fund of Xi'an University of Architecture & Technology (ZC1103).
References
[1] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, "Image Inpainting," SIGGRAPH, pp. 417-424, 2000.
[2] T. Chan and J. Shen, "Local Inpainting Models and TV Inpainting," SIAM Journal on Applied Mathematics, Vol. 62, No. 3, pp. 1019-1043, 2001.
[3] T. Chan and J. Shen, "Non-Texture Inpainting by Curvature-Driven Diffusions," Journal of Visual Communication and Image Representation, Vol. 12, No. 4, pp. 436-449, 2001.
[4] A. Criminisi, P. Perez, and K. Toyama, "Object Removal by Exemplar-Based Image Inpainting," International Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2003.
[5] A. Criminisi, P. Perez, and K. Toyama, "Region Filling and Object Removal by Exemplar-Based Image Inpainting," IEEE Transactions on Image Processing, Vol. 13, Issue 9, pp. 1200-1212, 2004.
[6] Available at http://sourceforge.net/projects/opencvlibrary/files/opencv-win/2.3.1/