Digital Makeup from Internet Images

Asad Khan, Yudong Guo, Ligang Liu

Graphics and Geometric Computing Lab, School of Mathematical Sciences, University of Science and Technology of China, Hefei, Anhui, 230026, PR China

Abstract

We present a novel approach to color transfer between images by exploring their high-level semantic information. First, we set up a database consisting of images downloaded from the Internet, which are segmented automatically using matting techniques. We then extract the image foregrounds from both the source image and multiple target images. Using image matting algorithms, the system extracts semantic regions such as faces, lips, teeth, eyes, and eyebrows from the extracted foreground of the source image. The color is then transferred between corresponding parts that carry the same semantic information, and the color-transferred result is obtained by seamlessly compositing the different parts together using alpha blending. In the final step, we present an efficient color consistency method to optimize the colors of a collection of images showing a common scene. The main advantage of our method over existing techniques is that it does not need face matching, and it can use more than one target image. It is not restricted to head-shot images, as we can also change color styles in the wild. Moreover, our algorithm does not require the source and target images to share the same color style, pose, or image size. It is not restricted to one-to-one color transfer and can make use of multiple target images to transfer color to different parts of the source image. Compared with other approaches, our algorithm blends colors in the input data considerably better.

Keywords: Semantic information, Color transfer, Image matting, Color consistency
2000 MSC: I.4.9 [Image Processing and Computer Vision]

1. Introduction

Color transfer is an image processing method that imparts the color characteristics of a target image to a source image. Ideally, the result of a color transfer algorithm should apply the color style of the target image to the source image. A good color transfer algorithm should preserve quality both in scene details and in colors.

Reinhard et al. [1] presented a simple and potent color transfer algorithm that translates and scales an image pixel by pixel in the Lαβ color space according to the mean and standard deviation of the color values in the source and target images. Numerous procedures in the literature use probability distributions to process an image's colors [3, 11, 15] or deliver user-controllable adjustment of the colors. Among these, the latter are either restricted to local editing [34] or rely on global edit propagation [2].

Although modern image editing packages provide some color correction and tone adjustment functionality, these techniques usually require indirect user interaction [28, 32] or direct adjustment of the color balance or manipulation of the tone curve. Consequently, these interactive techniques are too tedious for large image collections. On the other hand, individual color correction is likely to produce images with inconsistent colors

Figure 1: Different results produced by our technique with certain facial styles, given their source images and a number of corresponding target images.

across the whole collection. Recently, HaCohen et al. [25] proposed a method to optimize color consistency across an image collection with respect to a reference image; it relies on recovering dense pixel correspondences across multiple images [29]. This method is computationally expensive and not ideal for processing large collections involving thousands of images. Our color consistency method is based on the matrix factorization technique proposed in [31], which is robust to outliers.

Preprint submitted to Computers & Graphics, October 18, 2016. arXiv:1610.04861v1 [cs.CV] 16 Oct 2016


Figure 2: Some results with different facial styles, obtained from a single source image and multiple target images.

In transferring color between images, we face the following challenges. First, such a technique must maintain the correspondences between meaningful image regions automatically. Second, for novice users, the pipeline should be intuitive and user friendly. Third, we need an efficient technique to optimize the color consistency of a collection of images depicting a common scene. Automatic trimap generation is a further challenge, as almost all existing techniques require the user to input a trimap manually.

In this paper, we propose a new color transfer technique for all types of images that takes advantage of high-level facial semantic information and large-scale collections of Internet photos by professional artists. Using such an algorithm, a user can easily retouch an image to achieve a compelling visual style. We present a matrix-factorization-based approach to automatically optimize color consistency across multiple images, using sparse correspondences obtained from multi-image sparse local feature matching. For rigid scenes, we can optionally leverage structure from motion (SfM). We stack the aligned pixel intensities into a vector whose size equals the number of images. Such vectors are stacked into a matrix with many missing entries. This is the observation matrix that will be factorized. Under a simple color correction model, the logarithm of this matrix satisfies a rank-two constraint under idealized conditions.

In summary, this article makes the following contributions:

• a new color transfer technique that can transfer color between local regions of the source image and multiple target images with the same facial semantic information;

• a new algorithm for the automatic generation of trimaps for efficient synthesis of each facial semantic region;

• a semantic color transfer technique that transfers color automatically;

• an efficient technique to optimize the color consistency of a collection of images depicting a common scene.

More importantly, our proposed method does not require the source and target images to share the same color style or image size, and it does not need face matching. We demonstrate high-quality color consistency results on large collections of Internet images.

2. Related Work

Color transfer is nowadays a heavily studied research area. Existing color transfer techniques can be classified into two categories: global and local algorithms.

Reinhard et al. [1] were the first to implement a color transfer method that globally transfers colors after translating the color data of the input images from the RGB color space to the decorrelated Lαβ color space. This transferred colors quickly and efficiently, and generated convincing output. The technique was further improved by Xiao et al. [14] and by Pitie et al. [15], who used a refined probabilistic model. Pitie et al. [4] extended their method to better perform non-linear adjustments of the color probability distribution between images. Similarly, Chang et al. [16, 17] suggested global color transfer by introducing perceptual color categorization for both images and video. Yang et al. [18] initiated a method for color mood transfer that preserves spatial coherence based on histogram matching. This idea was developed further by Xiao et al. [19], who addressed the problem of local fidelity and global transfer in two steps: gradient-preserving optimization and histogram matching. Wang et al. [11, 20] proposed a technique for global color-mood exchange driven by predefined and labeled color palettes and example images. Cohen et al. [21] suggested a methodology that employs color harmony rules to optimize the overall appearance after some of the colors have been altered by the user. Shapira et al. [22] suggested a solution that navigates the appearance space of the image to obtain the desired results. Furthermore, automatic methods for colorizing grayscale images from Internet examples [23] and semantic annotations [24] were introduced. In general, color transfer methods that act globally are not competent enough for accurate re-coloring of small objects or humans.

For transferring colors among desired regions only, manual approaches with user intervention were suggested by some researchers. Maslennikova et al. [9] defined a rectangular area in each input image where color transfer was desired; then, utilizing region propagation, they generated a color influence map. Pellacini et al. [10] suggested a stroke-based color-transfer technique that employs pairs of strokes to specify the corresponding regions of the target and source images.


Figure 3: In step 1, we specify the source and target images; in step 2, we obtain semantic information after face detection; in step 3, we extract the semantic information using the matting algorithm; in step 4, we separate all parts of the given semantic information; in step 5, we obtain the resulting image using alpha blending; and in the final step, we apply color consistency to obtain our result with optimized colors.


Although it is possible for users to change the color of a local region using some strokes, it may call for strenuous effort in detailed editing tasks such as oil paintings and complex images.

Recently, Wang et al. [11] proposed color theme enhancement of an image. They produce a new color-style image by using predefined color templates instead of source images; to perform decently, the method needs quite accurate user input. Color transfer methodology has also been utilized to apply colors to grayscale images. Welsh et al. [12] assigned chromaticity values by matching the luminance channels of the target and source images. Abadpour et al. [13] obtained reliable results by employing a principal component analysis method.

Moreover, some researchers have exhibited a keen interest in the transformation of colors among distinct color spaces. Color transfer warrants the use of a color space in which the major components of a color are mutually independent. Since the colors in the RGB color space are correlated, the decorrelated CIELab color space is usually employed; this requires a method to transform the color space from RGB to CIELab and vice versa. Xiao et al. [14] proposed an improved solution that circumvents the transformation between the correlated color spaces and uses translation, rotation, and scaling to transfer the colors of a target image directly in the RGB color space. Recently, Yang et al. [30] proposed a semantic color transfer system that leverages image content from the Internet.

One option for extending color constancy methods to mixed lighting is to let users segment images into regions lit by a single type of light. Image editors such as Adobe Photoshop offer selection tools to restrict the spatial extent of color correction filters. Lischinski et al. [38] show a scribble interface that can successfully correct localized color casts. We instead aim for an automatic process with less localized correction, since illumination may affect large disconnected portions of an image.

A few automatic color correction methods exist for large photo collections of rigid scenes. Garg et al. [6] observe that scene appearance often has low dimensionality and exploit that fact for color correction. Laffont et al. [26] estimate coherent intrinsic images and transfer localized illumination using the decomposed layers. Diaz and Sturm [5] perform batch radiometric calibration using an empirical prior on camera response functions [7]. Shi et al. [37] handle the effect of a nonlinear camera response using a shape prior. Kim and Pollefeys [8] introduce a decoupled scheme for radiometric calibration and vignetting correction. In contrast to [5, 6, 8, 26, 37], our method only requires sparse correspondences. Moreover, images of non-rigid scenes can be handled. For rigid scenes, we optionally use SfM for more accurate correspondences, but neither surface normals nor dense 3D models are needed. Instead of using the high-quality correspondences recovered by NRDC [25], our technique uses sparse correspondences, and it is also less sensitive to erroneous correspondences due to the underlying robust optimization framework.


Figure 4: We connect the 83 key points detected on the face in a certain order; the facial semantic information outline can then be obtained.

3. Face Semantic Analysis

3.1. Face Database

There are many attractive and artistic images on the Internet, produced by professional photographers with professional cameras. It would be interesting if ordinary people could reproduce the styles of these photos using some simple image processing operations. Therefore, we use a database to store target images and their semantic information. All the images in the database are segmented using matting techniques. We detect the key points of a human face and obtain the facial characteristics. In this article, we utilize the API provided by Face++ [33] for face detection. The landmark API detects the key points of a human face robustly and is used to locate the facial contours, facial features, and other key points. The 83 key points our approach detects on the face are depicted in Fig. 4.

3.2. Contours of Face Semantics

We can now focus on the semantic analysis of the human face. In a certain order, we connect the 83 key points detected on the face, and the facial semantic information outline can then be obtained. The landmark entry stores the key points of the various parts of the human face, including the eyebrows, eyes, mouth, chin, face, and so on. Each part has some points, represented by their x and y coordinates. We connect these key points in a certain order to obtain the contour of the face. The face semantic information outline is given in Fig. 4.
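To make this step concrete, the following minimal sketch rasterizes the connected key points of each part into a binary region mask. It assumes NumPy and OpenCV; the part names and coordinates are hypothetical placeholders for the output of a landmark detector such as the Face++ API, not values from our system.

```python
import numpy as np
import cv2  # OpenCV, assumed available

# Hypothetical landmark output: each facial part maps to an ordered
# list of (x, y) key points returned by a landmark detector.
landmarks = {
    "mouth": [(120, 180), (135, 175), (150, 178), (152, 190), (135, 196), (118, 190)],
    "left_eye": [(95, 110), (108, 105), (120, 110), (108, 116)],
}

h, w = 256, 256  # image size (placeholder)
masks = {}
for part, points in landmarks.items():
    contour = np.array(points, dtype=np.int32)
    mask = np.zeros((h, w), dtype=np.uint8)
    # Connect the key points in order and fill the closed contour,
    # yielding a binary mask for this semantic region.
    cv2.fillPoly(mask, [contour], 255)
    masks[part] = mask
```

Each such mask then serves as the starting point for the trimap construction described next.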

3.3. Matting of Face Semantics

A commonly used approach to extracting semantic information is the Mean-Shift image segmentation technique [27]. However, it produces unwanted hard boundaries between semantic regions. We instead employ image matting to obtain semitransparent soft boundaries, implementing automatic matting on top of the matting technique of [35] by taking advantage of our automatically generated trimap. Existing natural matting algorithms often require a user to identify the background, foreground, and unknown regions using a manual segmentation. However, constructing a suitable trimap is very tedious and

Figure 5: The trimap is shown in the leftmost image; the transparent image and the synthetic image are in the middle and on the right, respectively.

time-consuming, and an inaccurate trimap can lead to a poor matting result.

To solve the problem mentioned above, we expand the contour of the facial semantic information using an expansion (dilation) algorithm. After distinguishing the foreground, background, and unknown region by different colors (we set the foreground to white, the background to black, and the unknown region to gray), we obtain a corresponding trimap for the image. The mathematical expression for the expansion algorithm is:

I_t(x, y) = max_{(x′, y′) : element(x′, y′) ≠ 0} I_s(x + x′, y + y′),  (1)

where element(·, ·) denotes the structuring element over which the maximum is taken.

Consequently, the matte is computed with our automatically generated trimap using the matting technique of Chen et al. [35]. The transparent image and the synthetic image are shown in Fig. 5. The automatic matting approach is also applied to the source images to obtain the basic semantic segmentation.
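As an illustration of the trimap construction just described, the dilation of Eq. (1) and the complementary erosion can be realized with OpenCV's morphological operators. This is a minimal sketch; the structuring-element size is an assumed parameter for illustration, not a value from the paper.

```python
import numpy as np
import cv2

def make_trimap(region_mask, kernel_size=15):
    """Build a trimap from a binary semantic-region mask (255 inside).

    cv2.dilate implements Eq. (1): each output pixel is the maximum of
    the input over the structuring element. Pixels gained by dilation
    or lost by erosion form the gray unknown band.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    dilated = cv2.dilate(region_mask, kernel)  # possible foreground
    eroded = cv2.erode(region_mask, kernel)    # sure foreground
    trimap = np.zeros_like(region_mask)
    trimap[dilated > 0] = 128  # unknown region: gray
    trimap[eroded > 0] = 255   # foreground: white; background stays black
    return trimap
```

The resulting trimap is then fed to the matting algorithm of [35].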

4. Semantics Color Transfer

The first step in our color transfer approach is to run white balancing on both the source and the target images. The next step is to match the overall brightness between the two images. We use the transformed luminance values for this step and adopt Nguyen et al.'s illuminant-aware gamut-based color transfer [36]. This technique was unique in its consideration of the scene illumination and in the constraint that the mapped image must lie within the color gamut of the target image. The mathematical equation is

L_f = C_t^{-1}(C_s(L_s)),  (2)

where L_s, L_f and L_t are the source, intermediate, and target luminance, respectively, and C_s and C_t are the cumulative histograms of L_s and L_t, respectively. Next, the output luminance L_o is obtained by solving the following linear equation:

[I + σ(G_x^T G_x + G_y^T G_y)] L_o = L_f + σ(G_x^T G_x + G_y^T G_y) L_s,  (3)

where I is the identity matrix, G_x and G_y are the gradient matrices along the x and y directions, and σ is a regularization parameter.
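A minimal sketch of these two luminance steps follows, assuming NumPy/SciPy and luminance values normalized to [0, 1]. The gradient operators are built as simple forward differences over the flattened image, which is one common discretization rather than necessarily the one used in [36]; boundary handling is omitted for brevity.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def match_luminance(Ls, Lt, nbins=256):
    """Eq. (2): L_f = C_t^{-1}(C_s(L_s)) via cumulative histograms."""
    s_hist, edges = np.histogram(Ls, bins=nbins, range=(0.0, 1.0))
    t_hist, _ = np.histogram(Lt, bins=nbins, range=(0.0, 1.0))
    Cs = np.cumsum(s_hist) / Ls.size  # source cumulative histogram
    Ct = np.cumsum(t_hist) / Lt.size  # target cumulative histogram
    centers = 0.5 * (edges[:-1] + edges[1:])
    quantiles = np.interp(Ls.ravel(), centers, Cs)  # C_s(L_s)
    Lf = np.interp(quantiles, Ct, centers)          # C_t^{-1}(...)
    return Lf.reshape(Ls.shape)

def solve_output_luminance(Ls, Lf, sigma=1.0):
    """Eq. (3): [I + sigma*(Gx^T Gx + Gy^T Gy)] L_o = L_f + sigma*(...) L_s."""
    h, w = Ls.shape
    n = h * w
    Gx = (sp.eye(n) - sp.eye(n, k=1)).tocsr()  # forward difference in x
    Gy = (sp.eye(n) - sp.eye(n, k=w)).tocsr()  # forward difference in y
    Lap = Gx.T @ Gx + Gy.T @ Gy
    A = (sp.eye(n) + sigma * Lap).tocsc()
    b = Lf.ravel() + sigma * (Lap @ Ls.ravel())
    return spla.spsolve(A, b).reshape(h, w)
```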


Figure 6: Some results for side poses, where the target images need not have the same pose or style; the target images are not specific to the different results. The texture in the background of the first-row result is preserved, and the background of the second-row result, which comprises a combination of colors, is also preserved.

To align the source color gamut to the target gamut resulting from the previous step, the centers of the source and target image gamuts are estimated based on the mean values µ_s and µ_t of the source and target images:

I_s = I_s − µ_s,  I_t = I_t − µ_t.  (4)

Given a source and a target image, we can propagate color by minimizing the following energy:

E = 2η((E × D_s) ⊕ D_t) − η(D_t) − η(E × D_s),  (5)

where D_s and D_t are the full 3D convex hulls of the source and target images, respectively. The operator ⊕ is the point concatenation operation between two convex hulls, and η(·) denotes the volume of a convex hull. The volume of a combination of two convex hulls is always larger than or equal to that of either individual hull.
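For concreteness, the volume terms in Eq. (5) can be evaluated with SciPy's convex hull routine, as in the sketch below. Interpreting ⊕ as stacking the two point sets before taking the hull is our illustrative assumption, and the function name is hypothetical.

```python
import numpy as np
from scipy.spatial import ConvexHull

def gamut_energy(src_colors, tgt_colors):
    """Evaluate the volume terms of Eq. (5) for two (N, 3) RGB point clouds."""
    vol_s = ConvexHull(src_colors).volume  # η(D_s), for the (transformed) source
    vol_t = ConvexHull(tgt_colors).volume  # η(D_t)
    union = np.vstack([src_colors, tgt_colors])
    vol_union = ConvexHull(union).volume   # η(D_s ⊕ D_t)
    # Minimal when the two gamuts coincide, since the union volume is
    # always at least as large as either individual hull volume.
    return 2.0 * vol_union - vol_t - vol_s
```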

4.1. Appearance Consistency Optimization

We adopt a global color correction model for the reasons discussed in [25, 31], namely robustness to alignment errors, ease of regularization, and higher efficiency due to fewer unknown parameters. Our simple model is as follows:

I′ = (aI)^γ,  (6)

where I′ is the input image, I is the desired image, a is a scale factor equivalent to the white balance function [32], and (·)^γ is the non-linear gamma mapping. Applying this model per scene point, with the i-th image observing the j-th 3D point at pixel x_ij, we have

I_i(x_ij) = (a_i k_j v_ij)^{γ_i},  (7)

where k_j is the constant albedo of the j-th 3D point, and a_i and γ_i are the unknown global parameters for the i-th image. The per-pixel error term v_ij captures unmodeled color variation due to factors such as lighting and shading changes.

Taking logarithms on both sides of Eq. (7) and rewriting in matrix form, grouping image intensities by scene point into sparse column vectors of length m and stacking the n columns side by side, we get:

I = A + K + V. (8)

Here, n denotes the number of 3D points, or equivalently the number of correspondence sets. I ∈ R^{m×n} is the observation matrix, where each entry I_ij = log(I_i(x_ij)). A ∈ R^{m×n} is the color coefficient matrix, where A_ij = γ_i log a_i. K ∈ R^{m×n} is the albedo matrix, where K_ij = γ_i log k_j. Finally, V ∈ R^{m×n} is the residual matrix, where V_ij = γ_i log v_ij. The row index i denotes the i-th image, and the column index j denotes the j-th 3D point.
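To make the rank-two structure concrete, the sketch below forms the log-observation matrix and extracts a rank-two approximation. A plain truncated SVD on mean-imputed entries is used purely as a stand-in: the robust factorization of [31] handles missing entries and outliers properly, which this toy version does not.

```python
import numpy as np

def rank2_decompose(intensities):
    """intensities: (m, n) array with intensities[i, j] the observed intensity
    of scene point j in image i, and np.nan where the point is unobserved."""
    I = np.log(intensities)            # observation matrix of Eq. (8)
    observed = ~np.isnan(I)
    col_mean = np.nanmean(I, axis=0)   # naive imputation of missing entries
    filled = np.where(observed, I, col_mean)
    # A + K has rank at most two: A_ij = gamma_i * log a_i and
    # K_ij = gamma_i * log k_j are each outer products of two vectors.
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    low_rank = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]
    residual = np.where(observed, I - low_rank, 0.0)  # estimate of V
    return low_rank, residual
```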

5. Results and Discussion

A significant feature of our method is that one can use more than one target image. Since a user may like some parts of the face in one image and other parts in another, our technique allows different target images to be used, which gives more natural results with better visual effects. The semantic analysis of the source image using our interactive tool takes about 2 seconds, and the color transfer step takes about 1 second. Fig. 3 exhibits the pipeline of our technique, explaining the different steps in a sequence.


Figure 7: Comparison with the results of [Yang et al. 2015] and [An and Pellacini 2010].

Our method comprises simple, user-friendly steps and produces eye-catching results with good visual effects. Once we specify the source image together with a number of target images in the first step, we obtain semantic information after face detection, which our algorithm performs automatically. In the next step, it applies the matting algorithm after extracting the semantic information. In the second-to-last step, we obtain the resulting image using alpha blending, and in the final step, we obtain our final result with optimized colors after applying color consistency.
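The alpha blending step mentioned here is standard alpha compositing; the following minimal sketch (with hypothetical variable names) shows how a recolored semantic region is blended back into the source image using the soft matte obtained in Section 3.3.

```python
import numpy as np

def alpha_blend(recolored, source, alpha):
    """Composite a color-transferred region over the source image.

    recolored, source: (H, W, 3) float arrays in [0, 1].
    alpha: (H, W) matte, 1 inside the region, 0 outside, with soft
    values on the semitransparent boundary from matting.
    """
    a = alpha[..., np.newaxis]  # broadcast the matte over color channels
    return a * recolored + (1.0 - a) * source
```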

In Fig. 7, the first column shows the source images, the second depicts the target images, the third contains the color transfer results of Yang et al. [30] and An and Pellacini [10], and the last shows our results. We do not perform any additional processing for the shadows caused by self-occlusion. We view the overall color statistics as the color style. Our proposed approach can deal with examples in which some image parts are in grayscale. Our method focuses on preserving the boundaries in the resulting image and on controlling the expansion of color into regions where it should not be transferred. In the result of Yang et al. [30], local color is also transferred from parts of the grayscale image to the color image, as they were not able to control it; in particular, they could not preserve the color of the teeth. In contrast, we transfer the color while preserving the color of the teeth, which yields a more natural result with better visual effects.

The interactive method of An and Pellacini [10] utilizes user-specified color strokes. However, it can be difficult for a user to specify those strokes in a compatible and perceptually coherent manner. In contrast, our method obtains the corresponding trimap easily through facial semantic analysis and some easy-to-use processing.

With the obtained trimap, image matting is performed automatically. Our method propagates color efficiently while preserving the other color details, and it produces results whose quality and visual effects compare favorably with those of An and Pellacini [10]. In Fig. 7, it is clearly seen that they were not able to transfer the color from lips to lips efficiently. Moreover, their transferred hair color is much sharper than the original color in the source image, and details of the hair are missing.

Figure 8: Comparison with the results of [Reinhard et al. 2001] and [Nguyen et al. 2014].

In Fig. 8, a comparison of our method with the techniques of Reinhard et al. [1] and Nguyen et al. [36] is made, given their source and target images. In the result of Reinhard et al. [1], it is clearly seen that the color is not transferred properly, resulting in a blurry and imbalanced image. They were not able to control the color, and the boundary is not preserved well either.


Figure 9: A result with the source image in the leftmost column and resulting images in the second, third, and fourth columns with their respective target images.


Figure 10: Multi-pronged application of our method, in which (a), (b) and (c) are the source images, (e), (g) and (i) are the multiple target images, and (d), (f) and (h) are the corresponding results produced by our method.


In the result of Nguyen et al. [36], the color is transferred, but the color balancing is not performed properly, which results in darkness on the right side of the cheek. In our result, the color is transferred properly with efficient boundary preservation, while the background color and shirt color are also transferred efficiently.

In Fig. 6, we apply our algorithm to images in side poses. Our technique is also suitable for side-pose images and produces results of the same quality as for other images. Moreover, the texture in the background of the first-row result is preserved well, and the background of the second-row result, which is a combination of different colors, is also preserved efficiently. The main purpose of this experiment is to extend our method from frontal images to side-pose images; we show that it works for these image types as well and produces results of the same quality.

In Fig. 1, an application of our method to a source image with a number of target images is presented. In the first row, we use four target images to transfer color to a source image, and the resulting image shows the output of our technique. In the second row, we apply our algorithm to a source image with three target images, and the resulting image depicts the corresponding output. Using multiple target images is not customary in earlier work on this topic; nevertheless, doing so enhances the effectiveness of our approach. In Fig. 2, a number of results of efficient color transfer with different facial styles are presented: a single source image is considered, and a variety of results with four target images demonstrates the diversity of our technique.

In Fig. 9, further results of our technique are presented. As our method is not restricted to one target image only, the results are given with multiple target images. Our technique transfers the color of different parts of the face, such as the hair, lips, and eyes, in the source image by taking semantic information of a similar nature from multiple target images. It transforms an unremarkable image into an artistic and exquisite one in just a few steps. The results in Fig. 9 show the efficiency of our method, as they compare favorably with those of existing techniques.

Fig. 10 shows some more results of our proposed method using multiple target images. They all show color-transferred results that reflect the target colors in the source images effectively. Moreover, color preservation in the resulting images is handled successfully. We consider images with different poses and show the effective applicability of our method to these image types. Fig. 10 shows that our method can successfully transfer color in source images with different styles and types, e.g., side poses, back poses, and other poses that are normally tough to tackle. One can choose a suitable combination of colors that appears most attractive for a given pose and style.

Limitation: The limitations of our technique are exhibited in Fig. 11. In the image in the first column, the facial skin color matches the background color; our algorithm therefore does not detect the face and is thus unable to extract the semantic information. In the image in the second column, the face information is not clear, which likewise leads to a failure: the algorithm cannot detect the face and hence cannot extract the facial semantic information.

Figure 11: Drawbacks of our method: the image in the first column shows a failure due to a lack of semantic constraints, and the image in the second column shows a problem caused by a lack of clarity of the face information.

6. Conclusion

We have presented a system for stylizing the possible appearances of a person from one or more photographs. We have proposed a new semantic color transfer technique between images to efficiently change the color style of images. In just a few steps, our proposed framework can transform a common, unremarkable image into an exquisite and artistic photo. The user is not required to select source and target images with face matching, as our method can make use of multiple target images. The broad application area of our technique includes high-level facial semantic information and scene content analysis to transfer color between images efficiently. While minimizing manual labor and avoiding the time-consuming operation of semantic segmentation for the target image, our proposed framework can be broadly used in film post-production, video editing, art and design, and image processing. Our algorithm is not restricted to one-to-one image color transfer and can make use of more than one target image to transfer color to different parts of the source image. Moreover, our technique is not restricted to head-shot images, as we can also change color styles in the wild. The advantage of using multiple target images is that favorite colors can be chosen from different images rather than from a single one. A number of results are presented in different styles and conditions, which shows the versatility and diversity of our method at industrial scale.


7. References

[1] Reinhard E, Ashikhmin M, Gooch B, Shirley P. Color transfer between images. IEEE Comput Graph Appl 2001; 21(5): 34-41.

[2] An X, Pellacini F. AppProp: all-pairs appearance-space edit propagation. ACM Trans Graph 2008; 27(3): 40.

[3] Pitie F, Kokaram A C, Dahyot R. N-dimensional probability density function transfer and its application to color transfer. In IEEE Int Conf on Computer Vision, IEEE Computer Society, 2005; vol. 2, p. 1434-1439.

[4] Pitie F, Kokaram A C, Dahyot R. Automated colour grading using colour distribution transfer. Comput Vis Image Underst 2007; 107(1-2): 123-137.

[5] Diaz M, Sturm P. Radiometric calibration using photo collections. In Proceedings of IEEE International Conference on Computational Photography (ICCP), 2011.

[6] Garg R, Du H, Seitz S, Snavely N. The dimensionality of scene appearance. In Proceedings of International Conference on Computer Vision (ICCV), 2009.

[7] Grossberg M, Nayar S. Modeling the space of camera response functions. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 26(10), 2004.

[8] Kim S J, Pollefeys M. Robust radiometric calibration and vignetting correction. IEEE Trans Pattern Anal Mach Intell 2008; 30(4): 562-576.

[9] Maslennikova A, Vezhnevets V. Interactive local color transfer between images. In Proc of Graphicon 2007.

[10] An X, Pellacini F. User-controllable color transfer. Comput Graph Forum 2010; 29(2): 263-271.

[11] Wang B, Yu Y, Wong T T, Chen C, Xu Y Q. Data-driven image color theme enhancement. ACM Trans Graph 2010; 29(6): 146.

[12] Welsh T, Ashikhmin M, Mueller K. Transferring color to grayscale images. ACM Trans Graph 2002; 21(3): 277-280.

[13] Abadpour A, Kasaei S. A fast and efficient fuzzy color transfer method. In Proc of the Fourth IEEE Int Symp on Signal Processing and Information Technology 2004, p. 491-494.

[14] Xiao X, Ma L. Color transfer in correlated color space. In Proc ACM Int Conf on Virtual Reality Continuum and Its Applications 2006, p. 305-309.

[15] Pitie F, Kokaram A. The linear Monge-Kantorovitch linear colour mapping for example-based colour transfer. In 4th European Conference on Visual Media Production (IETCVMP) 2007, p. 1-9.

[16] Chang Y, Saito S, Nakajima M. Example-based color transformation of image and video using basic color categories. IEEE Trans Image Process 2007; 16(2): 329-336.

[17] Chang Y, Saito S, Uchikawa K, Nakajima M. Example-based color stylization of images. ACM Trans Appl Percept 2005; 2(3): 322-345.

[18] Yang C K, Peng L K. Automatic mood-transferring between color images. IEEE Comput Graph Appl 2008; 28(2): 52-61.

[19] Xiao X, Ma L. Gradient-preserving color transfer. Comput Graph Forum 2009; 28(7): 1879-1886.

[20] Wang B, Yu Y, Xu Y Q. Example-based image color and tone style enhancement. ACM Trans Graph 2011; 30(4): 64.

[21] Cohen O D, Sorkine O, Gal R, Leyvand T, Xu Y Q. Color harmonization. ACM Trans Graph 2006; 25(3): 624.

[22] Shapira L, Shamir A, Cohen O D. Image appearance exploration by model-based navigation. Comput Graph Forum 2009; 28(2): 629-638.

[23] Liu X, Wan L, Qu Y, Wong T T, Lin S, Leung C S, Heng P A. Intrinsic colorization. ACM Trans Graph 2008; 27(5): 152.

[24] Chia A Y S, Zhuo S, Gupta R K, Tai Y W, Cho S Y, Tan P, Lin S. Semantic colorization with Internet images. ACM Trans Graph 2011; 30(6): 1.

[25] HaCohen Y, Shechtman E, Goldman D B, Lischinski D. Optimizing color consistency in photo collections. ACM Transactions on Graphics (TOG), 32(4), 2013.

[26] Laffont P-Y, Bousseau A, Paris S, Durand F, Drettakis G. Coherent intrinsic images from photo collections. ACM Transactions on Graphics (SIGGRAPH Asia), 31, 2012.

[27] Comaniciu D, Meer P. Mean shift: a robust approach toward feature space analysis. IEEE Trans Pattern Anal Mach Intell 2002; 24(5): 603-619.

[28] Boyadzhiev I, Bala K, Paris S, Durand F. User-guided white balance for mixed lighting conditions. ACM Transactions on Graphics (TOG), 31(6), 2012.

[29] HaCohen Y, Shechtman E, Goldman D B, Lischinski D. Nonrigid dense correspondence with applications for image enhancement. ACM Transactions on Graphics (TOG), 30(4): 70:1-9, 2011.

[30] Yang Y, Zhao H, You L, Tu R, et al. Semantic portrait color transfer with internet images. Multimedia Tools and Applications, 2015.

[31] Park J, Tai Y W, Sinha S N, Kweon I S. Efficient and robust color consistency for community photo collections. In IEEE CVPR 2016.

[32] Hsu E, Mertens T, Paris S, Avidan S, Durand F. Light mixture estimation for spatially varying white balance. ACM Transactions on Graphics (TOG), 27(3), 2008.

[33] Face++, 2016. http://www.faceplusplus.com.

[34] Lischinski D, Farbman Z, Uyttendaele M, Szeliski R. Interactive local adjustment of tonal values. ACM Trans Graph 2006; 25(3): 646.

[35] Chen Q, Li D, Tang C K. KNN matting. In CVPR 2012; p. 869-876.

[36] Nguyen R M H, Kim S J, Brown M S. Illuminant aware gamut-based color transfer. In Pacific Graphics 2014, vol. 33.

[37] Shi B, Inose K, Matsushita Y, Tan P, Yeung S-K, Ikeuchi K. Photometric stereo using internet images. In Proceedings of International Conference on 3D Vision (3DV), 2014.

[38] Lischinski D, Farbman Z, Uyttendaele M, Szeliski R. Interactive local adjustment of tonal values. ACM Transactions on Graphics 2006; 25(3): 646-653.
