
Artistic Enhancement and Style Transfer of Image Edges using Directional Pseudo-coloring

Shouvik Mani
C3.ai

    [email protected]

Abstract

Computing the gradient of an image is a common step in computer vision pipelines. The image gradient quantifies the magnitude and direction of edges in an image and is used in creating features for downstream machine learning tasks. Typically, the image gradient is represented as a grayscale image. This paper introduces directional pseudo-coloring, an approach to color the image gradient in a deliberate and coherent manner. By pseudo-coloring the image gradient magnitude with the image gradient direction, we can enhance the visual quality of image edges and achieve an artistic transformation of the original image. Additionally, we present a simple style transfer pipeline which learns a color map from a style image and then applies that color map to color the edges of a content image through the directional pseudo-coloring technique. Code for the algorithms and experiments is available at https://github.com/shouvikmani/edge-colorizer.

1 Introduction

Edge detection is a fundamental image processing technique which involves computing an image gradient to quantify the magnitude and direction of edges in an image. Image gradients are used in various downstream tasks in computer vision such as line detection, feature detection, and image classification. Despite their importance, image gradients are visually dull. They are typically represented as black-and-white or grayscale images visualizing the change in intensity at each pixel of the original image.

In this paper, we introduce directional pseudo-coloring, a technique to color the edges of an image (the image gradient) in a deliberate and coherent manner. We develop a pipeline to pseudo-color the image gradient magnitude using the image gradient direction, ensuring that edges are colored consistently according to their direction. An example of this transformation is shown in Figure 1.

The goal of this approach is to make image gradients more informative and appealing by enhancing them with color.

This work was presented at the 2nd Workshop on Humanizing AI (HAI) at IJCAI'19 in Macao, China.

Figure 1: An example of directional pseudo-coloring of image edges. Left: The original image. Right: Edge coloring result.

A colored image gradient is necessary as the human eye can discern only two dozen shades of gray, but thousands of shades of color [Valentinuzzi, 2004]. A colored image gradient will reveal latent information about edge direction not visible in a black-and-white gradient. In addition to its functional value, a colored image gradient will offer an aesthetically pleasing rendering of the original image.

We also present a simple style transfer approach to give users more control over how they color image edges. This approach uses techniques from color quantization to learn the dominant colors in a style image. Then, it uses the directional pseudo-coloring technique to color the edges of a content image using the learned dominant colors.

2 Related Work

Edge detection is a widely studied topic with many different approaches. Common methods for edge detection include using Sobel filters and the Canny edge detection algorithm. Szeliski [Szeliski, 2010] reviews the fundamentals of edge detection and provides a brief survey of edge detection algorithms. In this paper, we will use Sobel filters due to their effectiveness and simplicity.

Our proposed edge coloring approach is an example of colorization, the process of adding color to a grayscale image or video. Edge detection has often been used to improve colorization results, but coloring the edges themselves has not been explored in depth. For instance, Huang et al. [Huang et al., 2005] have found that adding edge information may prevent the colorization process from bleeding over region boundaries in an image.


Figure 2: Edge detection using Sobel filters. We convolve the original image (left) with the Sobel filters and compute the magnitude G (middle) and direction θ (right) of the image gradient.

Style transfer is a recent area of work which involves applying the appearance of a style image to the semantic content of a content image. Gatys et al. [Gatys et al., 2016] use Convolutional Neural Networks to create rich image representations and provide a neural style transfer algorithm to separate and recombine the content and style of images. We will present a simpler style transfer pipeline, aimed at applying styles specifically for the edge coloring task.

3 Methodology

The proposed pipeline for coloring image edges has two key components: edge detection and edge coloring. We review edge detection using Sobel filters, introduce the directional pseudo-coloring technique, and present some results from using the technique.

3.1 Edge Detection

The standard approach for edge detection is to convolve an image with a derivative filter like the Sobel filter to differentiate the intensity of the image pixels with respect to the horizontal and vertical directions of the image. However, since differentiation is very sensitive to noise, it is useful to blur the image first in order to detect the most salient edges [Gkioulekas, 2018]. This can be achieved by convolving the original image $f$ with a blur filter, such as the Gaussian filter, to produce a blurred image $f_b$ (line 1). For the purposes of edge detection, we will first convert $f$ to a grayscale image before blurring it.

$$f_b = f * \frac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} \tag{1}$$

Once the image has been blurred, we can perform edge detection using Sobel filters [Sobel, 1968]. We convolve the blurred image $f_b$ with the horizontal and vertical Sobel filters, $S_x$ and $S_y$ respectively, to approximate the image derivatives in the horizontal and vertical directions, $\frac{\partial f_b}{\partial x}$ and $\frac{\partial f_b}{\partial y}$ respectively (lines 2 – 4). $\frac{\partial f_b}{\partial x}$ has a strong response to vertical edges, and $\frac{\partial f_b}{\partial y}$ has a strong response to horizontal edges.

$$S_x = \begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix} \qquad S_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} \tag{2}$$

$$\frac{\partial f_b}{\partial x} = f_b * S_x \tag{3}$$

$$\frac{\partial f_b}{\partial y} = f_b * S_y \tag{4}$$

$\frac{\partial f_b}{\partial x}$ and $\frac{\partial f_b}{\partial y}$ can be combined to form the image gradient $\nabla f_b$. Lines 5 – 7 show how to compute the magnitude $G$ and direction $\theta$ of the image gradient.

$$\nabla f_b = \left[ \frac{\partial f_b}{\partial x}, \frac{\partial f_b}{\partial y} \right] \tag{5}$$

$$G = \|\nabla f_b\| = \sqrt{\left(\frac{\partial f_b}{\partial x}\right)^2 + \left(\frac{\partial f_b}{\partial y}\right)^2} \tag{6}$$

$$\theta = \tan^{-1}\left(\frac{\partial f_b / \partial x}{\partial f_b / \partial y}\right) \tag{7}$$

$G$ and $\theta$ represent the intensity and direction of edges in the image respectively and are displayed in Figure 2. Not only are these images visually uninteresting, but they must also be viewed separately to understand both the intensity and direction of image edges. In the next section, we present an algorithm to color the gradient magnitude and unify the intensity and direction information of the edges into a single image. The quantities $G$ and $\theta$ will serve as inputs to our directional pseudo-coloring algorithm.
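For concreteness, the following is a minimal NumPy/SciPy sketch of this edge detection step (grayscale conversion, Gaussian blur, Sobel convolution, and the magnitude and direction computations of lines 1 – 7). The helper name sobel_gradients and the implementation details are our own illustration, not the paper's released code.

```python
import numpy as np
from scipy.signal import convolve2d

GAUSSIAN_3x3 = np.array([[1, 2, 1],
                         [2, 4, 2],
                         [1, 2, 1]]) / 16.0
SOBEL_X = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]])
SOBEL_Y = np.array([[1, 2, 1],
                    [0, 0, 0],
                    [-1, -2, -1]])

def sobel_gradients(image_rgb):
    """Return gradient magnitude G and direction theta of an RGB image."""
    f = image_rgb[..., :3].mean(axis=2)             # grayscale conversion
    f_b = convolve2d(f, GAUSSIAN_3x3, mode="same")  # blur to suppress noise (line 1)
    dx = convolve2d(f_b, SOBEL_X, mode="same")      # horizontal derivative (line 3)
    dy = convolve2d(f_b, SOBEL_Y, mode="same")      # vertical derivative (line 4)
    G = np.sqrt(dx ** 2 + dy ** 2)                  # magnitude (line 6)
    theta = np.arctan2(dx, dy)                      # direction, following line 7's convention
    return G, theta
```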

3.2 Edge Coloring

Once the edges have been detected, there are several possible ways to color them. One approach is to color the edges using the same colors from the original image, masking the thresholded gradient magnitude image over the original image. However, this does not add any novelty or aesthetic quality to the final image.

Instead, we choose to pseudo-color the gradient magnitude using values from the gradient direction. Pseudo-coloring is the process of mapping the shades of gray in a grayscale image to colors using a color map [Radewan, 1975]. We define a color map in lines 8 – 10 as a function that maps normalized numerical values to RGB pixel values along some color spectrum [Wang, 2016].

$$\begin{aligned}
\text{color map} : [0, 1] \rightarrow (\, & R \in [0, 255], && (8)\\
& G \in [0, 255], && (9)\\
& B \in [0, 255]\,) && (10)
\end{aligned}$$

To implement our direction-based edge coloring approach, we create directional color maps, which map normalized direction values to RGB colors. Figure 3a provides examples of color maps in the Python Matplotlib library [Hunter, 2007] and Figure 3b illustrates the concept of a directional color map.

(a) Examples of color maps in Matplotlib.

(b) A directional color map transforms normalized direction values to RGB colors.

Figure 3: The color map is a fundamental concept in directional pseudo-coloring.
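To make the mapping in lines 8 – 10 concrete, here is a small illustrative snippet (our own, not from the paper's code) showing how a Matplotlib color map turns a normalized value in [0, 1] into an RGB triple in [0, 255]:

```python
import matplotlib.pyplot as plt

cmap = plt.get_cmap("viridis")   # any Matplotlib color map realizes the mapping in lines 8-10
r, g, b, _ = cmap(0.25)          # RGBA components, each a float in [0, 1]
print(int(r * 255), int(g * 255), int(b * 255))   # an RGB pixel value in [0, 255]
```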

After selecting a color map, we transform our gradient magnitude and direction to prepare them for coloring. First, in line 11, we threshold the gradient magnitude image, converting pixel $G_{x,y}$ to 1 if it has intensity above a threshold value $t$, and 0 otherwise. This thresholding turns the gradient magnitude into a black-and-white image $T$.

$$T_{x,y} = \begin{cases} 1 & \text{if } G_{x,y} \geq t \\ 0 & \text{otherwise} \end{cases} \tag{11}$$

Then, in line 12, we mask the thresholded gradient magnitude over the gradient direction to create the colored image $C$. If $T_{x,y}$ is 0, then the pixel $C_{x,y}$ becomes a black pixel ($R = 0, G = 0, B = 0$). Otherwise, if $T_{x,y}$ is 1, we normalize the pixel's gradient direction $\theta_{x,y}$ between 0 and 1 and apply the chosen color map to color pixel $C_{x,y}$. Normalizing the gradient directions has the effect of creating a directional color map.

$$C_{x,y} = \begin{cases} \text{color map}\left(\dfrac{\theta_{x,y} - \min(\theta)}{\max(\theta) - \min(\theta)}\right) & \text{if } T_{x,y} = 1 \\ (R = 0, G = 0, B = 0) & \text{otherwise} \end{cases} \tag{12}$$

The resulting image $C$ is an image of the thresholded gradient magnitude pseudo-colored by the normalized gradient direction. Because of this directional pseudo-coloring technique, edges along the same direction are assigned the same color, adding consistency and coherency to the image. The resulting image also has aesthetic value, with edges of multiple colors contrasted against a dark background.
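The full edge coloring step can be sketched in a few lines of Python. This is a minimal illustration that assumes the sobel_gradients helper from the earlier snippet; the percentile-based default threshold and the name directional_pseudocolor are our own choices, not prescribed by the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def directional_pseudocolor(image_rgb, cmap="plasma", t=None):
    """Color thresholded edges by their (normalized) gradient direction."""
    G, theta = sobel_gradients(image_rgb)            # from the earlier sketch
    if t is None:
        t = np.percentile(G, 90)                     # illustrative default threshold
    T = G >= t                                       # line 11: thresholded magnitude
    theta_norm = (theta - theta.min()) / (theta.max() - theta.min() + 1e-8)
    cmap = plt.get_cmap(cmap) if isinstance(cmap, str) else cmap
    colored = cmap(theta_norm)[..., :3]              # line 12: map direction to RGB
    colored[~T] = 0.0                                # black background where there is no edge
    return colored                                   # floats in [0, 1], ready for plt.imshow
```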

4 Results

In Figure 4, we present a few examples of results from the edge coloring pipeline. The original images are shown on the left along with their directional pseudo-colored edges on the right. Color maps for the transformations are chosen from the set of sequential color maps in the Python Matplotlib library. The pseudo-colored edges are visually pleasing and detail-rich – for instance, one can identify the individual beams and trusses on the bridge because edges in different directions are colored differently and consistently.

Apart from presenting these qualitative examples, we conduct a quantitative evaluation of our directional pseudo-coloring procedure. We devise an experiment to compare the aesthetic and technical quality of the original black-and-white gradient magnitude $T$ and the pseudo-colored gradient magnitude $C$. While the aesthetic quality of an image pertains to its style and beauty, the technical quality measures pixel-level concepts such as noise, contrast, and saturation.

Although assessing the quality of an image is a highly subjective task, in recent years researchers have made efforts to automate this process through machine learning. In our experiment, we leverage Neural Image Assessment (NIMA), a project which consists of two deep convolutional neural network (CNN) models that predict an image's aesthetic and technical quality, respectively, on a scale of 1 – 10 [Lennan et al., 2018; Esfandarani and Milanfar, 2017]. The two models are trained through transfer learning – model weights are initialized from a CNN trained on an ImageNet object classification task and are fine-tuned to perform image quality assessment on the Aesthetic Visual Analysis (AVA) and Tampere Image Database (TID2013) datasets.

We use a collection of 100 high-resolution images from the DIVerse 2K Resolution (DIV2K) dataset for this experiment [Agustsson and Timofte, 2017]. The diverse content and high-quality images in DIV2K make it ideal for detecting and coloring edges. First, we perform edge detection and obtain the thresholded black-and-white gradient magnitude for each image in the dataset. Then, we pseudo-color the edges of each image using four random color maps from Matplotlib's sets of sequential, diverging, cyclic, and qualitative color maps (one random color map is chosen from each set).

Figure 4: Results from the edge coloring pipeline. Original images (left). Directional pseudo-colored edges (right).

Edge Coloring Method                          | Type of Color Map | Mean NIMA Aesthetic Score | Mean NIMA Technical Score
----------------------------------------------|-------------------|---------------------------|--------------------------
Thresholded black & white gradient magnitude  | –                 | 5.28                      | 4.49
Directional pseudo-colored gradient magnitude | Sequential        | 5.24                      | 4.61
                                              | Diverging         | 5.24                      | 4.67
                                              | Cyclic            | 5.27                      | 4.66
                                              | Qualitative       | 5.24                      | 4.66

Table 1: NIMA aesthetic and technical image quality scores for directional pseudo-coloring results using four types of color maps.

Using the NIMA models, we predict the aesthetic and technical quality scores for the black-and-white gradient magnitude and for each of the four pseudo-colored gradient magnitudes, for each image. The mean NIMA scores for each type of color map are listed in Table 1. Regardless of the type of color map used, pseudo-coloring the edges does not improve the mean aesthetic score over the black-and-white edges, according to the NIMA model. However, the pseudo-colored edges have marginally higher mean technical scores compared to their black-and-white counterparts.

Overall, the NIMA scores suggest that the directional pseudo-coloring approach does not offer a significant improvement over standard black-and-white image gradients, contradicting our promising qualitative results. However, we should interpret these NIMA results with caution, as the NIMA models are not trained on assessing the quality of image gradients, but rather of natural images. More work is needed to quantify the factors that make image gradients attractive and useful.

5 Application: Style Transfer

In the examples shown so far, we have chosen input color maps for the directional pseudo-coloring algorithm arbitrarily. However, a user may wish to use a style image as input instead of an explicit color map, since a style image is much easier to specify. A natural extension of our edge coloring approach to meet this need is style transfer: specifying a style image to influence the coloring of edges in a content image. In this section, we develop a simple style transfer pipeline to learn a color map from a style image and then apply that color map to color the edges of a content image.

Figure 5: Style transfer pipeline for edge coloring.


Our style transfer approach draws inspiration from approaches to color quantization. As described by Orchard and Bouman [Orchard and Bouman, 1991], color quantization is the problem of selecting an optimal color palette to reduce the number of colors in an image to a smaller, but representative, set. While there are numerous algorithms for color quantization, Celebi [Celebi, 2011] demonstrates the effectiveness of the popular k-means clustering algorithm for this task. A tutorial on determining dominant colors in images through k-means clustering is provided by Tranberg [Tranberg, 2018]. We will use k-means clustering to identify dominant colors in the style image in order to form a color map for edge coloring of the content image.

We outline the style transfer pipeline in Figure 5. As input to the pipeline, the user must provide a content image, a style image, and the number of clusters k (i.e. the number of dominant colors) to learn from the style image. First, we represent each pixel in the style image as a point in the RGB color space. Using k-means clustering in this three-dimensional space, we identify k clusters of pixels. The resulting cluster centers from the k-means algorithm represent the dominant colors in the style image.
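As a rough illustration of this clustering step, the sketch below uses scikit-learn's KMeans to extract dominant colors and their cluster sizes from a style image; the helper name dominant_colors and the default k are our own.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(style_rgb, k=5):
    """Cluster style image pixels in RGB space; return cluster centers and sizes."""
    # assumes an 8-bit RGB image with values in [0, 255]
    pixels = style_rgb[..., :3].reshape(-1, 3).astype(float)   # each pixel is a point in RGB space
    km = KMeans(n_clusters=k, n_init=10).fit(pixels)
    counts = np.bincount(km.labels_, minlength=k)               # cluster sizes, used to scale the color map
    return km.cluster_centers_, counts
```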

After identifying the dominant colors, we form a color map using a greedy approach. To initialize the color map, we add a random dominant color as the first color in the color map. Then, until the remaining dominant colors are exhausted, we iteratively add the dominant color which is nearest (in terms of Euclidean distance in the RGB space) to the last added color in the color map. This greedy procedure helps create a sequential color map by ensuring that adjacent colors in the color map are similar to each other. A sequential color map is necessary for edges of the same direction to be assigned the same color in the edge coloring process.

Additionally, the size of each color in the color map is scaled by the number of pixels in the style image which are assigned to the cluster represented by that color. For instance, in Figure 5, the light blue color makes up the largest portion of the color map since most of the pixels in the style image are of that color. This scaling ensures that the learned color map is representative of the colors in the style image.
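One possible implementation of the greedy ordering and the population-proportional scaling is sketched below, building a Matplotlib ListedColormap. The function name build_colormap, the fixed resolution of 256 entries, and seeding with the first cluster center rather than a random one are our own simplifications.

```python
import numpy as np
from matplotlib.colors import ListedColormap

def build_colormap(centers, counts, resolution=256):
    """Greedily order dominant colors and weight each by its cluster size."""
    colors = [np.asarray(c, dtype=float) / 255.0 for c in centers]   # assumes 8-bit RGB centers
    order, remaining = [0], list(range(1, len(colors)))
    while remaining:
        # append the remaining color nearest (Euclidean distance in RGB space) to the last added one
        last = colors[order[-1]]
        nearest = min(remaining, key=lambda i: np.linalg.norm(colors[i] - last))
        order.append(nearest)
        remaining.remove(nearest)
    weights = np.array([counts[i] for i in order], dtype=float)
    reps = np.maximum(1, np.round(weights / weights.sum() * resolution)).astype(int)
    entries = np.repeat(np.vstack([colors[i] for i in order]), reps, axis=0)
    return ListedColormap(entries)
```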

Once we have learned dominant colors from the style image through clustering and arranged those colors in a color map, we can proceed with edge detection and edge coloring for the content image. Now, we use the learned color map as an input to the directional pseudo-coloring algorithm. The resulting style transferred image preserves the edges of the content image, but is colored in with dominant colors from the style image. Figure 6 displays two examples of this style transfer process.
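Tying the sketches together, end-to-end usage might look like the following (the file names are placeholders and all helpers are the hypothetical ones defined above):

```python
import matplotlib.pyplot as plt

content_image = plt.imread("content.jpg")   # placeholder paths
style_image = plt.imread("style.jpg")

centers, counts = dominant_colors(style_image, k=5)
style_cmap = build_colormap(centers, counts)
colored_edges = directional_pseudocolor(content_image, cmap=style_cmap)

plt.imshow(colored_edges)
plt.axis("off")
plt.show()
```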

6 Conclusion

We have presented an algorithm to color the edges of an image in a way that adds both functional and aesthetic value. This approach leverages both the gradient magnitude and gradient direction to color image edges, producing a single visualization to understand the edges of an image. By enhancing image gradients with color, we can better visualize edges and create new artistic transformations of image edges.

To give users more flexibility in specifying colors for edge coloring, we have provided a style transfer pipeline which learns the dominant colors in a style image and colors the edges of a content image using those dominant colors.

Figure 6: Results from the style transfer pipeline.

Although this approach is much simpler than deep learning-based style transfer techniques, it is limited to a transfer of dominant colors.

There is plenty of scope for further work to enhance and extend this edge coloring approach. First, one may wish to pseudo-color the gradient magnitude using a quantity other than the gradient direction. For instance, pseudo-coloring the magnitude using the depth of each pixel (through depth estimation) may offer additional insights about the characteristics of the edges in an image. Second, we should evaluate how well this approach generalizes to three-dimensional figures: can a similar algorithm be used to create a 3D rendering of colored edges for a 3D figure? Finally, the style transfer approach may be improved by using other strategies for color quantization, such as clustering the style image pixels in the HSV color space instead of the RGB color space.

References

[Agustsson and Timofte, 2017] Eirikur Agustsson and Radu Timofte. NTIRE 2017 challenge on single image super-resolution: Dataset and study. In CVPR, July 2017.

[Celebi, 2011] M. Emre Celebi. Improving the performance of k-means for color quantization. Image Vision Comput., 29(4):260–271, March 2011.

[Esfandarani and Milanfar, 2017] Hossein Talebi Esfandarani and Peyman Milanfar. NIMA: Neural image assessment. CoRR, 2017.

[Gatys et al., 2016] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. pages 2414–2423, June 2016.

[Gkioulekas, 2018] I. Gkioulekas. Image filtering. 2018. http://www.cs.cmu.edu/~16385/lectures/lecture2.pdf.

[Huang et al., 2005] Yi-Chin Huang, Yi-Shin Tung, Jun-Cheng Chen, Sung-Wen Wang, and Ja-Ling Wu. An adaptive edge detection based colorization algorithm and its applications. pages 351–354, 2005.

[Hunter, 2007] J. D. Hunter. Matplotlib: A 2D graphics environment. Computing in Science & Engineering, 9(3):90–95, 2007.

[Lennan et al., 2018] Christopher Lennan, Hao Nguyen, and Dat Tran. Image quality assessment. https://github.com/idealo/image-quality-assessment, 2018.

[Orchard and Bouman, 1991] Michael T. Orchard and Charles A. Bouman. Color quantization of images. IEEE Trans. Signal Processing, 39:2677–2690, 1991.

[Radewan, 1975] C. H. Radewan. Digital image processing with pseudo-color. Acquisition and Analysis of Pictorial Data, 1975. https://doi.org/10.1117/12.954071.

[Sobel, 1968] I. Sobel. An isotropic 3x3 image gradient operator. 1968.

[Szeliski, 2010] R. Szeliski. Computer Vision: Algorithms and Applications. 2010.

[Tranberg, 2018] Bo Tranberg. Determining dominant colors in images using clustering, 2018.

[Valentinuzzi, 2004] M. E. Valentinuzzi. Understanding the Human Machine: A Primer for Bioengineering. 2004.

[Wang, 2016] R. Wang. Color and pseudo-color images. 2016. http://fourier.eng.hmc.edu/e161/lectures/digital_image/node4.html.
