Effects of Dip-Deepu


  • 8/9/2019 Effects of Dip-Deepu


    What is Blurring?

We all know what blurring is, don't we? It's that thing that happens when your camera is out of focus or the dog steals your glasses. What happens is that what should be seen as a sharp point gets smeared out, usually into a disc shape. In image terms this means that each pixel in the source image gets spread over and mixed into surrounding pixels. Another way to look at this is that each pixel in the destination image is made up out of a mixture of surrounding pixels from the source image.

The operation we need for this is called convolution. This sounds complicated but that's only because mathematicians like to make things sound complicated in order to maintain that air of magic and keep the funding rolling in. Well, I'm onto them and I can reveal that convolution is not that complicated (at my level anyway). The way it works is this: we imagine sliding a rectangular array of numbers over our image. This array is called the convolution kernel. For every pixel in the image, we take the corresponding numbers from the kernel and the pixels they are over, multiply them together and add all the results together to make the new pixel. For example, imagine we want to do a really simple blur where we just average together each pixel and its eight immediate neighbours. The kernel we need is:

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

    Notice that these all add up to 1, which means that our resulting image will be just as

    bright as the original.
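The slide-multiply-add recipe above can be sketched in a few lines of Java (a toy version of my own, working on a grayscale image stored as a float array; pixels where the kernel would overlap the border are simply left untouched for now):

```java
// Toy convolution on a grayscale image stored as float[height][width].
// For each pixel we lay the kernel over it, multiply corresponding
// entries together and sum the results. Border pixels where the kernel
// would overlap the edge are left at zero - edges come later.
class ConvolveSketch {
    static float[][] convolve(float[][] src, float[][] kernel) {
        int h = src.length, w = src[0].length;
        int ry = kernel.length / 2, rx = kernel[0].length / 2;
        float[][] dst = new float[h][w];
        for (int y = ry; y < h - ry; y++) {
            for (int x = rx; x < w - rx; x++) {
                float sum = 0;
                for (int ky = 0; ky < kernel.length; ky++)
                    for (int kx = 0; kx < kernel[0].length; kx++)
                        sum += kernel[ky][kx] * src[y + ky - ry][x + kx - rx];
                dst[y][x] = sum;
            }
        }
        return dst;
    }
}
```

With the 1/9 averaging kernel above, a uniform patch of brightness stays at the same brightness, which is exactly the "adds up to 1" property in action.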

Without further ado, let's blur an image in Java. All that convolution stuff sounds tricky to implement, but luckily Java comes with a built-in and ready-to-use operator to do exactly that. I'm talking ConvolveOp here. Here's the code:

float[] matrix = {
    0.111f, 0.111f, 0.111f,
    0.111f, 0.111f, 0.111f,
    0.111f, 0.111f, 0.111f,
};

BufferedImageOp op = new ConvolveOp( new Kernel(3, 3, matrix) );
blurredImage = op.filter(sourceImage, destImage);


    The Original and Blurred Images

    Fantastic! A blurry image! It's not very blurry though. Let's do a really big blur like this:

float[] matrix = new float[400];
for (int i = 0; i < 400; i++)
    matrix[i] = 1.0f / 400.0f;

BufferedImageOp op = new ConvolveOp( new Kernel(20, 20, matrix),
    ConvolveOp.EDGE_NO_OP, null );
blurredImage = op.filter(sourceImage, destImage);

    Big Blur with ConvolveOp

    Hmmmmmm. Well that's not so good. Not only did it take a really long time, but the

    result is slightly odd - everything looks, well, sort of square, and what on earth has

    happened around the edges?

First the edges: ConvolveOp is a timid namby-pamby thing which is scared of falling off the edge of the image. If the kernel would overlap the edge of the image, it just gives up and leaves the pixel unchanged. You can change this by passing EDGE_ZERO_FILL instead of EDGE_NO_OP, but that's even worse - the pixels round the edge just get set to zero and effectively disappear. What shall we do? Well, we could pad the image out around the edges before blurring and crop the result, but that's just giving in, and besides we wouldn't learn anything. Instead, we'll write a proper, fearless, no-nonsense operator which isn't scared of edges. We'll call it ConvolveFilter to distinguish it from ConvolveOp. I'm not going to go into details of the source in this article - there's not enough time or space and we have a lot more filters to write yet, but you can download or view the source and it should be fairly self-explanatory.


    View ConvolveFilter.java
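As a sketch of one fearless approach (I'm guessing at the details here - check the real source above - but clamping out-of-range samples to the nearest edge pixel is a common choice):

```java
// Convolution that clamps out-of-range sample coordinates to the
// nearest edge pixel, so border pixels get blurred instead of being
// skipped (EDGE_NO_OP) or zeroed (EDGE_ZERO_FILL).
class ClampConvolve {
    static float[][] convolve(float[][] src, float[][] kernel) {
        int h = src.length, w = src[0].length;
        int ry = kernel.length / 2, rx = kernel[0].length / 2;
        float[][] dst = new float[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                float sum = 0;
                for (int ky = 0; ky < kernel.length; ky++)
                    for (int kx = 0; kx < kernel[0].length; kx++) {
                        // Clamp the sample position into the image bounds.
                        int sy = Math.min(h - 1, Math.max(0, y + ky - ry));
                        int sx = Math.min(w - 1, Math.max(0, x + kx - rx));
                        sum += kernel[ky][kx] * src[sy][sx];
                    }
                dst[y][x] = sum;
            }
        }
        return dst;
    }
}
```

The effect is that the image behaves as if its edge pixels extended outwards forever, which is usually what you want for a blur.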

Now the squareness problem: The reason everything looks square is because what we're doing here is called a box blur - our kernel is shaped like a square, as if we're using a camera which has a square aperture. Incidentally, don't let anyone tell you that box blurs are useless - in fact if you're simulating the shadow cast by a square light, it's exactly what you want. Anyway, they'll come in useful further on. Another thing: don't get confused - I'm using the term box blur to refer to the shape of the kernel, not its profile, which I'm going to call a box filter. More on this later on. To get a more realistic blur, what we should have done is used a circle-shaped kernel. This simulates much better what a real camera does.
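A circle-shaped kernel is easy enough to build by hand: mark every cell whose centre falls inside the disc, then normalise so the weights sum to 1, just as they did for the box. A rough sketch (my own helper, not part of the article's source):

```java
// Build a (2*radius+1) x (2*radius+1) kernel that is 1 inside a disc
// of the given radius and 0 outside, then normalise so the weights
// sum to 1 (keeping overall brightness unchanged).
class DiscKernel {
    static float[] make(int radius) {
        int size = 2 * radius + 1;
        float[] k = new float[size * size];
        float total = 0;
        for (int y = 0; y < size; y++)
            for (int x = 0; x < size; x++) {
                double d = Math.hypot(x - radius, y - radius);
                float v = d <= radius ? 1 : 0;   // inside the disc?
                k[y * size + x] = v;
                total += v;
            }
        for (int i = 0; i < k.length; i++)
            k[i] /= total;                        // normalise to sum 1
        return k;
    }
}
```

The resulting array can be passed straight to new Kernel(size, size, k) just like the box kernels above.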

That's much better. We'll come back to this later on, but first a diversion back to the box blur.

    Box Blur

We've solved the edge pixel problem, but our blur is still going really slowly, and things are only going to get worse. The problem is that the number of multiplications in the convolution goes up as the square of the kernel radius. With a 100x100 kernel, we're going to be doing 10000 multiplies and adds per pixel (approx). How can we get round this? It turns out that there are more ways to go about this than I've possibly got time to write about, or even bother to look at. One way I will mention quickly before sweeping it under the rug is this: You can do a box blur by shrinking down your image, blurring it and scaling it up again. This may be fine for your purposes, and you should bear it in mind. One problem is that it doesn't animate very well, but that may not be a concern to you.
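The shrink-and-scale-up trick can be sketched with standard Java 2D calls (a hypothetical helper of my own; the scale factor and the choice of bilinear interpolation are mine):

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

// Cheap approximate blur: scale the image down by "factor", then scale
// it back up with bilinear interpolation. The interpolation does the
// smoothing; the result looks blur-like but animates poorly because
// the small image's pixel grid shifts as the source moves.
class CheapBlur {
    static BufferedImage blur(BufferedImage src, int factor) {
        int sw = Math.max(1, src.getWidth() / factor);
        int sh = Math.max(1, src.getHeight() / factor);
        BufferedImage small = new BufferedImage(sw, sh, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = small.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, sw, sh, null);     // shrink
        g.dispose();
        BufferedImage dst = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_INT_ARGB);
        g = dst.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(small, 0, 0, dst.getWidth(), dst.getHeight(), null); // grow back
        g.dispose();
        return dst;
    }
}
```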

Let's look at the box blur again: It turns out that there's a couple of really easy ways to speed this up. Firstly, it turns out that the box blur is separable. This means that we can do a 2D blur by doing two 1D blurs, once in the horizontal direction and once in the vertical direction. This is much faster than doing the 2D blur because the time taken goes up in proportion to the kernel size, not as its square. Secondly, think about the window that we're sliding across the image. As we move it from left to right, pixels come in at the right edge and are added to the total and at the same time pixels leave the left edge and are subtracted from the total. All we need to do is just do the add and subtract for the entering and leaving pixels at each step instead of adding together all the pixels in the


    window. We only need to store a set of running totals which are the width or height of the

    kernel. This gives a massive speed improvement at the cost of having to write some code.

Luckily, I've written the code for you, so you win all round. We need two passes, once to blur horizontally and once vertically. The code for these is, of course, quite different. But wait! There's a trick we can do which allows us just to write the code once. If we write a blurring function which does the horizontal blur but writes its output image transposed, then we can just call it twice. The first pass blurs horizontally and transposes, the second pass does the same, but as the image is now transposed, it's really doing a vertical blur. The second transposition makes the image the right way up again and voila! - a very fast box blur.

    And here's the source code....
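As a rough sketch of the idea on a grayscale float array (my own names, not the real source; out-of-range samples are clamped to the row ends):

```java
// Fast box blur using a running sum and transposed output.
// One pass blurs every row and writes the result transposed;
// calling it twice blurs both axes and restores the orientation.
class FastBoxBlur {
    static int clamp(int i, int n) { return Math.max(0, Math.min(n - 1, i)); }

    static float[][] blurRowsTransposed(float[][] src, int radius) {
        int h = src.length, w = src[0].length;
        float[][] dst = new float[w][h];          // note: transposed shape
        float norm = 1f / (2 * radius + 1);
        for (int y = 0; y < h; y++) {
            // Prime the running sum with the first window.
            float sum = 0;
            for (int i = -radius; i <= radius; i++)
                sum += src[y][clamp(i, w)];
            for (int x = 0; x < w; x++) {
                dst[x][y] = sum * norm;           // transposed write
                sum += src[y][clamp(x + radius + 1, w)]   // pixel entering
                     - src[y][clamp(x - radius, w)];      // pixel leaving
            }
        }
        return dst;
    }

    static float[][] boxBlur(float[][] src, int radius) {
        // Horizontal pass, then "horizontal" again on the transposed
        // image, which is really a vertical pass.
        return blurRowsTransposed(blurRowsTransposed(src, radius), radius);
    }
}
```

The cost per pixel is now one add and one subtract per pass, regardless of how big the radius is.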

    GrayscaleFilter

This filter converts an image to a grayscale image. To do this it finds the brightness of each pixel and sets the red, green and blue of the output to the brightness value. But what is the brightness? The simplest answer might be that it is the average of the RGB components, but that neglects the way in which the human eye works. The eye is much more sensitive to green and red than it is to blue, and so we need to take less account of the blue and more account of the green. The weighting used by GrayscaleFilter is:

luma = 77R + 151G + 28B

The weights add up to 256, so the result is divided by 256 (a right shift by 8 bits) to bring it back into the 0-255 range. There are no parameters to this filter.
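In code, the per-pixel conversion looks something like this (a sketch of the idea working on a packed ARGB int; the helper name is mine):

```java
// Convert one packed ARGB pixel to gray using the 77/151/28 weights.
// The weights sum to 256, so shifting right by 8 divides by 256 and
// brings the luma back into the 0-255 range.
class GraySketch {
    static int toGray(int argb) {
        int a = (argb >> 24) & 0xff;
        int r = (argb >> 16) & 0xff;
        int g = (argb >> 8) & 0xff;
        int b = argb & 0xff;
        int luma = (77 * r + 151 * g + 28 * b) >> 8;
        // Write the same luma into all three colour channels.
        return (a << 24) | (luma << 16) | (luma << 8) | luma;
    }
}
```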

    GrayscaleFilter

    EdgeFilter

This filter detects the edges in an image. For each pixel, it looks at each channel, finds the local gradient and replaces the channel by a value determined by the gradient. Edges


    become white while flat areas become black. You can choose between various kernels for

    calculating the gradient.
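For instance, with the well-known Sobel kernels the gradient at each pixel can be sketched like this (grayscale floats, interior pixels only; the names are mine):

```java
// Edge detection with the Sobel kernels: GX responds to horizontal
// brightness changes, GY to vertical ones. The output is the gradient
// magnitude - 0 (black) in flat areas, large (white) at edges.
class SobelSketch {
    static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    static float[][] edges(float[][] src) {
        int h = src.length, w = src[0].length;
        float[][] dst = new float[h][w];
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                float gx = 0, gy = 0;
                for (int ky = 0; ky < 3; ky++)
                    for (int kx = 0; kx < 3; kx++) {
                        float p = src[y + ky - 1][x + kx - 1];
                        gx += GX[ky][kx] * p;
                        gy += GY[ky][kx] * p;
                    }
                dst[y][x] = (float) Math.hypot(gx, gy);
            }
        return dst;
    }
}
```

In a real filter the magnitude would then be clamped to 255 before being written back into the channel.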

    EdgeFilter

You can use this filter to obtain black edges by using the Grayscale and Invert filters on the results. This can look nice when overlaid on the original image with the Multiply blend mode.

    Black Edges

http://www.developer.com/java/ent/article.php/3590351/Processing-Image-Pixels-Applying-Image-Convolution-in-Java.htm