
A

PROJECT REPORT

ON

“IMAGE PROCESSING (EDGE DETECTION) IN C”

A PROJECT-II (SEM-II OF FINAL YEAR)

Carried out in partial fulfillment of the requirements for the B.Tech in Electronics & Telecommunication Engineering,

In the academic year 2010-11.

Submitted to

SHRI GURU GOBIND SINGHJI INSTITUTE OF ENGINEERING AND TECHNOLOGY VISHNUPURI, NANDED-431606

Submitted by

1. Mr. Dawale Ashish Arunrao (2007BEC098)

2. Mr. Chormage Vaibhav Dilip (2007BEC102)

3. Mr. Khadse Akash Govindrao (2007BEC061)

Project Supervisor

Prof. A.N.Kamthane

DEPARTMENT OF ELECTRONICS & TELECOMMUNICATION ENGINEERING,
SHRI GURU GOBIND SINGHJI INSTITUTE OF ENGINEERING AND TECHNOLOGY,
VISHNUPURI, NANDED-431606


SHRI GURU GOBIND SINGHJI INSTITUTE OF ENGG. & TECH.

VISHNUPURI, NANDED-431606

CERTIFICATE

This is to certify that Mr. Dawale Ashish, Mr. Chormage Vaibhav, and Mr. Khadse Akash have completed all requirements for Project-II in Sem-II of the final year in the academic year 2010-11 under my supervision.

In my opinion, the work carried out by them has reached the standard required for partial fulfillment of the requirements for the award of the degree of B.Tech in Electronics & Telecommunication Engineering.

To the best of my knowledge, the contents of the work carried out by them have not been submitted to any other University or Institute for any other Degree or Diploma.

Date: 02-05-2011
Place: Nanded

Prof. A. N. Kamthane
(Project Supervisor)

Prof. Dr. R. R. Manthalkar
H.O.D. (Electronics & Telecommunication)


ACKNOWLEDGMENT

We would like to take this opportunity to express our heartfelt gratitude to our beloved guide, Prof. A. N. Kamthane, without whose guidance this endeavour would not have been possible.

We would also like to thank our HOD, Prof. Dr. R. R. Manthalkar, who has always guided us in his own way through his knowledge.

Last but not least, we thank each and every person in our department who helped us, directly or indirectly, in the completion of this project. We also thank all our friends for standing behind us.

Mr. Dawale Ashish Arunrao (2007BEC098)

Mr. Chormage Vaibhav Dilip (2007BEC102)

Mr. Khadse Akash Govindrao (2007BEC061)


Contents

1. Introduction
   1.1 Image Data Basics
   1.2 Motivation
2. BMP File Format
   2.1 Pixel Storage
   2.2 Device-Independent Bitmaps
   2.3 Bitmap Data
   2.4 54-Byte Header of a Bitmap
3. Edge Detection
4. Sobel Operator
   4.1 Introduction
   4.2 Sobel Explanation
   4.3 Formulation
   4.4 Technical Details
5. Laplace Operator
   5.1 Introduction
6. Experimental Results
   6.1 Main Program
   6.2 Canny Edge Detection
7. Conclusion
8. References

1. Introduction

Image processing software performs image disk I/O, manipulates images, and outputs the results. This report is easier to follow if one keeps in mind that the C image processing system described here performs these three tasks.

The first task is image disk I/O, and the first item needed is an image file format. The file format specifies how to store the image and the information about it. This project works with Windows bit-mapped (BMP) files; the source code works with 8-bit, uncompressed BMP files. These are basic pixel-based image files. Images are available from many sources today (the Internet is a practically limitless source).

The second task is image manipulation (here, applying an edge detection algorithm to the image). This is how the software holds the image data in memory and processes it. We used a 16-bit compiler and were limited by the 64K-byte memory segments of the PC.

The final task is outputting results. The C program can write the resulting (edge-detected) BMP image files and display their pixels on the screen.


Image processing involves processing or altering an existing image in a desired manner. The first step is obtaining an image in a readable format; this is much easier today than it was five years ago, since the Internet and other sources provide countless images in standard formats. The following sections describe the BMP file format and present source code that reads and writes images in this format. Once the image is in a readable format, the software can read it, process it, and write the result back to a file.

1.1 Image Data Basics

An image consists of a two-dimensional array of numbers. The color or gray shade displayed for a given picture element (pixel) depends on the number stored in the array for that pixel. The simplest type of image data is black and white. It is a binary image since each pixel is either 0 or 1.

The next, more complex type of image data is gray scale, where each pixel takes on a value between zero and the number of gray scales or gray levels that the scanner can record. These images appear like common black-and-white photographs: they are black, white, and shades of gray. Most gray scale images today have 256 shades of gray. People can distinguish about 40 shades of gray, so a 256-shade image "looks like a photograph." This project concentrates on gray scale images.

The most complex type of image is color. Color images are similar to gray scale except that there are three bands, or channels, corresponding to the colors red, green, and blue. Thus, each pixel has three values associated with it. A color scanner uses red, green, and blue filters to produce those values.

Images are available via the Internet, scanners, and digital cameras. Any picture shown on the Internet can be downloaded by pressing the right mouse button when the pointer is on the image; this brings the image to your PC, usually in JPEG format. Your Internet access software and other software packages can convert that to a TIFF or BMP. Image scanners permit putting common photographs into computer files. The prices of full-color, full-size scanners are lower than ever (some are available for less than $100). Be prepared to experiment with scanning photographs: the biggest problem is file size. Most scanners can scan 300 dots per inch (dpi), so a 3"x5" photograph at 300 dpi provides 900x1500 pixels; at eight bits per pixel, the image file is over 1,350,000 bytes. Digital cameras have come out of the research lab and into consumer electronics; these cameras store images directly to floppy disk. Most cameras use the JPEG file format to save space.


1.2 Motivation

(Figure: Canny edge detection applied to a photograph)

The purpose of detecting sharp changes in image brightness is to capture important events and changes in properties of the world. It can be shown that under rather general assumptions for an image formation model, discontinuities in image brightness are likely to correspond to[1][2]:

- discontinuities in depth,
- discontinuities in surface orientation,
- changes in material properties, and
- variations in scene illumination.

In the ideal case, the result of applying an edge detector to an image may lead to a set of connected curves that indicate the boundaries of objects, the boundaries of surface markings, as well as curves that correspond to discontinuities in surface orientation. Thus, applying an edge detection algorithm to an image may significantly reduce the amount of data to be processed and may therefore filter out information that may be regarded as less relevant, while preserving the important structural properties of an image. If the edge detection step is successful, the subsequent task of interpreting the information contents in the original image may therefore be substantially simplified.

However, it is not always possible to obtain such ideal edges from real-life images of moderate complexity. Edges extracted from non-trivial images are often hampered by fragmentation (meaning that the edge curves are not connected), missing edge segments, as well as false edges not corresponding to interesting phenomena in the image, thus complicating the subsequent task of interpreting the image data.[3]

Edge detection is one of the fundamental steps in image processing, image analysis, image pattern recognition, and computer vision techniques.[4] During recent years, however, substantial (and successful) research has also been made


on computer vision methods that do not explicitly rely on edge detection as a pre-processing step.

2. BMP File Format

2.1 Pixel storage

In uncompressed BMP files, and many other bitmap file formats, image pixels are stored with a color depth of 1, 4, 8, 16, 24, or 32 bits per pixel. Images of 8 bits and fewer can be either grayscale or indexed color. An alpha channel (for transparency) may be stored in a separate file, where it is similar to a grayscale image, or in a fourth channel that converts 24-bit images to 32 bits per pixel. Uncompressed bitmap files (such as BMP) are typically much larger than compressed (with any of various methods) image file formats for the same image. For example, the 1058×1058 Wikipedia logo, which occupies about 287.65 KB in the PNG format, takes about 3358 KB as a 24-bit BMP file. Uncompressed formats are generally unsuitable for transferring images on the Internet or other slow or capacity-limited media.

The bits representing the bitmap pixels are packed within rows. Depending on the color depth, a pixel in the picture will occupy at least n/8 bytes (n is the bit depth, since 1 byte equals 8 bits). The approximate size of an n-bit (2^n colors) BMP file in bytes can be calculated, including the effect of starting each word on a 32-bit boundary, as:

    BMP file size ≈ 54 + 4*2^n + floor((n*w + 31)/32) * 4 * h ..................equ(2.1)

where w and h are the width and height of the image in pixels, and the floor function gives the highest integer that is less than or equal to the argument. The floor expression is the number of 32-bit dwords needed to hold a row of n-bit pixels; this value multiplied by 4 gives the byte count of one padded row:

    row size = floor((n*w + 31)/32) * 4 ..................equ(2.2)


In the formula above, 54 is the size of the headers in the popular Windows V3 BMP version (14-byte BMP file header plus 40-byte DIB V3 header); some other header versions will be larger or smaller than that. And 4*2^n is the size of the color palette; this size is an approximation, as the color palette size will be 3*2^n bytes in the OS/2 V1 version, and some other versions may optionally define only the number of colors needed by the image, potentially fewer than 2^n. Only files with 8 or fewer bits per pixel use a palette; for 16-bit (or higher) bitmaps, omit the palette part from the size calculation:

    BMP file size ≈ 54 + floor((n*w + 31)/32) * 4 * h ..................equ(2.3)
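As a quick illustration of these formulas, here is a small C sketch (our own helper, not part of the project code) that evaluates equ(2.1) and equ(2.3); integer division performs the floor for positive operands:

#include <stdio.h>

/* Approximate size of a V3 BMP file: w, h = width and height in pixels,
   n = bits per pixel. Follows equ(2.1)/equ(2.3). */
long bmpFileSize(long w, long h, int n)
{
    long rowBytes = ((n * w + 31) / 32) * 4;          /* padded row, equ(2.2)      */
    long palette  = (n <= 8) ? 4L * (1L << n) : 0L;   /* palette only for <= 8 bpp */

    return 54 + palette + rowBytes * h;               /* 54-byte header + pixels   */
}

int main(void)
{
    printf("%ld bytes\n", bmpFileSize(1058L, 1058L, 24));   /* the Wikipedia-logo example */
    return 0;
}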

2.2 Device-independent bitmaps and BMP file format

Microsoft has defined a particular representation of color bitmaps of different color depths, as an aid to exchanging bitmaps between devices and applications with a variety of internal representations. They called these device-independent bitmaps or DIBs, and the file format for them is called DIB file format or BMP file format. According to Microsoft support:

Since BMP/DIB format was created for the Intel X86 architecture, all integers are stored in little-endian format.

A device-independent bitmap (DIB) is a format used to define device-independent bitmaps in various color resolutions. The main purpose of DIBs is to allow bitmaps to be moved from one device to another (hence, the device-independent part of the name). A DIB is an external format, in contrast to a device-dependent bitmap, which appears in the system as a bitmap object (created by an application...). A DIB is normally transported in metafiles, BMP files, and the Clipboard.

A typical BMP file usually contains the following blocks of data:

BMP File Header                 | Stores general information about the BMP file.
Bitmap Information (DIB header) | Stores detailed information about the bitmap image.
Color Palette                   | Stores the definition of the colors being used for indexed color bitmaps.
Bitmap Data                     | Stores the actual image, pixel by pixel.

Table 2.1: Blocks of data in a typical BMP file

2.3 Bitmap data

This block of bytes describes the image, pixel by pixel. Pixels are stored "upside-down" with respect to normal image raster scan order, starting in the lower left corner, going from left to right, and then row by row from the bottom to the top of the image. Uncompressed Windows bitmaps can also be stored from the top row to the bottom, if the image height value is negative.
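As a small sketch of this bottom-up layout (our own helper, not from the project code), the file offset of image row y, counted from the top of the picture, can be computed as:

/* dataOffset = start of the pixel array (54 + palette bytes for V3 files),
   rowBytes   = padded row size in bytes, height > 0 (bottom-up bitmap),
   y          = row index counted from the top of the image. */
long bmpRowOffset(long dataOffset, long rowBytes, long height, long y)
{
    return dataOffset + (height - 1 - y) * rowBytes;
}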

In the original DIB, the only four legal numbers of bits per pixel are 1, 4, 8, and 24. In all cases, each row of pixels is extended to a 32-bit (4-byte) boundary, filling with an unspecified value (not necessarily 0) so that the next row will start on a multiple-of-four byte location in memory or in the file. The total number of bytes in a row can be calculated as the image size/bitmap height in pixels. Following these rules there are several ways to store the pixel data depending on the color depth and the compression type of the bitmap.

One-bit (two-color, for example, black and white) pixel values are stored in each bit, with the first (left-most) pixel in the most-significant bit of the first byte. An unset bit will refer to the first color table entry, and a set bit will refer to the last (second) table entry.

Four-bit color (16 colors) is stored with two pixels per byte, the left-most pixel being in the more significant nibble. Each pixel value is an index into a table of up to 16 colors.

Eight-bit color (256 colors) is stored one pixel value per byte. Each byte is an index into a table of up to 256 colors.

RGB color (24-bit) pixel values are stored with bytes in the same order (blue, green, red) as in the color table.
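These packed layouts can be made concrete with a short sketch (our own helper names; row is assumed to point at the start of a padded row and x is the pixel column):

/* 1-bit image: eight pixels per byte, left-most pixel in the most-significant bit. */
unsigned char getPixel1(const unsigned char *row, int x)
{
    return (row[x / 8] >> (7 - (x % 8))) & 1;   /* index into a 2-entry color table */
}

/* 4-bit image: two pixels per byte, left pixel in the more significant nibble. */
unsigned char getPixel4(const unsigned char *row, int x)
{
    return (x % 2 == 0) ? (row[x / 2] >> 4)     /* left pixel: high nibble  */
                        : (row[x / 2] & 0x0F);  /* right pixel: low nibble  */
}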


2.4 54-Byte Header of a Bitmap

Offset | Size | Hex Value   | Value              | Description
0h     | 2    | 42 4D       | "BM"               | Magic number (unsigned integers 66, 77)
2h     | 4    | 46 00 00 00 | 70 bytes           | Size of the bitmap file
6h     | 2    | 00 00       | Unused             | Application specific
8h     | 2    | 00 00       | Unused             | Application specific
Ah     | 4    | 36 00 00 00 | 54 bytes           | Offset where the bitmap data (pixels) can be found
Eh     | 4    | 28 00 00 00 | 40 bytes           | Number of bytes in the header (from this point)
12h    | 4    | 02 00 00 00 | 2 pixels           | Width of the bitmap in pixels
16h    | 4    | 02 00 00 00 | 2 pixels           | Height of the bitmap in pixels
1Ah    | 2    | 01 00       | 1 plane            | Number of color planes being used
1Ch    | 2    | 18 00       | 24 bits            | Number of bits per pixel
1Eh    | 4    | 00 00 00 00 | 0                  | BI_RGB, no compression used
22h    | 4    | 10 00 00 00 | 16 bytes           | Size of the raw BMP data (after this header)
26h    | 4    | 13 0B 00 00 | 2,835 pixels/meter | Horizontal resolution of the image
2Ah    | 4    | 13 0B 00 00 | 2,835 pixels/meter | Vertical resolution of the image
2Eh    | 4    | 00 00 00 00 | 0 colors           | Number of colors in the palette
32h    | 4    | 00 00 00 00 | 0 important colors | 0 means all colors are important

Table 2.2: The 54-byte header of the .bmp file format (the example values describe a 2×2-pixel, 24-bit bitmap)
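The same 54 bytes can also be written down as C structures. The sketch below is illustrative only (it assumes a little-endian machine, a 32-bit long as on the 16-bit compiler used in this project, and a compiler that packs the structures without padding); the type names are our own, while the field names follow the usual Windows conventions:

typedef struct {
    unsigned char  bfType[2];       /* "BM" magic number            (offset 0h)  */
    unsigned long  bfSize;          /* size of the file in bytes    (offset 2h)  */
    unsigned short bfReserved1;     /* unused                       (offset 6h)  */
    unsigned short bfReserved2;     /* unused                       (offset 8h)  */
    unsigned long  bfOffBits;       /* offset of the pixel data     (offset Ah)  */
} BmpFileHeader;                    /* 14 bytes */

typedef struct {
    unsigned long  biSize;          /* header size from this point  (offset Eh)  */
    long           biWidth;         /* width in pixels              (offset 12h) */
    long           biHeight;        /* height in pixels             (offset 16h) */
    unsigned short biPlanes;        /* number of color planes, 1    (offset 1Ah) */
    unsigned short biBitCount;      /* bits per pixel               (offset 1Ch) */
    unsigned long  biCompression;   /* 0 = BI_RGB, no compression   (offset 1Eh) */
    unsigned long  biSizeImage;     /* size of the raw bitmap data  (offset 22h) */
    long           biXPelsPerMeter; /* horizontal resolution        (offset 26h) */
    long           biYPelsPerMeter; /* vertical resolution          (offset 2Ah) */
    unsigned long  biClrUsed;       /* colors in the palette        (offset 2Eh) */
    unsigned long  biClrImportant;  /* important colors, 0 = all    (offset 32h) */
} BmpInfoHeader;                    /* 40 bytes */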

3. Edge Detection

Edges characterize boundaries and are therefore a problem of fundamental importance in image processing. Edges in images are areas with strong intensity contrasts: a jump in intensity from one pixel to the next. Detecting the edges of an image significantly reduces the amount of data and filters out useless information, while preserving the important structural properties of the image. There are many ways to perform edge detection; however, the majority of the different methods may be grouped into two categories, gradient and


Laplacian. The gradient method detects the edges by looking for the maximum and minimum in the first derivative of the image. The Laplacian method searches for zero crossings in the second derivative of the image to find edges. An edge has the one-dimensional shape of a ramp and calculating the derivative of the image can highlight its location. Suppose we have the following signal, with an edge shown by the jump in intensity below:

Fig 3.1 Input signal

If we take the gradient of this signal (which, in one dimension, is just the first derivative with respect to t) we get the following:

Fig 3.2 1st derivative of a signal

Clearly, the derivative shows a maximum located at the center of the edge in the original signal. This method of locating an edge is characteristic of the "gradient filter" family of edge detection filters and includes the Sobel method. A pixel location is declared an edge location if the value of the gradient exceeds some threshold; as mentioned before, edges will have higher pixel intensity values than those surrounding them. So once a threshold is set, you can compare the gradient value to the threshold value and detect an edge whenever the threshold is exceeded.

Furthermore, when the first derivative is at a maximum, the second derivative is zero. As a result, another alternative for finding the location of an edge is to locate the zeros in the second derivative. This method is known as the Laplacian, and the second derivative of the signal is shown below:


Fig 3.3 2nd derivative of a signal
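The two figures can also be reproduced numerically. The following small C sketch (our own illustration, with a made-up ramp signal) prints the central-difference first derivative and the discrete second derivative of a 1-D edge; the first peaks at the center of the ramp, and the second crosses zero there:

#include <stdio.h>

int main(void)
{
    int f[9] = {0, 0, 0, 2, 6, 10, 12, 12, 12};   /* a made-up ramp-shaped edge */
    int i;

    for (i = 1; i < 8; i++) {
        int d1 = f[i + 1] - f[i - 1];             /* ~ first derivative (central difference) */
        int d2 = f[i + 1] - 2*f[i] + f[i - 1];    /* ~ second derivative */
        printf("i=%d  f=%2d  f'=%3d  f''=%3d\n", i, f[i], d1, d2);
    }
    return 0;
}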

4. Sobel Operator

4.1 Introduction

Based on this one-dimensional analysis, the theory can be carried over to two dimensions as long as there is an accurate approximation for calculating the derivative of a two-dimensional image. The Sobel operator performs a 2-D spatial gradient measurement on an image. Typically it is used to find the approximate absolute gradient magnitude at each point of an input grayscale image. The Sobel edge detector uses a pair of 3x3 convolution masks, one estimating the gradient in the x-direction (columns) and the other estimating the gradient in the y-direction (rows). A convolution mask is usually much smaller than the actual image; as a result, the mask is slid over the image, manipulating a square of pixels at a time. The actual Sobel masks are shown below:


         [ -1  0  +1 ]          [ +1  +2  +1 ]
    GX = [ -2  0  +2 ]     GY = [  0   0   0 ]
         [ -1  0  +1 ]          [ -1  -2  -1 ]

Fig. 4.1: Masking matrices of the Sobel operator (as used in the program of Section 6.1)

The magnitude of the gradient is then calculated using the formula:

    |G| = sqrt(Gx^2 + Gy^2) ..................equ(4.1)

An approximate magnitude can be calculated using:

|G| = |Gx| + |Gy| …………………………….equ(4.2)

4.2 SOBEL EXPLANATION

The mask is slid over an area of the input image, changes that pixel's value, and then shifts one pixel to the right, continuing to the right until it reaches the end of a row; it then starts at the beginning of the next row. The example below shows the mask being slid over the top-left portion of the input image, represented by the green outline. The formula shows how a particular pixel in the output image is calculated. The center of the mask is placed over the pixel being manipulated in the image, and the I and J values are used to move the file pointer so that, for example, pixel a22 can be multiplied by the corresponding mask value m22. It is important to notice that pixels in the first and last rows, as well as in the first and last columns, cannot be manipulated by a 3x3 mask: when the center of the mask is placed over a pixel in the first row (for example), the mask lies outside the image boundaries.


Fig 4.2: Masking process of the Sobel operator

The GX mask highlights the edges in the horizontal direction while the GY mask highlights the edges in the vertical direction. After taking the magnitude of both, the resulting output detects edges in both directions.

4.3 Formulation

Mathematically, the operator uses two 3×3 kernels which are convolved with the original image to calculate approximations of the derivatives - one for horizontal changes, and one for vertical. If we define A as the source image, and Gx and Gy are two images which at each point contain the horizontal and vertical derivative approximations, the computations are as follows:

         [ -1  0  +1 ]               [ +1  +2  +1 ]
    Gx = [ -2  0  +2 ] * A      Gy = [  0   0   0 ] * A ..................equ(4.3)
         [ -1  0  +1 ]               [ -1  -2  -1 ]

where * here denotes the 2-dimensional convolution operation.


The x-coordinate is here defined as increasing in the "right"-direction, and the y-coordinate is defined as increasing in the "down"-direction. At each point in the image, the resulting gradient approximations can be combined to give the gradient magnitude, using:

    G = sqrt(Gx^2 + Gy^2) ..................equ(4.4)

Using this information, we can also calculate the gradient's direction:

    Θ = arctan(Gy / Gx)

More formally:

Since the intensity function of a digital image is only known at discrete points, derivatives of this function cannot be defined unless we assume that there is an underlying continuous intensity function which has been sampled at the image points. With some additional assumptions, the derivative of the continuous intensity function can be computed as a function on the sampled intensity function, i.e. the digital image. It turns out that the derivatives at any particular point are functions of the intensity values at virtually all image points. However, approximations of these derivative functions can be defined at lesser or larger degrees of accuracy.

The Sobel operator represents a rather inaccurate approximation of the image gradient, but is still of sufficient quality to be of practical use in many applications. More precisely, it uses intensity values only in a 3×3 region around each image point to approximate the corresponding image gradient, and it uses only integer values for the coefficients which weight the image intensities to produce the gradient approximation...

4.4 Technical details

As a consequence of its definition, the Sobel operator can be implemented by simple means in both hardware and software: only eight image points around a point are needed to compute the corresponding result and only integer arithmetic is needed to compute the gradient vector approximation. Furthermore, the two discrete filters described above are both separable:

    [ -1  0  +1 ]   [ 1 ]                       [ +1  +2  +1 ]   [ +1 ]
    [ -2  0  +2 ] = [ 2 ] * [ -1  0  +1 ]  and  [  0   0   0 ] = [  0 ] * [ 1  2  1 ] ..........equ(4.5)
    [ -1  0  +1 ]   [ 1 ]                       [ -1  -2  -1 ]   [ -1 ]

and the two derivatives Gx and Gy can therefore be computed as


         [ 1 ]                                [ +1 ]
    Gx = [ 2 ] * ( [ -1  0  +1 ] * A )   Gy = [  0 ] * ( [ 1  2  1 ] * A ) ..........equ(4.6)
         [ 1 ]                                [ -1 ]

In certain implementations, this separable computation may be advantageous since it implies fewer arithmetic computations for each image point (six multiplications per pixel for the two 1-D passes instead of nine for the full 3x3 mask).
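A minimal sketch of this saving (our own helper, not part of the project code; border pixels are simply skipped, and the masks are applied in the sliding-window sense used by the main program of Section 6.1):

/* Gx via two 1-D passes, per equ(4.6): a horizontal pass with [-1 0 +1]
   followed by a vertical pass with [1 2 1]'. img, tmp and gx are
   rows x cols arrays stored row by row. */
void sobelXSeparable(const unsigned char *img, int *tmp, int *gx,
                     int rows, int cols)
{
    int x, y;

    for (y = 0; y < rows; y++)                /* horizontal pass: [-1 0 +1] */
        for (x = 1; x < cols - 1; x++)
            tmp[y*cols + x] = img[y*cols + x + 1] - img[y*cols + x - 1];

    for (y = 1; y < rows - 1; y++)            /* vertical pass: [1 2 1]'    */
        for (x = 1; x < cols - 1; x++)
            gx[y*cols + x] = tmp[(y-1)*cols + x]
                           + 2 * tmp[y*cols + x]
                           + tmp[(y+1)*cols + x];
}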

Applying convolution K to pixel group P can be represented in pseudocode as:

    N(x,y) = Sum of { K(i,j) * P(x-i, y-j) }, for i, j running from -1 to 1 ..........equ(4.7)

N(x,y) represents the new matrix that results from applying the convolution K to P, where P is the pixel matrix.
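Equ(4.7) translates almost directly into C. The sketch below is illustrative (our own function name; it assumes (x, y) is an interior pixel of a cols-wide 8-bit image):

int convolve3x3(const unsigned char *img, int cols, int x, int y, int K[3][3])
{
    int i, j, sum = 0;

    for (i = -1; i <= 1; i++)
        for (j = -1; j <= 1; j++)
            sum += K[i + 1][j + 1] * (int)img[(y - j)*cols + (x - i)];  /* K(i,j) * P(x-i, y-j) */

    return sum;   /* N(x,y), before any clamping or thresholding */
}

Note that the main program of Section 6.1 indexes the image with (x + i, y + j) instead of (x - i, y - j); for the Sobel masks this only flips the sign of the result, which disappears in the magnitude of equ(4.2).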

5. Laplace Operator

5.1 Introduction

The 5x5 Laplacian used here is a convolution mask that approximates the second derivative, unlike the Sobel method, which approximates the gradient. Instead of two 3x3 Sobel masks (one each for the x and y directions), the Laplace method uses a single 5x5 mask for the second derivative in both the x and y directions. However, because this mask approximates a second-derivative measurement on the image,


it is very sensitive to noise, as can be seen by comparing edgeSob.bmp to edgeLap.bmp. The Laplace mask is shown below; the corresponding code appears in Section 6.1.

    [ -1  -1  -1  -1  -1 ]
    [ -1  -1  -1  -1  -1 ]
    [ -1  -1  24  -1  -1 ]
    [ -1  -1  -1  -1  -1 ]
    [ -1  -1  -1  -1  -1 ]

Fig 5.1: Masking matrix of the Laplace operator (as used in the program of Section 6.1)

6. Experimental Results

To show the effectiveness of the algorithms, we tested our implementation on different .BMP images. The proposed methods have been implemented in C on a PC.

6.1 MAIN PROGRAM:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <alloc.h>   /* Borland/Turbo C header; on modern compilers malloc() lives in stdlib.h */
#include <conio.h>   /* Borland/Turbo C header for clrscr() and getch() */

typedef struct {
    int rows;
    int cols;
    unsigned char *data;
} sImage;

/*-------PROTOTYPES---------*/
long getImageInfo(FILE*, long, int);
void copyImageInfo(FILE* inputFile, FILE* outputFile);
void copyColorTable(FILE* inputFile, FILE* outputFile, int nColors);

int main()
{
    FILE *bmpInput, *bmpOutput;
    sImage originalImage, edgeImage;
    unsigned int X, Y, row, col;
    char ch;
    int I, J, nColors, SUM;
    long sumX, sumY;
    unsigned long vectorSize, fileSize;   /* vector size = rows * cols */
    int GX[3][3], GY[3][3], MASK[5][5];
    unsigned char *pChar, someChar;

    someChar = '0';
    pChar = &someChar;

clrscr();

    /* 3x3 GX Sobel mask. */
    GX[0][0] = -1; GX[0][1] = 0; GX[0][2] = 1;
    GX[1][0] = -2; GX[1][1] = 0; GX[1][2] = 2;
    GX[2][0] = -1; GX[2][1] = 0; GX[2][2] = 1;

    /* 3x3 GY Sobel mask. */
    GY[0][0] =  1; GY[0][1] =  2; GY[0][2] =  1;
    GY[1][0] =  0; GY[1][1] =  0; GY[1][2] =  0;
    GY[2][0] = -1; GY[2][1] = -2; GY[2][2] = -1;

    /* 5x5 Laplace mask. Ref: Myler Handbook p. 135 */
    MASK[0][0] = -1; MASK[0][1] = -1; MASK[0][2] = -1; MASK[0][3] = -1; MASK[0][4] = -1;
    MASK[1][0] = -1; MASK[1][1] = -1; MASK[1][2] = -1; MASK[1][3] = -1; MASK[1][4] = -1;
    MASK[2][0] = -1; MASK[2][1] = -1; MASK[2][2] = 24; MASK[2][3] = -1; MASK[2][4] = -1;
    MASK[3][0] = -1; MASK[3][1] = -1; MASK[3][2] = -1; MASK[3][3] = -1; MASK[3][4] = -1;
    MASK[4][0] = -1; MASK[4][1] = -1; MASK[4][2] = -1; MASK[4][3] = -1; MASK[4][4] = -1;

    /*-------DECLARE INPUT & OUTPUT FILES-------*/
    bmpInput = fopen("lena.bmp", "rb");
    bmpOutput = fopen("edgeSob.bmp", "wb");

    /* Read the file size and image dimensions from the 54-byte header.
       The size field at offset 2 is 4 bytes, so it is read with
       sizeof(unsigned long) = 4 (on this 16-bit compiler int is 2 bytes). */
    fseek(bmpInput, 2, SEEK_SET);
    fread(&fileSize, sizeof(unsigned long), 1, bmpInput);
    fseek(bmpInput, 18, SEEK_SET);
    fread(&originalImage.cols, sizeof(unsigned int), 1, bmpInput);   /* low word of biWidth  */
    fseek(bmpInput, 22, SEEK_SET);
    fread(&originalImage.rows, sizeof(unsigned int), 1, bmpInput);   /* low word of biHeight */

    edgeImage.rows = originalImage.rows;
    edgeImage.cols = originalImage.cols;

    /*--------PRINT DATA TO SCREEN----------*/
    printf("\nWidth: %d\n", originalImage.cols);
    printf("\nHeight: %d\n", originalImage.rows);
    printf("\nFile size: %lu Bytes\n", fileSize);

    fseek(bmpInput, 46, SEEK_SET);   /* biClrUsed: number of palette colors */
    fread(&nColors, sizeof(unsigned int), 1, bmpInput);
    printf("\nnColors: %d\n", nColors);

    /*------ALLOCATE MEMORY FOR FILES--------*/
    /* pixel data size = file size minus 14-byte BMP file header,
       40-byte DIB V3 header, and 4*nColors bytes of color palette */
    vectorSize = fileSize - (14 + 40 + 4*nColors);
    /* vectorSize = originalImage.cols * originalImage.rows; */
    printf("\nvectorSize: %lu Bytes\n", vectorSize);

    edgeImage.data = (unsigned char *) malloc(vectorSize * sizeof(unsigned char));
    if (edgeImage.data == NULL) {
        printf("Failed to malloc edgeImage.data\n");
        exit(0);
    }
    printf("\n%lu bytes malloc'ed for edgeImage.data\n", vectorSize);

    originalImage.data = (unsigned char *) malloc(vectorSize * sizeof(unsigned char));
    if (originalImage.data == NULL) {
        printf("Failed to malloc originalImage.data\n");
        exit(0);
    }
    printf("\n%lu bytes malloc'ed for originalImage.data\n", vectorSize);

    /*------COPY HEADER AND COLOR TABLE---------*/
    copyImageInfo(bmpInput, bmpOutput);
    copyColorTable(bmpInput, bmpOutput, nColors);
    fseek(bmpInput, (14 + 40 + 4*nColors), SEEK_SET);
    fseek(bmpOutput, (14 + 40 + 4*nColors), SEEK_SET);

    /* Read the input BMP and store its raster data into originalImage.data */
    for (row = 0; row <= originalImage.rows - 1; row++) {
        for (col = 0; col <= originalImage.cols - 1; col++) {
            fread(pChar, sizeof(char), 1, bmpInput);
            *(originalImage.data + row*originalImage.cols + col) = *pChar;
        }
    }

printf("Enter your choice \n\ta => By Sobel\n\tb => by Laplace\n"); scanf("%c",&ch);

    switch (ch) {

    case 'a':
        /*---------------------------------------------------
           SOBEL ALGORITHM STARTS HERE
        ---------------------------------------------------*/


        for (Y = 0; Y <= (originalImage.rows - 1); Y++) {
            for (X = 0; X <= (originalImage.cols - 1); X++) {
                sumX = 0;
                sumY = 0;

                /* image boundaries: a 3x3 mask cannot be centred on
                   the first/last row or column */
                if (Y == 0 || Y == originalImage.rows - 1)
                    SUM = 0;
                else if (X == 0 || X == originalImage.cols - 1)
                    SUM = 0;

                /* Convolution starts here */
                else {
                    /*-------X GRADIENT APPROXIMATION------*/
                    for (I = -1; I <= 1; I++) {
                        for (J = -1; J <= 1; J++) {
                            sumX = sumX + (int)((*(originalImage.data + X + I
                                       + (Y + J)*originalImage.cols)) * GX[I+1][J+1]);
                        }
                    }
                    if (sumX > 255) sumX = 255;
                    if (sumX < 0)   sumX = 0;

                    /*-------Y GRADIENT APPROXIMATION-------*/
                    for (I = -1; I <= 1; I++) {
                        for (J = -1; J <= 1; J++) {
                            sumY = sumY + (int)((*(originalImage.data + X + I
                                       + (Y + J)*originalImage.cols)) * GY[I+1][J+1]);
                        }
                    }
                    if (sumY > 255) sumY = 255;
                    if (sumY < 0)   sumY = 0;

                    /*---GRADIENT MAGNITUDE APPROXIMATION (equ 4.2)----*/
                    SUM = abs(sumX) + abs(sumY);
                }


                if (SUM > 255) SUM = 255;   /* clamp: abs(sumX) + abs(sumY) can reach 510 */

                /* make edges black and background white */
                *(edgeImage.data + X + Y*originalImage.cols) = 255 - (unsigned char)(SUM);
                fwrite((void *)(edgeImage.data + X + Y*originalImage.cols), 1, 1, bmpOutput);
            }
        }

        printf("\n\a*******See edgeSob.bmp for results*******\a\n");
        break;

    case 'b':
        /*---------------------------------------------------
           LAPLACE ALGORITHM STARTS HERE
        ---------------------------------------------------*/
        for (Y = 0; Y <= (originalImage.rows - 1); Y++) {
            for (X = 0; X <= (originalImage.cols - 1); X++) {
                SUM = 0;

                /* image boundaries: the 5x5 mask needs two rows and
                   two columns of margin */
                if (Y == 0 || Y == 1
                        || Y == originalImage.rows - 2 || Y == originalImage.rows - 1)
                    SUM = 0;
                else if (X == 0 || X == 1
                        || X == originalImage.cols - 2 || X == originalImage.cols - 1)
                    SUM = 0;

                /* Convolution starts here */
                else {
                    for (I = -2; I <= 2; I++) {
                        for (J = -2; J <= 2; J++) {
                            SUM = SUM + (int)((*(originalImage.data + X + I
                                      + (Y + J)*originalImage.cols)) * MASK[I+2][J+2]);
                        }
                    }
                }
                if (SUM > 255) SUM = 255;
                if (SUM < 0)   SUM = 0;

                /* make edges black and background white */
                *(edgeImage.data + X + Y*originalImage.cols) = 255 - (unsigned char)(SUM);
                fwrite((edgeImage.data + X + Y*originalImage.cols), sizeof(char), 1, bmpOutput);
            }


        }

        printf("See edgeLap.bmp for results\n");
        break;

    default:
        printf("Invalid Choice ....");
        break;
    }

    fclose(bmpInput);
    fclose(bmpOutput);
    free(edgeImage.data);       /* finished with edgeImage.data     */
    free(originalImage.data);   /* finished with originalImage.data */
    getch();
    return 0;
}

/*-------------COPIES HEADER AND INFO HEADER----------------*/
void copyImageInfo(FILE* inputFile, FILE* outputFile)
{
    unsigned char *ptrC;
    unsigned char dummy;
    int i;

    dummy = '0';
    ptrC = &dummy;

    fseek(inputFile, 0L, SEEK_SET);
    fseek(outputFile, 0L, SEEK_SET);

    /* copy the full 54-byte header (14-byte file header + 40-byte info header) */
    for (i = 0; i < 54; i++) {
        fread(ptrC, sizeof(char), 1, inputFile);
        fwrite(ptrC, sizeof(char), 1, outputFile);
    }
}


/*----------------COPIES COLOR TABLE-----------------------------*/
void copyColorTable(FILE* inputFile, FILE* outputFile, int nColors)
{
    unsigned char *ptrC;
    unsigned char dummy;
    int i;

    dummy = '0';
    ptrC = &dummy;

    fseek(inputFile, 54L, SEEK_SET);
    fseek(outputFile, 54L, SEEK_SET);

    for (i = 0; i < 4*nColors; i++) {   /* there are 4*nColors bytes in the color table */
        fread(ptrC, sizeof(char), 1, inputFile);
        fwrite(ptrC, sizeof(char), 1, outputFile);
    }
}

Result:

(1) Detect edges using the Sobel method.


(2) Detect edges using the Laplace method.

6.2 CANNY EDGE DETECTOR
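The MATLAB script below follows the classic Canny stages, as its subplot titles show: Gaussian smoothing, directional gradient masks, gradient magnitude and angle, quantization of the edge direction, non-maximum suppression, and thresholding.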

clc;
clear all;
close all;

I = imread('lena.bmp');
subplot(331); imshow(I); title('ORIGINAL IMAGE');

[r c] = size(I);
s = 18;      % standard deviation of the Gaussian


% 3x3 Gaussian kernel
for z = 1:1:3
    for y = 1:1:3
        g(z,y) = (1/(2*pi*s^2)) * exp(-(z^2 + y^2)/(2*s^2));
    end
end

I1 = conv2(double(I), double(g));
subplot(332); imshow(I1); title('GAUSSIAN IMAGE');

Mx = [-1 -1 -1; 0 0 0; 1 1 1];
y1 = conv2(double(I1), double(Mx), 'same');
subplot(333); imshow(y1); title('X-Direction');

My = [-1 0 1; -1 0 1; -1 0 1];
y2 = conv2(double(I1), double(My), 'same');
subplot(334); imshow(y2); title('Y-Direction');

y3 = y1 + y2;
subplot(335); imshow(y3); title('X+Y');
y4 = abs(y3);
subplot(336); imshow(y4); title('ABS VALUE');

% magnitude & angle (note: s is reused here as the magnitude matrix)
for i = 1:r
    for j = 1:1:c
        s(i,j) = sqrt((y1(i,j)*y1(i,j)) + (y2(i,j)*y2(i,j)));
        t(i,j) = atand(y2(i,j)\y1(i,j));
    end
end

% zero padding
% m = zeros(r+1, c+1);
m = padarray(s, [1 1]);
subplot(337); imshow(m); title('zero padding in Magnitude');

% Edge direction finding
for i = 1:r
    for j = 1:1:c
        if (t(i,j) > 0 && t(i,j) < 22.5) || (t(i,j) < 0 && t(i,j) < -22.5)
            a(i,j) = 0;
        elseif (t(i,j) > 22.5 && t(i,j) < 67.5) || (t(i,j) > -22.5 && t(i,j) < -67.5)
            a(i,j) = 45;
        elseif (t(i,j) > 67.5 && t(i,j) < 112.5) || (t(i,j) > -67.5 && t(i,j) < -112.5)
            a(i,j) = 90;


        elseif (t(i,j) > 112.5 && t(i,j) < 157.5) || (t(i,j) > -112.5 && t(i,j) < -157.5)
            a(i,j) = 135;
        end
    end
end

% maximal suppression
d = zeros(r,c);
for i = 2:r
    for j = 2:c
        if a(i,j) == 0
            if (m(i,j) > m(i-1,j)) || (m(i,j) > m(i+1,j))
                d(i,j) = m(i,j);
            elseif (m(i-1,j) > m(i,j)) || (m(i-1,j) > m(i+1,j))
                d(i,j) = m(i-1,j);
            elseif (m(i+1,j) > m(i,j)) || (m(i+1,j) > m(i-1,j))
                d(i,j) = m(i+1,j);
            end
        end
        if a(i,j) == 90
            if (m(i,j) > m(i,j-1)) || (m(i,j) > m(i,j+1))
                d(i,j) = m(i,j);
            elseif (m(i,j-1) > m(i,j)) || (m(i,j-1) > m(i,j+1))
                d(i,j) = m(i,j-1);
            elseif (m(i,j+1) > m(i,j)) || (m(i,j+1) > m(i,j-1))
                d(i,j) = m(i,j+1);
            end
        end
        if a(i,j) == 45
            if (m(i,j) > m(i+1,j-1)) || (m(i,j) > m(i-1,j+1))
                d(i,j) = m(i,j);
            elseif (m(i+1,j-1) > m(i,j)) || (m(i+1,j-1) > m(i-1,j+1))
                d(i,j) = m(i+1,j-1);
            elseif (m(i-1,j+1) > m(i,j)) || (m(i-1,j+1) > m(i+1,j-1))
                d(i,j) = m(i-1,j+1);
            end
        end
        if a(i,j) == 135
            if (m(i,j) > m(i-1,j-1)) || (m(i,j) > m(i+1,j+1))
                d(i,j) = m(i,j);
            elseif (m(i-1,j-1) > m(i,j)) || (m(i-1,j-1) > m(i+1,j+1))
                d(i,j) = m(i-1,j-1);
            elseif (m(i+1,j+1) > m(i,j)) || (m(i+1,j+1) > m(i-1,j-1))
                d(i,j) = m(i+1,j+1);
            end
        end
    end
end
subplot(338); imshow(d); title('Image before thresh.');

% Edges thresholding


T1 = 0.4;
T2 = 0.2;
b = zeros(r,c);
for i = 1:r
    for j = 1:c
        if d(i,j) <= T1
            b(i,j) = 0;
        elseif d(i,j) >= T2
            b(i,j) = 1;
        else
            if (i ~= 1 || j ~= 1)
                b(i,j) = b(i,j-1);
            end
        end
    end
end
subplot(339); imshow(b); title('Image after thresh.');


OUTPUT: (figure: the nine subplot panels produced by the script: original image, Gaussian image, X-direction, Y-direction, X+Y, absolute value, zero-padded magnitude, image before thresholding, image after thresholding)


7. Conclusion

In this project, we have successfully implemented C code for edge detection, with particular stress on edge enhancement. We implemented the Sobel operator and the Laplace operator in C. From the results it can be concluded that the Laplace operator finds more edges than the Sobel operator.

Along with this, we implemented the Canny algorithm (for sharp edge detection) in MATLAB.


8. References

Web References

a) www.google.com

b) www.wikipedia.org

c) www.pages.drexel.edu

d) http://planetmath.org/encyclopedia

Books

a) Anil K. Jain, Fundamentals of Digital Image Processing. Prentice Hall, 1989.

b) William K. Pratt, Digital Image Processing. Wiley, 1991.

c) Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing. Addison-Wesley, 1992.

d) Dwayne Phillips, Image Processing in C, 2nd Edition.
