Image Processing Suite Project Report


IMAGE PROCESSING SUITE

Submitted in partial fulfillment of the requirements for the award of the degree of

Bachelor of Technology in Computer Science (2007-2011)

Guide Name: Mrs. Tanvi
Submitted by:
Ekta Agarwal (083)
Naina Verma (064)
Gaurav Jain ()
Mayank Kimothi ()
0831562707

CERTIFICATE

This is to certify that the project report entitled Image Processing Suite done by Ekta Agarwal, Gaurav Jain, Naina Verma and Mayank Kimothi is an authentic work carried out by them at Northern India Engineering College under my guidance. The matter embodied in this project work has not been submitted earlier for the award of any degree or diploma to the best of my knowledge and belief.

Date: 05/05/2011
Signature of the guide:
(Mrs. Tanvi)
CS Department (Senior Faculty), NIEC

ACKNOWLEDGEMENT

With immense pleasure, we would like to present this project report to Northern India Engineering College, Delhi. It has been an enriching experience for us to enhance our skills during the development of this major project, which would not have been possible without the goodwill and support of all the people around us. We would like to express our sincere thanks to all those who helped us during the completion of this project. Words are insufficient to express our gratitude towards Mrs. Tanvi, our mentor for this project. We would also like to give our heartfelt thanks to Mr. Saurabh Gupta, HOD of Computer Science. We are very grateful to all the teachers who helped us enhance our software skills and gave us their precious time. We are also grateful to our friends, without whose support we could never have completed this project. We thank our institution and our family members, without whom this project would have been a distant reality, and we extend our thanks to all our well-wishers.

ABSTRACT

Image Processing Suite is an image editor that offers all the standard editing and paint tools, as well as image layers and several other features that are usually not found in free image editors. The program also includes screen-capture tools, an image browser, photo frames, channel adjustments and more. The product belongs to the same family of image-processing software as Adobe Photoshop and CorelDRAW. The project is an effort to build a Photoshop on a smaller scale, with an easy-to-use interface and an easy-to-understand user manual that helps the user play with image pixels, colors and other parameters. A captured image can be loaded into the internal image editor for further editing, saved as an image file (JPG, PNG, GIF, BMP) or automatically uploaded to an FTP server (the image is uploaded and its URL copied to the clipboard).
The objective of this project is to create desktop software for general PC users and imaging professionals to manipulate the parameters of an image using the following features (the features are subject to change): upload an image, save an image, sharpening, edge detection, contrast enhancement, negative, grayscale, sketching, glass view, darkening, flipping, embossing, blurring, histogram equalization, and a user-friendly manual explaining the functionality of each feature.

The above-mentioned features will be discussed in detail in the following sections.

TABLE OF CONTENTS
1. INTRODUCTION
1.1 Purpose
1.2 Objectives
1.3 Special Features
1.4 Product Scope
1.5 Assumptions and Dependencies

2. SOFTWARE DESIGN METHODOLOGY
2.1 Object Model
2.2 Use Case
2.3 Class Association

3. FUNCTIONAL REQUIREMENTS
3.1 Software Requirements
3.2 Hardware Requirements

4. GRAPHICAL USER INTERFACE DESIGN
4.1 The Main Window
4.2 Upload an Image
4.3 Save an Image
4.4 Sharpening
4.5 Edge Detection
4.6 Contrast Enhancement
4.7 Grayscale
4.8 Sketching
4.9 Glass View
4.10 Darkening
4.11 Flipping
4.12 Embossing
4.13 Blurring
4.14 Histogram Equalization
4.15 Negative

5. CODING

6. OTHER FUNCTIONAL REQUIREMENTS

7. CONCLUSION

8. BIBLIOGRAPHY

TABLE OF FIGURES
1. INTRODUCTION
1.1 Convolution Kernel Filter
1.2 Convolution Kernel Matrix
1.3 Edge Detection Filter
1.4 Mean Filter
1.5 RGB Color Space

2. SOFTWARE DESIGN METHODOLOGY
2.1 Use Case Diagram
2.2 Class Diagram

3. FUNCTIONAL REQUIREMENTS
3.1 Java Architecture
3.2 Java Virtual Machine
3.3 Metal Motif Windows

4. GRAPHICAL USER INTERFACE DESIGN
4.1 The Main Window
4.2 Sharpening
4.3 Edge Detection
4.4 Contrast Enhancement
4.5 Grayscale
4.6 Sketching
4.7 Glass View
4.8 Darkening
4.9 Flipping
4.10 Embossing
4.11 Blurring
4.12 Histogram Graph
4.13 Histogram Equalization
4.14 Negative

Chapter-1 INTRODUCTION

1.1 Purpose
The purpose of this SRS document is to provide a detailed overview of our software product, its parameters and goals. This document describes the project's target audience, its user interface, and its hardware and software requirements.
1.2 Objectives
To explore and implement a basic image-processing program, with the aim of giving the user a basic knowledge of the fundamental techniques of image filtering.
To provide the user with an easy-to-use graphical user interface (GUI) with which images can be filtered using ready-loaded filters or custom filters created by the user.
To gain experience in the Java programming language.
To create the project as an applet for use on the web, so users can log on to our home page and use the program.
Image processing is a highly processor-intensive activity: there are thousands of calculations to complete when filtering an image with even a simple 3x3 convolution matrix. There are many commonly available image-processing libraries that implement many of the functions within this project, such as DirectX, WinG and Intel's own Image Processing Library (IPL), which use hardware functions on Intel CPUs. This document is intended for professionals or general PC users with a keen interest in image processing and photography who would like to change certain parameters and the composition of their images.

1.3 Special Features
1.3.1 Filtering
There are two main types of image filtering, linear and non-linear. The method implemented in our project is the linear method, which is achieved by the use of a convolution kernel. Image filtering is generally used to clean up or enhance images, for example by removing the effects of noise. There are many different filters available, from edge detection and enhancement to blur and emboss filters. Each filter is implemented as a convolution mask, or convolution kernel.

O1 O2 O3
O8 O9 O4
O7 O6 O5

Figure 1.1 Convolution Kernel
The above mask is a simple 3x3 mask/kernel, where O9 is the pixel in question. This project uses different filters, from 3x3 kernels (above) to larger kernels such as 9x9 and 11x11. Most off-the-shelf image packages like Paint Shop Pro do not support the larger masks due to the massive number of calculations required to check/correct each pixel. The larger the mask, the larger the number of calculations and, ultimately, the longer it takes to filter an image.
1.3.2 Standard Convolution Algorithm
The basic operation of a convolution-kernel filtering algorithm is built around a kernel, an NxN matrix where N is an odd number. The matrix holds the filter coefficients that will be applied to the image. The matrix is shifted over the image one pixel at a time, and the middle value of the matrix is calculated during each iteration. This involves getting the pixel value at the centre of the matrix as well as, in the case of a 3x3 kernel, the values of its eight neighboring pixels. Each pixel is multiplied by its value (weight) in the kernel, and these products are added together. The result is divided by a divisor, and finally a biasing factor is added. The final result becomes the new value of the centre pixel; the matrix then slides over to the next pixel and the process is repeated.

Figure 1.2 Convolution Kernel Matrix
The convolution kernel passes across the image from left to right. It then moves down to the next row of pixels and continues until the full image has been processed.
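The pass just described can be sketched in plain Java. This is a hedged illustration, not the project's actual code: the class and method names (Convolution, convolve) are our own, border pixels are simply copied unchanged for brevity, and results are clamped to the 0..255 range of a single channel.

```java
// Sketch of the standard convolution algorithm: the NxN kernel (N odd) is
// slid over the image one pixel at a time; each kernel weight is multiplied
// by the pixel beneath it, the products are summed, divided by a divisor,
// and a bias is added. The result replaces the centre pixel.
public class Convolution {

    public static int[][] convolve(int[][] image, int[][] kernel,
                                   int divisor, int bias) {
        int h = image.length, w = image[0].length;
        int n = kernel.length;          // kernel is n x n, n odd
        int half = n / 2;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)     // start from a copy so the border
            out[y] = image[y].clone();  // pixels are left untouched
        for (int y = half; y < h - half; y++) {
            for (int x = half; x < w - half; x++) {
                int sum = 0;
                for (int ky = 0; ky < n; ky++)
                    for (int kx = 0; kx < n; kx++)
                        sum += kernel[ky][kx]
                             * image[y + ky - half][x + kx - half];
                int v = sum / divisor + bias;
                out[y][x] = Math.min(255, Math.max(0, v)); // clamp to 0..255
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] image = {
            {10, 10, 10},
            {10, 50, 10},
            {10, 10, 10}
        };
        // a 3x3 box of ones with divisor 9 is the mean filter of section 1.3.3
        int[][] box = { {1, 1, 1}, {1, 1, 1}, {1, 1, 1} };
        int[][] result = Convolution.convolve(image, box, 9, 0);
        System.out.println(result[1][1]); // centre becomes the neighbourhood average
    }
}
```

With a 9x9 or 11x11 kernel the same loops run, which is why the larger masks mentioned above are so much more expensive: the cost per pixel grows with the square of the kernel size.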

1.3.3 Common Filters
Edge Detection
Psychophysical experiments indicate that a photograph or visual signal with crisper edges is often more subjectively pleasing than an exact photometric reproduction, in which edges are not as sharp to the eye. By applying a discrete convolution filter such as one of the following to an image, edges become much clearer.

H1 =  0 -1  0      H2 = -1 -1 -1      H3 = -1 -2 -1
     -1  5 -1           -1  9 -1           -2  5 -2
      0 -1  0           -1 -1 -1           -1 -2 -1

Figure 1.3 Edge Detection Filters
Mean Filter
The average filter is also known as a mean, box or blur filter. Its most common use is to reduce noise in an image. The averaging filter does exactly as its name suggests: it moves over each pixel in the image and assigns the middle pixel in the kernel the average value of itself and its eight adjacent neighbors. The kernel coefficients can be seen below. There are, however, a few drawbacks to using this filter. The first is that edges within the image may become blurred because of the changing-value regions in those areas. Also, if one of the neighboring values is highly unrepresentative of the area, the average won't be a true average.

Figure 1.4 Mean Filter
1.4 Product Scope
Our project goal is to create an image-processing suite that will enable the user to enhance or lessen the quality of an image by changing certain parameters or adding new ones. Java is used as the basic programming platform for the software. Java provides a vast library of tools for developing the required user interface of our software, as well as the various imaging algorithms necessary.
Java has become increasingly popular over the last few years due to its ease of programming, its cross-platform capabilities and its ability to be used on the Internet as an applet in a web page. One of the downfalls of Java is that it is much slower than native code; but as time goes by, Java is becoming faster and now isn't far behind the speed of C or C++. Key benefits of Java: platform-independent code ("write once, run anywhere"), object orientation, and ease of coding (no pointers or standalone functions).
We decided we could gain more knowledge by learning Java than we would by creating the project in any other language. Using Java, we could also experiment with platform-independent code, so that this package would run on multiple operating systems such as Windows NT/95/98 and Linux, with the possibility of creating an applet for use on the web.
Many people believe that in the near future, software products such as word processors will be run centrally through a web site rather than installed on the local client computer. When people need to use these products, they simply log onto the web site and start working. There are advantages for both parties. On the software producer's side, it is easier to create software releases, updates and patches, as the bulk of the program is centralized on the server. The client/user can benefit by using a cheaper computer; payment can be based on an hourly/monthly charge; and users can try out a product without purchasing it. If users don't like the product, they don't have to use or pay for it. By creating our project as an applet for use over the web, people can log onto our home page and use the program without having to download and install it onto their computers.
A large amount of time was spent becoming familiar with programming in the Java language, and researching and analyzing the methods by which this project could be implemented. Computer graphics is a highly processor-intensive activity. As will become clear in our discussion of filtering techniques, even a simple algorithm can involve enough computations to take quite some time.

1.5 Assumptions and Dependencies
This being a software development project using new technologies, a sizeable proportion of time was spent researching strategies for implementation. The project was developed using Java and UML. Time spent in analysis and development of a logical model can highlight many pitfalls prior to coding, and therefore rewards us with an easily maintainable, modularized application that lends itself more easily to upgrades in the future. UML gives these benefits to application development, which was one of the main reasons it was chosen: UML gives us object models of how the application should work before coding takes place, hence removing design problems before we start coding.
1.5.1 Image Filtering
Signals transferred over almost all forms of communication can be subject to noise; an image may suffer such noise and interference from several sources. These noise effects can be minimized by statistical filtering techniques or by applying ad-hoc spatial processing techniques. Image noise arising from noisy sensors or channel transmission errors usually appears as discrete, isolated pixel variations that are not spatially correlated. Pixels that are in error often appear markedly different from their neighbors, and many noise-cleaning algorithms make use of this fact. By examining a pixel and checking whether its brightness is greater than the average brightness of its immediate neighbors by some threshold level, we can see whether the pixel is valid or whether it may be noise. If the pixel is noise, we replace it with the average of its neighbors. Noisy images have a higher spatial frequency spectrum than normal images, hence a simple low-pass filter can smooth out the noise.
1.5.2 Color Space
A color space is a mathematical representation of a set of colors.
We will be dealing with two fundamental color models: RGB (Red, Green, Blue), used in color computer graphics and color television, and YUV, used in broadcast television. Color spaces can be converted between each other, but video quality is lost with each conversion. Care should be taken to minimize the number of color-space conversions in the video encoding and decoding path, so as to lose as little quality as possible.
1.5.3 RGB Color Space
Three fundamental color models are RGB (used in color computer graphics and color television); YIQ, YUV, or YCbCr (used in broadcast and television systems); and HSV (Hue, Saturation, Value). All of the color spaces in common use can be derived from the RGB information supplied by devices such as cameras and scanners. When light refracts through a prism, its color components separate to create a rainbow. This rainbow is the spectrum particular to white light and the color range that the human eye can perceive. The colors proceed across the spectrum in the order red, orange, yellow, green, blue, indigo and violet, giving the acronym ROYGBIV. Of these colors, the primaries are red, green and blue, and the color model for light is referred to as the RGB model.
The red, green, and blue (RGB) color space is widely used throughout computer graphics and imaging. Red, green and blue are the three primary additive colors (individual components that are added together to form a desired color) and are represented by a three-dimensional Cartesian coordinate system (the color cube). A diagonal from one corner of the cube (black) to the other (white) represents the various grey levels. The RGB color space is the most prevalent choice for graphics frame buffers because color CRTs use red, green and blue phosphors to create the desired color; the choice of the RGB color space for a graphics frame buffer therefore simplifies the architecture and design of the system.
Also, a system designed using the RGB color space can take advantage of a large number of existing software routines, since this color space has been around for a number of years. However, RGB is not very efficient when dealing with "real-world" images. All three RGB components need to be of equal bandwidth to generate any color within the RGB color cube; the result is a frame buffer that has the same pixel depth and display resolution for each RGB component. Processing an image in the RGB color space is also not the most efficient method. For example, to modify the intensity of a given pixel, the three RGB values must be read from the frame buffer, the intensity or color calculated, the desired modifications performed, and the new RGB values calculated and written back to the frame buffer.

Figure 1.5 RGB Color Cube.
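The read-modify-write cycle described above for RGB intensity changes can be sketched in a few lines of Java. The class and method names (Intensity, scalePixel, darken) are our own illustration, not part of the project's code; the sketch unpacks the three 8-bit channels from a packed int, scales each, clamps to 255, and packs them back.

```java
import java.awt.image.BufferedImage;

// Modifying the intensity of an RGB pixel: all three components must be
// read, scaled, clamped and written back, as described above.
public class Intensity {

    public static int scalePixel(int rgb, double factor) {
        int r = (rgb >> 16) & 0xFF;     // unpack the three channels
        int g = (rgb >> 8) & 0xFF;
        int b = rgb & 0xFF;
        r = (int) Math.min(255, r * factor);   // scale and clamp each one
        g = (int) Math.min(255, g * factor);
        b = (int) Math.min(255, b * factor);
        return (r << 16) | (g << 8) | b;       // pack them back
    }

    // Applying the same scale to every pixel gives the suite's
    // darkening feature (factor < 1) or a simple brightening (factor > 1).
    public static void darken(BufferedImage img, double factor) {
        for (int y = 0; y < img.getHeight(); y++)
            for (int x = 0; x < img.getWidth(); x++)
                img.setRGB(x, y, scalePixel(img.getRGB(x, y), factor));
    }

    public static void main(String[] args) {
        // mid grey at half intensity becomes a darker grey
        System.out.printf("%06x%n", scalePixel(0x808080, 0.5)); // prints 404040
    }
}
```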

Chapter-2 SOFTWARE DESIGN METHODOLOGY
Before starting to code the project, a decision was made to use OOD (Object-Oriented Design) to design the main points of the project, as the language to be used was an object-oriented language. Object-Oriented Design practice in software engineering is essential for keeping down the cost of the ever-increasing amount of software being developed now that will have to be maintained in the future.
The project was designed using different parts of the OMT (Object Modeling Technique) methodology. OMT is used by many software engineers to design the interaction between different parts of a project before starting work on creating it. Object-Oriented Analysis (OOA) is concerned with generating a problem statement and investigating what needs to be done, while Object-Oriented Modeling (OOM) addresses the needs of the analysis model and is based on three different views of the system: the object, dynamic and functional models. Many problems can be avoided by using good design practices before the software development/coding stage.

2.1 Object Model
The Object Model shows the static data structure of a system. The model describes object classes and their relationships to each other. From the general description of the project problem statement above, a list of objects and classes can be derived. The important classes are:
Image processing suite (creates the user interface)
Picture (contains methods to load and display the image)
Filter (responsible for filtering images and various processes)
Histogram (generates the histogram of the image)
Help (displays the help options to the user)

2.2 Use Case

Figure 2.1 Use-Case Diagram
2.2.1 User Opens Image (BufferedImage, RGB color model)
Actors: User.
Preconditions: The program is running. The image exists on a hard drive or network drive to which the user has access and read permission.
Trigger: The image is loaded from the file and displayed on the screen. An image object is created.
Actions: A BufferedImage is created from a string (path) of the image to be opened. The RGB color model is loaded and the program creates a SplitImageComponent that will display the image. This frame is now set as the latest frame opened by the desktop manager.
Postconditions: The user can now see the image on the screen, apply different filters to it, and create a histogram of it.

2.2.2 Display a Histogram
Actors: User.
Preconditions: A BufferedImage in the RGB color model is open on the desktop.
Trigger: Histogram is selected from the menu bar.
Actions: The user is asked which image the histogram should be created for, either the original image or the filtered one. The histogram is then displayed on the desktop.
Postconditions: If there is an image on the screen, the histogram for this image is now displayed on the desktop.
2.2.3 Filter an Image
Actors: User.
Preconditions: The program is running and the user has an image open and displayed on the screen. The user has selected which filter to apply, and has checked the accumulate box on or off depending on whether the filter is to be accumulated with the previously filtered image.
Trigger: The user clicks on the selected filter button.
Actions: The filter is applied to either the previously filtered image or the original image, depending on whether the accumulate checkbox is selected. If there is no previously filtered image, the original image is filtered.
Postconditions: The filtered image(s) are loaded into the output frame. The user can click on the screen to display the filtered image side by side with the original to compare the difference.
2.2.4 Distortion Filters
Actors: User.
Preconditions: The program is running.
Trigger: The user clicks on the distortion filters menu item on the menu bar.
Actions: The image is distorted as selected by the user.
Postconditions: The user can see the changes to the initial image in the output frame.

2.2.5 Save an Image
Actors: User.
Preconditions: The program is running and an image is present in the output frame.
Trigger: The user clicks on the save icon in the file menu of the internal frame.
Actions: The image is assigned a name by the user.
Postconditions: The image is saved at the location selected by the user. If there is no image in the output frame, an error message is displayed.
2.3 Class Associations

Figure 2.2 Class Diagram

Chapter-3 FUNCTIONAL REQUIREMENTS
3.1 Software Requirements
Java is a programming language expressly designed for use in the distributed environment of the Internet. It was designed to have the "look and feel" of the C++ language, but it is simpler to use than C++ and enforces a completely object-oriented view of programming. Java can be used to create complete applications that may run on a single computer or be distributed among servers and clients in a network. It can also be used to build small application modules, or applets, for use as part of a web page. Applets make it possible for a web page user to interact with the page.
3.1.1 Java History and Development
Java milestones:
1990: Programmer Patrick Naughton starts "Project Green" at Sun Labs.
1991: Programmer James Gosling creates a new language ("Oak"), based on C++.
Mid 1993: Release of the Mosaic WWW browser from NCSA.
1994: Rise in popularity of the WWW. Oak renamed "Java"; prototype Java WWW browser.
January 1995: HotJava/Java Development Kit released for Solaris.
Summer 1995: Linux and Windows 95 ports of Java available.
Autumn 1995: Java Beta 1 released. Java applet support announced for Netscape Navigator 2.0.
December 1995: Sun/Netscape announce JavaScript. Microsoft and IBM announce their intention to license Java technology.
23 January 1996: Java 1.0 released.
James Gosling, an employee of Sun, started a new project: a new language based on C++ that eliminated many of that language's shortcomings. The language was designed with many goals in mind; it was to be fast, portable and safe to use for embedded systems. Its name was Oak (later renamed Java due to a trademark clash). In 1993 the Mosaic browser was released and started one of the biggest global infatuations, the World-Wide-Web (WWW). Later that same year, a web browser was written in Oak, and with its release Oak's potential for Internet programming became apparent.
It took C++ ten years to become as popular as Java became in 20 months. Java differs from C++ in several main ways:
No standalone functions (being an entirely object-oriented language)
No pointers
No global methods or variables
No operator overloading
No multiple inheritance
No pre-processor
No header files
When you compile a Java program, you don't get an exe file; you get a class file. This class file is highly portable binary code. Pure Java binaries depend only on the Java Virtual Machine (JVM); once this interpreter has been ported to the target architecture, the Java binaries will run unmodified. The JVM is a software interpreter that presents (through either hardware or software) a set of defined features upon which Java code relies.

Figure 3.1 Java Architecture (Java bytecode and the Java API sit on the Java Virtual Machine, which, alongside the web browser and native methods, runs on top of the operating system and hardware).

(The JVM itself, depicted in Figure 3.2 below, comprises a class loader, bytecode verifier, interpreter/JIT compiler, libraries and security manager, all running on the host platform.)

Figure 3.2 Java Virtual Machine
3.1.2 What is Swing and the JFC?
The Abstract Window Toolkit (AWT) was the original toolkit packaged with Java for developing user interfaces (UIs). This toolkit was not originally designed to be a high-powered user-interface toolkit used by more than half a million developers; it was designed to support the development of simple user interfaces for simple applets used in web pages. A lot of the 'normal' components found today in almost all UI toolkits weren't included in the AWT, such as scroll panes, printing support, keyboard navigation and popup menus. The AWT was based on a peer-based architecture: peers are native user-interface components delegated to by wafer-thin AWT objects, which left the AWT classes a mere shell around somewhat complex native peers. This design allowed the Java creators to turn out components in record time (six weeks). These components are called heavyweight components, as each is associated with a native peer component and rendered in its own native window.
Realizing that if something was not done with the UI toolkit the Java community was likely to split over a standard user-interface toolkit, JavaSoft struck a deal with Netscape, who had been working on a set of lightweight classes based on concepts from NEXTSTEP's user-interface toolkits. This deal brought into place the Java Foundation Classes (JFC), which include the Swing toolkit: a collection of over 40 lightweight components, four times the number provided by the AWT. In addition to providing lightweight replacements for the AWT's heavyweights, Swing provides a wealth of additional components to facilitate the development of graphical user interfaces. Lightweight components do not have native peers and aren't rendered in their own heavyweight container windows.
Because these lightweight components are rendered in their container's window and not a window of their own, lightweight components must ultimately be contained in a heavyweight container. As a result, Swing frames, applets and dialogs must be heavyweight to provide a window into which lightweight Swing components can draw. Swing also supports the concept of a pluggable look and feel: by modifying the look and feel of an application, users can get the Windows, Motif, Macintosh or Metal (Java's own) look and feel on platforms other than their native one.
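Swing's pluggable look and feel amounts to installing a different LookAndFeel class through UIManager before components are created. The sketch below is a hedged illustration (the class name LookAndFeelDemo is ours); the cross-platform look and feel, Metal, is guaranteed to be available on every JVM, while Motif and Windows depend on the platform.

```java
import javax.swing.UIManager;

// Selecting a pluggable look and feel: the same Swing components render
// as Metal (Java's own), Motif, or the native Windows style depending on
// which LookAndFeel class is installed.
public class LookAndFeelDemo {
    public static void main(String[] args) throws Exception {
        // Metal, the cross-platform look and feel, is always present
        String metal = UIManager.getCrossPlatformLookAndFeelClassName();
        UIManager.setLookAndFeel(metal);
        System.out.println("Current: " + UIManager.getLookAndFeel().getName());

        // list whatever look and feels this JVM ships with
        for (UIManager.LookAndFeelInfo info : UIManager.getInstalledLookAndFeels())
            System.out.println("  installed: " + info.getName());
    }
}
```

In an application, the look and feel is normally set once at startup, before any frames are built, so that all components pick up the chosen style.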

Figure 3.3 Two different look and feels of Java (Metal, Motif, Windows)
The downside to Swing is its speed. Because the components are lightweight and written entirely in Java, they are rendered much more slowly than their AWT counterparts, and applications that use Swing can run notably slower than if they used the AWT toolkit. However, this does not pose any big problem on today's standard desktop computers of 600 MHz and up.
3.1.3 What is Java 2D?
Also included in the Java Foundation Classes (JFC) is the Java 2D API, used for developing and manipulating graphics. The Java 2D Application Programming Interface (API) is a set of classes that can be used to create high-quality graphics. It includes features such as geometric transformation, alpha compositing, bi-directional text layout, image processing and antialiasing, among many others. Before Java 2D, the AWT's graphics toolkit had some serious limitations: few fonts were supported; rotation and scaling weren't included; all lines were drawn with a single-pixel thickness; and gradients, special fills and patterns weren't included. Along with the other Java Media APIs, the Java 2D API was developed to empower developers to create applications that incorporate advanced user interfaces. The design goals for the Java 2D API include: supporting high-quality, platform-independent graphics, text and images; delivering a simple and compact 2D graphics and imaging solution; leveraging Java's "write once, run anywhere" paradigm to provide consistent access to 2D graphics across major Java platforms; and complementing other Java technologies, thus providing an integrated media solution for Java.

3.1.4 Java 2D Features
Graphics: antialiased rendering, Bezier paths, transforms, compositing, arbitrary fill styles, stroking parameters for lines and curves, transparency.

Text: extended font support, advanced text layout.

Images: flexible in-memory image layouts; extended imaging operations such as convolution, lookup tables, and affine transformations.

Devices: hooks for supporting arbitrary graphics devices such as printers and screens.

Color Management: ICC profile support, color conversion between different color spaces, arbitrary color spaces.

Figure 3.4 Java 2D classes
The Java 2D API provides many benefits to developers who want to incorporate graphics, text and images into their applications and applets; in other words, it benefits virtually all Java developers. By enabling the incorporation of sophisticated graphics and text, the Java 2D API makes it possible to create Java programs that provide a richer end-user experience. With the Java 2D API, you have the necessary support to create real-world applications that meet the expectations of today's user for font, graphics and image capabilities. The Java 2D API is part of a set of class libraries designed to let you develop full-featured Java programs. With these libraries, developers have the essential tools to build applications that meet market needs, and can reach a broad audience running applications on any Java-enabled platform.
3.1.5 Images
The Java 2D API provides a full range of features for handling images by supplementing the image-handling classes in java.awt and java.awt.image with several new classes, including BufferedImage, Tile, Channel, ComponentColorModel and ColorSpace. These classes give us greater control over images. They allow us to create images in color spaces other than RGB and to characterize colors for accurate reproduction. The Java 2D BufferedImage class allows us to specify exactly how pixels are laid out in an in-memory image. Like all other graphic elements, images are altered by the Transform object associated with the Graphics2D object when they are drawn; this means that images can be scaled, rotated, skewed or otherwise transformed just like text and paths. However, images maintain their own color information, represented by a color model for interpreting color data, rather than using the current color.
Images can also be used as the rendering target of a Graphics2D. We make extensive use of BufferedImage, BufferedImageOp, Transform, ConvolveOp, Kernel, Graphics2D and LookupOp in our project, all of which are classes in the Java 2D package.
3.1.6 Naming Conventions
The Java language has standard naming conventions. The reason for these conventions is so people can read code more easily and understand it more quickly than if they were not in place. Class names start with a capital letter, as does each word in the name: the class for a buffered image is BufferedImage, the file input stream is FileInputStream, and so on. Object names start with a small letter, with the first letter of each subsequent word capitalized: an object named "bits per pixel" would be bitsPerPixel. Packages are groups of classes and are named like objects, so java.awt.image is a package while java.awt.Image is a class. We have used these naming conventions throughout our code.
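Of the classes named above, LookupOp is a good example of how compactly Java 2D expresses a per-pixel operation. The sketch below (our own class Negative, not the project's code) implements the suite's negative feature: a 256-entry lookup table mapping each channel value v to 255 - v, applied to every band of the image.

```java
import java.awt.image.BufferedImage;
import java.awt.image.ByteLookupTable;
import java.awt.image.LookupOp;

// Photographic negative via LookupOp: a single inverted lookup table is
// applied to each color band, so red becomes cyan, black becomes white, etc.
public class Negative {

    public static BufferedImage apply(BufferedImage src) {
        byte[] invert = new byte[256];
        for (int i = 0; i < 256; i++)
            invert[i] = (byte) (255 - i);          // v -> 255 - v
        LookupOp op = new LookupOp(new ByteLookupTable(0, invert), null);
        return op.filter(src, null);
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xFF0000);                 // pure red
        BufferedImage neg = apply(img);
        System.out.printf("%06x%n", neg.getRGB(0, 0) & 0xFFFFFF); // its complement
    }
}
```

Because the table is applied per channel, the same three lines of setup cover grayscale and color images alike, which is the main attraction of LookupOp over hand-written pixel loops.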

3.2 Hardware Requirements
The user will use the mouse as a selection device, and the keyboard will be used to name the image to be uploaded. There are no communication interfaces.

Chapter-4 GRAPHICAL USER INTERFACE DESIGN
This chapter discusses the user interface design of our software.

4.1 The Main Window
4.1.1 Description
On running the software, the main window looks as shown in the diagram below. The window has two basic features: the menu bar, and an area divided into two portions, the input image and the output image.
4.1.2 Illustration

Figure 4.1 The Main Window

4.2 Upload an Image

4.2.1 Description
The user can upload any compatible image format for further processing of the image.

4.2.2 Stimulus/Response Sequences
The feature can be found under the File option of the menu bar. On clicking the Upload button, a new window opens for the user to define the path of the image (the location of the image) to be uploaded.

4.3 Saving an Image

4.3.1 Description
The processed image can be saved to any directory on the PC. The image will be saved in the same format as it was uploaded.

4.3.2 Stimulus/Response Sequences
The feature can be found under the File option of the menu bar. On clicking the Save As button, a new window opens for the user to set the path where the image is to be saved.

4.4 Image Sharpening

4.4.1 Description
The main aim of image sharpening is to highlight fine detail in the image, or to enhance detail that has been blurred (perhaps due to noise or other effects, such as motion).

4.4.2 Illustration
Input Image  Output Image
Figure 4.2 Image Sharpening

4.5 Edge Detection

4.5.1 Description
The purpose of edge detection is to highlight the edges of different objects in an image, where color changes and shadows produce rapid changes in the color and/or intensity of the image. Other features, such as the edges of letters on a sign, will also trigger edge detection because of the strong color contrast between the letters and the background. Our edge detection feature picks up the pixel changes in a picture and displays the picture's edges by applying a suitable algorithm to it.

4.5.2 Illustration
Input Image  Output Image
Figure 4.3 Edge Detection
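Since the Process menu in Chapter 5 offers Sobel filters, the edge-detection behaviour can be sketched with Java 2D's ConvolveOp and the horizontal Sobel kernel; the class name SobelDemo and the synthetic test image are illustrative, not part of the report's code:

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class SobelDemo {
    // Apply the horizontal Sobel operator (GX), which responds to vertical edges.
    public static BufferedImage sobelGX(BufferedImage src) {
        float[] gx = {
            -1f, 0f, 1f,
            -2f, 0f, 2f,
            -1f, 0f, 1f
        };
        ConvolveOp op = new ConvolveOp(new Kernel(3, 3, gx), ConvolveOp.EDGE_NO_OP, null);
        return op.filter(src, null);
    }

    public static void main(String[] args) {
        // Build a test image: left half black, right half white (a vertical edge).
        BufferedImage img = new BufferedImage(8, 8, BufferedImage.TYPE_INT_RGB);
        for (int x = 4; x < 8; x++)
            for (int y = 0; y < 8; y++)
                img.setRGB(x, y, 0xffffff);

        BufferedImage edges = sobelGX(img);
        // Pixels next to the edge respond strongly; flat regions stay black.
        System.out.println((edges.getRGB(3, 4) & 0xff) > (edges.getRGB(1, 4) & 0xff)); // prints true
    }
}
```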

4.6 Contrast Enhancement

4.6.1 Description
This filter changes the brightness and contrast of an image. Contrast enhancement involves changing the original pixel values so that more of the available range is used, thereby increasing the contrast between targets and their backgrounds.

4.6.2 Illustration
Input Image  Output Image
Figure 4.4 Contrast Enhancement
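One standard Java 2D way to implement this kind of brightness/contrast change (an assumption about the implementation, not taken from the report's code) is RescaleOp, which multiplies each channel by a scale factor and adds an offset; the factors below are illustrative:

```java
import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;

public class ContrastDemo {
    // newValue = oldValue * scale + offset, clamped to [0, 255] per channel.
    public static BufferedImage enhance(BufferedImage src, float scale, float offset) {
        RescaleOp op = new RescaleOp(scale, offset, null);
        return op.filter(src, null);
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, (100 << 16) | (100 << 8) | 100); // mid grey

        BufferedImage out = enhance(img, 1.5f, 0f);
        System.out.println(out.getRGB(0, 0) & 0xff); // prints 150
    }
}
```

A scale above 1 stretches values away from black (more contrast); an offset shifts overall brightness.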

4.7 Image Grayscale

4.7.1 Description
This filter converts an image to a grayscale image. To do this it finds the brightness of each pixel and sets the red, green and blue components of the output to that brightness value. The weighting used by the grayscale filter is

Luma = (77·Red + 151·Green + 28·Blue) / 256

4.7.2 Illustration
Input Image  Output Image
Figure 4.5 Gray Scale
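The weights 77, 151 and 28 sum to 256, so the division by 256 can be done with a right shift. A per-pixel sketch of the conversion (class and method names are illustrative):

```java
import java.awt.image.BufferedImage;

public class GrayscaleDemo {
    // Integer luma approximation: (77*R + 151*G + 28*B) / 256.
    public static int luma(int r, int g, int b) {
        return (77 * r + 151 * g + 28 * b) >> 8;
    }

    // Replace every pixel's R, G and B with its luma value, in place.
    public static void toGrayscale(BufferedImage img) {
        for (int x = 0; x < img.getWidth(); x++) {
            for (int y = 0; y < img.getHeight(); y++) {
                int rgb = img.getRGB(x, y);
                int y8 = luma((rgb >> 16) & 0xff, (rgb >> 8) & 0xff, rgb & 0xff);
                img.setRGB(x, y, (y8 << 16) | (y8 << 8) | y8);
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(luma(255, 255, 255)); // prints 255 (white stays white)
        System.out.println(luma(255, 0, 0));     // prints 76 (pure red is fairly dark)
    }
}
```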

4.8 Image Sketching

4.8.1 Description
Sketching is the loose drawing of the main features of an object. The software provides the same effect that can be produced while drawing with a pencil.

4.8.2 Illustration
Input Image  Output Image
Figure 4.6 Image Sketching

4.9 Image Glass View

4.9.1 Description
This feature gives the effect of the image being seen through glass.

4.9.2 Illustration
Input Image  Output Image
Figure 4.7 Glass View
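A common way to produce a glass-like distortion (an assumption about how the effect is implemented, since the report does not spell it out) is to replace each pixel with a randomly chosen nearby pixel; the class name GlassDemo and the fixed seed are illustrative:

```java
import java.awt.image.BufferedImage;
import java.util.Random;

public class GlassDemo {
    // Replace each pixel with a random neighbour within `radius` pixels.
    public static BufferedImage glass(BufferedImage src, int radius, long seed) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Random rnd = new Random(seed);
        for (int x = 0; x < w; x++) {
            for (int y = 0; y < h; y++) {
                // Displace the coordinates randomly, clamped to the image bounds.
                int nx = Math.min(w - 1, Math.max(0, x + rnd.nextInt(2 * radius + 1) - radius));
                int ny = Math.min(h - 1, Math.max(0, y + rnd.nextInt(2 * radius + 1) - radius));
                out.setRGB(x, y, src.getRGB(nx, ny));
            }
        }
        return out;
    }
}
```

The small random displacements scramble fine detail while preserving large shapes, which is what gives the "seen through glass" look.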

4.10 Image Darkening

4.10.1 Description
This feature allows the user to increase or decrease the inherent brightness of the image.

4.10.2 Illustration
Input Image  Output Image
Figure 4.8 Darkening

4.11 Image Flipping

4.11.1 Description
This feature flips the image laterally as well as vertically.

4.11.2 Illustration (flip laterally)
Input Image  Output Image
Figure 4.9 Flipping

4.12 Image Embossing
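A lateral (horizontal) flip simply mirrors the column index, and a vertical flip mirrors the row index. A sketch of the horizontal case (the class name FlipDemo is illustrative):

```java
import java.awt.image.BufferedImage;

public class FlipDemo {
    // Mirror the image left-to-right: column x maps to column (width - 1 - x).
    public static BufferedImage flipHorizontal(BufferedImage src) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage out = new BufferedImage(w, h, src.getType());
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                out.setRGB(w - 1 - x, y, src.getRGB(x, y));
        return out;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(3, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xff0000); // red pixel at the left edge
        BufferedImage flipped = flipHorizontal(img);
        System.out.println((flipped.getRGB(2, 0) & 0xffffff) == 0xff0000); // prints true
    }
}
```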

4.12.1 Description
Image embossing is a computer graphics technique in which each pixel of an image is replaced either by a highlight or by a shadow, depending on light/dark boundaries in the original image. Low-contrast areas are replaced by a gray background.

4.12.2 Illustration
Input Image  Output Image
Figure 4.10 Embossing

4.13 Image Blurring

4.13.1 Description
Blurring is what happens when a camera is out of focus: the captured image looks hazy. What should be seen as a sharp point gets smeared out, usually into a disc shape. In image terms this means that each pixel in the source image gets spread over and mixed into surrounding pixels. Another way to look at this is that each pixel in the destination image is made up of a mixture of surrounding pixels from the source image.

4.13.2 Illustration
Input Image  Output Image
Figure 4.11 Blurring
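The pixel mixing described above is exactly a convolution with a kernel whose weights sum to 1. A 3×3 box blur, where each output pixel is the plain average of its neighbourhood, is the simplest case (the class name BlurDemo is illustrative):

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class BlurDemo {
    // Each output pixel is the average of the 3x3 neighbourhood in the source.
    public static BufferedImage boxBlur(BufferedImage src) {
        float ninth = 1.0f / 9.0f;
        float[] kernel = {
            ninth, ninth, ninth,
            ninth, ninth, ninth,
            ninth, ninth, ninth
        };
        ConvolveOp op = new ConvolveOp(new Kernel(3, 3, kernel), ConvolveOp.EDGE_NO_OP, null);
        return op.filter(src, null);
    }
}
```

A Gaussian kernel (centre-weighted instead of uniform) gives a smoother result, but the mechanism is identical.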

4.14 Histogram Equalization

4.14.1 Description
Histogram equalization is a technique for increasing the detail of an image that is lacking in contrast. It changes the intensity levels in the image so that the image conforms to some desired histogram. Histogram equalization improves the quality of dithered images in one-bit mode; sometimes, however, it increases the contrast too much.

By clicking the histogram button in the toolbar, a dialog appears asking the user whether to draw the histogram of the original image or of the filtered image. If the YUV frame is open, histograms are drawn only for the selected channels (checked check boxes). The user can open as many histogram windows as desired, so histograms of different images can be compared side by side. There are four radio buttons (RGB, R, G and B) at the top of the histogram window. By selecting different buttons, four different graphs can be drawn: one for the RGB value and one for each of the R, G and B values.

Figure 4.12 Histogram Graph

The maximum count of any single color value is displayed in the top-left corner of the histogram window. This number gives the user an idea of the scale of the histogram plot. My program uses 32 bits to store each pixel of an image: the top byte holds the alpha component, the next byte the red component, then green, and finally blue. The number of possible RGB colors is therefore greater than 16 million (24-bit). When I display a histogram, I only look at 256 different values. This works fine when displaying the histogram of the R, G or B channel individually. When I wish to display the histogram of the combined RGB values, I first convert the values from 24-bit down to 8-bit (256 levels) and then display them. This explains why the x-axis ranges from 0 to 255.

4.14.2 Illustration
Input Image  Output Image
Figure 4.13 Histogram Equalization
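The remapping of intensity levels can be sketched on a single 8-bit channel: build the histogram, form its cumulative sum, and use the normalised cumulative distribution as the new intensity for each level. The class name EqualizeDemo is illustrative:

```java
public class EqualizeDemo {
    // Equalise an array of 8-bit grey levels in place.
    public static void equalize(int[] pixels) {
        // Count how often each of the 256 levels occurs.
        int[] hist = new int[256];
        for (int p : pixels) hist[p]++;

        // Cumulative distribution function of the histogram.
        int[] cdf = new int[256];
        int sum = 0;
        for (int i = 0; i < 256; i++) {
            sum += hist[i];
            cdf[i] = sum;
        }

        // Map each level so that the output histogram is roughly flat.
        int n = pixels.length;
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = (cdf[pixels[i]] * 255) / n;
        }
    }
}
```

For example, the low-contrast input {10, 10, 200, 200} is stretched to {127, 127, 255, 255}, using the full upper range.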

4.15 Image Negative

4.15.1 Description
This operation creates an effect that looks like a color negative in conventional film. Note that applying the operation twice restores the original image: you are taking the negative of the negative. To invert an image, we simply subtract each color component from 255. There are no filter parameters.

4.15.2 Illustration
Input Image  Output Image
Figure 4.14 Negative

4.16 Help

4.16.1 Description
This feature lets the user learn about the functionality of each feature in our project. It adds to the user's knowledge of image-processing terminology, functions and techniques.
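Because inversion is a fixed per-channel mapping, it can also be written with LookupOp (one of the Java 2D classes mentioned in Chapter 3) instead of an explicit pixel loop; the class name NegativeDemo is illustrative:

```java
import java.awt.image.BufferedImage;
import java.awt.image.ByteLookupTable;
import java.awt.image.LookupOp;

public class NegativeDemo {
    // Build a 256-entry table mapping every value v to 255 - v,
    // applied to all colour bands.
    public static BufferedImage invert(BufferedImage src) {
        byte[] table = new byte[256];
        for (int i = 0; i < 256; i++) {
            table[i] = (byte) (255 - i);
        }
        LookupOp op = new LookupOp(new ByteLookupTable(0, table), null);
        return op.filter(src, null);
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, (10 << 16) | (20 << 8) | 30);
        int inv = invert(img).getRGB(0, 0);
        System.out.println((inv >> 16) & 0xff); // prints 245
    }
}
```

Applying invert twice returns the original pixels, matching the "negative of the negative" property described above.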

Chapter-5
CODING

5.1 Image Editor Interface

import java.awt.*;
import javax.swing.*;
import javax.swing.border.*;
import java.io.File;
import java.awt.event.*;
import java.awt.event.KeyEvent;
import java.awt.event.ActionListener;
import java.awt.image.BufferedImage;
import java.awt.image.Kernel;
import java.awt.image.ConvolveOp;
import java.awt.image.BufferedImageOp;

@SuppressWarnings("serial")
public class ImageEditor extends JFrame {

    // Variables declaration
    private JLabel jLabel4;
    private JLabel jLabel5;
    private JSplitPane jSplitPane2;
    private JPanel contentPane;
    private JFileChooser chooser = new JFileChooser();
    private Picture pic = new Picture(256, 256);
    private Picture pic1 = new Picture(256, 256);
    private Picture pic3 = new Picture(256, 256);
    // End of variables declaration

    public ImageEditor() {
        super();
        initializeComponent();
        this.setJMenuBar(createMenuBar());
        this.setVisible(true);
    }

    private void initializeComponent() {
        jLabel4 = new JLabel();
        jLabel5 = new JLabel();
        jSplitPane2 = new JSplitPane();
        contentPane = (JPanel) this.getContentPane();

        jLabel4.setHorizontalAlignment(SwingConstants.CENTER);
        jLabel4.setText("Load Input...");
        jLabel5.setHorizontalAlignment(SwingConstants.CENTER);
        jLabel5.setText("See Output...");

        jSplitPane2.setLeftComponent(jLabel4);
        jSplitPane2.setRightComponent(jLabel5);
        jSplitPane2.setDividerLocation(300);

        //
        // contentPane
        //
        contentPane.setLayout(new GridLayout(1, 1));
        contentPane.add(jSplitPane2);
        contentPane.setBorder(new TitledBorder(""));
        contentPane.setBackground(new Color(100, 149, 237));
        contentPane.setForeground(new Color(239, 19, 19));
        contentPane.setFocusable(false);

        //
        // ImageEditor
        //
        this.setTitle("ImageEditor... ");
        this.setLocation(new Point(550, 250));
        this.setSize(new Dimension(730, 500));
        this.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
    }

    public JMenuBar createMenuBar() {
        JMenu menu, submenu;
        JMenuItem menuItem;

        // create the menu bar
        JMenuBar menuBar = new JMenuBar();

        // build the File menu
        menu = new JMenu("File");
        menu.setMnemonic(KeyEvent.VK_F); // only needed for Alt-F keyboard shortcut
        menuBar.add(menu);

        menuItem = new JMenuItem("Open File...");
        menuItem.addActionListener(new OpenFileListener());
        menuItem.setToolTipText("Open an Image File...");
        menu.add(menuItem);

        menuItem = new JMenuItem("Save As...");
        menuItem.addActionListener(new SaveAsListener());
        menuItem.setToolTipText("Save an Image File...");
        menu.add(menuItem);

        // build the Process menu
        menu = new JMenu("Process");
        menu.setMnemonic(KeyEvent.VK_P); // only needed for Alt-P keyboard shortcut
        menuBar.add(menu);

        menuItem = new JMenuItem("Brighten");
        menuItem.addActionListener(new BrightenListener());
        menuItem.setToolTipText("Brighten the image...");
        menu.add(menuItem);

        menuItem = new JMenuItem("Darken");
        menuItem.addActionListener(new DarkenListener());
        menuItem.setToolTipText("Darken the image...");
        menu.add(menuItem);

        menuItem = new JMenuItem("Negative");
        menuItem.addActionListener(new NegativeListener());
        menuItem.setToolTipText("Invert the image colors...");
        menu.add(menuItem);

        menuItem = new JMenuItem("Grayscale");
        menuItem.addActionListener(new GrayscaleListener());
        menuItem.setToolTipText("Turn the image into a black-and-white image...");
        menu.add(menuItem);

        menuItem = new JMenuItem("Blur");
        menuItem.addActionListener(new BlurListener());
        menuItem.setToolTipText("Blur the image...");
        menu.add(menuItem);

        submenu = new JMenu("Edge Detection");

        menuItem = new JMenuItem("SobelGX Filter");
        menuItem.addActionListener(new SobelGXListener());
        menuItem.setToolTipText("Edge Detection using Sobel Filter...");
        submenu.add(menuItem);

        menuItem = new JMenuItem("SobelGY Filter");
        menuItem.addActionListener(new SobelGYListener());
        menuItem.setToolTipText("Edge Detection using Sobel Filter...");
        submenu.add(menuItem);

        menuItem = new JMenuItem("Laplacian Filter");
        menuItem.addActionListener(new LaplasianListener());
        menuItem.setToolTipText("Edge Detection using Laplacian Filter...");
        submenu.add(menuItem);

        menu.add(submenu);

        menuItem = new JMenuItem("Histogram Equalisation");
        menuItem.addActionListener(new HistogramListener());
        menuItem.setToolTipText("Enhance the contrast by the Histogram Equalisation technique...");
        menu.add(menuItem);

        menuItem = new JMenuItem("Sharpen");
        menuItem.addActionListener(new SharpenListener());
        menuItem.setToolTipText("Sharpen the image...");
        menu.add(menuItem);

        menuItem = new JMenuItem("Sketching");
        menuItem.addActionListener(new SketchingListener());
        menuItem.setToolTipText("Pencil sketch impression...");
        menu.add(menuItem);

        menuItem = new JMenuItem("Emboss");
        menuItem.addActionListener(new EmbossListener());
        menuItem.setToolTipText("Emboss impression...");
        menu.add(menuItem);

        // build the Distortion menu
        menu = new JMenu("Distortion");
        menu.setMnemonic(KeyEvent.VK_D); // only needed for Alt-D keyboard shortcut
        menuBar.add(menu);

        menuItem = new JMenuItem("Flip Horizontal");
        menuItem.addActionListener(new FlipHorizontalListener());
        menuItem.setToolTipText("Get the mirror image of the original picture...");
        menu.add(menuItem);

        menuItem = new JMenuItem("Glass");
        menuItem.addActionListener(new GlassListener());
        menuItem.setToolTipText("Makes the image look like it's being seen through glass...");
        menu.add(menuItem);

        // build the Help menu
        menu = new JMenu("Help");
        menu.setMnemonic(KeyEvent.VK_H); // only needed for Alt-H keyboard shortcut
        menuBar.add(menu);

        menuItem = new JMenuItem("Help");
        menuItem.addActionListener(new HelpListener());
        menuItem.setToolTipText("Get help on how to run this software...");
        menu.add(menuItem);

        return menuBar;
    }

    private class OpenFileListener implements ActionListener {
        public void actionPerformed(ActionEvent e) {
            if (chooser.showOpenDialog(ImageEditor.this) == JFileChooser.APPROVE_OPTION) {
                File file = chooser.getSelectedFile();
                pic = new Picture(file);
                pic1 = new Picture(file);
                pic3 = new Picture(file);
                jSplitPane2.setLeftComponent(pic.getJLabel());
                jSplitPane2.setRightComponent(pic1.getJLabel());
                pack();
            }
        }
    }

    // open a save dialog when the user selects "Save As" from the menu
    private class SaveAsListener implements ActionListener {
        public void actionPerformed(ActionEvent e) {
            if (chooser.showSaveDialog(ImageEditor.this) == JFileChooser.APPROVE_OPTION) {
                File file = chooser.getSelectedFile();
                pic1.save(file);
            }
        }
    }

    private class BrightenListener implements ActionListener {
        public void actionPerformed(ActionEvent e) {
            int w = pic1.width();
            int h = pic1.height();

            // A centre weight above 1 scales every pixel up, brightening the image.
            float[] brightenKernel = {
                0.0f, 0.0f, 0.0f,
                0.0f, 1.2f, 0.0f,
                0.0f, 0.0f, 0.0f
            };
            ConvolveOp brightenOp = new ConvolveOp(new Kernel(3, 3, brightenKernel), ConvolveOp.EDGE_NO_OP, null);
            brightenOp.filter(pic1.retImage(), pic3.retImage());
            for (int x = 0; x < w; x++) {
                for (int y = 0; y < h; y++) {
                    Color c = pic3.get(x, y);
                    pic1.set(x, y, c);
                }
            }
            repaint();
        }
    }

    private class DarkenListener implements ActionListener {
        public void actionPerformed(ActionEvent e) {
            int w = pic1.width();
            int h = pic1.height();

            // A centre weight below 1 scales every pixel down, darkening the image.
            float[] darkenKernel = {
                0.0f, 0.0f, 0.0f,
                0.0f, 0.8f, 0.0f,
                0.0f, 0.0f, 0.0f
            };
            ConvolveOp darkenOp = new ConvolveOp(new Kernel(3, 3, darkenKernel), ConvolveOp.EDGE_NO_OP, null);
            darkenOp.filter(pic1.retImage(), pic3.retImage());
            for (int x = 0; x < w; x++) {
                for (int y = 0; y < h; y++) {
                    Color c = pic3.get(x, y);
                    pic1.set(x, y, c);
                }
            }
            repaint();
        }
    }

    private class NegativeListener implements ActionListener {
        public void actionPerformed(ActionEvent e) {
            int w = pic1.width();
            int h = pic1.height();
            int argb;

            // Invert each colour channel by subtracting it from 255.
            for (int x = 0; x < w; x++) {
                for (int y = 0; y < h; y++) {
                    argb = pic1.get(x, y).getRGB();
                    int r = 255 - ((argb >> 16) & 0xff);
                    int g = 255 - ((argb >> 8) & 0xff);
                    int b = 255 - (argb & 0xff);
                    pic1.set(x, y, new Color(r, g, b));
                }
            }
            repaint();
        }
    }

    private class EmbossListener implements ActionListener {
        public void actionPerformed(ActionEvent e) {
            for (int x = 1; x < pic1.height(); x++) {
                for (int y = 1; y < pic1.width(); y++) {
                    // Compare each pixel with its upper-left neighbour.
                    int current = pic3.get(y, x).getRGB();
                    int upperLeft = pic3.get(y - 1, x - 1).getRGB();
                    int rDiff = ((current >> 16) & 255) - ((upperLeft >> 16) & 255);
                    int gDiff = ((current >> 8) & 255) - ((upperLeft >> 8) & 255);
                    int bDiff = (current & 255) - (upperLeft & 255);

                    // Keep the channel difference with the largest magnitude.
                    int diff = rDiff;
                    if (Math.abs(gDiff) > Math.abs(diff)) diff = gDiff;
                    if (Math.abs(bDiff) > Math.abs(diff)) diff = bDiff;

int grayLevel = Math.max (Math.min ( (128 + diff ) , 255), 0);pic1.set(y,x,(grayLevel