DIP Image Enhancement 1


Slides of DIP (Image enhancement in spatial domain)


Image Fundamentals

What is an image?
• An image is a 2D function I(x,y), where x and y are spatial coordinates, and the value of I at any pair of coordinates (x,y) is called the intensity or gray level.

• When (x,y) and the amplitude values of I are all finite and discrete, the image is a Digital Image.

A digital image is a 2D array of numbers representing the sampled version of an image.

Pixel
• Images are defined over a grid. Each grid location is called a picture element, or most popularly, a pixel.

What is Digital Image Processing (DIP)?
DIP is the manipulation of an image to improve or change some qualities of the image. DIP encompasses all the various operations done through a digital processor that can be applied to image data.

Why image processing?

• Two goals of Image Processing:
- Improvement of pictorial information for human interpretation, OR
- Processing of scene data for automatic machine perception
• Image Processing may be considered as the preprocessing for a Pattern Recognition System

Related Areas of Image Processing

Image Processing: Input: Image → Output: Image

Image Analysis/Understanding: Input: Image → Output: Measurements

Computer Vision: Input: Image → Output: High-level description

Aspects of image processing

Processing of an image:
- Low level: quality improvement (Image Processing)
- Mid level: feature/attribute extraction (Image Processing)
- High level: emulation of human vision (Computer Vision)

Black & White and Colour Images

• Black & white images are depicted by gray levels; they don't carry any colour information.

• A (digital) colour image is a digital image that includes colour information for each pixel.

- A colour image is typically characterized by its bit depth. With a 24-bit image, the bits are divided into three groups: 8 for red, 8 for green, and 8 for blue. Combinations of those bits represent other composite colours. A 24-bit image offers 16.7 million different colour values.

Image Enhancement
• The purpose of image enhancement is to process an image so that the result is more suitable than the original image for a specific application.

Two approaches to image enhancement:
• Spatial domain methods
- direct manipulation of pixels
• Frequency domain methods
- based on modifying the Fourier transform

Spatial Domain Methods

• In spatial domain image enhancement methods, manipulations are done directly on the pixels:

g(x,y) = T[f(x,y)]

where f(x,y) is the input image, T is the operator, and g(x,y) is the processed (enhanced) image.

Image enhancement in spatial domain

• Two methods of processing in spatial domain:
- Point processing: s = T(r). Enhancement at any point depends only on the gray level of that point
- Mask processing: processing is done using the gray levels of a pixel neighbourhood

Image enhancement in spatial domain

• Point processing
- When the neighbourhood is of size 1 × 1 (a single pixel), g depends only on the value of f at (x,y), and T becomes a gray-level (or intensity, or mapping) transformation function:

s = T(r)

where r and s are the gray levels of f(x,y) and g(x,y) at (x,y)
- Point processing methods are based only on the intensity of single pixels

Image enhancement in spatial domain

• Contrast stretching
- A simple method of image enhancement using point processing

[Figure: a 3×3 mask]

Image enhancement in spatial domain

• Mask: a mask is a small 2-D array of pixels. It is also called a kernel, template, window, or filter
• Mask processing or filtering:
- Processing is done using mask coefficients

Image enhancement in spatial domain

A neighbourhood about (x,y) is defined by using a square (or rectangular) sub-image area centered at (x,y)

Some Basic Gray Level Transformations

• Image negatives
• Log transformations
• Power-law transformations
• Piecewise-linear transformation functions:
- Contrast stretching
- Gray-level slicing
- Bit-plane slicing

Basic function shapes:
- Linear: negative, identity
- Logarithmic: log, inverse log
- Power-law: nth power, nth root

Some Basic Gray Level Transformations

Image Negatives
• The function reverses the order from black to white, so that the intensity of the output image decreases as the intensity of the input increases.
• The negative of an image with gray levels in the range [0, L-1] is obtained by using the negative transformation:

s = (L-1) - r

- Image negatives are used mainly in medical images and to produce slides of the screen

Image Negatives

• Image negative function: s = (L-1) - r

[Plot: output gray level (0 to L-1) versus input gray level (0 to L-1); a straight line from (0, L-1) down to (L-1, 0)]
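A minimal NumPy sketch of the negative transformation s = (L-1) - r for an 8-bit image (L = 256); the function name is illustrative:

```python
import numpy as np

# Negative transformation: s = (L-1) - r, applied to every pixel.
def negative(img, L=256):
    return (L - 1) - img.astype(int)

img = np.array([[0, 64], [128, 255]])
neg = negative(img)
```

Applying the transformation twice returns the original image, since (L-1) - ((L-1) - r) = r.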

Image enhancement in spatial domain

Log Transformations

s = c log(1 + r)

where c is a constant and r ≥ 0

• Compresses the dynamic range of images with large variations in pixel values
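A sketch of s = c log(1 + r); choosing c so the brightest input maps to L-1 is a common convention, not something the slide mandates:

```python
import numpy as np

# Log transformation: s = c * log(1 + r), with c scaling the output to [0, L-1].
def log_transform(img, L=256):
    c = (L - 1) / np.log1p(img.max())
    return c * np.log1p(img.astype(float))

img = np.array([0, 9, 99, 255], dtype=float)
s = log_transform(img)
```

Note how the low end of the range is expanded while high values are compressed, which is the dynamic-range compression the slide describes.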

Image enhancement in spatial domain

Power-Law Transformations
It has the basic form

s = c r^γ

where c and γ are positive constants. To account for an offset,

s = c (r + ε)^γ

where ε is the offset.

• Gamma correction:
- A process used to correct power-law response phenomena
• γ = c = 1 gives the identity transformation
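A sketch of the power-law transform s = c (r + ε)^γ; normalizing intensities to [0, 1] before applying the power is an implementation choice, and the function name is illustrative:

```python
import numpy as np

# Power-law (gamma) transformation on intensities normalized to [0, 1].
def power_law(img, gamma, c=1.0, eps=0.0, L=256):
    r = img.astype(float) / (L - 1)
    s = c * (r + eps) ** gamma
    return np.clip(s, 0, 1) * (L - 1)

img = np.array([0, 64, 128, 255], dtype=float)
identity = power_law(img, gamma=1.0)   # gamma = c = 1: identity
brighter = power_law(img, gamma=0.5)   # gamma < 1 brightens mid-tones
```

Gamma < 1 maps mid-tones upward (brightening), while gamma > 1 darkens them, which is the basis of gamma correction.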

Power-Law Transformations

Gamma correction in CRT

Application of Power-Law Transformations

Piecewise-Linear Transformation

• Contrast Stretching
- To increase the dynamic range of the gray levels in the image being processed

Contrast Stretching

• The locations of (r1,s1) and (r2,s2) guide the characteristics of the transformation function
– If r1 = s1 and r2 = s2, the transformation is a linear function and produces no change
– If r1 = r2, s1 = 0 and s2 = L-1, the transformation becomes a thresholding function that results in a binary image (one with two gray levels)

Contrast Stretching
- Intermediate values of (r1,s1) and (r2,s2) produce various degrees of spread in the gray levels of the output image, thus affecting its contrast.
– Generally, it is assumed that r1 ≤ r2 and s1 ≤ s2
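A sketch of the piecewise-linear stretch through (0,0), (r1,s1), (r2,s2), (L-1,L-1), using np.interp for the interpolation; it assumes r1 < r2, consistent with the assumption r1 ≤ r2 above:

```python
import numpy as np

# Piecewise-linear contrast stretching defined by the two control points.
def contrast_stretch(img, r1, s1, r2, s2, L=256):
    return np.interp(img, [0, r1, r2, L - 1], [0, s1, s2, L - 1])

img = np.array([50, 100, 150, 200], dtype=float)
out = contrast_stretch(img, r1=100, s1=50, r2=150, s2=205)
```

Here the middle segment has a slope greater than 1, so gray levels between r1 and r2 are spread apart, increasing contrast in that range.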

Gray-Level Slicing

• Used to highlight a specific range of gray levels in an image
- One approach is to display a high value for all gray levels in the range of interest and a low value for all other gray levels; this produces a binary image

Gray-Level Slicing
– The second approach is to brighten the desired range of gray levels but preserve the background and gray-level tonalities of the image
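Both slicing approaches can be sketched with np.where; the function names are illustrative:

```python
import numpy as np

# Approach 1: high value inside the range of interest, low value elsewhere
# (produces a binary image).
def slice_binary(img, lo, hi, high=255, low=0):
    return np.where((img >= lo) & (img <= hi), high, low)

# Approach 2: brighten the range of interest, preserve the rest of the image.
def slice_preserve(img, lo, hi, high=255):
    return np.where((img >= lo) & (img <= hi), high, img)

img = np.array([10, 120, 180, 240])
binary = slice_binary(img, 100, 200)
preserved = slice_preserve(img, 100, 200)
```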

Image enhancement in spatial domain

Bit-Plane Slicing
• To highlight the contribution made to the total image appearance by specific bits in the image
– In an 8-bit gray level image, each pixel is represented by 8 bits, so the image can be considered as composed of eight 1-bit planes (bit plane 0 to bit plane 7)
– Bit plane 0 contains the least significant bit (LSB) and plane 7 contains the most significant bit (MSB)

Bit-Plane Slicing
– The higher-order bits (top four) contain the majority of visually significant data. The other bit planes contribute the more subtle details in the image
– A binary image can be obtained from bit-plane slicing. Plane 7 corresponds exactly with an image thresholded at gray level 128. That is:
- Pixel values between 0 and 127 → 0
- Pixel values between 128 and 255 → 1
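Bit-plane extraction is a shift and mask per pixel; a short sketch that also checks the plane-7/threshold-at-128 equivalence stated above:

```python
import numpy as np

# Extract bit plane k (0 = LSB, 7 = MSB) of an 8-bit image as a 0/1 image.
def bit_plane(img, k):
    return (img >> k) & 1

img = np.array([0, 127, 128, 255], dtype=np.uint8)
plane7 = bit_plane(img, 7)
# plane 7 equals thresholding at 128: values 0-127 -> 0, 128-255 -> 1
```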

Bit-Plane Representation of an Image

Bit-Plane Slicing of an Image

Image enhancement in spatial domain

Histogram: The plot of gray level (X-axis) versus the number of pixels with that gray level (Y-axis) is called a histogram.

It is a discrete function h(l) = n_l, where l ∈ [0, L-1] is a gray value and n_l is the number of pixels in the image having gray level l.

Normalized Histogram: gives an estimate of the probability of occurrence of gray level l:

p(l) = n_l / n

where n is the total number of pixels in the image.
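The definitions h(l) = n_l and p(l) = n_l / n translate directly into a per-level count; np.bincount does the counting here:

```python
import numpy as np

# Histogram h(l) = n_l and normalized histogram p(l) = n_l / n.
def histogram(img, L=256):
    h = np.bincount(img.ravel(), minlength=L)
    p = h / img.size
    return h, p

img = np.array([[0, 0, 1], [1, 1, 3]])
h, p = histogram(img, L=4)
```

Since p(l) estimates a probability distribution, its values sum to 1.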

Image enhancement in spatial domain

• Histogram transformation
[Figures: histograms of a dark image, a bright image, a low-contrast image, and a high-contrast image]

Histogram Processing

• The shape of the histogram of an image provides useful information about the possibility for contrast enhancement

Types of processing:
- Histogram equalization
- Histogram matching (specification)
- Local enhancement
- Use of histogram statistics for image enhancement

Histogram Equalization
• For gray levels that take on discrete values, we deal with probabilities: p(rk) = nk/n
• The technique used for obtaining a uniform histogram is known as histogram equalization (or histogram linearization)
• Histogram equalization results are similar to contrast stretching but offer the advantage of full automation

Histogram equalization

• Expand pixels in histogram peaks over a wider range of gray levels
• "Squeeze" pixels in sparsely populated gray levels into a narrower range
• The result approaches a flat histogram.

Histogram equalization
• To equalize the histogram, the probabilities of occurrence of the gray levels in the image are taken into account.
- That is, the normalized histogram is used to spread the histogram throughout the range of gray levels of the image.
• The transformation is

s_k = T(r_k) = Σ_{j=0}^{k} P_r(r_j) = Σ_{j=0}^{k} n_j / n
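The transformation s_k = Σ_{j≤k} n_j / n is a cumulative sum of the normalized histogram; a sketch where rounding the scaled result to integer gray levels is an implementation choice:

```python
import numpy as np

# Discrete histogram equalization: s_k = cumulative n_j/n, scaled to [0, L-1].
def equalize(img, L=256):
    h = np.bincount(img.ravel(), minlength=L)
    cdf = np.cumsum(h) / img.size
    T = np.round((L - 1) * cdf).astype(np.uint8)
    return T[img]

img = np.array([[52, 52, 60], [60, 60, 180]], dtype=np.uint8)
eq = equalize(img)
```

Using the cumulative distribution guarantees T is monotonically non-decreasing, so gray-level ordering is preserved.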

Histogram equalization

Histogram matching/specification
• Histogram equalization does not allow interactive image enhancement and generates only one result: an approximation to a uniform histogram
• Sometimes, though, particular histogram shapes are specified to highlight certain gray-level ranges
- The method used to generate a processed image that has a specified histogram is called histogram matching or histogram specification

Histogram matching/specification
• The procedure for histogram-specification-based enhancement is:

1. Obtain the histogram of the given image

2. Use the following equation to pre-compute a mapped level s_k for each level r_k:

s_k = T(r_k) = Σ_{j=0}^{k} P_r(r_j) = Σ_{j=0}^{k} n_j / n

Histogram matching/specification
3. Obtain the transformation function G from the given p_z(z) using

v_k = G(z_k) = Σ_{i=0}^{k} p_z(z_i) ≈ s_k

4. Pre-compute z_k for each value of s_k using an iterative process:

z = G^{-1}(s) ⇒ z = G^{-1}[T(r)]

5. For each pixel in the original image, if the value of that pixel is r_k, map this value to its corresponding level s_k, then map level s_k into the final level z_k
- In these mappings, the pre-computed values from step (2) and step (4) are used
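The five steps above can be sketched compactly; inverting G with a searchsorted lookup is one way to realize the iterative step 4, and the small L used here is purely for the example:

```python
import numpy as np

# Histogram specification: map input levels r_k -> s_k -> z_k.
def match_histogram(img, target_p, L=4):
    h = np.bincount(img.ravel(), minlength=L)
    s = np.cumsum(h) / img.size                   # step 2: s_k = T(r_k)
    G = np.cumsum(target_p)                       # step 3: v_k = G(z_k)
    z = np.clip(np.searchsorted(G, s), 0, L - 1)  # step 4: z_k = G^{-1}(s_k)
    return z[img]                                 # step 5: apply the mapping

img = np.array([0, 0, 1, 3])
uniform = np.full(4, 0.25)   # specified histogram p_z(z)
out = match_histogram(img, uniform, L=4)
```

With a uniform target p_z the method reduces to (approximate) histogram equalization, as the slides note.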

Histogram matching/specification
• The principal difficulty in applying the histogram specification method to image enhancement lies in being able to construct a meaningful histogram. So,
- Either a particular probability density function (such as a Gaussian density) is specified and then a histogram is formed by digitizing the given function,
- Or a histogram shape is specified on a graphic device and then fed into the processor executing the histogram specification algorithm.

Histogram matching/specification

Histogram equalization

Local enhancement

• Local enhancement using histograms is done basically for two purposes:
- When it is necessary to enhance details over small areas in an image
- To devise transformation functions based on the gray-level distribution in the neighbourhood of every pixel in the image

Local enhancement
• The procedure is:
- Define a square (or rectangular) neighbourhood and move the center of this area from pixel to pixel
- At each location, the histogram of the points in the neighbourhood is computed, and either a histogram equalization or a histogram specification transformation function is obtained

Local enhancement

- This function is finally used to map the gray level of the pixel centered in the neighbourhood
- The centre of the neighbourhood region is then moved to an adjacent pixel location and the procedure is repeated

Local enhancement

Image enhancement using Arithmetic/Logic operations

• Logical
- AND, OR, NOT – functionally complete
- Used in conjunction with morphological operations
• Arithmetic
- Subtraction / Addition
- Multiplication / Division (as multiplication by the reciprocal)
- Multiplication may be used to implement gray-level masks

Logical operations

Image Subtraction

• Image subtraction of a mask from an image is given as:

g(x,y) = f(x,y) - h(x,y)

where f(x,y) is the image and h(x,y) is the image mask

• Image subtraction is used in medical imaging, e.g. mask mode radiography

Image subtraction
• Mask mode radiography
- h(x,y) is the mask; f(x,y) is the image taken after injecting the contrast medium

Image subtraction
• Image subtraction is g(x,y) = f(x,y) − h(x,y)
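A sketch of g(x,y) = f(x,y) − h(x,y); computing the difference in a signed type and rescaling it to [0, 255] for display is an implementation choice, not part of the definition:

```python
import numpy as np

# Image subtraction with rescaling of the difference to the full [0, 255] range.
def subtract(f, h):
    g = f.astype(int) - h.astype(int)   # signed difference, avoids uint8 wraparound
    g = g - g.min()                     # shift so the minimum is 0
    if g.max() > 0:
        g = g * 255 // g.max()          # stretch to [0, 255]
    return g.astype(np.uint8)

f = np.array([[100, 150], [200, 250]], dtype=np.uint8)
mask = np.array([[100, 100], [100, 100]], dtype=np.uint8)
g = subtract(f, mask)
```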

Image averaging

• A noisy image can be modeled as:

g(x,y) = f(x,y) + η(x,y)

where f(x,y) is the original image and η(x,y) is the noise.

Averaging M different noisy images:

ḡ(x,y) = (1/M) Σ_{i=1}^{M} g_i(x,y)

Image Averaging

• As M increases, the variability of the pixel values at each location decreases
- This means that ḡ(x,y) approaches f(x,y) as the number of noisy images used in the averaging process increases.
• Registration of the images is necessary to avoid blurring in the output image.
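A small demonstration of the averaging effect; the constant image and the noise level are made-up values for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((32, 32), 100.0)                              # "original" image
noisy = [f + rng.normal(0.0, 20.0, f.shape) for _ in range(64)]  # g_i = f + eta_i

avg = np.mean(noisy, axis=0)                              # g_bar = (1/M) * sum g_i

err_single = np.abs(noisy[0] - f).mean()   # error of one noisy realization
err_average = np.abs(avg - f).mean()       # error after averaging M = 64 images
```

For zero-mean independent noise the standard deviation at each pixel drops by a factor of √M, so the averaged image sits much closer to f.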

Image Averaging

Spatial Filtering

• Use of spatial masks for image processing (spatial filters)
• Linear and nonlinear filters (e.g. a 3×3 mask)
• Low-pass filters eliminate or attenuate high-frequency components in the frequency domain (sharp image details), resulting in image blurring

Spatial Filtering

• High-pass filters attenuate or eliminate low-frequency components (resulting in sharpening of edges and other fine details)
• Band-pass filters remove selected frequency regions between low and high frequencies (used for image restoration, not enhancement)

Spatial filtering
• Different filters can be designed to do the following:
- Blurring / smoothing
- Sharpening
- Edge detection
- All these effects can be achieved using different coefficients in the mask

Spatial filtering
• Blurring / Smoothing
- (Sometimes also referred to as averaging or lowpass filtering)
~ Average the values of the centre pixel and its neighbours
Purposes:
- Reduction of 'irrelevant' details
- Noise reduction
- Reduction of 'false contours' (e.g. produced by zooming)

• Sharpening
- Used to highlight fine detail in an image
- Or to enhance detail that has been blurred

• Edge Detection
Purposes:
- Pre-processing
- Sharpening

Spatial filtering

• A mask of fixed size, with a coefficient at each of its locations, is used to do the filtering
• The mask is moved from point to point over the image
• At each point (x,y), the response of the filter at that point is calculated using a predefined relationship

Spatial filtering
• For linear spatial filtering, the response is given by a sum of products of the filter coefficients and the corresponding image pixels in the area spanned by the filter mask
• For a 3×3 mask, the result (response) R of linear filtering with the filter mask at a point (x,y) in the image is

R = w(-1,-1)f(x-1,y-1) + w(-1,0)f(x-1,y) + ... + w(0,0)f(x,y) + ... + w(1,0)f(x+1,y) + w(1,1)f(x+1,y+1)

Spatial filtering

Spatial Filtering
• The basic approach is to sum products between the mask coefficients and the intensities of the pixels under the mask at a specific location in the image
• For a 3 × 3 filter:

R = w1 z1 + w2 z2 + ... + w9 z9

In general, for a mask of mn coefficients:

R = Σ_{i=1}^{mn} w_i z_i

Pixel labels under a 3×3 mask:

z1 z2 z3
z4 z5 z6
z7 z8 z9

Spatial filtering
Image pixels (N8)

Spatial filtering
• Linear filtering of an image of size M×N with a filter mask of size m×n is given as

g(x,y) = Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s,t) f(x+s, y+t)

where a = (m-1)/2 and b = (n-1)/2, for x = 0,1,...,M-1 and y = 0,1,...,N-1

• Linear spatial filtering is also called convolving a mask with an image
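A direct sketch of g(x,y) = Σ_s Σ_t w(s,t) f(x+s, y+t) for an odd m×n mask; replicating the edge pixels at the borders is one of several common border-handling choices:

```python
import numpy as np

# Linear spatial filtering (correlation of an m x n mask with the image).
def linear_filter(f, w):
    m, n = w.shape
    a, b = (m - 1) // 2, (n - 1) // 2
    fp = np.pad(f.astype(float), ((a, a), (b, b)), mode='edge')
    g = np.zeros(f.shape, dtype=float)
    for s in range(-a, a + 1):
        for t in range(-b, b + 1):
            # add w(s,t) * f(x+s, y+t) for every (x,y) at once
            g += w[s + a, t + b] * fp[a + s:a + s + f.shape[0],
                                      b + t:b + t + f.shape[1]]
    return g

f = np.arange(1, 10, dtype=float).reshape(3, 3)
box = np.full((3, 3), 1 / 9)   # 3x3 averaging mask
g = linear_filter(f, box)
```

At the centre pixel this reproduces the sum-of-products R = Σ w_i z_i; with the box mask the response is simply the neighbourhood mean.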

Smoothing Filters

• Used for blurring (removal of small details prior to large object extraction, bridging small gaps in lines) and noise reduction.
• Low-pass (smoothing) spatial filtering
– Neighborhood averaging
- Results in image blurring

Smoothing Filtering
[Masks: a uniform average, used in the box filter, and a weighted average, used to reduce blurring]

Averaging filter

• In an averaging filter,

g(x,y) = [ Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s,t) f(x+s, y+t) ] / [ Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s,t) ]

Smoothing Filtering

• Averaging using different mask sizes
- Note the greater blurring at greater mask sizes

Smoothing Filtering

The 3×3 box filter mask:

1/9 ×
1 1 1
1 1 1
1 1 1

Apply this scheme to every single pixel!

Smoothing Filtering
Some more examples: blurring / smoothing

Example 2: weighted average mask

1/16 ×
1 2 1
2 4 2
1 2 1

Spatial Filtering

• Non-linear filters also use pixel neighbourhoods but do not explicitly use coefficients
– e.g. noise reduction by computing the median gray-level value in the neighbourhood of the filter

Order-statistics filters
• Median filter (nonlinear)
- Used primarily for noise reduction; eliminates isolated spikes
• Most important properties of the median:
- Less sensitive to noise than the mean
- Always an element of the original set of values
- Requires sorting

Median filter
- The gray level of each pixel is replaced by the median of the gray levels in the neighborhood of that pixel (instead of by the average, as before)

To calculate the median of a set of gray values, arrange the values in ascending order; the middle value in the sequence is the median (in the case of an odd number of values).

Example: the 3×3 neighbourhood values 4, 1, 2, 6, 5, 3, 3, 4, 6 sorted in ascending order are 1 2 3 3 4 4 5 6 6, so the median is 4:

Before:   After:
4 1 2     4 1 2
6 5 3     6 4 3
3 4 6     3 4 6

The centre value 5 is replaced by the median 4.
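A sketch of median filtering over a size × size neighbourhood, with edge replication at the borders; it reproduces the slide's example, where the centre value 5 becomes 4:

```python
import numpy as np

# Median filter: replace each pixel by the median of its neighbourhood.
def median_filter(f, size=3):
    k = size // 2
    fp = np.pad(f, k, mode='edge')
    g = np.empty_like(f)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.median(fp[x:x + size, y:y + size])
    return g

f = np.array([[4, 1, 2], [6, 5, 3], [3, 4, 6]])
g = median_filter(f)
```

Because the output is always one of the neighbourhood values, an isolated spike is discarded entirely rather than smeared across its neighbours, which is why the median beats the mean for impulse noise.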

Median filter

Sharpening filter
• Used to highlight fine detail in an image or to enhance an image that has been blurred
• Accomplished by spatial differentiation, as it enhances sharp transitions in gray levels
• The first derivative of a 1-D function f(x) is given as

∂f/∂x = f(x+1) - f(x)

• And the second derivative:

∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)

Sharpening filter

• The first derivative enhances any sharp transition
• The second derivative enhances even a fine transition
- Usually the second derivative is preferred when fine details are to be enhanced

Sharpening filter

• First-order derivatives produce thick edges; second-order derivatives, much finer ones
• A second-order derivative is much more aggressive than a first-order derivative in enhancing sharp changes

Sharpening filter
The first- and second-order derivatives compared:

1. The first-order derivative produces thicker edges

2. The second-order derivative has a stronger response to fine detail such as thin lines and isolated points

3. The first-order derivative has a stronger response to a gray-level step

4. The second-order derivative produces a double response at step changes in gray level

Sharpening filter
• Laplacian operator
- A linear operator used for image sharpening

For an image function f(x,y), the Laplacian is defined as

∇²f = ∂²f/∂x² + ∂²f/∂y²

Sharpening filter
• Partial second-order derivative in the x-direction:

∂²f/∂x² = f(x+1,y) + f(x-1,y) - 2f(x,y)

• Partial second-order derivative in the y-direction:

∂²f/∂y² = f(x,y+1) + f(x,y-1) - 2f(x,y)

• So,

∇²f = [f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)] - 4f(x,y)
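The discrete Laplacian above, plus sharpening with a negative-centre mask (g = f − ∇²f), can be sketched with array slicing; edge replication keeps flat regions at zero response:

```python
import numpy as np

# Discrete Laplacian: f(x+1,y)+f(x-1,y)+f(x,y+1)+f(x,y-1) - 4f(x,y).
def laplacian(f):
    fp = np.pad(f.astype(float), 1, mode='edge')
    return (fp[2:, 1:-1] + fp[:-2, 1:-1] +
            fp[1:-1, 2:] + fp[1:-1, :-2] - 4.0 * fp[1:-1, 1:-1])

# Sharpening with a negative-centre Laplacian mask: g = f - laplacian(f).
def sharpen(f):
    return f - laplacian(f)

f = np.array([[10, 10, 10], [10, 50, 10], [10, 10, 10]], dtype=float)
lap = laplacian(f)
g = sharpen(f)
```

The Laplacian is zero in constant regions and large at the isolated bright pixel, so subtracting it from f boosts the fine detail while leaving flat areas untouched.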

Sharpening filter
• If the centre coefficient of the Laplacian mask is negative:

g(x,y) = f(x,y) - ∇²f(x,y)

• If the centre coefficient of the Laplacian mask is positive:

g(x,y) = f(x,y) + ∇²f(x,y)

Sharpening filter
