
CHAPTER 1

INTRODUCTION

1.1 MEDIAN FILTER

Median filtering is a non-linear, low-pass filtering method that is used to remove "speckle"

noise from an image. A median filter can outperform linear, low-pass filters on this type of noisy

image because it can potentially remove all the noise without affecting the "clean" pixels. Median

filters remove isolated pixels, whether they are bright or dark. Prior to any hardware design, the

software versions of the algorithms are created in MATLAB. Using MATLAB procedural routines to

operate on images represented as matrix data, these software algorithms were designed to resemble

the hardware algorithms as closely as possible.

Figure 1.1 Example of median filter

While a hardware system and a matrix-manipulating software program are fundamentally

different, they can produce identical results, provided that care is taken in development. This

approach was taken because it speeds understanding of the algorithm design. In addition, this

approach facilitates comparison of the software and synthesized hardware algorithm outputs. This

project is focused on developing hardware implementations of image processing algorithms for use in an FPGA-based image processing system. The rank order filter is a particularly common algorithm

in image processing systems. For every pixel in an image, the window of neighbouring pixels is

found. Then the pixel values are sorted in ascending, or rank, order.


Next, the pixel in the output image corresponding to the origin pixel in the input image is

replaced with the value specified by the filter order. The VHDL code can be simulated to verify its

functionality.
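As a software reference for this behaviour, a minimal MATLAB sketch of a rank order filter is given below; the window size, the replicate border handling and the function name rank_order_filter are illustrative assumptions rather than part of the hardware design (padarray is from the Image Processing Toolbox).

% Minimal MATLAB sketch of a rank order filter (illustrative, not the VHDL design).
% img   : grayscale image matrix
% win   : odd window size, e.g. 3
% order : rank to select (1 = minimum, ceil(win*win/2) = median, win*win = maximum)
function out = rank_order_filter(img, win, order)
    img = double(img);
    half = floor(win / 2);
    padded = padarray(img, [half half], 'replicate');   % handle the image borders
    out = zeros(size(img));
    for r = 1:size(img, 1)
        for c = 1:size(img, 2)
            window = padded(r:r+win-1, c:c+win-1);       % neighbourhood of the origin pixel
            sorted = sort(window(:));                    % ascending, or rank, order
            out(r, c) = sorted(order);                   % value specified by the filter order
        end
    end
end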

Figure 1.2 Functional diagram of the median filter

1.2 MOVING WINDOW ARCHITECTURE

In order to implement a moving window system in VHDL, a design was devised that took

advantage of certain features of FPGAs. FPGAs generally handle flip-flops quite easily, but instantiation of memory on chip is more difficult. Still, compared with the other option, off-chip memory, the choice of using on-chip memory was clear. It was determined that the output of the

architecture should be vectors for pixels in the window, along with a data-valid signal, which is used

to inform an algorithm using the window generation unit as to when the data is ready for processing.

Since it was deemed necessary to achieve maximum performance in a relatively small space, FIFO units specific to the target FPGA were used. Importantly though, to the algorithms using the window

generation architecture, the output of the window generation units is exactly the same. This useful

feature allows algorithm interchangeability between the two architectures, which helped significantly cut down algorithm development time.

A window size was chosen because it was small enough to fit easily onto the target FPGAs and was considered large enough to be effective for most commonly used image sizes. With

larger window sizes, more FIFOs and flip-flops must be used, which increases the FPGA resources

used significantly.
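A purely software emulation of this FIFO-plus-register window generation is sketched below in MATLAB; the FIFOs are modelled as simple line buffers and a 3x3 window is assumed only for illustration, so this mirrors the behaviour of the architecture rather than the actual VHDL entities.

% Software emulation of a 3x3 moving-window generator built from two line
% FIFOs and a small register array; purely illustrative.
function process_stream(img)
    [rows, cols] = size(img);
    rowbuf1 = zeros(1, cols);          % models the first line FIFO
    rowbuf2 = zeros(1, cols);          % models the second line FIFO
    window  = zeros(3, 3);             % models the 3x3 flip-flop array
    pixels_seen = 0;
    for r = 1:rows
        for c = 1:cols
            pix = double(img(r, c));                     % one pixel per "clock"
            window = [window(:, 2:3), [rowbuf2(c); rowbuf1(c); pix]];
            rowbuf2(c) = rowbuf1(c);                     % shift the line buffers down
            rowbuf1(c) = pix;
            pixels_seen = pixels_seen + 1;
            data_valid = pixels_seen > 2*cols + 2;       % window filled with real data (roughly)
            if data_valid
                % window now holds the 3x3 neighbourhood; an algorithm using
                % the window generation unit (e.g. a median) would consume it here.
            end
        end
    end
end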


Figures 1.3 and 1.4 show a graphic representation of the FIFO and flip-flop architecture used in this design for a given output pixel window.

Figure 1.3 Graphic representation of the FIFO

Figure 1.4 Graphic representation of the flip-flop architecture

1.3 PARALLEL SORTING STRATEGY

To make a fair comparison of the parallel sorting strategy against the wave sorter strategy in

terms of the total number of required steps to sort an array, it is necessary to consider the steps used

to read data from memory and the steps required to store the sorted data back to memory. The

proposed approach is based on the same structure of the registers array used in the wave sorter

strategy. With this kind of array, data can be stored in the array by sending a datum to the first

register and later, when the second datum is sent to the first register, the value in the first register is

shifted to the second register.

Thus, for every datum sent to the array to be stored, values in registers are shifted to their

respective adjacent registers.


Figure 1.5 Parallel sorting registers

This process requires n steps. The same number of steps is required to take data out from the

array. This approach allows storing a new set of data in the array while the previous set is being sent

back into the memory. As mentioned in section 2, suffix sorting might imply more than one sorting iteration. If k sorts are required, then parallel sorting requires (n + n/2)·k + n steps to sort an array of n data. Thus the total number of steps required is given by:

$\text{Steps} = \left(n + \frac{n}{2}\right)k + n$

The parallel strategy leads to a significant reduction compared to the wave sorter approach.

Furthermore, in additional sorts the necessary number of steps is equal to the number of characters in the biggest group of identical characters divided by 2 (recall that an additional sort is implied if groups of identical adjacent characters appear in the array). This implies that, in practice, the number of steps needed to solve the suffix problem can be reduced even further.

Weighted median filters have been shown to be effective in the removal of impulsive noise

from a wide variety of video images.  Median filters in general, however, are computationally

expensive to implement because a running sort of the incoming data must be

performed.  Weighted median filters, although they remove noise more effectively than standard

median filters, are even more difficult to implement because the traditional software-based

approach has been to increase the amount of sorting that is done, proportional to the weighting

values used.  Because of the computational difficulties associated with the use of the algorithm, the

use of weighted median filters in video processing applications has mostly been limited to non-real-

time software implementations.  This paper describes a fairly simple solution to the problem, and

allows a high-speed, low-complexity hardware solution to be built for real-time applications.


The technique described in this paper is unique because it uses three additional

information fields that are appended to the incoming data words as they are being processed.  These

three fields, the weight value, the new bit, and the age value are updated appropriately by

associated control logic, and then stored within the array.  Advantages of this technique are that a

minimal amount of extra storage is used and the control logic necessary to update the three extra

fields is of low complexity.  Because of this, relatively inexpensive real-time hardware

implementations may be accomplished by using devices such as Field Programmable Gate Array

(FPGA) chips.

1.4 TECHNICAL DESCRIPTION

The standard two-dimensional median filter is good at removing impulse noise from video

images, but the price that is paid is that the image tends to get “softer”, as there is a low-pass filtering

effect.  This effect may be minimized by the use of a weighted two-dimensional median filter, whose

weights may be either permanently fixed or adaptively modified depending on the particular criteria

being satisfied.  This description does not go into details on methods for the adaptation process itself,

as that is a totally independent problem from that of the weighted median filter calculations

themselves.  The mechanism described is also independent of the width of the data values to be

filtered, the NxN size of the data array, and the actual weight values themselves.  For the sake of

simplicity, examples will be given detailing the use of 8-bit data values, a 3x3 size, and a fixed array

of weights (1, 3, 1, 3, 5, 3, 1, 3, 1 in linear order).  These values are the ones that were used in

software simulations and VHDL hardware descriptions.

The basic updating scheme for the 3x3 matrix is shown in Figure 1.1.  It should be noted that

this method is not limited to the 3x3 example given here, and it is straightforward to extend it to any

arbitrarily sized filter.  It is simpler to think in single-dimensional terms, so the data values are

linearized to appear to be a 9x1 filter block.  The data is stored in an array of N² + 1 register or

memory locations. 

In the example case, 3² + 1 = 10, so there are 10 registers allocated for data storage. In the

first part of the example, the nine 8-bit values are pre-sorted so that the current median filter output

will be the sample in the exact middle of the register array, or the output of register #(N² – 1)/2,

which in this example is register #4.  A new data value is about to be read in during the next cycle.


The new data value is compared to all of the values currently stored in the array, and is

placed in the correct location immediately.  Lower values are shifted to the right, and the lowest

value “overflows” into the extra register at the end of the array (register #9). In the second half of the

cycle, the data tags are read to determine which data value to remove from the array, and all data

values to the right of it are now shifted left, stopping at the point where the oldest previous value was

removed from.  After these operations are completed, the median value is again the value in the

middle of the array, or the one located in register #4 of this example.
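A behavioural MATLAB sketch of this insert-and-evict cycle is given below; the age tags are modelled with a simple counter and the function name is illustrative, so it only mirrors the register behaviour at a high level.

% Behavioural model of the sorted register array: insert the newest sample in
% sorted position, then evict the oldest one, so the median stays in the middle.
% vals : current window values, kept in descending order (row vector)
% ages : age tag for each value (larger = older)
function [vals, ages, med] = sorted_array_update(vals, ages, new_val)
    % Insert phase: place the new value in sorted position, lower values shift right.
    pos = find(vals < new_val, 1);
    if isempty(pos), pos = numel(vals) + 1; end          % lowest value "overflows" the end
    vals = [vals(1:pos-1), new_val, vals(pos:end)];
    ages = [ages(1:pos-1), 0,       ages(pos:end)];
    % Remove phase: read the age tags, evict the oldest value, shift the rest left.
    ages = ages + 1;
    [~, oldest] = max(ages);
    vals(oldest) = [];
    ages(oldest) = [];
    % For an odd-sized window the median is the middle register (register #4, counting from 0).
    med = vals((numel(vals) + 1) / 2);
end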

The implementation of the weighted median filter is similar to that of the 3x3 median filter, but more

complicated because of the weighting operations that must take place.  As the three new data values

are input into the sorting array, they must be assigned a weight depending on what position they have

in the original data matrix.  The first data value to be input is assigned a weight of “1”, the second is

assigned a weight of “3”, and the third is assigned a weight of “1”.  This weighting is now

maintained and updated within the sorting array registers, so another three bits must be assigned to

the register length to accommodate the highest possible weight value of 5.  Because of this, the

actual width of this bit field in an implementation must be wide enough to accommodate a binary

representation of the highest weight value that will be used.  A final single bit is also associated with

each register within the array, and that bit is used to tell the internal logic which three weight values

have just been entered as new values, and which six remaining weights need to be updated reflective

of their new positions in the data matrix.  As this value merely tells the internal logic which data

values are new and which ones are not, the width of this field does not have to change.   Because of

these extra bits associated with weight maintenance, each register within the sorting array must be at

least 16 bits wide in this 3x3 weighted median filter implementation, and possibly more depending

on the other factors described previously. 

A fourth clocking cycle is added compared to the 3x3 median filter implementation to allow for

weight updates to occur within the sorted array.  Because of this, the total time necessary for an

output is eight clock cycles.  The use of eight clock cycles is somewhat arbitrary, as the example

implementation is done using general-purpose registers constructed of flip-flops.  A memory-based

storage array could also be designed that would result in the use of fewer clock cycles for a particular

output.  Note that the basic sorting operation of the register array is accomplished in the same

manner as in Figure 1, but the middle register value is not automatically output as the answer. 

Further processing via the data value weights is necessary, and there are basically two methods for


doing this.  After analysing the hardware designs for both approaches, the second method was

chosen for the final implementation, as it is considerably simpler.  
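The following MATLAB fragment is only a behavioural check of what the weighted register array ultimately computes, using the standard replicate-by-weight definition of a weighted median; it does not model the weight, new-bit and age fields of the hardware, and the window values shown are made up for illustration.

% Behavioural weighted median of a 3x3 window: each sample is replicated
% according to its weight and the median of the expanded list is taken.
window  = [120 118 255;  119 121 117;  116 122 120];   % example data values (illustrative)
weights = [1 3 1;  3 5 3;  1 3 1];                      % fixed centre-weighted mask from the text
expanded = repelem(window(:), weights(:));              % replicate each value by its weight
wm = median(expanded);                                   % weighted median output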

In the transmission of images over channels, images are corrupted by salt and pepper noise, due to

faulty communications. Salt and Pepper noise is also referred to as Impulse noise. The objective of

filtering is to remove the impulses so that the noise free image is fully recovered with minimum

signal distortion. Noise removal can be achieved by using a number of existing linear filtering

techniques which are popular because of their mathematical simplicity and the existence of the

unifying linear system theory. A major class of linear filters minimizes the mean squared error

(MSE) criterion and it provides optimum performance among all classes of filters if the noise is

additive and Gaussian. The best-known and most widely used non-linear digital filters, based on

order statistics, are median filters.

Median filters are known for their capability to remove impulse noise without damaging the edges.

The main drawback of Standard Median Filter (SMF) is that it is effective only for low noise density.

At high noise densities, standard median filters often result in loss of lines and sharp corners for large window sizes (5×5 or 7×7) and insufficient noise suppression for small window sizes (3×3). The

Alpha-Trimmed Mean Filter (ATMF) and Alpha-Trimmed Midpoint Filter (ATMP) are other types

of non-linear filters. These filters are also used to remove the impulse noise in which the parameter

"α", called the trimming factor, controls the number of values that are trimmed. It can be seen that as

the value of the trimming factor "α" increases, the ability of the filter to remove impulse noise is

further increased. However, when the noise density is as high as 50% and above, there is insufficient

noise-removal and loss of image edge details.

The reason for this loss in image edge details is due to the fact that these filters trim the extreme

values even if they are not impulse values. The decision-based filter is a class of impulse-rejecting filter; it replaces only the corrupted pixels by some nonlinear filter and leaves the uncorrupted pixels

unaltered. The decision whether a pixel is corrupted or not is determined by the local image

structure. Gouchol Pok and Jyh-Charn Lu proposed an algorithm to classify the uncorrupted pixels from the corrupted pixels; the uncorrupted pixels are used to train and predict the relationship between the centre pixel and its neighbouring pixels, which can then be used to replace the corrupted pixels. The main drawback of this

method is that at high noise density the probability of occurrence of corrupted pixels is greater than that of uncorrupted pixels. The prediction of original pixels from the rest of the uncorrupted pixels disturbs the

edges because it finds the variation between the centre pixel and its neighbouring pixels and


smooths it. As a result the edges get blurred, and it is a quite complex procedure. Recently,

in decision based algorithm (DBA), the corrupted pixels are replaced by either the median value of

the window or neighbourhood pixel, in contrast to other existing algorithms that use only median

value for replacement of corrupted pixels. At higher noise densities, the median value may also be a

noisy pixel, in which case, neighbourhood pixel is used for replacement from the previously

processed window. The main drawback of this method is that the quality of the restored image

degrades as the noise level increases above 60%. Since the neighbourhood pixel value is used for replacement when the median value is itself corrupted, streaking in the image becomes

persistent. In order to overcome the above mentioned difficulties, a new two-stage cascaded filter is

proposed in this paper, which removes as much noise as possible without blurring and retains the

fine edge details. The proposed algorithm contains a new kind of decision-based median filter and

un-symmetric filter which are connected in cascade. This decision based-median filter directly

replaces the corrupted pixels only with a median value of its neighbourhood pixels while the

uncorrupted pixels are left unchanged. The Un-symmetric Filter, as the name indicates, performs un-

symmetric trimming of the impulse values and averages the remaining pixels. The purpose of using

an un-symmetric filter is to remove only the impulse noise lying at the extreme ends, while the

original pixel values are retained.


CHAPTER 2

LITERATURE REVIEW

2.1 E. Abreu, M. Lightstone, S. K. Mitra and K. Arakawa, "A new efficient approach for the removal of impulse noise from highly corrupted images," IEEE Transactions on Image Processing, Vol. 5, pp. 1012-1025, 1996.

The simplest of these algorithms is the Mean Filter. The Mean Filter is a linear filter which uses a mask over each pixel in the signal. The components of the pixels which fall under the mask are averaged together to form a single pixel. This new pixel is then used to replace the pixel in the signal studied. The Mean Filter is poor at maintaining edges within the image.

$\text{MEAN FILTER}(x_1,\dots,x_N) = \frac{1}{N}\sum_{i=1}^{N} x_i$

2.2 Raymond H. Chan, Chung-Wa Ho and Mila Nikolova, "Salt-and-Pepper Noise Removal by Median-Type Noise Detectors and Detail-Preserving Regularization," IEEE Transactions on Image Processing, Vol. 14, No. 10, pp. 402-410, 2005.

The use of the median in signal processing was first introduced by J. W. Tukey. The Median Filter is performed by taking the magnitude of all of the vectors within a mask and sorting the magnitudes. The pixel with the median magnitude is then used to replace the pixel studied. The Simple Median Filter has an advantage over the Mean Filter in that it relies on the median of the data instead of the mean. A single noisy pixel present in the image can significantly skew the mean of a set. The median of a set is more robust with respect to the presence of noise.

$\text{MEDIAN FILTER}(x_1,\dots,x_N) = \text{MEDIAN}(\|x_1\|_2,\dots,\|x_N\|_2)$

When filtering using the Simple Median Filter, an original pixel and the resulting filtered pixel of the sample studied are sometimes the same pixel. A pixel that does not change due to filtering is known as the root of the mask. It can be shown that after sufficient iterations of median filtering, every signal converges to a root signal.

The Component Median Filter also relies on the statistical median concept. In the Simple Median Filter, each point in the signal is converted to a single magnitude. In the Component Median


Filter each scalar component is treated independently. A filter mask is placed over a point in the signal. For each component of each point under the mask, a single median component is determined.

$\text{CMF}(x_1,\dots,x_N) = \big(\text{MEDIAN}(x_{1,1},\dots,x_{N,1}),\ \dots,\ \text{MEDIAN}(x_{1,m},\dots,x_{N,m})\big)$, where $x_{i,j}$ denotes the $j$-th scalar component of $x_i$.

These components are then combined to form a new point, which is then used to represent the point in the signal studied. When working with color images, however, this filter regularly outperforms the Simple Median Filter. When noise affects a point in a grayscale image, the result is called “salt and pepper” noise. In colour images, this property of “salt and pepper” noise is typical of noise models where only one scalar value of a point is affected. For this noise model, the Component Median Filter is more accurate than the Simple Median Filter. The disadvantage of this filter is that it will create a new signal point that did not exist in the original signal, which may be undesirable in some applications.

2.3 Jaakko Astola, Petri Haavisto and Yrjo Neuvo, "Vector Median Filters," Proceedings of the IEEE, Vol. 78, No. 4, pp. 678-689, 1990.

The Vector Median Filter (VMF) was developed by Astola, Haavisto and Neuvo in 1990. In the VMF, a filter mask is placed over a single point.

$\text{VMF}(x_1,\dots,x_N) = \text{MIN}\left(\sum_{i=1}^{N}\|x_1 - x_i\|_2,\ \dots,\ \sum_{i=1}^{N}\|x_N - x_i\|_2\right)$

The sum of the vector magnitude differences using the L2 norm from each point to each other point within the mask is computed. The point with the minimum sum of vector differences is used to represent the point in the signal studied. The VMF is a well-researched filter and popular due to the extensive modifications that can be performed in conjunction with it.
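A direct MATLAB sketch of this minimum-distance selection over one mask is shown below; the mask layout (one colour vector per row) and the function name vector_median are illustrative assumptions.

% Sketch of the Vector Median Filter over one mask of N colour vectors.
% X is an N-by-3 matrix, one RGB vector per row; the output is the vector
% whose summed L2 distance to all other vectors in the mask is smallest.
function vm = vector_median(X)
    N = size(X, 1);
    dist_sum = zeros(N, 1);
    for j = 1:N
        diffs = X - repmat(X(j, :), N, 1);          % x_j - x_i for every i
        dist_sum(j) = sum(sqrt(sum(diffs.^2, 2)));  % sum of the L2 norms
    end
    [~, idx] = min(dist_sum);                       % minimum summed vector distance
    vm = X(idx, :);
end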

2.4 Kober Vitaly, Mozerov Mikhail and Alvarez-Borrego Josue, "Spatially Adaptive Algorithm for Impulse Noise Removal from Colour Images," Proceedings of the IEEE, Vol. 78, No. 4, pp. 678-689, 1990.

When transferring an image, sometimes transmission problems cause a signal to spike, resulting in one of the three point scalars transmitting an incorrect value. This type of transmission error is called

“salt and pepper” noise due to the bright and dark spots that appear on the image as a result of the noise. The ratio of incorrectly transmitted points to the total number of points is referred to as the noise composition of the image. The goal of a noise removal filter is to take a corrupted image as


input and produce an estimation of the original with no foreknowledge of the noise composition of the image.

In images containing noise, there are two challenges. The first challenge is determining noisy points. The second challenge is to determine how to adjust these points.

Among these points, the summed vector distance from each point to every other point within the filter is computed. The point in the signal with the smallest vector distance among these points is the minimum vector median.

The point in space that has the smallest distance to every other point is considered to be the best representative of the set. The original VMF approach does not consider whether the current point is original data or not. If a point has a small summed vector distance, yet is not the minimum vector median, it is replaced anyway. The advantage of replacing every point is that it achieves a uniform smoothing across the image. The disadvantage of replacing every point is that original data is sometimes overwritten. A good smoothing filter should simplify the image while retaining most of the original image shape and its edges. A benefit of a smoothed image is a better size ratio when the image needs to be compressed.

The Spatial Median Filter (SMF) is a new noise removal filter. The SMF and the VMF follow a similar algorithm and it will be shown that they produce comparable results. To improve the quality of the results of the SMF, a new parameter will be introduced and experimental data demonstrate the amount of improvement. The SMF is a uniform smoothing algorithm with the purpose of removing noise and fine points of image data while maintaining edges around larger shapes.

The SMF is based on the spatial median quantile function developed by P. Chaudhuri in 1996, which is an L1 norm metric that measures the difference between two vectors. R. Serfling noticed that a spatial depth could be derived by taking an invariant of the spatial median. He first gave the notion that any two vectors of a set could be compared based on their "centrality" using the Spatial Median. Y. Vardi and C. Zhang have improved the spatial median by deriving a faster estimation formula. The spatial depth between a point and a set of points is defined by,

$S_{\text{depth}}(X, x_1,\dots,x_N) = 1 - \frac{1}{N-1}\left\|\sum_{i=1}^{N}\frac{X - x_i}{\|X - x_i\|}\right\|$

The following is the basic algorithm for determining the Spatial Median of a set of points, x1,…, xN :


Let r1, r2, …, rN represent x1, x2, …, xN in rank order such that

$S_{\text{depth}}(r_1, x_1,\dots,x_N)\ \ge\ S_{\text{depth}}(r_2, x_1,\dots,x_N)\ \ge\ \dots\ \ge\ S_{\text{depth}}(r_N, x_1,\dots,x_N)$

and let rc represent the center pixel under the mask.

Then,

$\text{SMF}(x_1,\dots,x_N) = r_1$

The SMF is an unbiased smoothing algorithm and will replace every point that does not have the maximum spatial depth among its set of mask neighbours. The Modified Spatial Median Filter attempts to address these concerns.

2.5 Kwanghoon Sohn, Kyu-Cheol Lee and Jungeun Lim, "Impulsive Noise Filtering Based on Noise Detection in Corrupted Digital Colour Images," Circuits, Systems, and Signal Processing, Vol. 20, No. 6, pp. 643-648, 2001.

The SMF is similar to the VMF in that in both filters, the vectors are ranked by some criteria and the top ranking point is used to replace the center point. No consideration is made to determine if that center point is original data or not. The unfortunate drawback to using these filters is the smoothing that occurs uniformly across the image. Across areas where there is no noise, uncorrupted data is removed unnecessarily.

In the Modified Spatial Median Filter (MSMF), after the spatial depth of each point within the mask is computed, an attempt is made to use this information to first decide if the mask’s center point is an uncorrupted point. If the determination is made that a point is not corrupted, then the point will not be changed. We first calculate the spatial depth of every point within the mask and then sort these spatial depths in descending order. The point with the largest spatial depth represents the Spatial Median of the set. In cases where noise is determined to exist, this representative point then replaces the point currently located under the center of the mask. The point with the smallest spatial depth will be considered the least similar point of the set.


By ranking these spatial depths in the set in descending order, a spatial order statistic of depth levels is created. The smallest depth measures, representing points with the largest spatial difference from the others in the mask, are pushed to the end of the ordered set.

The MSMF is defined by,

$\text{MSMF}(T, x_1,\dots,x_N) = \begin{cases} r_c & \text{if } c \le T \\ r_1 & \text{if } c > T \end{cases}$

where c is the rank of the centre point rc in the spatial order statistic.

Two things should be noted about the use of T in this approach. When T is 1, this is equivalent to the unmodified SMF. When T is equal to the size of the mask, the center point will always fall within the first T ranks of the spatial order statistic and every point is determined to be original. This is equivalent to performing no filtering at all, since all of the points are left unchanged.
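A compact MATLAB sketch of the spatial depth and this MSMF selection rule over a single mask is given below; the function names msmf and sdepth, and passing the mask as an N-by-3 matrix, are assumptions made for illustration (with T = 1 the rule reduces to the plain SMF).

% MSMF over one mask: P is an N-by-3 matrix of mask vectors, row c is the
% centre pixel, and T is the acceptance threshold (T = 1 gives the plain SMF).
function out = msmf(P, c, T)
    N = size(P, 1);
    depths = zeros(N, 1);
    for i = 1:N
        depths(i) = sdepth(P(i, :), P);
    end
    [~, order] = sort(depths, 'descend');    % spatial order statistic r_1 .. r_N
    rank_c = find(order == c, 1);            % rank of the centre point in that ordering
    if rank_c <= T
        out = P(c, :);                       % centre point judged original: keep it
    else
        out = P(order(1), :);                % otherwise output the Spatial Median r_1
    end
end

% Spatial depth of the point X with respect to the mask points (rows of P).
function d = sdepth(X, P)
    N = size(P, 1);
    diffs = repmat(X, N, 1) - P;             % X - x_i for every mask point
    norms = sqrt(sum(diffs.^2, 2));
    norms(norms == 0) = 1;                   % the X = x_i term contributes zero
    d = 1 - norm(sum(diffs ./ repmat(norms, 1, size(P, 2)), 1)) / (N - 1);
end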

CHAPTER 3


MEDIAN FILTERS

3.1 INTRODUCTION

Weighted median filters have been shown to be effective in the removal of impulsive noise

from a wide variety of video images.  Median filters in general, however, are computationally

expensive to implement because a running sort of the incoming data must be

performed.  Weighted median filters, although they remove noise more effectively than standard

median filters, are even more difficult to implement because the traditional software-based approach

has been to increase the amount of sorting that is done, proportional to the weighting values used.  

Because of the computational difficulties associated with the use of the algorithm, the use of

weighted median filters in video processing applications has mostly been limited to non-real-time

software implementations.  This paper describes a fairly simple solution to the problem, and allows a

high-speed, low-complexity hardware solution to be built for real-time applications.

The technique described in this paper is unique because it uses three additional information

fields that are appended to the incoming data words as they are being processed.  These three fields,

the weight value, the new bit, and the age value are updated appropriately by associated control

logic, and then stored within the array.  Advantages of this technique are that a minimal amount of

extra storage is used and the control logic necessary to update the three extra fields is of low

complexity.  Because of this, relatively inexpensive real-time hardware implementations may be

accomplished by using devices such as Field Programmable Gate Array (FPGA) chips.

3.2 TECHNICAL DESCRIPTION

The standard two-dimensional median filter is good at removing impulse noise from video

images, but the price that is paid is that the image tends to get “softer”, as there is a low-pass filtering

effect.  This effect may be minimized by the use of a weighted two-dimensional median filter, whose

weights may be either permanently fixed or adaptively modified depending on the particular criteria

being satisfied.  This description does not go into details on methods for the adaptation process itself,

as that is a totally independent problem from that of the weighted median filter calculations

themselves. 

The mechanism described is also independent of the width of the data values to be filtered,

the NxN size of the data array, and the actual weight values themselves.  For the sake of simplicity,


examples will be given detailing the use of 8-bit data values, a 3x3 size, and a fixed array of weights

(1, 3, 1, 3, 5, 3, 1, 3, 1 in linear order).  These values are the ones that were used in software

simulations and VHDL hardware descriptions.

The basic updating scheme for the 3x3 matrix is shown in Figure 1.  It should be noted that

this method is not limited to the 3x3 example given here, and it is straightforward to extend it to any

arbitrarily sized filter.  It is simpler to think in single-dimensional terms, so the data values are

linearized to appear to be a 9x1 filter block.  The data is stored in an array of N² + 1 register or memory locations.  In the example case, 3² + 1 = 10, so there are 10 registers allocated for data

storage. In the first part of the example, the nine 8-bit values are pre-sorted so that the current median

filter output will be the sample in the exact middle of the register array, or the output of register #(N²

– 1)/2, which in this example is register #4.  A new data value is about to be read in during the next

cycle. The new data value is compared to all of the values currently stored in the array, and is placed

in the correct location immediately.  Lower values are shifted to the right, and the lowest value

“overflows” into the extra register at the end of the array (register #9).

In the second half of the cycle, the data tags are read to determine which data value to remove

from the array, and all data values to the right of it are now shifted left, stopping at the point where

the oldest previous value was removed from.  After these operations are completed, the median value

is again the value in the middle of the array, or the one located in register #4 of this example. The

first data value to be input is assigned a weight of “1”, the second is assigned a weight of “3”, and

the third is assigned a weight of “1”.  This weighting is now maintained and updated within the

sorting array registers, so another three bits must be assigned to the register length to accommodate

the highest possible weight value of 5.  Because of this, the actual width of this bit field in an

implementation must be wide enough to accommodate a binary representation of the highest weight

value that will be used. 

A final single bit is also associated with each register within the array, and that bit is used to

tell the internal logic which three weight values have just been entered as new values, and which six

remaining weights need to be updated reflective of their new positions in the data matrix.  As this

value merely tells the internal logic which data values are new and which ones are not, the width of

this field does not have to change.  Because of these extra bits associated with weight maintenance,

each register within the sorting array must be at least 16 bits wide in this 3x3 weighted median filter

implementation, and possibly more depending on the other factors described previously. 


A fourth clocking cycle is added compared to the 3x3 median filter implementation to allow

for weight updates to occur within the sorted array.  Because of this, the total time necessary for an

output is eight clock cycles.  The use of eight clock cycles is somewhat arbitrary, as the example

implementation is done using general-purpose registers constructed of flip-flops.  A memory-based

storage array could also be designed that would result in the use of fewer clock cycles for a particular

output.  Note that the basic sorting operation of the register array is accomplished in the same

manner as in Figure 1, but the middle register value is not automatically output as the answer. 

Further processing via the data value weights is necessary, and there are basically two methods for

doing this.  After analysing the hardware designs for both approaches, the second method was

chosen for the final implementation, as it is considerably simpler.  

In the transmission of images over channels, images are corrupted by salt and pepper noise,

due to faulty communications. Salt and Pepper noise is also referred to as Impulse noise. The

objective of filtering is to remove the impulses so that the noise free image is fully recovered with

minimum signal distortion. Noise removal can be achieved, by using a number of existing linear

filtering techniques which are popular because of their mathematical simplicity and the existence of

the unifying linear system theory. A major class of linear filters minimizes the mean squared error

(MSE) criterion and it provides optimum performance among all classes of filters if the noise is

additive and Gaussian. The best-known and most widely used non-linear digital filters, based on

order statistics, are median filters.

Median filters are known for their capability to remove impulse noise without damaging the

edges. The main drawback of Standard Median Filter (SMF) is that it is effective only for low noise

density. At high noise densities, standard median filters often result in loss of lines and sharp corners for large window sizes (5×5 or 7×7) and insufficient noise suppression for small window sizes (3×3). The Alpha-Trimmed Mean Filter (ATMF) and Alpha-Trimmed Midpoint Filter (ATMP)

are other types of non-linear filters. These filters are also used to remove the impulse noise in which

the parameter "α", called the trimming factor, controls the number of values that are trimmed. It can

be seen that as the value of the trimming factor "α" increases, the ability of the filter to remove

impulse noise is further increased. However, when the noise density is as high as 50% and above,

there is insufficient noise-removal and loss of image edge details. The reason for this loss in image

edge details is due to the fact that these filters trim the extreme values even if they are not impulse

values. The decision-based filter is a class of impulse-rejecting filter; it replaces only the corrupted pixels by some nonlinear filter and leaves the uncorrupted pixels unaltered.


The decision whether a pixel is corrupted or not is determined by the local image structure.

Gouchol Pok and Jyh-Charn Lu proposed an algorithm to classify the uncorrupted pixels from the corrupted pixels; the uncorrupted pixels are used to train and predict the relationship between the centre pixel and its neighbouring pixels, which can then be used to replace the corrupted pixels. The main drawback of this

method is that at high noise density the probability of occurrence of corrupted pixels is greater than that of uncorrupted pixels. The prediction of original pixels from the rest of the uncorrupted pixels disturbs the

edges because it finds the variation between the centre pixel and its neighbouring pixels and

smooths it. As a result the edges get blurred, and it is a quite complex procedure. Recently,

in the decision based algorithm (DBA), the corrupted pixels are replaced by either the median value of

the window or neighbourhood pixel, in contrast to other existing algorithms that use only median

value for replacement of corrupted pixels.

At higher noise densities, the median value may also be a noisy pixel, in which case,

neighbourhood pixel is used for replacement from the previously processed window. The main

drawback of this method is that the quality of the restored image degrades as the noise level

increases above 60%. Since the neighbourhood pixel value is used for replacement when the median value is itself corrupted, streaking in the image becomes persistent. In order to overcome the

above mentioned difficulties, a new two-stage cascaded filter is proposed in this paper which

removes as much noise as possible without blurring and retains the fine edge details. The

proposed algorithm contains a new kind of decision-based median filter and un-symmetric filter

which are connected in cascade. This decision based-median filter directly replaces the corrupted

pixels only with a median value of its neighbourhood pixels while the uncorrupted pixels are left

unchanged. The Un-symmetric Filter, as the name indicates, performs un-symmetric trimming of the

impulse values and averages the remaining pixels. The purpose of using an un-symmetric filter is to

remove only the impulse noise lying at the extreme ends, while the original pixel values are retained.

3.3 DECISION BASED MEDIAN FILTER

In DBUTM, the corrupted pixels are identified and processed. The DBUTM algorithm checks

whether the left and right extreme values of the sorted array obtained from the 3x3 window are

impulse values. These impulse values are trimmed from the 3x3 window, and the corrupted processing pixel is replaced by the median of the resulting array.


3.3.1 Shear Sorting Algorithm

Sorting is the most important operation used to find the median of a window. There are various

sorting algorithms such as binary sort, bubble sort, merge sort, quick sort etc. In the proposed

algorithm, shear sorting technique is used since it is based on parallel architecture. In practice the

parallel architectures help to reduce the number of logic cells required for its implementation. The

illustration of shear sorting is shown in Figures 3.3.1(a)-3.3.1(d). In the odd phases (1, 3, 5, …) even rows are sorted in descending order and odd rows are sorted in ascending order. In the even phases, the columns are sorted independently in ascending order.
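A small MATLAB sketch of these alternating row and column phases is shown below; the number of phases used and the function name shear_sort are illustrative assumptions.

% Shear sorting sketch for a square matrix A: odd phases sort the rows in a
% snake pattern (odd rows ascending, even rows descending), even phases sort
% every column in ascending order.
function A = shear_sort(A)
    n = size(A, 1);
    for phase = 1:ceil(2 * log2(n)) + 1
        if mod(phase, 2) == 1                       % odd phase: row sorting
            for r = 1:n
                if mod(r, 2) == 1
                    A(r, :) = sort(A(r, :), 'ascend');
                else
                    A(r, :) = sort(A(r, :), 'descend');
                end
            end
        else                                        % even phase: column sorting
            for c = 1:n
                A(:, c) = sort(A(:, c), 'ascend');
            end
        end
    end
end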

Figure 3.3.1(a) Original matrix before sorting

Figure 3.3.1(b) Row sorting

Figure 3.3.1(c) Column sorting

Figure 3.3.1(d) Row sorting

Figure 3.2(b) Illustration of DBM

Illustration

Figure 3.2(b) shows an image matrix. In the first window at the left end, the processing pixel is ‘57’

which lies between ‘0’ and ‘255’. Hence the processing pixel is uncorrupted and left unchanged.

However, in the second window, the centre processing pixel ‘255’ which is noisy is replaced by the

median of the neighbourhood pixels which is found by eliminating ‘0’ and ‘255’ among the

neighbourhood pixels (here median=63).

3.4 IMAGE COMPRESSION

Image compression is the application of data compression on digital images. In effect, the

objective is to reduce redundancy of the image data in order to be able to store or transmit data in an

efficient form.

Image compression can be lossy or lossless. Lossless compression is preferred for archival

purposes and often medical imaging, technical drawings, clip art or comics. This is because lossy

compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy

methods are especially suitable for natural images such as photos in applications where minor

(sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate.

The lossy compression that produces imperceptible differences can be called visually lossless.

3.5 METHODS FOR LOSSLESS IMAGE COMPRESSION:

Run-length encoding – used as the default method in PCX and as one of the possible methods in BMP, TGA and TIFF

DPCM and Predictive Coding

Entropy encoding

Adaptive dictionary algorithms such as LZW – used in GIF and TIFF

Deflation – used in PNG, MNG and TIFF

Chain codes


3.6 METHODS FOR LOSSY COMPRESSION:

Reducing the color space to the most common colors in the image. The selected colors are

specified in the color palette in the header of the compressed image. Each pixel just references the

index of a color in the color palette. This method can be combined with dithering to avoid

posterization.

Chroma subsampling:

This takes advantage of the fact that the eye perceives spatial changes of brightness more

sharply than those of color, by averaging or dropping some of the chrominance information in the

image.

Transform coding:

This is the most commonly used method. A Fourier-related transform such as DCT or the

wavelet transform are applied, followed by quantization and entropy coding.

Fractal compression:

In fractal compression, possible self-similarity within the image is identified and used to reduce the amount of data required to reproduce the image.

The best image quality at a given bit-rate (or compression rate) is the main goal of image compression. However, there are other important properties of image compression schemes:

3.7 SCALABILITY

Scalability generally refers to a quality reduction achieved by manipulation of the bit stream

or file (without decompression and re-compression). Other names for scalability are progressive

coding or embedded bit streams. Despite its contrary nature, scalability can also be found in lossless

codecs, usually in form of coarse-to-fine pixel scans. Scalability is especially useful for previewing

images while downloading them (e.g. in a web browser) or for providing variable quality access to

e.g. databases.

3.8 TYPES OF SCALABILITY

There are several types of scalability:

Quality progressive or layer progressive: The bitstream successively refines the

reconstructed image.


Resolution progressive: First encode a lower image resolution; then encode the difference

to higher resolutions.

Component progressive: First encode grey; then color.

3.9 REGION OF INTEREST CODING:

Certain parts of the image are encoded with higher quality than others. This can be combined

with scalability (encode these parts first, others later).

3.10 META INFORMATION:

Compressed data can contain information about the image which can be used to categorize,

search or browse images. Such information can include color and texture statistics, small preview

images and author/copyright information.

3.11 PROCESSING POWER:

Compression algorithms require different amounts of processing power to encode and

decode. Some high compression algorithms require high processing power.

3.12 PEAK SIGNAL-TO-NOISE RATIO

The quality of a compression method is often measured by the Peak signal-to-noise ratio. It

measures the amount of noise introduced through a lossy compression of the image. However, the

subjective judgement of the viewer is also regarded as an important, perhaps the most important,

measure.

Compressing an image is significantly different than compressing raw binary data. Of course,

general purpose compression programs can be used to compress images, but the result is less than

optimal. This is because images have certain statistical properties which can be exploited by

encoders specifically designed for them. Also, some of the finer details in the image can be sacrificed

for the sake of saving a little more bandwidth or storage space. This also means that lossy

compression techniques can be used in this area.

Lossless compression involves compressing data which, when decompressed, will be an

exact replica of the original data. This is the case when binary data such as executables, documents

etc. are compressed. They need to be exactly reproduced when decompressed. On the other hand,

images (and music too) need not be reproduced 'exactly'. An approximation of the original image is

enough for most purposes, as long as the error between the original and the compressed image is


tolerable.

3.13 ERROR METRICS

Two of the error metrics used to compare the various image compression techniques are the

Mean Square Error (MSE) and the Peak Signal to Noise Ratio (PSNR). The MSE is the cumulative

squared error between the compressed and the original image, whereas PSNR is a measure of the

peak error. The mathematical formulae for the two are
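The standard definitions, assuming an 8-bit image with peak value 255, are:

$\text{MSE} = \frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\big[I(x,y) - I'(x,y)\big]^2$

$\text{PSNR} = 10\,\log_{10}\frac{255^2}{\text{MSE}}\ \text{dB}$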

where I(x,y) is the original image, I'(x,y) is the approximated version (which is actually the

decompressed image) and M,N are the dimensions of the images. A lower value for MSE means

lesser error, and as seen from the inverse relation between the MSE and PSNR, this translates to a

high value of PSNR. Logically, a higher value of PSNR is good because it means that the ratio of

Signal to Noise is higher. Here, the 'signal' is the original image, and the 'noise' is the error in

reconstruction. So, if you find a compression scheme having a lower MSE (and a high PSNR), you

can recognise that it is a better one.

3.14 THE OUTLINE

We'll take a close look at compressing grey scale images. The algorithms explained can be

easily extended to colour images, either by processing each of the colour planes separately, or by

transforming the image from RGB representation to other convenient representations like YUV in

which the processing is much easier.

The usual steps involved in compressing an image are

1. Specifying the Rate (bits available) and Distortion (tolerable error) parameters for the target

image.

2. Dividing the image data into various classes, based on their importance.

3. Dividing the available bit budget among these classes, such that the distortion is a minimum.

4. Quantize each class separately using the bit allocation information derived in step 3.

5. Encode each class separately using an entropy coder and write to the file.


Remember, this is how 'most' image compression techniques work. But there are exceptions. One

example is the Fractal Image Compression technique, where possible self similarity within the image

is identified and used to reduce the amount of data required to reproduce the image. Traditionally

these methods have been time consuming, but some latest methods promise to speed up the process.

Literature regarding fractal image compression can be found elsewhere.

Reconstructing the image from the compressed data is usually a faster process than

compression. The steps involved are

1. Read in the quantized data from the file, using an entropy decoder. (Reverse of step 5).

2. Dequantize the data. (Reverse of step 4).

3. Rebuild the image. (Reverse of step 2).

3.15 DOWNSAMPLING AND UPSAMPLING METHODS

Downsampling and upsampling are widely used in image display, compression, and

progressive transmission. In this paper we examine new down/upsampling methods using frequency

response analysis and experimental evaluation. We consider six classes of filters for

down/upsampling: decimation/duplication, bilinear interpolation, least-squares filters, orthogonal

wavelets, biorthogonal wavelets, and a new class that we term binomial filters.

Downsampling and upsampling are two fundamental and widely used image operations, with

applications in image display, compression, and progressive transmission. Downsampling is the

reduction in spatial resolution while keeping the same two-dimensional (2D) representation. It is

typically used to reduce the storage and/or transmission requirements of images. Upsampling is the increase of the spatial resolution while keeping the 2D representation of an image. It is typically

used for zooming in on a small region of an image, and for eliminating the pixelation effect that

arises when a low-resolution image is displayed on a relatively large frame. More recently,

downsampling and upsampling have been used in combination: in lossy compression,

multiresolution lossless compression, and progressive transmission.

The standard methods for down/upsampling are decimation/duplication and bilinear

interpolation which yield low visual performance. The increasing uses of down/upsampling,

especially in combination, warrant the development of better methods for them. In this paper, we


examine the existing methods and propose new down/upsampling-combination methods, and

formulate a frequency-response approach for evaluating them. The approach is validated

experimentally. Our findings show that the best down/upsampling filters are what we term binomial

filters and some well-chosen biorthogonal wavelets. Bilinear interpolation was found significantly

inferior, and decimation/duplication came last.
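For concreteness, a minimal MATLAB sketch of the two baseline operations (decimation/duplication and bilinear interpolation, the latter via imresize from the Image Processing Toolbox) is given below; the file name is a placeholder, and the binomial and wavelet filters discussed above are not implemented here.

% Baseline down/upsampling by a factor of 2: decimation/duplication versus
% bilinear interpolation.
img = double(imread('lena.png'));            % assumed grayscale test image (placeholder name)

down_dec = img(1:2:end, 1:2:end);            % decimation: keep every other pixel
up_dup   = kron(down_dec, ones(2));          % duplication: repeat each pixel as a 2x2 block

down_bil = imresize(img, 0.5, 'bilinear');   % bilinear downsampling
up_bil   = imresize(down_bil, 2, 'bilinear');% bilinear upsampling back to full size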


CHAPTER 4

PROPOSED ALGORITHM

4.1 DECISION-BASED MEDIAN FILTERING (DMF)

The first stage of the proposed algorithm is the decision-based median filter. In the standard median filter (SMF), each and every pixel value is replaced by the median of its neighbourhood values. The decision-based median filter, however, makes a rough approximation of whether a pixel is corrupted or not. The uncorrupted pixel values are retained, and the corrupted pixels are replaced by the median value of their neighbours. Hence the output of the DMF is superior to that of the SMF.

The algorithm for DMF works as follows:

Step 1: A 2-D window ‘Sxy’ of size 3x3 is selected. Let Zxy be the processing pixel, which lies in the centre of the window.

Step 2: If 0 < Zxy < 255, then Zxy is considered to be a noise-free pixel and it is left unchanged.

Step 3: Otherwise, calculate the median of the pixels in the window and replace the processing pixel

by the median value.

Step 4: Move the 3x3 window to the next pixel in the image.

The above steps are repeated for the entire image and the DMF output obtained is subjected to

further processing. Figure 4.1 shows the structure of the DMF.
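A minimal MATLAB sketch of these steps is given below; the function name dmf and the border handling (padarray from the Image Processing Toolbox) are assumptions made for illustration.

% Decision-based median filter (DMF): only pixels equal to 0 or 255 are
% treated as corrupted and replaced by the median of their 3x3 window.
function out = dmf(img)
    img = double(img);
    padded = padarray(img, [1 1], 'symmetric');
    out = img;
    for r = 1:size(img, 1)
        for c = 1:size(img, 2)
            z = img(r, c);
            if z > 0 && z < 255
                continue;                            % Step 2: noise-free, leave unchanged
            end
            window = padded(r:r+2, c:c+2);           % Step 1: 3x3 window Sxy
            out(r, c) = median(window(:));           % Step 3: replace by the window median
        end
    end
end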


Figure 4.1 Structure of the DMF

4.2 UN-SYMMETRIC TRIMMED FILTERING (UTF)

The idea behind a trimmed filter is to reject the most probable outliers. In the existing Alpha-

Trimmed (Mean and Midpoint) Filtering (ATF), the trimming is symmetric at either end. The

number of values being trimmed depends upon the trimming factor "α". If the value of α=4, then

"α/2" values are trimmed on both the ends and the remaining pixels are averaged. It is observed that

higher the value of "α", greater is the noise suppression.

In ATF, sometimes even un-corrupted pixel values are also trimmed. This eventually leads to

loss of image detail and blurring of the image. In order to overcome the above problem an un-

symmetric filtering algorithm is proposed. In this algorithm, the trimming is un-symmetric i.e. the

numbers of pixels trimmed at the two ends are not always equal. The UTF checks whether the

extreme values of the sorted array, obtained from the 3x3 window, are impulse values and trims only

those impulse values. For example, if a 3 x 3 window does not have any impulse, the UTF would not

trim any values. On the contrary, the ATF would trim values, irrespective of whether the 3 x 3

window has an impulse or not. This property of UTF makes it more efficient in noise suppression

than the existing alpha trimmed filters. The UTF may be a mean filter or midpoint filter. The

partially restored image from DMF filter can be further enhanced through Un-symmetric Trimmed

Mean filter (UTMF) and Un-symmetric Trimmed Mid-Point filter (UTMP).


4.2.1 UN-SYMMETRIC TRIMMED MEAN FILTERING (UTMF)

Step 1: A 2-D window ‘Sxy’ of size 3x3 is selected.

Step 2: The pixel values in the window are stored in a 1-D array.

Step 3: If any pixel value in the array is either '0' or '255', the corresponding pixel values are

eliminated from the array. Otherwise, all the 9 elements are retained.

Step 4: The pixel being processed is replaced by the mean of the values in the 1-D array.

Move the window by one step and repeat from step 1 to step 4. The above steps are repeated, until

the processing is completed for the entire image. Figure 4.2 shows the structure of the UTMF.
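A MATLAB sketch of the UTMF steps follows; as in the DMF sketch, the function name and border handling are illustrative, and a window made up entirely of impulses is simply left for the next stage here.

% Un-symmetric trimmed mean filter (UTMF): impulse values (0 and 255) are
% trimmed from the 3x3 window and the processing pixel is replaced by the
% mean of whatever remains.
function out = utmf(img)
    img = double(img);
    padded = padarray(img, [1 1], 'symmetric');
    out = img;
    for r = 1:size(img, 1)
        for c = 1:size(img, 2)
            window = padded(r:r+2, c:c+2);             % Step 1: 3x3 window
            vals = window(:);                          % Step 2: 1-D array
            keep = vals(vals > 0 & vals < 255);        % Step 3: trim 0 and 255
            if ~isempty(keep)
                out(r, c) = mean(keep);                % Step 4: replace by the mean
            end
        end
    end
end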

Fig.4.2 Structure of UTMF

4.2.2 UN-SYMMETRIC TRIMMED MIDPOINT FILTERING (UTMP)

In UTMP, the impulses are trimmed unsymmetrically from both the ends, and midpoint is calculated

from the maximum and minimum of the remaining values.

The algorithm for UTMP is as follows:

Step 1: A 2-D window ‘Sxy’ of size 3x3 is selected.


Step 2: The pixel values in the window are sorted in ascending order, and stored in a 1-D array.

Step 3: If the pixel value in the array is either ‘0’ or ‘255’, the corresponding pixel values are

trimmed or eliminated, and the midpoint of remaining values is calculated.

Step 4: The pixel being processed is replaced by the midpoint value calculated.

Step 5: If all the values are eliminated then the pixel is replaced by the processed neighbouring pixel

value of the array. Move the window by one step, and repeat from step 2 to step 5. The above steps

are repeated until the processing is completed for the entire image. Figure 4.3 shows the structure of the UTMP.
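A corresponding MATLAB sketch of the UTMP follows, with the same illustrative assumptions about the function name and border handling.

% Un-symmetric trimmed midpoint filter (UTMP): impulses are trimmed and the
% pixel is replaced by the midpoint of the remaining values; if everything is
% trimmed, the previously processed neighbouring pixel is reused (Step 5).
function out = utmp(img)
    img = double(img);
    padded = padarray(img, [1 1], 'symmetric');
    out = img;
    for r = 1:size(img, 1)
        for c = 1:size(img, 2)
            vals = sort(reshape(padded(r:r+2, c:c+2), [], 1));   % Step 2: sorted 1-D array
            keep = vals(vals > 0 & vals < 255);                  % Step 3: trim impulses
            if ~isempty(keep)
                out(r, c) = (min(keep) + max(keep)) / 2;         % Step 4: midpoint
            elseif c > 1
                out(r, c) = out(r, c - 1);                       % Step 5: previous neighbour
            end
        end
    end
end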

Fig. 4.3 Structure of UTMP

4.3 CASCADED FILTER

The DMF is superior to the SMF because it removes only corrupted pixels. The disadvantage of

the DMF is that the median value may itself be a noisy pixel, and as a result the DMF fails to remove the noise effectively at high noise density. The UTF has better noise removing ability than the existing

ATF, owing to the fact that it uses an un-symmetric trimming method to discard only the

impulse values. The UTF completely removes the noise even when the noise density is 90%, but it also

slightly blurs the edges if the noise density is 80% and above.


Thus a new class of cascaded filtering algorithm is proposed. In this structure DMF is

cascaded with UTF, to further improve the output obtained from the DMF. The noisy image is first

processed using the DMF. The output of DMF is given as the input to the UTF. The UTF may be a

UTMF or UTMP. The Proposed Algorithm 1 (PA1) is the cascaded version of DMF and UTMF

whereas Proposed Algorithm 2 (PA2) is the cascaded version of DMF and UTMP.

Fig. 4.4 Cascaded filter

The UTMF performs well for noise densities of 70% and less, but the performance of the UTMP is better than the UTMF if the noise density is above 70%. The cascaded connection is hence used to remove

impulse noise with a noise density as high as 90%. The cascaded connection yields the highest value

of Peak Signal to Noise Ratio (PSNR) and lower values for Mean Square Error (MSE) and Mean

Absolute Error (MAE).
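Using the hypothetical dmf, utmf and utmp functions sketched earlier, the two cascaded variants reduce to simple function composition:

% Cascaded filters: the DMF output is fed to the un-symmetric trimmed stage.
restored_pa1 = utmf(dmf(noisy_image));   % Proposed Algorithm 1: DMF followed by UTMF
restored_pa2 = utmp(dmf(noisy_image));   % Proposed Algorithm 2: DMF followed by UTMP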

4.4 PSNR

The proposed algorithm is tested using the 256×256, 8-bit/pixel image Lena (gray). The

performance of the proposed algorithm is tested for various levels of noise corruption and compared

with standard filters namely standard median filter (SMF), weighted median filter (WMF), recursive

weighted median filter (RWM), adaptive median filter (AMF) and decision based algorithm (DBA).

Each time, the test image is corrupted by salt and pepper noise of density ranging from 10% to 90% in increments of 10% and is applied to the various filters. In addition to the visual quality,

the performance of the proposed algorithms (PA1, PA2) and other standard algorithms is quantitatively measured by the following parameters: peak signal-to-noise ratio (PSNR), mean absolute error (MAE) and mean square error (MSE).
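A MATLAB sketch of how these measures could be computed for the 8-bit test image is given below; the file name is a placeholder, and imnoise (Image Processing Toolbox) is assumed as the way to generate the salt and pepper test noise.

% Quality metrics between an original 8-bit image and a restored one.
orig     = double(imread('lena.png'));                 % assumed 256x256 test image (placeholder)
noisy    = imnoise(uint8(orig), 'salt & pepper', 0.5); % 50% noise density (example)
restored = double(utmf(dmf(noisy)));                   % proposed cascaded filter (PA1)

mse  = mean((orig(:) - restored(:)).^2);               % mean square error
mae  = mean(abs(orig(:) - restored(:)));               % mean absolute error
psnr_db = 10 * log10(255^2 / mse);                     % peak signal-to-noise ratio in dB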


IMPLEMENTATION FOR VIDEO SEQUENCE

The video sequence is first converted into frames and the frames into images. The algorithm is then applied to the images separated from the frames. After the filtering process, the frames are converted back into the original movie.

Algorithm:

Video to frames:

The noisy video sequence containing impulse noise is converted into AVI format, which is an uncompressed format, and the frames are extracted from the

video.

Frames to images:

Frames are then converted to images for further processing.

Filtering method:


The noisy images are denoised using the DBM and UTMF algorithms.

Frames to movie:

After completing the entire process, the processed frames are finally converted back into the original movie.
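A minimal MATLAB sketch of this pipeline is given below. It assumes the cascaded filter is available as a function cascadedFilter (a placeholder name), that the noisy input has already been saved as noisy.avi, and that the source frames are RGB; the file names are illustrative only.

vin  = VideoReader('noisy.avi');                    % noisy input video (uncompressed AVI)
vout = VideoWriter('denoised.avi', 'Uncompressed AVI');
open(vout);
while hasFrame(vin)
    frame    = readFrame(vin);                      % video to frame
    img      = rgb2gray(frame);                     % frame to image (gray-scale processing assumed)
    filtered = cascadedFilter(img);                 % filtering method (DMF followed by UTMF/UTMP)
    writeVideo(vout, filtered);                     % processed frame back into the output movie
end
close(vout);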

4.5 THE IMAGE PROCESSING SYSTEM

The term digital image processing refers to the processing of a two-dimensional picture by a digital computer. In a broader context, it implies digital processing of any two-dimensional data. A digital image is an array of real or complex numbers represented by a finite number of bits. An image given in the form of a transparency, slide, photograph or an X-ray is first digitized and stored as a matrix of binary digits in computer memory. This digitized image can then be processed and/or displayed on a high-resolution television monitor. For display, the image is stored in a rapid-access buffer memory, which refreshes the monitor at a rate of 25 frames per second to produce a visually continuous display. A typical digital image processing system is shown in the block diagram below.

4.5.1 DIGITIZER

A digitizer converts an image into a numerical representation suitable for input into a

digital computer. Some common digitizers are

1. Microdensitometer


Fig. 3.5 Block Diagram of a Typical Image Processing System (blocks: digitizer, image processor, digital computer, mass storage, display, hard copy device, and operator console)


2. Flying spot scanner

3. Image dissector

4. Vidicon camera

5. Photosensitive solid-state arrays.

4.5.2 IMAGE PROCESSOR

An image processor performs the functions of image acquisition, storage, pre-processing, segmentation, representation, recognition and interpretation, and finally displays or records the resulting image. The following block diagram gives the fundamental sequence involved in an image processing system.

As detailed in the diagram, the first step in the process is image acquisition by an

imaging sensor in conjunction with a digitizer to digitize the image. The next step is pre-processing, where the image is improved before being fed as an input to the other processes.

Preprocessing typically deals with enhancing, removing noise, isolating regions, etc. Segmentation

partitions an image into its constituent parts or objects. The output of segmentation is usually raw

pixel data, which consists of either the boundary of the region or the pixels in the region themselves.

Representation is the process of transforming the raw pixel data into a form useful for subsequent

processing by the computer. Description deals with extracting features that are basic in

differentiating one class of objects from another. Recognition assigns a label to an object based on

the information provided by its descriptors. Interpretation involves assigning meaning to an

ensemble of recognized objects. The knowledge about a problem domain is incorporated into the

knowledge base. The knowledge base guides the operation of each processing module and also


Fig. 4.6 Block Diagram of the Fundamental Sequence Involved in an Image Processing System (blocks: problem domain, image acquisition, preprocessing, segmentation, representation & description, recognition & interpretation, knowledge base, and result)


controls the interaction between the modules. Not all modules need necessarily be present for a specific function. The composition of the image processing system depends on its application.

The frame rate of the image processor is normally around 25 frames/second.

4.5.3 DIGITAL COMPUTER

Mathematical processing of the digitized image such as convolution, averaging, addition,

subtraction etc. are done by the computer.

4.5.4 MASS STORAGE

The secondary storage devices normally used are floppy disks, CD ROMs etc.

4.5.5 HARD COPY DEVICE

The hard copy device is used to produce a permanent copy of the image and for the storage of

the software involved.

4.5.6 OPERATOR CONSOLE

The operator console consists of equipment and arrangements for verification of intermediate results and for alterations in the software as and when required. The operator can also check for any resulting errors and enter the requisite data.

4.6 VLSI CIRCUITS DESIGN

Digital VLSI circuits are predominantly CMOS based. The way normal blocks like latches and gates are implemented differs from conventional discrete implementations, but the behaviour remains the same. All this miniaturisation brings new things to consider: a lot of thought has to go into the actual implementation as well as the design. Let us look at some of the factors involved.

1. Circuit delays. Large, complicated circuits running at very high frequencies have one big problem to tackle: the delays in the propagation of signals through gates and wires, even over areas only a few micrometres across. The operating frequencies are so high that, as the delays add up, they can become comparable to the clock period.

2. Power. Another effect of high operating frequencies is increased power consumption. This has a two-fold effect: devices drain batteries faster, and heat dissipation increases. Coupled with the


fact that surface areas have decreased, heat poses a major threat to the stability of the circuit itself.

3. Layout. Laying out the circuit components is a task common to all branches of electronics. What is special in our case is that there are many possible ways to do this: there can be multiple layers of different materials on the same silicon, there can be different arrangements of the smaller parts for the same component, and so on.

The power dissipation and speed of a circuit present a trade-off; if we try to optimise one, the other is affected. The choice between the two is determined by the way we choose to lay out the circuit components. Layout can also affect the fabrication of VLSI chips, making it either easy or difficult to implement the components on the silicon.

4.7 VHDL

4.7.1 About VHDL

VHDL is a hardware description language. It describes the behaviour of an electronic circuit or

system, from which the physical circuit or system can then be attained (implemented).

VHDL stands for VHSIC Hardware Description Language. VHSIC is itself an abbreviation for Very

High Speed Integrated Circuits, an initiative funded by the United States Department of Defence in

the 1980s that led to the creation of VHDL. Its first version was VHDL 87, later upgraded to the so-

called VHDL 93. VHDL was the original and first hardware description language to be standardized

by the Institute of Electrical and Electronics Engineers, through the IEEE 1076 standard. An

additional standard, the IEEE 1164, was later added to introduce a multi-valued logic system.

VHDL is intended for circuit synthesis as well as circuit simulation. However, though VHDL is fully

simulatable, not all constructs are synthesizable. We will give emphasis to those that are. A

fundamental motivation to use VHDL (or its competitor, Verilog) is that VHDL is a standard,

technology/vendor independent language, and is therefore portable and reusable. The two main

immediate applications of VHDL are in the field of Programmable Logic Devices (including CPLDs

—Complex Programmable Logic Devices and FPGAs—Field Programmable Gate Arrays) and in the

field of ASICs (Application Specific Integrated Circuits). Once the VHDL code has been written, it

can be used either to implement the circuit in a programmable device (from Altera, Xilinx, Atmel,

etc.) or can be submitted to a foundry for fabrication of an ASIC chip. Currently, many complex

commercial chips (microcontrollers, for example) are designed using such an approach.


A final note regarding VHDL is that, contrary to regular computer programs which are sequential, its

statements are inherently concurrent (parallel). For that reason, VHDL is usually referred to as a

code rather than a program. In VHDL, only statements placed inside a PROCESS, FUNCTION, or

PROCEDURE are executed sequentially.

4.7.2 DESIGN FLOW

As mentioned above, one of the major utilities of VHDL is that it allows the synthesis of a circuit or

system in a programmable device (PLD or FPGA) or in an ASIC. The steps followed during such a

project are summarized in Fig. 4.7. We start the design by writing the VHDL code, which is saved in

a file with the extension .vhd and the same name as its ENTITY’s name. The first step in the

synthesis process is compilation.

Fig.4.7 VHDL Design Flow

Compilation is the conversion of the high-level VHDL language, which describes the circuit at the

Register Transfer Level (RTL), into a net list at the gate level. The second step is optimization,

which is performed on the gate-level net list for speed or for area. At this stage, the design can be

simulated. Finally, a place-and-route (fitter) tool will generate the physical layout for a

PLD/FPGA chip or will generate the masks for an ASIC.

4.7.3 EDA Tools


There are several EDA (Electronic Design Automation) tools available for circuit synthesis, implementation, and simulation using VHDL. Some tools (place and route, for example) are offered as part of a vendor's design suite (e.g., Altera's Quartus II, which allows the synthesis of VHDL code onto Altera's CPLD/FPGA chips, or Xilinx's ISE suite, for Xilinx's CPLD/FPGA chips). Other tools (synthesizers, for example), besides being offered as part of the design suites, can also be provided by specialized EDA companies (Mentor Graphics, Synopsys, Synplicity, etc.).

Examples of the latter group are Leonardo Spectrum (a synthesizer from Mentor Graphics), Synplify (a synthesizer from Synplicity), and ModelSim (a simulator from Model Technology, a Mentor Graphics company). The designs presented in this work were synthesized onto CPLD/FPGA devices from either Altera or Xilinx. The tools used were either ISE combined with ModelSim (for Xilinx chips), MaxPlus II combined with Advanced Synthesis Software, or Quartus II. Leonardo Spectrum was also used occasionally. Although different EDA tools were used to implement and test the examples presented in the design, the visual presentation of all simulation graphs was standardized. Due to its clean appearance, the waveform editor of MaxPlus II was employed. However, newer simulators, like ISE + ModelSim and Quartus II, offer a much broader set of features, which allow, for example, a more refined timing analysis. For that reason, those tools were adopted when examining the fine details of each design.

4.7.4 VHDL

VHDL (VHSIC hardware description language) is commonly used as a design-entry language for

field-programmable gate arrays and application-specific integrated circuits in electronic design

automation of digital circuits.

History

VHDL was originally developed at the behest of the US Department of Defence in order to

document the behaviour of the ASICs that supplier companies were including in equipment. That is

to say, VHDL was developed as an alternative to huge, complex manuals which were subject to

implementation-specific details.

The idea of being able to simulate this documentation was so obviously attractive that logic

simulators were developed that could read the VHDL files. The next step was the development of

logic synthesis tools that read the VHDL, and output a definition of the physical implementation of

the circuit. Modern synthesis tools can extract RAM, counter, and arithmetic blocks out of the code,

and implement them according to what the user specifies. Thus, the same VHDL code could be


synthesized differently for lowest area, lowest power consumption, highest clock speed, or other

requirements.

VHDL borrows heavily from the Ada programming language in both concepts (for example, the

slice notation for indexing part of a one-dimensional array) and syntax. VHDL has constructs to

handle the parallelism inherent in hardware designs, but these constructs (processes) differ in syntax

from the parallel constructs in Ada (tasks). Like Ada, VHDL is strongly-typed and is not case

sensitive. There are many features of VHDL which are not found in Ada, such as an extended set of

Boolean operators including nand and nor, in order to represent directly operations which are

common in hardware. VHDL also allows arrays to be indexed in either direction (ascending or

descending) because both conventions are used in hardware, whereas Ada (like most programming

languages) provides ascending indexing only. The reason for the similarity between the two

languages is that the Department of Defence required as much of the syntax as possible to be based

on Ada, in order to avoid re-inventing concepts that had already been thoroughly tested in the

development of Ada.

The initial version of VHDL, designed to IEEE standard 1076-1987, included a wide range of data

types, including numerical (integer and real), logical (bit and Boolean), character and time, plus

arrays of bit called bit_vector and of character called string.

A problem not solved by this edition, however, was "multi-valued logic", where a signal's drive

strength (none, weak or strong) and unknown values are also considered. This required IEEE

standard 1164, which defined the 9-value logic types: scalar std_ulogic and its vector version

std_ulogic_vector. The second issue of IEEE 1076, in 1993, made the syntax more consistent,

allowed more flexibility in naming, extended the character type to allow ISO-8859-1 printable

characters, added the xnor operator, etc.

Minor changes in the standard (2000 and 2002) added the idea of protected types (similar to the

concept of class in C++) and removed some restrictions from port mapping rules.

In addition to IEEE standard 1164, several child standards were introduced to extend functionality of

the language. IEEE standard 1076.2 added better handling of real and complex data types. IEEE

standard 1076.3 introduced signed and unsigned types to facilitate arithmetical operations on vectors.

IEEE standard 1076.1 (known as VHDL-AMS) provided analog and mixed-signal circuit design

extensions.


Some other standards support wider use of VHDL, notably VITAL (VHDL Initiative Towards ASIC

Libraries) and microwave circuit design extensions.

In June 2006, VHDL Technical Committee of Accellera (delegated by IEEE to work on next update

of the standard) approved so called Draft 3.0 of VHDL-2006. While maintaining full compatibility

with older versions, this proposed standard provides numerous extensions that make writing and

managing VHDL code easier. Key changes include incorporation of child standards (1164, 1076.2,

1076.3) into the main 1076 standard, an extended set of operators, more flexible syntax of 'case' and

'generate' statements, incorporation of VHPI (interface to C/C++ languages) and a subset of PSL

(Property Specification Language). These changes should improve quality of synthesizable VHDL

code, make testbenches more flexible, and allow wider use of VHDL for system-level descriptions.

In February 2008, Accellera approved VHDL 4.0, also informally known as VHDL 2008, which

addressed more than 90 issues discovered during the trial period for version 3.0 and includes

enhanced generic types.

4.8 DESIGN

VHDL is a fairly general-purpose language, and it doesn't require a simulator on which to run the

code. There are a lot of VHDL compilers, which build executable binaries. It can read and write files

on the host computer, so a VHDL program can be written that generates another VHDL program to

be incorporated in the design being developed. Because of this general-purpose nature, it is possible

to use VHDL to write a test bench that verifies the functionality of the design using files on the host

computer to define stimuli, interacts with the user, and compares results with those expected. This is

similar to the capabilities of the Verilog language. VHDL is a strongly typed language, and as a

result is considered by some to be superior to Verilog. The superiority of one language over the

other has been the subject of intense debate among developers, for a long time. Both languages make

it relatively easy for an inexperienced developer to produce code that simulates successfully but that

cannot be synthesized into a real device, or is too large to be practicable. One particular pitfall in

both languages is the accidental production of transparent latches rather than D-type flip-flops as

storage elements.

VHDL is not a case-sensitive language. One can design hardware in a VHDL IDE (such as Xilinx ISE or Altera Quartus II) to produce the RTL schematic of the desired circuit. After that, the generated schematic can

be verified using simulation software (such as ModelSim) which shows the waveforms of inputs and


outputs of the circuit after generating the appropriate test bench. To generate an appropriate test

bench for a particular circuit or VHDL code, the inputs have to be defined correctly. For example,

for clock input, a loop process or an iterative statement is required.

The key advantage of VHDL when used for systems design is that it allows the behaviour of the

required system to be described (modelled) and verified (simulated) before synthesis tools translate

the design into real hardware (gates and wires).

Another benefit is that VHDL allows the description of a concurrent system (many parts, each with

its own sub-behaviour, working together at the same time). VHDL is a Dataflow language, unlike

procedural computing languages such as BASIC, C, and assembly code, which all run sequentially,

one instruction at a time.

A final point is that when a VHDL model is translated into the "gates and wires" that are mapped

onto a programmable logic device such as a CPLD or FPGA, then it is the actual hardware being

configured, rather than the VHDL code being "executed" as if on some form of a processor chip.

4.8.1 GETTING STARTED

Although background in a computer programming language (such as C) is helpful, it is not essential.

Free VHDL simulators are readily available, and although these are limited in functionality

compared to commercial VHDL simulators, they are more than sufficient for independent study. If

the user's goal is to learn RTL coding, (that is, design hardware circuits in VHDL, as opposed to

simply document or simulate circuit behaviour), then a synthesis/design package is also needed.

As with VHDL simulators, free FPGA synthesis tools are readily available, and are more than

adequate for independent study. Feedback from the synthesis tool gives the user a feel for the relative

efficiencies of different coding styles. A schematic/gate viewer shows the user the synthesized

design as a navigable net list diagram. Many FPGA design packages offer alternative design input

methods, such as block-diagram (schematic) and state-diagram capture. These provide a useful

starting template for coding certain types of repetitive structures, or complex state-transition

diagrams. Finally, the included tutorials and examples are valuable aids.

Nearly all FPGA design and simulation flows support both Verilog and VHDL, allowing the user to

learn either or both languages.


In VHDL, a design consists at a minimum of an entity which describes the interface and an

architecture which contains the actual implementation. In addition, most designs import library

modules. Some designs also contain multiple architectures and configurations.

A simple AND gate in VHDL would look something like this:

-- (this is a VHDL comment)

-- import std_logic from the IEEE library

library IEEE;

use IEEE.std_logic_1164.all;

-- this is the entity

entity ANDGATE is

port (

IN1 : in std_logic;

IN2 : in std_logic;

OUT1 : out std_logic);

end ANDGATE;

architecture RTL of ANDGATE is

begin

OUT1 <= IN1 and IN2;

end RTL;

While the example above may seem very verbose to HDL beginners, one should keep in

mind that many parts are either optional or need to be written only once. And generally simple

functions like this are part of a larger behavioural module, instead of having a separate module for

something so simple. In addition, the use of elements such as the std_logic type might at first seem overkill. One could easily use the built-in bit type and avoid the library import in the beginning.

However, using this 9-valued logic (U, X, 0, 1, Z, W, H, L,-) instead of simple bits (0,1) offers a very

powerful simulation and debugging tool to the designer which currently does not exist in any other

HDL.

In the examples that follow, you will see that VHDL code can be written in a very compact

form. However, the experienced designers usually avoid these compact forms and use a more

verbose coding style for the sake of readability and maintainability. Another advantage to the

verbose coding style is the smaller amount of resources used when programming to a Programmable

Logic Device such as a CPLD.


4.8.2 Synthesizable constructs and VHDL templates

VHDL is frequently used for two different goals: simulation of electronic designs and synthesis of such designs. Synthesis is a process in which VHDL code is compiled and mapped into an implementation technology such as an FPGA or an ASIC. Many FPGA vendors provide free (or inexpensive) tools to synthesize VHDL for use with their chips, whereas ASIC tools are often very expensive.

Not all constructs in VHDL are suitable for synthesis. For example, most constructs that explicitly deal with timing such as wait for 10 ns; are not synthesizable despite being valid for simulation. While different synthesis tools have different capabilities, there exists a common synthesizable subset of VHDL that defines what language constructs and idioms map into common hardware for many synthesis tools. IEEE 1076.6 defines a subset of the language that is considered the official synthesis subset. It is generally considered a "best practice" to write very idiomatic code for synthesis as results can be incorrect or suboptimal for non-standard constructs.

Some examples of synthesizable code follow below:

MUX template

The multiplexer, or 'MUX' as it is usually called, is a simple construct very common in hardware design. The example below demonstrates a simple two to one MUX, with inputs A and B, selector S and output X. Note that there are many other ways to express the same MUX in VHDL.

X <= A when S = '1' else B;

Latch template

A transparent latch is basically one bit of memory which is updated when an enable signal is raised. Again, there are many other ways this can be expressed in VHDL.

-- latch template 1:

Q <= D when Enable = '1' else Q;

-- latch template 2:

process(D,Enable)

begin

if Enable = '1' then

Q <= D;


end if;

end process;

D-type flip-flops

The D-type flip-flop samples an incoming signal at the rising (or falling edge) of a clock. This example has an asynchronous, active-high reset, and samples at the rising clock edge.

DFF : process(RST, CLK)

begin

if RST = '1' then

Q <= '0';

elsif rising_edge(CLK) then

Q <= D;

end if;

end process DFF;

Another common way to write edge-triggered behavior in VHDL is with the 'event' signal attribute. A single apostrophe has to be written between the signal name and the name of the attribute.

DFF : process(RST, CLK)

begin

if RST = '1' then

Q <= '0';

elsif CLK'event and CLK = '1' then

Q <= D;

end if;

end process DFF;

Example: a counter

The following example is an up-counter with asynchronous reset, parallel load and configurable width. It demonstrates the use of the 'unsigned' type, type conversions between 'unsigned' and 'std_logic_vector' and VHDL generics. The generics are very close to arguments or templates in other traditional programming languages like C or C++.

library IEEE;


use IEEE.std_logic_1164.all;

use IEEE.numeric_std.all; -- for the unsigned type

entity COUNTER is

generic (

WIDTH : in natural := 32);

port (

RST : in std_logic;

CLK : in std_logic;

LOAD : in std_logic;

DATA : in std_logic_vector(WIDTH-1 downto 0);

Q : out std_logic_vector(WIDTH-1 downto 0));

end entity COUNTER;

architecture RTL of COUNTER is

signal CNT : unsigned(WIDTH-1 downto 0);

begin

process(RST, CLK) is

begin

if RST = '1' then

CNT <= (others => '0');

elsif rising_edge(CLK) then

if LOAD = '1' then

CNT <= unsigned(DATA); -- type is converted to unsigned

else

CNT <= CNT + 1;

end if;


end if;

end process;

Q <= std_logic_vector(CNT); -- type is converted back to std_logic_vector

end architecture RTL;

More complex counters may add if/then/else statements within the rising_edge(CLK) elsif to add other functions, such as count enables, stopping or rolling over at some count value, generating output signals like terminal count signals, etc. Care must be taken with the ordering and nesting of such controls if used together, in order to produce the desired priorities and minimize the number of logic levels needed.

Simulation-only constructs

A large subset of VHDL cannot be translated into hardware. This subset is known as the non-synthesizable or simulation-only subset of VHDL and can only be used for prototyping, simulation and debugging. For example, the following code will generate a clock with a frequency of 50 MHz. It can, for example, be used to drive a clock input in a design during simulation. It is, however, a simulation-only construct and cannot be implemented in hardware. In actual hardware, the clock is generated externally; it can be scaled down internally by user logic or dedicated hardware.

process

begin

CLK <= '1'; wait for 10 ns;

CLK <= '0'; wait for 10 ns;

end process;

The simulation-only constructs can be used to build complex waveforms in a very short time. Such waveforms can be used, for example, as test vectors for a complex design or as a prototype of some synthesizable logic that will be implemented in the future.

process

begin

wait until START = '1'; -- wait until START is high

for i in 1 to 10 loop -- then wait for a few clock periods...

wait until rising_edge(CLK);


end loop;

for i in 1 to 10 loop -- write numbers 1 to 10 to DATA, 1 every cycle

DATA <= to_unsigned(i, 8);

wait until rising_edge(CLK);

end loop;

-- wait until the output changes

wait on RESULT;

-- now raise ACK for clock period

ACK <= '1';

wait until rising_edge(CLK);

ACK <= '0';

-- and so on...

end process;

CHAPTER 5

5.1 DENOISING PERFORMANCE


The denoising performance of the proposed algorithms and the other standard methods is tested on gray-scale images and videos. The visual quality results are presented in Fig. 5.1 and Fig. 5.2. Fig. 5.1(a) and 5.2(a) show the Lena image. Fig. 5.1(b) and 5.2(b) show the noisy images at noise densities of 70% and 90%, respectively. Fig. 5.1(c)-5.1(g) show the restoration results of standard algorithms such as SMF, WMF, RWM, AMF and DBA. Fig. 5.2(c)-5.2(e) show the restoration results of the standard algorithms RWM, AMF and DBA. Fig. 5.1(h) and 5.2(f) show the restoration results of the proposed Algorithm 1 (DMF+UTMF) and the proposed Algorithm 2 (DMF+UTMP), respectively. The quantitative performance in terms of PSNR, MAE and MSE for all the algorithms is given in Table 1 to Table 3, and the same data are plotted in Fig. 5.3. For lower noise densities, up to 30%, almost all the algorithms perform equally well, removing the salt-and-pepper noise completely while preserving the edges. At a noise density of 50%, the standard algorithms such as SMF, WMF and RWMF fail to remove the salt-and-pepper noise completely, but the AMF, DBA and the proposed methods remove it completely. At high noise densities, the performance of the standard methods is very poor in terms of noise suppression and detail preservation. At 70% noise density, the AMF and DBA perform slightly worse than the proposed filters in terms of noise removal and edge preservation, as shown in Fig. 5.1(f) and 5.1(g). A maximum window size of 17 x 17 is selected for the AMF to give its best result at high noise densities. At high noise densities the recently proposed decision based algorithm produces a streaking effect at the edges, as shown in Fig. 5.2(e). The visual quality, PSNR, MAE and MSE results clearly show that the proposed filters outperform many of the standard filters and recently proposed methods.

For a 10% noised image, the maximum PSNR obtained by the standard methods is 34.62 dB (with the DBA), whereas the proposed algorithms PA1 and PA2 give 46.37 dB and 46.57 dB, respectively. The trend continues in favour of the proposed algorithms as the noise density increases up to 90%; from a noise density of 30% onwards, the maximum PSNR is obtained using the proposed algorithms when compared with the other algorithms mentioned.

When comparing the performance of PA1 and PA2 with respect to Fig. 5.1(h) and Fig. 5.2(f), the variation of PSNR for PA2 is almost uniform for noise densities ranging from 10% to 90%; in particular, its PSNR is comparatively higher than that of PA1 when the noise density is above 70%. The proposed cascaded filtering algorithm removes the noise completely even when the noise density is as high as 90%. PA1 performs well for noise densities below 70%, and PA2 performs well above 70%, for all images.


The VLSI implementation of the proposed algorithm is in progress. PA2 requires less complex circuits than PA1, and its computation time is shorter; in addition, the test results show that the performance of PA2 is better than that of PA1 even at high noise densities. The proposed cascaded filter is therefore implemented as the DMF cascaded with the UTMP. MATLAB simulation results for the various filters are shown in Fig. 5.1 and Fig. 5.2.

Fig. 5.1 (a) Original Lena image (b) Noisy image of noise density 70%. Restoration results of (c) Standard median filter (d) Weighted median filter (e) Recursive weighted median filter (f) Adaptive median filter (g) Decision based algorithm (h) Proposed Algorithm 1


Fig. 5.2 (a) Original Lena image (b) Lena image corrupted by 90% noise density. Restoration results of (c) RWM (d) AMF (e) DBA (f) Proposed Algorithm 2

Fig. 5.3(a) Graph of noise density vs. PSNR: comparative results of various filters in terms of PSNR for the 'Lena' image


Fig. 5.3(b) Graph of noise density vs. MSE: comparative results of various filters in terms of MSE for the 'Lena' image


Fig. 5.3(c) Graph of noise density vs. MAE: comparative results of various filters in terms of MAE for the 'Lena' image


TABLE 1

COMPARATIVE RESULTS OF VARIOUS FILTERS IN TERMS OF PSNR

FOR LENA IMAGE


TABLE 2

COMPARATIVE RESULTS OF VARIOUS FILTERS IN TERMS OF MAE

FOR LENA IMAGE


TABLE 3

COMPARATIVE RESULTS OF

VARIOUS FILTERS IN TERMS OF MSE FOR LENA IMAGE


CHAPTER 6

SIMULATION RESULTS

6.1 DECISION BASED MEDIAN FILTER

Fig. 6(a) Simulation output graph of the DMF


Description of the simulation

Input: {0, 0, 255, 14, 255, 16, 13, 17, 18}

1) The centre value (255) is a noisy pixel, so the window is sorted and the median value is taken.

Sorted output: {0, 0, 13, 14, 16, 17, 18, 255, 255}

Median = middle value of the sorted array = 16

Input: {0, 0, 255, 14, 128, 16, 13, 17, 18}

2) The centre value (128) is a noise-free pixel, so no change is made.

No change; the output is 128.
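The same decision can be written as a short MATLAB sketch; the window values are copied from the example above and the variable names are illustrative only.

win    = [0 0 255 14 255 16 13 17 18];    % 3 x 3 window read out as a row; centre element is win(5)
center = win(5);
if center == 0 || center == 255           % centre pixel is an impulse ("noisy")
    s   = sort(win);                      % {0,0,13,14,16,17,18,255,255}
    out = s(ceil(numel(s)/2));            % median of the sorted window -> 16
else
    out = center;                         % noise-free centre pixel is left unchanged
end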


6.2 UTMF

Fig. 6(b) Simulation output graph of the UTMF

Description of Simulation

Input : {5,0,69,98,152,255,0,58,84}


The pixel values 0, 255 and 0 are identified as corrupted pixels and are replaced by mean values: the first '0' is replaced by '80', while the '255' and the second '0' are replaced by '89' and '70', respectively.
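For reference, the trimming step for a single window can be sketched in MATLAB as follows. Note that in the full filter each corrupted pixel is replaced using its own 3 x 3 neighbourhood, which is why the simulation above lists a different replacement value (80, 89, 70) for each corrupted position; the sketch below shows only one window, with illustrative variable names.

win  = [5 0 69 98 152 255 0 58 84];       % example window from the simulation above
good = win(win ~= 0 & win ~= 255);        % un-symmetric trimming: discard the impulse values 0 and 255
replacement = round(mean(good));          % corrupted pixel is replaced by the mean of the remaining neighbours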

CHAPTER 7

7.1 APPLICATION

An important application of VHDL is visual communication in wireless networks, where bandwidth is at a premium and end devices have diverse display capabilities, ranging from the small screens of cell phones to the regular screens of laptops. The same standard-compliant code stream feeds displays of different resolutions; the only difference is that high-resolution displays invoke VHDL up-conversion while low-resolution displays do not. This design happens to be compatible with network hardware levels, since high-resolution displays are typically associated with more powerful computers. In such a heterogeneous wireless environment, scalable or layered image compression methods (e.g., JPEG 2000) are inefficient, because the refinement portion of the scalable code stream still consumes bandwidth and yet generates no benefit to low-resolution devices.

Because the down-sampled image is only a small fraction of the original size, VHDL greatly reduces the encoder complexity, regardless of what third-party codec is used in conjunction. This property allows the system to shift the computation burden from the encoder to the decoder, making VHDL a viable asymmetric compression solution when the encoder is resource-deprived. Furthermore, the superior low bit-rate performance of the VHDL approach seems to suggest that a camera of unnecessarily high resolution can, ironically, produce inferior images to those of a lower-resolution camera if given a tight bit budget. This rather counter-intuitive observation should be


heeded when one designs visual communication devices/systems with severe constraints of energy

and bandwidth.

7.2 ADVANTAGE

The key advantage of VHDL, when used for systems design, is that it allows the behavior of the required system to be described (modeled) and verified (simulated) before synthesis tools translate the design into real hardware (gates and wires).

Another benefit is that VHDL allows the description of a concurrent system. VHDL is a dataflow language, unlike procedural computing languages such as BASIC, C, and assembly code, which all run sequentially, one instruction at a time.

A VHDL project is multipurpose: once created, a calculation block can be used in many other projects, while many structural and functional block parameters can still be tuned (capacity parameters, memory size, element base, block composition and interconnection structure).

A VHDL project is also portable: a computing device designed for one element base can be ported to another element base, for example a VLSI device implemented in a different technology.


CHAPTER 8

8.1 CONCLUSION

In this work, it is observed that the performance of the proposed filters is superior to that of the existing filters. Even at very high noise densities (around 90%), the texture details and edges are preserved effectively. The suggested filtering method completely removes the noise with a PSNR value of more than 25 dB, which is higher than that of other existing and recently proposed filters. The important features of the proposed filters include edge preservation, noise suppression, and the removal of streaking and blurring effects at high noise densities. As future work, the method can be further improved by using a deblurring technique or neural networks to enhance the outputs and to remove other kinds of noise, such as speckle noise and mixed noise.


REFERENCES

[1] S. Balasubramanian, S. Kalishwaran, R. Muthuraj, D. Ebenezer and V. Jayaraj, "An Efficient Non-linear Cascade Filtering Algorithm for Removal of High Density Salt and Pepper Noise in Image and Video Sequence", 2009.

[2] K. S. Srinivasan and D. Ebenezer, "A New Fast and Efficient Decision-Based Algorithm for Removal of High-Density Impulse Noises", IEEE Signal Processing Letters, Vol. 14, No. 3, pp. 189-192, 2007.

[3] S. Manikandan, O. Uma Maheswari and D. Ebenezer, "An Adaptive Recursive Weighted Median Filter with Improved Performance in Impulsive Noisy Environment", WSEAS Transactions on Electronics, Vol. 1, Issue 3, July 2004.

[4] E. Srinivasan and D. Ebenezer, "A New Class of Midpoint Type Non-linear Filters for Eliminating Short and Long Tailed Noise in Images", Third International Symposium on Wireless Personal Multimedia Communication, Bangkok, Thailand, November 2000.

[5] G. Pok and Jyh-Charn Liu, "Decision based median filter improved by predictions", Proceedings of ICIP 1999, Vol. 2, pp. 410-413, 1999.


[6] J. Astola and P. Kuosmanen, Fundamentals of Non-Linear Digital Filtering, Boca Raton, FL: CRC Press, 1997.

[7] T. S. Huang, G. J. Yang and G. Y. Tang, "Fast Two-Dimensional Median Filtering Algorithm", IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. 1, pp. 499-502, 1995.

[8] W. K. Pratt, Digital Image Processing, New York: Wiley, 1978; I. Pitas and A. N. Venetsanopoulos, Non-linear Digital Filter Principles and Applications, Boston: Kluwer Academic Publishers, 1990.

[9] Ho-Ming Lin, "Median Filters with Adaptive Length", IEEE Transactions on Circuits and Systems, Vol. 35, No. 6, June 1988.
