
Review Article

VLSI Architectures for Image Interpolation: A Survey

    C. John Moses,1 D. Selvathi,2 and V. M. Anne Sophia1

    1 Department of ECE, St. Xavier’s Catholic College of Engineering, Nagercoil 629003, India 2Department of ECE, Mepco Schlenk Engineering College, Sivakasi 626005, India

    Correspondence should be addressed to C. John Moses; jofjef@yahoo.com

    Received 18 October 2013; Revised 20 April 2014; Accepted 5 May 2014; Published 19 May 2014

    Academic Editor: Marcelo Lubaszewski

    Copyright © 2014 C. John Moses et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Image interpolation is a method of estimating the values at unknown points using the known data points. This procedure is used in expanding and contracting digital images. In this survey, different types of interpolation algorithms and their hardware architectures are analysed and compared: bilinear, winscale, bi-cubic, linear convolution, extended linear, piecewise linear, adaptive bilinear, first order polynomial, and edge enhanced interpolation architectures. The algorithms are implemented on different types of field programmable gate arrays (FPGAs) and/or in different complementary metal oxide semiconductor (CMOS) technologies such as TSMC 0.18 μm and TSMC 0.13 μm. These interpolation algorithms are compared based on different types of optimization such as gate count, frequency, power, and memory buffer. The goal of this work is to analyse the different very large scale integration (VLSI) parameters, such as area, speed, and power, of various implementations of image interpolation. From the survey and the analysis that follows, it is observed that the performance of hardware architectures for image interpolation can be improved by minimising the number of line buffer memories and by removing superfluous arithmetic elements when generating the weighting coefficients.

    1. Introduction

In digital image scaling, image interpolation algorithms are used to convert an image from one resolution to another without losing the visual content of the image. In colour imaging, interpolation is the process of estimating the missing colour samples to reconstruct a full colour image [1]. Image scaling is widely used in many fields, ranging from consumer electronics, such as digital cameras, mobile phones, tablets, and display devices, to medical imaging, such as computer assisted surgery (CAS) and digital radiographs [2]. In many applications, from consumer electronics to medical imaging, it is desirable to improve the reconstructed image quality and the processing performance of the hardware implementation [3]. For example, a video source with a 640 × 480 video graphics array (VGA) resolution may need to fit the 1920 × 1080 resolution of a high definition multimedia interface (HDMI).
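The per-axis scaling factors for the VGA-to-HDMI example above can be computed independently for each direction; a minimal sketch (the function name is an assumption, not from the paper):

```python
def scale_factors(src_w, src_h, dst_w, dst_h):
    """Per-axis step in source coordinates for each output pixel.

    A scaling ratio below 1.0 means up-scaling (more output pixels
    than source pixels along that axis)."""
    return src_w / dst_w, src_h / dst_h

# 640x480 VGA source mapped onto a 1920x1080 HDMI frame:
scale_x, scale_y = scale_factors(640, 480, 1920, 1080)
# scale_x = 1/3 (3x horizontal up-scaling), scale_y = 4/9
```

Note that the two ratios differ, so the two directions are handled by separate scaling factors in the hardware described below.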

Image upscaling [4] methods are implemented for a variety of computer equipment such as printers, digital televisions, media players, image processing systems, graphics renderers, and so on. On the other hand, a high resolution image may need to be scaled down to a smaller size in order to fit the lower

resolution of small liquid crystal display panels. Image scaling is thus a challenging and very significant issue in digital image processing [5]. The problem is to obtain a digital image to be displayed on a large bitmap from data sampled on a smaller grid, such that the image appears as if it had been acquired with a sensor having the resolution of the up-scaled image or, at a minimum, presents a "natural" texture. Methods that are normally used to solve the problem (i.e., pixel replication, bilinear, or bi-cubic interpolation) do not meet these requirements, producing images with visual artifacts like pixelization, jagged contours, and over-smoothing. Therefore, a set of advanced adaptive methods has been presented [4].

The interpolation algorithms can be classified as adaptive and nonadaptive algorithms [3]. Nonadaptive algorithms perform interpolation in a fixed pattern for every pixel, for example, by averaging neighbouring pixels. This causes an artifact, the zipper effect, in the interpolated image and leads to blur across the edges. The technique is fixed irrespective of the input image features. Adaptive algorithms use both spectral and spatial features present in the neighbouring pixels in order to interpolate the missing pixel as close to

    Hindawi Publishing Corporation VLSI Design Volume 2014, Article ID 872501, 10 pages http://dx.doi.org/10.1155/2014/872501


the original as possible [6]. To restore more accurate and visually pleasing results, spatial correlation, spectral correlation, or both have been exploited. Adaptive algorithms can detect local spatial features present in the pixel neighbourhood and then make effective choices as to which predictor to use for that neighbourhood. The result is a reduction or elimination of zipper-type artifacts.
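The fixed-pattern behaviour of a nonadaptive method can be sketched with bilinear interpolation; the helper below is illustrative, not taken from any of the surveyed implementations:

```python
def bilinear(p00, p01, p10, p11, dx, dy):
    """Fixed-pattern (nonadaptive) bilinear interpolation.

    p00..p11 are the four neighbouring pixels; (dx, dy) in [0, 1)
    is the target position relative to p00. The same weighted
    average is applied regardless of local image features, which
    is why edges come out blurred."""
    top = p00 * (1 - dx) + p01 * dx      # interpolate along the top row
    bottom = p10 * (1 - dx) + p11 * dx   # interpolate along the bottom row
    return top * (1 - dy) + bottom * dy  # blend the two rows

# Halfway between all four neighbours -> plain average:
q = bilinear(10, 20, 30, 40, 0.5, 0.5)  # 25.0
```

An adaptive method would inspect the neighbourhood first (e.g., estimate edge direction) and only then choose how to combine the pixels, instead of always applying this one rule.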

Further, the interpolation methods are generally classified into two classes [7]: spatial domain and frequency domain. The spatial domain technique is frequently adopted for real-time applications because of its low complexity. The other approach uses a suitable transform such as the discrete cosine transform (DCT), discrete Fourier transform (DFT), or wavelet transform to scale images in the frequency domain. Due to its higher computational complexity and memory requirements, the frequency domain technique is not appropriate for real-time or low cost applications.

In many practical real-time applications, the scaling process is included in end user equipment, and it can be implemented in VLSI. A VLSI architecture is implemented using either an FPGA or an application specific integrated circuit (ASIC). An FPGA is designed to be configured by a customer or a designer after manufacturing; thus, FPGAs are used to construct reconfigurable computing systems with high performance and low cost, whereas an ASIC is not reconfigurable. Recent FPGA platforms have millions of gates, clock frequencies of up to 500 MHz, reasonably large on-chip memory, and fast input/output interfaces [8]. FPGAs are widely used for real-time implementation of various image processing algorithms such as motion detection, image enhancement, image correlation, and image compression. In this paper, different hardware architectures of interpolation methods are discussed and their performance is analysed.

Section 2 explains the hardware architecture of image interpolation, and Section 3 presents the important characteristics of different hardware implementations of image interpolation algorithms using FPGA/TSMC technologies. Section 4 provides a detailed analysis of different optimization parameters of the VLSI architectures, such as gate count, speed, and power. Finally, Section 5 concludes the analysis.

    2. Hardware Architecture of Image Interpolation

The hardware architecture of image interpolation includes the coordinate accumulator, line buffer, weighting coefficients generator, vertical interpolator, and horizontal interpolator. Each of the units performs specified operations for the interpolation [14]. The hardware architecture with its different units is shown in Figure 1.

2.1. Coordinate Accumulator. The coordinate accumulator calculates the coordinate of the current target pixel to be interpolated. Its inputs are two scaling factors: scale x for the horizontal direction and scale y for the vertical direction. These factors provide the scaling

Figure 1: Block diagram of the hardware architecture of image interpolation (the input image, scale x, and scale y feed the coordinate accumulator, line buffer, and weighting coefficients generator, followed by the vertical and horizontal interpolators, which produce the output pixel Q).

Figure 2: Vertical interpolation (source pixels A00, A10, A20, and A30; the interpolated pixel V0 lies at distance s_v from A10 and 1 − s_v from A20).

ratio of each direction. The current target pixel's coordinate (x_c, y_c) is calculated as [7]

x_c = x_{c−1} + scale x,
y_c = y_{c−1} + scale y,  (1)

where (x_{c−1}, y_{c−1}) is the coordinate of the previous target pixel.
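A behavioural sketch of the accumulator in (1), using the fixed-point arithmetic typical of such hardware (the 16.16 format and the names below are assumptions, not from the paper):

```python
FRAC_BITS = 16  # assumed 16.16 fixed-point format

def to_fixed(x):
    """Convert a real scaling factor to 16.16 fixed point."""
    return int(round(x * (1 << FRAC_BITS)))

def coordinate_accumulator(scale_x, scale_y, n_pixels):
    """Yield successive target-pixel coordinates following (1):
    x_c = x_{c-1} + scale x, y_c = y_{c-1} + scale y,
    here stepped together for a run of n_pixels."""
    sx, sy = to_fixed(scale_x), to_fixed(scale_y)
    x = y = 0
    coords = []
    for _ in range(n_pixels):
        coords.append((x / (1 << FRAC_BITS), y / (1 << FRAC_BITS)))
        x += sx  # x_c = x_{c-1} + scale x
        y += sy  # y_c = y_{c-1} + scale y
    return coords

coords = coordinate_accumulator(0.5, 0.25, 4)
# [(0.0, 0.0), (0.5, 0.25), (1.0, 0.5), (1.5, 0.75)]
```

In a raster-scan design, x would accumulate once per output pixel and y once per output line; the integer parts of x_c and y_c select the source neighbourhood, and the fractional parts become s_h and s_v.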

    2.2. Line Buffers (LB). The line buffers are used to store source image data.
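Line buffers need to hold only the most recent source rows required by the vertical interpolator (four, for the 4-tap neighbourhood of Figure 2). A behavioural sketch using a bounded deque; the class name is an assumption:

```python
from collections import deque

class LineBuffer:
    """Sliding window over the most recent source rows.

    maxlen=4 matches the four vertical taps A00..A30; pushing a new
    row silently discards the oldest one, just as a hardware line
    buffer reuses its memory."""
    def __init__(self, n_lines=4):
        self.rows = deque(maxlen=n_lines)

    def push(self, row):
        self.rows.append(row)

    def column(self, j):
        """The vertical 4-pixel neighbourhood at horizontal position j."""
        return [row[j] for row in self.rows]

lb = LineBuffer()
for r in range(5):                          # stream five source rows
    lb.push([r * 10 + c for c in range(3)])
lb.column(1)  # [11, 21, 31, 41] -- row 0 has been discarded
```

This is why the survey's conclusion singles out the number of line buffers: each buffered line costs one image-width memory, so fewer taps means less on-chip memory.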

2.3. Weighting Coefficient Generator. In 1D interpolation, vertical and horizontal weighting coefficients have to be determined, and the method of calculating the vertical weighting coefficients is identical to that for the horizontal weighting coefficients. The calculation of the vertical and horizontal weighting coefficients is illustrated in Figures 2 and 3, respectively [14].

According to Figure 2, the vertical weighting coefficients can be calculated as

vw_i = −(1 − s_v)² s_v,
vw_{i+1} = (1 − s_v) + 2(1 − s_v)² s_v − (1 − s_v) s_v²,
vw_{i+2} = s_v + 2(1 − s_v) s_v² − (1 − s_v)² s_v,
vw_{i+3} = −(1 − s_v) s_v²,  (2)

where vw_i, vw_{i+1}, vw_{i+2}, and vw_{i+3} are the vertical weighting coefficients of the corresponding source pixels A00, A10, A20, and A30, and s_v is the distance between the source pixel A_{i+1,j} and the vertically interpolated pixel V0.
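The four coefficients in (2) can be checked numerically; for any s_v in [0, 1] they sum to one, so flat image regions are reproduced exactly (the function name below is illustrative):

```python
def vertical_weights(sv):
    """Weighting coefficients of (2) for source pixels A00, A10, A20, A30."""
    u = 1.0 - sv
    return [
        -u * u * sv,                        # vw_i
        u + 2 * u * u * sv - u * sv * sv,   # vw_{i+1}
        sv + 2 * u * sv * sv - u * u * sv,  # vw_{i+2}
        -u * sv * sv,                       # vw_{i+3}
    ]

w = vertical_weights(0.25)
# w == [-0.140625, 0.984375, 0.203125, -0.046875]; sum(w) == 1.0
```

The outer two weights are negative, which is what gives this 4-tap scheme its sharpening effect relative to bilinear interpolation; note also the symmetry of the set under s_v ↔ 1 − s_v.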


Figure 3: Horizontal interpolation (vertically interpolated pixels V0, V1, V2, and V3; the target pixel Q lies at distance s_h from V1 and 1 − s_h from V2).

Similarly, based on Figure 3, all the horizontal weighting coefficients can be found by replacing s_v with s_h in (2):

hw_j = −(1 − s_h)² s_h,
hw_{j+1} = (1 − s_h) + 2(1 − s_h)² s_h − (1 − s_h) s_h²,
hw_{j+2} = s_h + 2(1 − s_h) s_h² − (1 − s_h)² s_h,
hw_{j+3} = −(1 − s_h) s_h²,  (3)

where hw_j, hw_{j+1}, hw_{j+2}, and hw_{j+3} weight the vertically interpolated pixels V0, V1, V2, and V3 to produce the target pixel Q.
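Putting the pieces together, one target pixel Q is produced by a vertical pass over each of the four columns followed by a horizontal pass over the results. A behavioural sketch reusing the weights of (2) for both directions, since the horizontal coefficients are the same expressions with s_h in place of s_v (the names are illustrative):

```python
def weights(s):
    """Coefficients of (2); s is s_v vertically, s_h horizontally."""
    u = 1.0 - s
    return [-u * u * s,
            u + 2 * u * u * s - u * s * s,
            s + 2 * u * s * s - u * u * s,
            -u * s * s]

def interpolate(patch, sv, sh):
    """patch is a 4x4 list of source pixels (rows x columns).

    Vertical pass: collapse each column to V0..V3; horizontal
    pass: collapse V0..V3 to the target pixel Q."""
    vw, hw = weights(sv), weights(sh)
    v = [sum(vw[i] * patch[i][j] for i in range(4)) for j in range(4)]
    return sum(hw[j] * v[j] for j in range(4))

# A flat 4x4 patch is reproduced (up to float rounding),
# because each weight set sums to 1:
interpolate([[10] * 4 for _ in range(4)], 0.3, 0.7)
```

Doing the vertical pass first means each source pixel is read once per column, which is why the hardware stores whole lines vertically but only the four running values V0..V3 horizontally.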
