
SPIE Proceedings, Photonics East (ISAM, VVDC, IEMB), Boston, MA, 1 November 1998. Multimedia Storage and Archiving Systems III: Fitting coding scheme for image wavelet representation



Fitting coding scheme for image wavelet representation

Artur Przelaskowski

Institute of Radioelectronics, Warsaw University of Technology
Nowowiejska 15/19, 00-665 Warszawa, Poland

ABSTRACT

An efficient coding scheme for image wavelet representation in a lossy compression framework is presented. The spatial-frequency hierarchical structure of the quantized coefficients and their statistics are analyzed to reduce any redundancy. We apply a context-based linear magnitude predictor to fit the 1st-order conditional probability model, used in arithmetic coding of the significant coefficients, to local data characteristics, and to eliminate spatial and inter-scale dependencies. Sign information is also encoded, by inter- and intra-band prediction and entropy coding of the prediction errors. The main feature of our algorithm, however, is the way zerotree structures are encoded. An additional zerotree-root symbol is included in the magnitude data stream. Moreover, four neighboring zerotree roots with a significant parent node are included in an extended high-order context model of zerotrees. Such a significant parent is marked as a significant zerotree root, and information about the distribution of these roots is coded separately. The efficiency of the presented coding scheme was tested in a dyadic wavelet decomposition scheme with two quantization procedures: a simple scalar uniform quantizer and a more complex space-frequency quantizer with adaptive data thresholding. The final results appear promising and competitive with the most effective wavelet compression methods.

Keywords: zerotree, context modeling, entropy coding, wavelet compression

1. INTRODUCTION

A suitable wavelet transform and a space-frequency quantization scheme with adaptability and R-D optimization are very important for compression efficiency. However, adaptive context modeling and conditional entropy coding, as the final stage of the compression scheme, can also significantly increase the effectiveness of the whole process.

In the analysis of natural images, simple assumptions can be made about their structure: trends are identified with large smooth image regions, and transients with edges. Textures can be regarded as dense concentrations of small edges. Moreover, edges play a fundamental perceptual role in image coding. Even faint edges, which contain a negligible amount of signal energy, can be very important in the psychovisual evaluation of reconstructed images and therefore should be preserved and coded. The quantization scheme is responsible for accurate edge reconstruction, and only the noisy components of trends are appropriate for elimination. Thus, it is mainly edge information in the wavelet domain that should be efficiently coded after quantization.

Because of the space-frequency localization of data after wavelet signal decomposition, energy clusters form in successive subbands at spatial locations associated with edges in the original image. When filter banks with small compact support are used, the edge information is concentrated in small areas at corresponding spatial positions across the scales of the multiresolution decomposition. Signal energy due to smooth regions is compacted mostly into a few low-frequency components at the highest level of the multiresolution tree, and its contribution to coefficients in the higher-frequency bands is negligible. Consequently, the variance of the coefficients decreases as we move from the highest to the lowest levels of the tree. The corresponding source model considered in data coding has a successively decreasing number of alphabet symbols. These properties should be exploited in an efficient coding scheme as a priori known structure of the data source to be encoded.

Among the various tree-structured data models, the zerotree is very popular and efficient for coding applications.6,7,8 The spatially oriented quad-tree structure of the zerotree directly reflects the hierarchical data structure after two-channel wavelet decomposition. The zerotree allows the successful prediction of insignificant (zero-quantized) coefficients across scales, which can be efficiently represented as part of an exponentially growing tree. It provides a compact multiresolution representation of the significance

Part of the SPIE Conference on Multimedia Storage and Archiving Systems III, Boston, Massachusetts, November 1998. SPIE Vol. 3527, 0277-786X/98/$10.00

Downloaded From: http://proceedings.spiedigitallibrary.org/ on 02/20/2014 Terms of Use: http://spiedl.org/terms

map, which is a binary map indicating the positions of the significant (nonzero-quantized) coefficients. In essence, the zerotree is a kind of high-order context model of zero-valued wavelet coefficients.

The success of this approach stems from the fact that a large number of insignificant coefficients typically occur in the form of zerotrees, so these zeros can be encoded very efficiently. Additionally, the zerotree reduces the uncertainty about the positions of significant values and reduces the entropy of the coded source.

But the basic question is: is this the optimal structure, reflecting data dependencies in the best way? The fixed square shape of the context in the spatial domain seems slightly artificial given the shapes of real structures, and certain groups of zeros cannot be included in a zerotree structure. This disadvantage can be reduced by R-D optimization,7 application of a morphological representation of wavelet data,1 complex context modeling,2 set partitioning,6 etc.

The purpose of our research is the construction of an efficient lossless coding method for the wavelet image representation that exploits time-scale data characteristics. The coding process is optimized in four respects: zerotree structure coding, adaptive context-based coding with a conditional probability model fitted to data dependencies, sign information coding, and partitioning of data sets with different statistics.

2. CODING SCHEME OF WAVELET DATA REPRESENTATION WITH CONTEXT-BASED PREDICTION AND EXTENDED ZEROTREE MAPPING

Suitable entropy coding of the quantized transform coefficients is very important for the final compression efficiency of wavelet-based algorithms. Zerotree structures, complex high-order context models, data set partitioning, etc. make it possible to achieve very high compression effectiveness even with a very simple quantization model.

The idea of our algorithm is simple. First, a binary tree of significant and insignificant nodes is calculated. This tree is pruned by removing zerotrees and inserting additional zerotree-root symbols. The wavelet coefficient values are coded subband-sequentially. Spectral selection is made in row and column ordering, but for the high-frequency coefficients, row-column or column-row ordering is chosen depending on the entropy in the two directions: the lower conditional entropy indicates the more efficient direction of data ordering. The subbands are coded in the following order: the lowest-frequency subband (LL) first, then the right-side coefficient block (HL), the down-left block (LH), and the down-right block (HH) at the end. The finer-scale data blocks are then coded in the same order: HL, LH, HH, etc.
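The entropy-driven choice of scan direction can be sketched as follows. This is a simplified illustration only: the zero-order entropy of first differences is used here as a stand-in for the conditional entropy the text refers to, and the function names are my own.

```python
import numpy as np

def order0_entropy(seq):
    """Empirical zero-order entropy (bits/symbol) of a symbol sequence."""
    _, counts = np.unique(seq, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def choose_scan_order(band):
    """Pick row-column or column-row scanning for a quantized subband.

    The direction whose difference stream has lower empirical entropy is
    preferred (a proxy for the lower conditional entropy in the paper).
    """
    rows = band.flatten(order="C")          # row-by-row scan
    cols = band.flatten(order="F")          # column-by-column scan
    h_rows = order0_entropy(np.diff(rows))
    h_cols = order0_entropy(np.diff(cols))
    return ("row-column", rows) if h_rows <= h_cols else ("column-row", cols)
```

For a band whose rows are nearly constant (strong horizontal structure), the row-by-row scan yields long runs and lower entropy, so "row-column" ordering is selected.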

2.1. Zerotree coding

The problem is the uncertainty in the location of the significant coefficients in the child band. Unlike the zerotree case, where the existence of an insignificant parent node implies that all its children are insignificant, here the appearance of a nonzero parent does not immediately imply the presence of nonzero children. However, since nonzero values are expected to be generated by image edges, it is reasonable to expect their presence at related positions at all scales. Therefore, although the exact location of the significant children depends on various factors (the kind of edge, filter length, and other parameters), the probability of nonzero data is increased somewhere near the location of those children. This is one sort of inter-band data dependency to be reduced. Also, the clear relation of an insignificant parent with all-insignificant children captured by the zerotree can be broken by the significance of any child node. Such an isolated zero (IZ) suggests the possibility that other nonzero children appear close to its children.

We noticed that a significant parent often has no significant child, but this case is not expressed by the zerotree structure, and many zero-valued squares remain in the data significance map.

To remove redundancy in the spatial-frequency domain, the convenient zerotree structure is used, but the way zerotrees are coded is modified. A new symbol, called a significant zerotree root, is introduced when a parent node is significant and all its child nodes are insignificant. Thus we need to examine the significant nodes and determine whether or not each is such a root. An additional binary map is created as a separate data stream, which can be efficiently coded by context modeling, prediction, and arithmetic coding of the prediction errors. The magnitudes of significant nodes, isolated zeros, and insignificant zerotree root symbols can be efficiently coded together by a 1st-order arithmetic coder.



The procedure for applying the zerotree model is as follows. After image wavelet decomposition, the typical hierarchical tree in dyadic form is analyzed. The four lowest-frequency subbands of the coarsest scale level are located at the top of the tree. These data have no parent nodes, and three of them are the parents of the coefficients at lower tree levels (in corresponding spatial locations). Each parent coefficient has four direct children at the finer scale, and each child has a direct parent at the coarser scale. The procedure of zerotree construction starts at the bottom of the wavelet tree. First, each coefficient is checked for significance (S) or insignificance (I), and a binary tree is built. The pruning of this tree is performed next. Only branches with insignificant coefficients can be pruned, and the procedure differs slightly at the top and at the bottom of the tree. Starting from the tree bottom, the values of four children and their parent from the higher level are tested. If all children are insignificant, the tree branches are removed and the parent is marked as a pruned-branch node: an insignificant zerotree root (IZTR) if the parent is insignificant, or a significant zerotree root (SZTR) otherwise. The tree alphabet is thereby enlarged, and a new symbol code is arranged for it. Information about the distribution of SZTRs is written as bits to a separate file. Pruning of the middle-level branches is performed when all children are marked as insignificant zerotree roots. The last step of this pruning process is different: the spatial correlation of the coarsest-scale data from different subbands is exploited. LL data are treated as the parent of the three coefficients from HL, LH, and HH at the same spatial location (horizontal relations). The mode of the LL histogram, instead of zero, is used to evaluate node significance. The tree is pruned when the coefficient from LL equals the mode value and the three corresponding coefficients are IZTRs.
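A minimal sketch of one bottom-level classification step may clarify the four node types (IZTR, SZTR, IZ, and a significant node with significant children). The function below is a hypothetical illustration under my own assumptions about array layout (each parent at (r, c) owns the 2x2 child block at (2r, 2c)); it is not the author's code.

```python
import numpy as np

def classify_parents(parent_band, child_significance):
    """Label each parent node after testing its 2x2 block of children.

    parent_band: quantized coefficients of the parent subband.
    child_significance: boolean map of the child subband (True = significant).
    """
    labels = np.empty(parent_band.shape, dtype=object)
    for r in range(parent_band.shape[0]):
        for c in range(parent_band.shape[1]):
            children = child_significance[2*r:2*r+2, 2*c:2*c+2]
            parent_significant = parent_band[r, c] != 0
            if not children.any():
                # all four children insignificant: the branch can be pruned
                labels[r, c] = "SZTR" if parent_significant else "IZTR"
            elif parent_significant:
                labels[r, c] = "S"    # significant node, significant child kept
            else:
                labels[r, c] = "IZ"   # isolated zero: zero parent, nonzero child
    return labels
```

In the full algorithm this test is repeated level by level toward the top of the tree, with the special LL-mode rule applied at the coarsest level as described above.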

The benefit of applying extended zerotree context model is presented in Figure 1.

2.2. Magnitude redundancy and context-based entropy coding

The results of experiments reported by Servetto1 and Buccigrossi3 show that correlations among neighboring coefficients of any subband, excluding the lowest-frequency data, are negligible, but correlations measured among the magnitudes of neighboring coefficients are somewhat higher. Because the wavelet decomposition is a linear transform and any linear data dependencies are expected to be removed, any remaining dependencies can be revealed by nonlinear data processing; taking magnitudes is such a nonlinear operation. The results confirm that some redundancy is present after the wavelet transformation. Large-magnitude coefficients are likely to be found spatially clustered, and low-magnitude coefficients are likewise located close to other low-magnitude coefficients. This means that data magnitudes should be considered in context-based coding. The dependence between magnitudes occurs not only locally in the spatial domain but is also observed between pixels at different levels of the hierarchical pyramid (frequency domain).

Figure 1 (continued on next page)




Figure 1. The effect of zerotree coding is presented in three pairs of pictures. First pair, left: original Lenna image; right: significant and insignificant coefficients after quantization (0.25 bpp), with significant coefficients shown in black and insignificant coefficients in white. The second pair presents the effect of encoding zerotrees by inserting the additional zerotree-root symbol; pruned insignificant coefficients are colored gray. The left image mixes significant, insignificant, and pruned coefficients; the right image contains the map of insignificant coefficients that were not included in the pruned zerotree set (the benefit of the zerotree context model is lost in this case). The third pair has the same meaning, but after the proposed zerotree coding, which includes the additional zerotrees in the context model. The improved accuracy of this model is visible especially in the HH subband of the finest scale.


We examine linear prediction for coefficient magnitude estimation (ME). The magnitude value sets are arithmetic coded conditioned on the ME value of each significant coefficient c_i. The MEs are calculated on the basis of the adjacent coefficient magnitudes and the magnitude of the parent node. The ME equation is simply expressed as

ME_i = Σ_k a_k M_{i,k},   (1)

where the set of adjacent coefficient magnitudes {M_{i,k}} corresponds to the modeled context. The weights a_k of the linear estimator are chosen to minimize the squared error of the approximation of the real magnitude values. They can be determined via linear regression, so that the ME values are least-squares estimates of c_i. Predictor optimization can be performed for each compressed image or, more generally, using a training image set. To decrease compression time and avoid the additional cost of encoding an adaptive linear predictor, we used the image-independent set of weights presented in Table 1. These values were chosen heuristically based on the natural and medical images used in our tests. A proper way of scanning and ordering the coefficients in successive subbands was considered, but raster scanning proved the most efficient for probability model estimation.

Figure 2. Contexts used for magnitude estimation in the coding scheme for out-of-zerotree coefficients. The dotted line signifies parent-child relations.

Some context modeling is applied. We decided to implement the image-independent contexts presented in Figure 2, since in our research we found that adaptive context modeling is not profitable in many cases.

Table 1. Weights of the linear predictor used for coefficient magnitude estimation in the statistical model of spatial-frequency dependencies in the wavelet domain. It corresponds to Figure 2.

Subband      a1 (left)   a2 (up)   a3 (parent)   a4 (up-up)
Horizontal   0.25        0.5       0.025         0.2
Vertical     0.49        0.46      0.025         -
Diagonal     0.48        0.5       0.01          -
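As an illustration, Eq. (1) with the image-independent horizontal-subband weights of Table 1 could be computed as below. The border handling (treating missing neighbours as zero magnitude) and the function name are my assumptions, not details given in the text.

```python
import numpy as np

# Horizontal-subband weights from Table 1; context per Figure 2:
# left and up neighbours, the parent node, and the up-up neighbour.
WEIGHTS_H = {"left": 0.25, "up": 0.5, "parent": 0.025, "up_up": 0.2}

def magnitude_estimate(band, parent_band, r, c, w=WEIGHTS_H):
    """Linear magnitude estimate ME_i = sum_k a_k * M_{i,k} at (r, c).

    band: quantized coefficients of the current subband.
    parent_band: the coarser-scale subband (half the size in each dimension).
    """
    left  = abs(band[r, c-1]) if c >= 1 else 0.0   # left neighbour magnitude
    up    = abs(band[r-1, c]) if r >= 1 else 0.0   # up neighbour magnitude
    up_up = abs(band[r-2, c]) if r >= 2 else 0.0   # up-up neighbour magnitude
    par   = abs(parent_band[r // 2, c // 2])       # parent node magnitude
    return (w["left"] * left + w["up"] * up
            + w["parent"] * par + w["up_up"] * up_up)
```

The resulting ME value would then be quantized and used as the conditioning context of the adaptive arithmetic coder, as described in the following paragraph.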

Linear prediction is used instead of a K-order conditional probability model P(c_i | M^K) because of context dilution. This context quantization can be continued by quantizing the MEs themselves: the number of levels was decreased by about 30% to improve the statistical model. The resulting conditional probability model P(c_i | Q(ME_i)) used in adaptive arithmetic coding can decrease the conditional entropy in comparison with the first-order model P(c_i | c_{i-1}) of ordered magnitudes, which is used in many applications. The proposed model is simpler than the models of ECECOW,2 but deals with multilevel magnitude maps. In our opinion it captures a majority of the mutual information between such a linear predictor and the coefficient magnitude values; this is similar to the results presented by Buccigrossi.3




2.3. Strong dependencies in map data structure

Assigning one bit of information describing the root property of significant nodes creates a map of binary data, which can be coded efficiently. If a given coefficient is known to be a root, then coefficients in its small neighborhood will probably also be roots, forming structures related to weak edges that extend beyond a single point. This suggests that local probability models dependent on the region being coded may provide an efficient way of encoding this map. The causal model can be completed with higher-level information at the related spatial location. A lower but still noticeable level of correlation was observed for the sign map; therefore small contexts were used in the entropy coding of both maps.

In our application, only simple context modeling and a linear prediction scheme were applied to encode the SZTR map. To predict the presence or absence of an SZTR, we used the status of the surrounding data in a causal context at the current and higher levels, as follows:

P = NINT( (1/K) Σ_{k=1}^{K} f(c_k) ),   (2)

where

f(c_k) = 1 if c_k ∈ {IZ} or c_k ∈ {SchS}, and 0 otherwise,   (3)

IZ denotes an isolated zero, SchS a significant coefficient with one or more significant children (i.e., not a significant zerotree root), and K is the order of the context. If the value of P for a successive significant coefficient is zero, an SZTR is expected.

The contexts used for root-node prediction are presented in Figure 3. The prediction errors are coded as a separate data stream by a 1st-order arithmetic coder.
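A sketch of the predictor of Eqs. (2)-(3), under the assumption (the scanned equation is partly illegible) that NINT rounds the mean of the K context indicators; symbol names follow the text:

```python
def f(symbol):
    """Indicator of Eq. (3): 1 for an isolated zero (IZ) or a significant
    coefficient with a significant child (SchS), 0 otherwise."""
    return 1 if symbol in ("IZ", "SchS") else 0

def predict_sztr(context_symbols):
    """Eq. (2): P = NINT((1/K) * sum_k f(c_k)); an SZTR is predicted
    when P == 0.

    context_symbols: the causal context of K already-coded neighbours
    from the current and coarser levels, as in Figure 3.
    """
    K = len(context_symbols)
    p = round(sum(f(c) for c in context_symbols) / K)  # nearest integer
    return p == 0   # True -> significant zerotree root expected
```

The prediction error (predicted versus actual root status) is what would then be fed to the 1st-order arithmetic coder.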

Figure 3. Contexts used for prediction of the appearance of significant zerotree roots. The parent node, marked by a blank circle, is not included in the context modeling; only adjacent coefficients of the same level are.

A similar redundancy reduction was performed in the encoding of sign information. For the high-frequency subbands the probabilities of positive and negative coefficients are equal and the distribution is zero-mean, yet not spatially independent. Some applications, like CREW and ECECOW, take advantage of this to increase coding efficiency. The dependencies are rather small and exist only between direct neighbors. A useful influence was noticed only with a 1st-order conditional probability model of the 'causal neighbor' in the proper direction. Thus, the simple prediction model is as follows:

• for horizontal subbands, the sign of the left neighbor coefficient is the prediction;
• for vertical and diagonal subbands, the sign of the up neighbor coefficient is the prediction.
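The two-rule sign predictor and the binary prediction-error map it produces can be sketched for a toy band as follows; the function name and the zero prediction at band borders are my assumptions:

```python
import numpy as np

def sign_prediction_errors(band, orientation):
    """Predict each significant coefficient's sign from its causal neighbour
    (left for horizontal subbands, up for vertical/diagonal ones) and return
    the binary prediction-error stream that would be entropy coded."""
    s = np.sign(band)
    pred = np.zeros_like(s)          # border coefficients get no prediction
    if orientation == "horizontal":
        pred[:, 1:] = s[:, :-1]      # sign of the left neighbour
    else:                            # vertical or diagonal
        pred[1:, :] = s[:-1, :]      # sign of the up neighbour
    mask = s != 0                    # only significant coefficients are coded
    return (pred[mask] != s[mask]).astype(np.uint8)  # 1 where prediction fails
```

A lower density of ones in the returned stream means lower entropy after prediction, which is the roughly 1% gain reported below for Lenna.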




The map of binary sign information is coded separately. As a result, the coding efficiency, for example for Lenna, was increased by close to 1%. Because the lowpass bands contain mostly positive coefficients (97.6% over 13 test images in Buccigrossi3), no sign prediction procedure is applied in that case.

2.4. Encoding the LL band

The low-pass, or lowest-frequency, subband is usually of very small size and has significantly different statistical characteristics than the band-pass subbands. A uniform density model is considered as an approximation of the LL coefficient distribution. A small number of insignificant coefficients appears in LL even at low bit rates. In our algorithm the IZ symbol is included because zerotrees are also constructed in the horizontal relations of the highest tree level.

The correlation of the LL data is very small. We tested different models of data prediction and conditional estimation, but without success. Raster scanning creates a one-dimensional data stream, which can be encoded more efficiently by an arithmetic coder without any conditional model. In several cases, however, the arithmetic-coded representation was longer than the uncoded LL data set, which supports solutions where the LL band is transmitted or stored uncoded, as in SPIHT.6 We decided to implement a bitwise arithmetic coder based on the statistics of successive bits, which is more efficient than a 0-order wordwise coder by up to 10%. This means that greater correlation appears between successive bitmaps of the coefficient values.
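Why a bitwise coder can beat a 0-order wordwise coder on a short LL stream can be illustrated with ideal adaptive code lengths: a wordwise model over a large alphabet pays a high learning cost on few samples, while eight small binary models adapt quickly. The Laplace-smoothed estimator below is my assumption for illustration; the paper does not give the coder's details.

```python
import numpy as np

def adaptive_cost(symbols, alphabet_size):
    """Ideal code length (bits) of a zero-order adaptive arithmetic coder
    with a Laplace (+1) probability estimator over the given alphabet."""
    counts = np.ones(alphabet_size)
    total = 0.0
    for s in symbols:
        total += -np.log2(counts[s] / counts.sum())  # code length of symbol s
        counts[s] += 1                               # update the model
    return total

def bitwise_adaptive_cost(values, nbits):
    """Same coder, but each bit plane gets its own binary adaptive model."""
    v = np.asarray(values, dtype=np.int64)
    return sum(adaptive_cost((v >> b) & 1, 2) for b in range(nbits))
```

For a short stream of 8-bit LL values confined to a narrow range, the constant upper bit planes cost almost nothing under the binary models, while the 256-symbol wordwise model spends many bits before its counts adapt.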

2.5. Final stage of coding

Three data sets were distinguished with respect to their statistical characteristics. This is a consequence of the differences in the coefficient value distributions of the respective subbands and of the enlarged alphabet of the data set after zerotree pruning (the additional IZ symbol is included). When data statistics differ significantly and the data streams are rather short, an increase in coding efficiency is achieved by data partitioning: subsets with different statistics are coded with appropriate statistical models. Thus, the adaptive statistical model built during entropy coding should be reset and reinitialized for each separated part of the input data stream. The following two data sets are distinguished:

• the middle-level data set (the rest of the coarsest-scale subbands plus the next decomposition levels, excluding the bottom one): a considerable number of zero-valued coefficients and the additional IZ code;

• the lowest-level data set: a greater number of insignificant coefficients; the additional code is not included.

The LL subband coefficients, as the third distinguished data set, are encoded in a different manner because they contain completely different information. The scheme of the whole coding algorithm is presented in Figure 4. The flow of information is parallel, and the procedures are not time-consuming. The two dotted-line blocks are conditional.

3. TESTS

The natural test images Lenna, Barbara and Goldhill (512 x 512 x 8 bits) and a magnetic resonance (MR) image (256 x 256 x 8 bits) are used for the presentation of results. To evaluate the efficiency of the presented coding scheme, we compared it to a simple coding method (SCM) with zerotrees and zerotree roots, a 1st-order arithmetic coder, and statistical data partitioning as in Przelaskowski.4 The results are presented in Figure 5. For ease of reference, we refer to the described coding scheme as COPEZ, standing for context-based prediction with extended zerotrees. We also tested the influence of the three fundamental stages of a lossy wavelet compression scheme on the final compression efficiency. For this purpose we composed compression schemes from four combinations of wavelet decomposition and coefficient quantization, with COPEZ coding of the quantized coefficients:

A. the popular Antonini filter bank and simple uniform scalar quantization,
B. an optimized filter bank (fitted to the concrete data characteristics) and simple uniform scalar quantization,
C. the Antonini filter bank and optimized adaptive space-frequency quantization,
D. an optimized filter bank and optimized adaptive space-frequency quantization.

The results are presented in Table 2.



To check the effectiveness of a wavelet-based algorithm containing COPEZ, a comparison with the most efficient compression techniques, SPIHT,6 SFQ,7 PACC,8 C/B,9 PC-AUTQ10 and EQ11, was performed; see the results in Table 3. We applied COPEZ to the wavelet decomposition and quantization procedures presented by Przelaskowski.

[Figure 4 diagram: the hierarchical pyramid of quantized wavelet coefficients feeds three parallel paths. (1) Magnitudes and zerotree-root symbols, excluding LL data, pass through calculation of the weights a_k of the linear estimator, context-based coefficient magnitude estimation, and wordwise arithmetic coding (a separate probability model is built for the bottom-level data). (2) Sign information passes through prediction of the coefficient sign. (3) The significant zerotree root map passes through prediction of significant zerotree root appearance. Paths (2) and (3) end in binary or wordwise zero-order arithmetic coding.]

Figure 4. General scheme of the presented coding algorithm. LL means the lowest-frequency subband. The dotted-line blocks may be performed conditionally.

4. RESULTS AND DISCUSSION

The results are presented in Figure 5 and Tables 2 and 3.

[Figure 4 diagram, continued: tree pruning by construction of zerotrees and insertion of additional symbols (remove child nodes if all are insignificant; mark root nodes, both significant and insignificant; map significant nodes; extract coefficient magnitudes and signs). LL data and the significant zerotree root map are coded by a binary 1st-order arithmetic coder; all streams merge into the encoded data stream.]


Fig. 5. The benefit of applying the COPEZ scheme in a wavelet compression algorithm over the simple coding method (SCM); the test image Lenna is used.

Table 2. Comparison of different wavelet decomposition and coefficient quantization procedures in the wavelet-based compression scheme. The COPEZ coding scheme is used. PSNR values for 0.25 bpp are presented in each case.

Coding procedure   Lenna   Barbara   Goldhill
A                  34.02   27.87     30.39
B                  34.17   28.34     30.44
C                  34.33   28.11     30.71
D                  34.49   28.67     30.76

Table 3. Compression efficiency evaluation. Several wavelet-based techniques with different coding procedures are compared. PSNR values at 0.25 bpp and 0.5 bpp are presented in each case.

Compression      Lenna            Barbara          Goldhill         MR
technique        0.25    0.5      0.25    0.5      0.25    0.5      0.25    0.5   [bpp]
SPIHT            34.13   37.24    27.79   31.72    30.63   33.19    34.85   39.10
SFQ              34.33   37.36    28.29   32.15    30.71   33.37    -       -
C/B              34.45   37.59    28.38   32.22    30.77   33.43    34.98   39.30
PACC             34.50   37.50    28.62   32.52    30.81   33.49    35.36   39.58
PC-AUTQ          34.46   37.56    -       -        30.78   33.46    -       -
EQ               34.57   37.68    -       -        30.76   33.42    -       -
D (with COPEZ)   34.49   37.54    28.67   32.66    30.76   33.41    35.38   39.72


[Figure 5 plot: PSNR [dB] (35 to 39) versus bit rate [bpp] (0.3 to 0.7) for SCM and COPEZ.]


The improvement in effectiveness of COPEZ over SCM is clearly visible. The contributions of the zerotree extension and of the context-based statistical model in entropy coding to this improvement are similar; the benefit of predictive sign-information encoding is smaller.

The influence of suitable filter banks, quantization, and coding schemes on compression efficiency differs for each test image. In the case of Lenna, the importance of optimizing each stage seems comparable. For Barbara, proper selection of the filter bank and the subband decomposition form is the most profitable for improving compression efficiency; complex optimization of the coding scheme is not promising in this case. For Goldhill, fitting the quantization and coding algorithms is very important, while the choice of filter bank cannot significantly increase the effectiveness of the whole compression method.

The compression efficiency of the wavelet algorithm with the COPEZ coding scheme is up to 1 dB of PSNR better than SPIHT in some cases, and the SFQ method is also worse in the PSNR sense. Generally, the effectiveness of the presented compression technique and of the four methods C/B, PACC, PC-AUTQ and EQ is comparable. For two images, Barbara and MR, our technique is the most efficient, as EQ is for Lenna and PACC for Goldhill.

5. CONCLUSIONS

The removal of 'unusual' information in a lossy manner can be optimized by proper design of the wavelet transform and quantization algorithm in the compression scheme. But the reduction of any remaining redundancy in the wavelet domain should be continued in the coding scheme, e.g. by applying nonlinear models. Constructing efficient predictors and context models for the probability source models of wavelet coefficients is difficult because of the limited amount of data and its varying statistics; thus fast adaptive models are preferable. The development of high-order context models is also promising.

The presented coding scheme, applied in a wavelet compression algorithm, achieves very high effectiveness. The extended zerotree coding method models the distribution of insignificant coefficients in a better way. The presence of significant zerotree roots can be predicted on the basis of adjacent coefficients in a space-frequency causal model and therefore encoded efficiently. Future research should further examine the suitable relations between the real data dependencies captured by prediction models and zerotree structures.

The influences of the wavelet transform, the quantization algorithm, and the coding method are comparable, and the joint optimization of these three stages, with a complex model of the original data and of the wavelet-domain statistics, seems the most reasonable and profitable direction for compression technique development. Only such joint models can properly describe and exploit image data characteristics. To achieve a nearly optimal code in each case, a balance and proper interdependence between the wavelet decomposition, quantization, and coding schemes is needed to eliminate any redundancy.

6. REFERENCES

1. S.D. Servetto, K. Ramchandran, and M.T. Orchard, "Image Coding Based on a Morphological Representation of Wavelet Data," submitted to IEEE Trans. Image Processing, 1996.

2. X. Wu, "High-Order Context Modeling and Embedded Conditional Entropy Coding of Wavelet Coefficients for Image Compression," JPEG2000 proposal, 1997.

3. R.W. Buccigrossi and E.P. Simoncelli, "Image Compression via Joint Statistical Characterization in the Wavelet Domain," GRASP Laboratory Technical Report #414, University of Pennsylvania, May 1997.

4. A. Przelaskowski, M. Kazubek, and T. Jamrógiewicz, "Effective Wavelet-based Compression Method with Adaptive Quantization Threshold and Zerotree Coding," Proceedings of SPIE, Multimedia Storage and Archiving Systems II, vol. 3229, pp. 348-356, 1997.

5. A. Przelaskowski, "Fitting a quantization scheme to multiresolution detail preserving compression algorithm," IEEE-SP International Symposium on Time-Frequency and Time-Scale Analysis, Pittsburgh, USA, October 1998.

6. A. Said and W.A. Pearlman, "A New Fast and Efficient Image Codec Based on Set Partitioning in Hierarchical Trees," submitted to IEEE Trans. Circuits and Systems for Video Technology, 1996.

7. Z. Xiong, K. Ramchandran, and M.T. Orchard, "Space-Frequency Quantization for Wavelet Image Coding," IEEE Trans. Image Processing, to appear, 1997.



8. D. Marpe and H.L. Cycon, "Efficient Pre-Coding Techniques for Wavelet-Based Image Compression," PCS, Berlin, 1997.

9. C. Chrysafis and A. Ortega, "Efficient context-based entropy coding for lossy wavelet image compression," DCC, Snowbird, UT, 1997.

10. Y. Yoo, A. Ortega, and B. Yu, "Progressive Classification and Adaptive Quantization of Image Subbands," submitted to IEEE Trans. Image Processing, 1997.

11. S.M. LoPresto, K. Ramchandran, and M.T. Orchard, "Image Coding based on Mixture Modeling of Wavelet Coefficients and a Fast Estimation-Quantization Framework," IEEE Data Compression Conference '97 Proc., pp. 221-230, 1997.

