  616 IEEE SIGNAL PROCESSING LETTERS, VOL. 13, NO. 10, OCTOBER 2006

    Vector Quantization in Multiresolution Mesh Compression

    Junlin Li, Student Member, IEEE, Dihong Tian, Student Member, IEEE, and Ghassan AlRegib, Member, IEEE

    Abstract—Irregularly sampled triangular meshes contain vertices in a geometric space with arbitrary connectivity degrees. From a compression point of view, this letter presents a multiresolution analysis for irregular meshes using progressive vector quantization (VQ). We show that VQ exploits the joint distribution of coordinates in the geometric space, thus improving the compression efficiency. Based on the presented compression algorithm, we further propose an equalization algorithm that properly balances between connectivity downsampling and geometry quantization and achieves the optimal rate-distortion performance under constrained bit rates.

    Index Terms—Compression, connectivity, geometry, irregular mesh, multiresolution, rate-distortion performance, vector quantization (VQ).

    I. INTRODUCTION

    TRIANGULAR meshes are the most widely used data format in computer graphics applications. Unlike regularly sampled data such as images, triangular meshes are often represented by vertices with 3-D spatial coordinates (geometry) and interconnected with arbitrary degrees (connectivity). In mathematical words, a triangular mesh is denoted by a pair M = (P, K), where P is a set of points, and K is a simplicial complex containing the topological information. K is a union of three sets defined on the index set of P: vertices V, edges E, and faces F. Hence, K = V ∪ E ∪ F.

    Multiresolution compression for irregular meshes is performed by downsampling the topology using edge-collapse operations, predicting the coordinates of the collapsed vertices, and coding the prediction residuals along with needed connectivity information that tells which edges should be split to recover the collapsed vertices [1]–[4].¹ Entropy coding is commonly used in the existing algorithms to compress the coordinate residuals in separate spatial dimensions. Vector quantization (VQ) [6] has been introduced in single-resolution mesh compression [7], [8] to code the vertex geometry jointly,

    Manuscript received October 7, 2005; revised March 31, 2006. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Mihaela van der Schaar.

    The authors are with the School of Electrical and Computer Engineering, Georgia Institute of Technology, Savannah, GA 31407 USA (e-mail: [email protected]; [email protected]; [email protected]).

    Digital Object Identifier 10.1109/LSP.2006.877142

    ¹Wavelet-based algorithms (e.g., [5]) reported higher compression efficiency than the above methods by resampling the irregular mesh to be semi-regular before compression, which alters the connectivity of the original mesh in a non-invertible manner. Here we limit our discussion to algorithms that preserve the original connectivity.

    whereas its incorporation with a multiresolution hierarchy has not been studied. Such a study is valuable, as the predictive structure in multiresolution compression is essentially different from that in its single-resolution counterpart. In this letter, we propose to use progressive VQ [9] in multiresolution 3-D mesh compression. The presented algorithm is referred to as VQM throughout the letter. By developing a hierarchical predictive coding structure, we show that VQM not only improves the rate-distortion performance considerably but also attains a finer granularity in the multiresolution hierarchy compared to the preceding algorithms.

    In multiresolution compression, both connectivity downsampling and geometry quantization reduce the bit rate while introducing certain distortion [10]. Our further investigation on multiresolution irregular meshes seeks the best tradeoff between the connectivity downsampling and the geometry quantization when the overall bit rate is constrained. Under the framework of VQM, we propose an equalization algorithm to accomplish this goal, which performs bit allocation between connectivity and geometry toward the optimal rate-distortion performance for a given bit-rate constraint.

    II. VQM ALGORITHM

    The diagram of the proposed VQM algorithm is presented in Fig. 1. Briefly, the algorithm can be summarized in three steps: 1) The full-resolution mesh M^n is downsampled to generate different levels of details, M^{n-1}, …, M^0, by successive half-edge collapses. Each half-edge collapse operation merges one vertex of an edge to the other, which affects only the neighborhood of the collapsed vertex. The corresponding connectivity information for collapsed vertices, C^l, is encoded while the geometry data, G^l, is buffered. 2) The base mesh M^0 is encoded using single-resolution mesh compression algorithms, e.g., [11]. 3) The enhancement geometry data are encoded sequentially from G^1 to G^n by a predicted progressive VQ algorithm. In this basic structure of VQM, the processing on connectivity closely follows the existing algorithms, while essential differences exist in the processing of the geometry data. In this section, we address these differences. Interested readers are referred to [4] for further details of the connectivity coding.
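The half-edge collapse in step 1 can be sketched in a few lines. The toy implementation below is illustrative only — the data structures, function name, and example mesh are assumptions, not the paper's — but it shows how merging one vertex of an edge onto the other affects only the neighborhood of the collapsed vertex.

```python
# Minimal sketch of one half-edge collapse (the downsampling primitive).
# Vertex u is merged onto v; any face containing the collapsed edge (u, v)
# degenerates and is removed, and u is renamed to v elsewhere.

def half_edge_collapse(vertices, faces, u, v):
    """Collapse vertex u onto v; return the updated (vertices, faces)."""
    new_vertices = {i: p for i, p in vertices.items() if i != u}
    new_faces = set()
    for f in faces:
        if u in f and v in f:
            continue  # this face spanned the collapsed edge: drop it
        new_faces.add(tuple(sorted(v if i == u else i for i in f)))
    return new_vertices, new_faces

# Toy example: three triangles; collapsing edge (1, 2) removes the two
# faces that contain it and renames vertex 1 to 2 in the remaining one.
verts = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0), 2: (0.0, 1.0, 0.0),
         3: (1.0, 1.0, 0.0), 4: (2.0, 0.0, 0.0)}
faces = {(0, 1, 2), (1, 2, 3), (1, 3, 4)}
verts2, faces2 = half_edge_collapse(verts, faces, u=1, v=2)
```

Only the 1-ring of the collapsed vertex changes, which is what makes the operation cheap to invert with a small amount of side information.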

    A. Bottom-Up Predictive Coding

    The existing compression algorithms such as CPM [4] use scalar quantization, and the input mesh is pre-quantized before downsampling. Thus, the geometry data of collapsed vertices are immediately coded after the downsampling (half-edge collapse) operations. In other words, those algorithms are implemented in a top-down structure. In contrast, VQM encodes the



    Fig. 1. Diagram of the VQM algorithm. (a) VQM encoder. (b) VQM decoder.

    geometry data in a reverse order after all the downsampled levels are generated (see Fig. 1), which formulates a bottom-up coding structure. This bottom-up structure is essential for the predictive nature of the multiresolution compression, as it ensures that the predictions performed at the encoder and the decoder exactly match. In particular, the prediction of the lth-level geometry data, G^l, must be based on the reconstructed version of the mesh M^{l-1}, denoted by M̂^{l-1}. The higher resolution mesh, M̂^l, is then reconstructed by upsampling M̂^{l-1} with the decoded lth-level enhancement connectivity and predicted vertices refined by de-quantized geometry residuals. This procedure can be expressed by the following relationship:

    M̂^l = ( P(M̂^{l-1}) + Q^{-1}(Q(R^l)), Ĉ^l )    (1)

    where P and Q represent the functions of vertex prediction and vector quantization, respectively. Note that in general, Q^{-1} ∘ Q ≠ I, the unit function, due to the lossy property of VQ. Ĝ^l = P(M̂^{l-1}) + Q^{-1}(Q(R^l)) is the decoded lth-level geometry (lossy recovery), and Ĉ^l is the decoded lth-level connectivity (lossless recovery).

    R^l = {r_i} denotes the set of prediction residuals (vectors) for the lth-level geometry data. More explicitly, for each point p_i ∈ G^l, its spatial coordinates are predicted from its 1-ring neighbors N(i) as follows:

    p̃_i = (1 / |N(i)|) Σ_{j ∈ N(i)} p̂_j    (2)

    where p̂_j is a 1-ring neighboring point of vertex i on the reconstructed mesh M̂^{l-1}. Herein, we use |·| to denote the size of a set. Now, for vertex i, the reconstructed point p̂_i on mesh M̂^l is computed by

    p̂_i = p̃_i + Q^{-1}(Q(r_i))    (3)

    where r_i = p_i − p̃_i is the residual of the predicted point p̃_i from its original version p_i.
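The prediction and refinement of (2) and (3) can be sketched as follows. The two-codeword "codebook" and the helper names here are stand-ins assumed for illustration; a real system would use a trained VQ codebook.

```python
import numpy as np

# Sketch of the prediction/refinement step: a collapsed vertex is predicted
# as the average of its decoded 1-ring neighbors (eq. (2)), then refined by
# the de-quantized residual (eq. (3)). The tiny codebook below is a
# stand-in for a trained VQ codebook, not the paper's.

def predict(ring_points):
    """Barycentric prediction from the 1-ring neighborhood."""
    return np.mean(ring_points, axis=0)

def vq(residual, codebook):
    """Nearest-codeword quantization; returns the codebook index."""
    return int(np.argmin(np.linalg.norm(codebook - residual, axis=1)))

ring = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 3.0, 0.0]])
original = np.array([1.2, 1.0, 0.1])          # the collapsed vertex

p_tilde = predict(ring)                       # eq. (2): (1, 1, 0)
r = original - p_tilde                        # residual r_i
codebook = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.1]])
idx = vq(r, codebook)                         # Q(r_i): transmitted index
p_hat = p_tilde + codebook[idx]               # eq. (3): decoded vertex
```

Because the decoder forms the same p̃_i from the same reconstructed neighbors, encoder and decoder predictions match exactly, which is the point of the bottom-up structure.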

    Fig. 2. Distribution of the prediction residuals for the HORSE model with five enhancement layers. (a) First layer. (b) Fifth layer.

    B. Vector Quantization

    To introduce VQ, we first investigate the distribution of the prediction residuals. Fig. 2 presents a demonstration on the prediction residuals for a test model, where Fig. 2(a) and (b) are for two different downsampled layers, respectively. For each of the two layers, subplots (i)–(iv) show the separate distributions of the x-, y-, and z-components and the joint distribution of the (x, y, z) components, respectively. It is observed that the distribution of each component is highly nonuniform with higher probability density near the origin, and the (x, y, z) components are also tightly correlated, which promotes the introduction of VQ toward higher compression efficiency than scalar quantization (with entropy coding). Furthermore, comparing Fig. 2(a) with Fig. 2(b), we note that the residual distributions of different enhancement layers are also different. The first layer has the largest covariance, while the last layer has the smallest covariance, which implies that the quantization performance can be further improved by using different quantization schemes for different enhancement layers.
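The advantage of coding correlated components jointly can be illustrated numerically. The sketch below uses synthetic correlated residuals and plain GLA/Lloyd training (the paper trains with stochastic relaxation [12] instead); it spends the same 4 bits per residual vector two ways, as a 16-codeword joint VQ versus two independent 4-level scalar quantizers.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic, strongly correlated 2-D residuals (assumed data, for
# illustration only -- not the paper's measurements).
x = rng.normal(size=4000)
data = np.stack([x, x + 0.1 * rng.normal(size=4000)], axis=1)

def lloyd(points, k, iters=30):
    """Plain Lloyd/GLA codebook training (suffices for this illustration)."""
    codes = points[rng.choice(len(points), k, replace=False)].copy()
    for _ in range(iters):
        idx = np.argmin(((points[:, None] - codes[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(idx == j):
                codes[j] = points[idx == j].mean(axis=0)
    return codes

def mse(points, codes):
    """Mean-squared error of nearest-codeword quantization."""
    return ((points[:, None] - codes[None]) ** 2).sum(-1).min(axis=1).mean()

# 4 bits per residual vector, spent two ways:
vq_codes = lloyd(data, 16)              # joint VQ: 16 2-D codewords
sq_x = lloyd(data[:, :1], 4)[:, 0]      # scalar: 4 levels per axis,
sq_y = lloyd(data[:, 1:], 4)[:, 0]      # i.e., a 16-point product grid
sq_codes = np.array([[a, b] for a in sq_x for b in sq_y])
mse_vq, mse_sq = mse(data, vq_codes), mse(data, sq_codes)
```

The joint codewords land along the correlated diagonal, while the scalar product grid wastes codewords on regions the data never visits, so mse_vq comes out well below mse_sq at the same rate.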

    We employ a progressive full search algorithm [9] to perform VQ in VQM. Two steps are necessary to implement the progressive full search VQ: 1) designing the full-rate codebook and 2) designing the full search progressive transmission tree and the corresponding intermediate codebooks. The first step is a general VQ codebook training problem. In order to obtain a globally optimal codebook, in this letter, we implement a reduced-complexity stochastic relaxation (SR) algorithm [12] rather than the widely used suboptimal GLA algorithm. With the full-rate codebook, the progressive transmission tree and corresponding intermediate codebooks can be designed with any of the methods discussed in [9]. The intermediate codewords give the full search VQ codebook a successive approximation character.

    The progressive VQ procedure in the encoder is as follows. The geometry prediction residual vector is first encoded through an exhaustive search of the leaves of the full-search progressive transmission tree. Then the tree index of the selected codevector can be decoded progressively by traversing the tree one bit at a time from the root to the selected leaf. At the decoder, with


    Fig. 3. Rate-distortion curves for two test models. (a) HORSE (39 698 triangles). (b) VENUS HEAD (67 170 triangles).

    each bit decoded, the decoder displays the intermediate codevector located at the internal node being visited in the tree. With the progressive vector quantization, the coding efficiency is improved, and a finer progressive granularity is also obtained.
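The bit-by-bit refinement at the decoder can be sketched with a toy 2-bit codebook. Taking each intermediate codeword as the centroid of the leaves beneath a node is one plausible design assumed here; the paper designs the intermediate codebooks with the methods of [9].

```python
import numpy as np

# Sketch of progressive decoding over a full-search transmission tree for a
# toy 1-D, 2-bit codebook. The centroid rule for internal nodes is an
# assumption for illustration, not the paper's exact design.

leaves = np.array([[-3.0], [-1.0], [1.0], [3.0]])  # full-rate codebook

def encode(x):
    """Full search over the leaves; return the leaf index as bits, MSB first
    (the root-to-leaf path in the transmission tree)."""
    i = int(np.argmin(np.abs(leaves[:, 0] - x)))
    return [(i >> 1) & 1, i & 1]

def intermediate(bits):
    """Centroid of all leaves consistent with the bits received so far."""
    lo, hi = 0, len(leaves)
    for b in bits:
        mid = (lo + hi) // 2
        lo, hi = (mid, hi) if b else (lo, mid)
    return float(leaves[lo:hi].mean())

bits = encode(2.4)   # nearest leaf is 3.0 -> index 3 -> bits [1, 1]
stages = [intermediate(bits[:k]) for k in range(3)]
```

With each received bit the displayed codevector refines toward the leaf (here 0.0 at the root, then 2.0, then 3.0), which is the successive approximation behavior described above.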

    Summarizing the preceding description, we have presented a VQ-based compression algorithm for irregular meshes (VQM). The algorithm successively downsamples the irregular connectivity to generate hierarchies and codes the geometry using progressive VQ following a bottom-up predictive structure, which provides bit-by-bit geometry progressiveness.

    III. CONNECTIVITY-GEOMETRY EQUALIZATION

    In the multiresolution compression, both connectivity downsampling and geometry quantization reduce the bit rate while introducing certain distortion. If the overall bit rate for coding the connectivity and geometry is limited, one may choose to greatly downsample the connectivity while finely quantizing the geometry, or to coarsely quantize the geometry while maintaining a high sampling rate for the connectivity. Naturally, a proper balance between downsampling the connectivity and quantizing the geometry becomes important in order to incur the least distortion. In this section, we address this equalization problem under the framework of VQM, using its refined progressive granularity.

    Mathematically, for a given bit rate R (to simplify the notation, the bit rate for coding the base mesh is not included in R in the following discussion), the problem addressed here can be stated as

    min_{m, {R_g^l}} D    s.t.    Σ_{l=1}^{m} (R_c^l + R_g^l) ≤ R    (4)

    where m denotes the number of layers which will be encoded under the given bit rate constraint (totally n layers in the whole multiresolution hierarchy); R_c^l and R_g^l denote the bit rates for the lth layer's connectivity and geometry (l = 1, …, m), respectively; and D is the corresponding distortion, which is evaluated by the mean-squared surface distance [13] between the resolution-reduced mesh and its full-resolution version.

    We solve the problem in (4) in the following two steps. 1) Assuming the number of layers encoded is m (1 ≤ m ≤ n), determine the optimal number of bit-planes b_l for the geometry of each layer:

    {b_l*} = arg min_{b_min ≤ b_l ≤ b_max} D    s.t.    Σ_{l=1}^{m} (R_c^l + R_g^l(b_l)) ≤ R    (5)

    In (5), b_l = b_max corresponds to the case where all the layers are encoded in full precision, and b_l = b_min corresponds to the case where all the layers are encoded at a minimum bit rate, which is defined as the connectivity data plus the least number of bit planes for geometry that are required for the upsampling operations to obtain a higher resolution hierarchy.²

    2) Determine the optimal number of layers m* and the corresponding optimal bit rate for the geometry of each layer by comparing all the solutions obtained in 1) for all possible m:

    m* = arg min_{1 ≤ m ≤ n} D(m, {b_l*(m)})    (6)

    The second step in the above procedure is straightforward, while the first step is more sophisticated. An exhaustive search requires an exponential computation time and is infeasible when m becomes large. For a fast solution, we deploy a steepest descent search approach, which gains in computation time at the cost of possibly reaching a locally optimal solution. To perform the steepest descent search, we treat the number of layers and the corresponding minimum bit rate as the initial state. Then, at each stage, we have m choices, i.e., adding one more bit-plane to any one of the m layers' geometry. We compare the rate-distortion slope for these cases and choose the one that has the steepest slope. The process is repeated until the given bit rate is reached.
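The steepest-descent search can be sketched as a greedy loop. The distortion table below is synthetic, and treating per-layer distortion reductions as separable is an assumption made only for this illustration (the paper evaluates the joint mean-squared surface distance).

```python
# Greedy steepest-descent bit allocation sketch. D[l][b] is an assumed
# (synthetic) distortion contribution of layer l when it uses b extra
# bit-planes; each extra bit-plane of layer l costs cost[l] bits.

def equalize(D, cost, budget):
    """Add bit-planes one at a time to the layer with the steepest
    rate-distortion slope until the bit budget is exhausted."""
    m = len(D)
    b = [0] * m                  # start from the minimum-rate state
    spent = 0
    while True:
        best, best_slope = None, 0.0
        for l in range(m):
            if b[l] + 1 < len(D[l]) and spent + cost[l] <= budget:
                slope = (D[l][b[l]] - D[l][b[l] + 1]) / cost[l]
                if slope > best_slope:
                    best, best_slope = l, slope
        if best is None:         # no affordable refinement left
            return b
        b[best] += 1
        spent += cost[best]

# Two layers, up to three extra bit-planes each; layer 0 decays faster,
# so the greedy loop feeds it first, then alternates as slopes flatten.
D = [[8.0, 3.0, 1.5, 1.0], [6.0, 4.0, 3.0, 2.5]]
cost = [10, 10]
alloc = equalize(D, cost, budget=30)
```

With a 30-bit budget the loop picks layer 0 (slope 0.5), then layer 1 (0.2), then layer 0 again (0.15), ending at two bit-planes for layer 0 and one for layer 1.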

    IV. EXPERIMENTAL RESULTS

    In this section, we present the experimental results for the proposed VQM algorithm and compare it with the CPM algorithm [4]. CPM is considered an improved version of the progressive forest split algorithm [2], which is the core of the 3-D mesh coding (3DMC) tool standardized in MPEG-4. In our experiments, the training set for VQ design is kept separate from the test set. Fig. 3 presents, for two test models, the rate-distortion curves of the two algorithms. In each case, the original model is downsampled into five layers with exactly the same downsampling patterns for both algorithms to make the comparison fair. For this reason, the first points on the rate-distortion curves for the two algorithms coincide, since they start with the same base representation. It is observed that, for all the tested

    ²The upsampling operation splits the collapsed vertex and refines its coordinates with the decoded prediction residuals. It is observed in our experiments that in order to reduce the mean-squared surface distance after upsampling, a small (but nonzero) number of bit planes are needed to encode the prediction residuals.


    Fig. 4. Subjective comparison of the rendered results. (a) Original VENUS HEAD model. (b)–(d) Compressed models under the bit rate constraint of 31 KB by CPM, VQM, and VQM with connectivity-geometry equalization, respectively. (e) and (f) present the marked partitions of the models in (c) and (d), respectively.

    models, VQM considerably outperforms CPM in the rate-distortion performance, i.e., VQM improves the compression efficiency compared to the CPM algorithm.

    For a subjective comparison, Fig. 4 presents the rendered VENUS HEAD models under a bit rate of 31 KB, where Fig. 4(a) shows the original model and Fig. 4(b) and (c) are the decoded results by CPM and VQM, respectively. Comparing Fig. 4(b) and (c) with Fig. 4(a), one can see that the model in Fig. 4(b) is more seriously distorted than that in Fig. 4(c). For example, the visual distortion is more clearly observed in Fig. 4(b) on the model's forehead and nose, while

    smoother visualization is provided by Fig. 4(c). The difference is also captured by the measured MSE results.

    The model in Fig. 4(c) is encoded without the connectivity-geometry equalization. In Fig. 4(d), we present the encoded model obtained by performing the connectivity-geometry equalization under the same bit rate. Both objective and subjective results are further improved in Fig. 4(d), due to the subtle balance between connectivity downsampling and geometry quantization. Fig. 4(e) and (f) provide a clearer illustration of this difference by rendering a partition of the triangulated mesh surface, as marked in Fig. 4(c) and (d). Comparing Fig. 4(f) to Fig. 4(e), one can see that the higher rate-distortion performance is achieved by balancing the quantization precision of the predicted geometry residuals against the sampling rate of the connectivity. It should be pointed out that, in general, such compensation may be performed in both directions, with the common objective of minimizing the resulting distortion under the overall bit rate.

    V. CONCLUSION

    In this letter, we proposed a VQ-based compression algorithm (VQM) for triangular meshes with irregular connectivity. This algorithm includes several components: multiresolution mesh representation realized by downsampling, geometry prediction in a bottom-up structure, and full search progressive VQ. Compared to the existing algorithms, the VQM algorithm not only improves the rate-distortion performance but also obtains a finer progressiveness. Based on VQM, an equalization algorithm was developed to subtly balance between the connectivity downsampling and the geometry quantization toward the minimum distortion under a constrained bit rate.

    REFERENCES

    [1] H. Hoppe, “Progressive meshes,” in Proc. ACM SIGGRAPH, 1996, pp. 99–108.
    [2] G. Taubin, A. Gueziec, W. Horn, and F. Lazarus, “Progressive forest split compression,” in Proc. ACM SIGGRAPH, 1998, pp. 123–132.
    [3] J. Li and C. C. Kuo, “Progressive coding of 3-D graphic models,” Proc. IEEE, vol. 86, no. 6, pp. 1052–1063, Jun. 1998.
    [4] R. Pajarola and J. Rossignac, “Compressed progressive meshes,” IEEE Trans. Vis. Comput. Graphics, vol. 6, no. 1, pp. 79–93, Jan.–Mar. 2000.
    [5] A. Khodakovsky, P. Schroder, and W. Sweldens, “Progressive geometry compression,” in Proc. ACM SIGGRAPH, 2000, pp. 271–278.
    [6] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Boston, MA: Kluwer, 1992.
    [7] E.-S. Lee and H.-S. Ko, “Vertex data compression for triangular meshes,” in Proc. Pacific Graphics, 2000, pp. 225–234.
    [8] P. H. Chou and T. H. Meng, “Vertex data compression through vector quantization,” IEEE Trans. Vis. Comput. Graphics, vol. 8, no. 4, pp. 373–382, Oct.–Dec. 2002.
    [9] E. A. Riskin, R. Ladner, R. Wang, and L. E. Atlas, “Index assignment for progressive transmission of full-search vector quantization,” IEEE Trans. Image Process., vol. 3, no. 3, pp. 307–312, May 1994.
    [10] G. Al-Regib, Y. Altunbasak, and R. M. Mersereau, “Bit allocation for joint source and channel coding of progressively compressed 3-D models,” IEEE Trans. Circuits Syst. Video Technol., vol. 15, no. 2, pp. 256–268, Feb. 2005.
    [11] C. Touma and C. Gotsman, “Triangle mesh compression,” in Proc. Graphics Interface, Vancouver, BC, Canada, Jun. 1998.
    [12] K. Zeger, J. Vaisey, and A. Gersho, “Globally optimal vector quantizer design by stochastic relaxation,” IEEE Trans. Signal Process., vol. 40, no. 2, pp. 310–322, Feb. 1992.
    [13] P. Cignoni, C. Rocchini, and R. Scopigno, “Metro: measuring error on simplified surfaces,” in Proc. Eurographics, 1998, vol. 17, no. 2, pp. 167–174.

