
Multidimensional Systems and Signal Processing, 9, 93–106 (1998). © 1998 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.

An ROI Search Method for Still Images Based on Set Descriptions

TAEK MU KWON [email protected]
Department of Electrical and Computer Engineering, University of Minnesota, Duluth, Duluth, MN 55812

PANKAJ AGRAWAL
Department of Computer Science, University of Minnesota, Duluth, Duluth, MN 55812

DONALD B. CROUCH
Department of Computer Science, University of Minnesota, Duluth, Duluth, MN 55812

Received November 10, 1995; Revised July 31, 1996

Abstract. Detecting a Region of Interest (ROI) from a digitized image is an important step towards highly efficient image communication. Although a significant amount of work has been done for time varying images, in which the ROI is expressed in terms of the differences between sequential frames, only limited work has been done for analyzing still images. In this paper, we present an ROI search algorithm for still images based on a description of the objects in the image. The proposed approach uses no matching techniques, but rather uses a set descriptor of the pattern. Experimental results obtained by applying this technique to facial images show that the proposed methodology is robust with regard to variations in scale and angle as well as stray variations in the image.

Key Words: region of interest, set descriptions, object recognition, feature detection

I. Introduction

In this paper we introduce a technique for the recognition of an object based on its decomposition into features and their relationship, all of which are described through set descriptions. However, in order to demonstrate the results of our algorithm, we apply our technique on a specific case: the human face; the human face has been the center of interest in many algorithms proposed in the past. This problem has been attempted from different pattern recognition perspectives, notably, the neural net approach [1], elastic template matching [2], [3], algebraic moment [4], and isodensity lines [5]. However, the success of these pattern recognition methods is dependent on the specific type of images for which they have been designed, namely the human face, and are not readily extensible to recognition of other objects. For example, profile analysis [4] can deal with matching two faces assuming that the images have no objects other than facial ones. Others deal with the pattern recognition problems in images created in special environments [6] where profiles are taken with back lighting adjusted to make only the outline curves visible. Still others are limited to images based on the assumption that a person looks straight, keeps eyes open, etc. [7], while yet others [8] fail if a person is wearing glasses, the background is not a plain one, or the face of a person with another country of origin is used for testing. Recently, there has been renewed interest in detecting faces from image sequences for video conferencing applications. Most approaches in this class utilize the motion information available from the image sequences

[9]. However, since such motion information is not available from still images, detecting a face from a still image is considered more difficult.

In this paper an attempt has been made to develop a more general pattern recognition method, so that the algorithm succeeds even if there are significant variations in the image, e.g., if the subject is wearing glasses or if the background consists of other objects. The proposed technique makes no assumptions about the manner in which the face of a subject is photographed as long as the entire face is visible.

There are in general two approaches for solving the object recognition problem. One approach is to define an ideal object and then define the acceptable level of deviation from that object. However, in practice the deviation is so varied that this approach is not very reliable. A more general approach, the one used in this paper, is to describe the object through a set of parameters that serve as input to the detection algorithm. Should an object satisfying the description exist inside the image, the region of interest (ROI) containing the object is detected. Since the algorithm does not assume that there is only one object satisfying a certain description, its performance is dependent on the uniqueness of the definition of an object. If the definition is not specific enough, multiple occurrences of objects satisfying the definition may be detected. On the other hand, if the definition is unique and the image consists of more than one object of the same type, our algorithm successfully detects all of them.

This paper is organized as follows. In Section II, basic parameters and set operators needed for the development of the object detection algorithm are introduced. Section III describes the proposed algorithm. The robustness of the algorithm is demonstrated through examples in Section IV, and implementation details are presented in Section V. Section VI concludes the paper.

II. Basic Parameters and Operators

Before defining the basic operators, we proceed with a few notational conventions. The position of a pixel is described using the complex expression:

Pi = xi + j yi (1)

where xi and yi are the coordinates of point Pi. The grey scale of Pi is expressed as Gi. In order to convert the grey scale information to a form more likely to yield edge information, we use the horizontal and vertical Sobel operators [10]. For the 3 × 3 neighborhood given below,

G6 G7 G8

G5 G0 G1

G4 G3 G2

AN ROI SEARCH METHOD 95

the vertical and horizontal components of the Sobel information are computed respectively as follows

X = G4 + 2G3 + G2 − G6 − 2G7 − G8    (2)

and

Y = G4 + 2G5 + G6 − G2 − 2G1 − G8    (3)

In a facial image, the horizontal component is more likely to yield information about such facial features as the eyes, mouth, and to some extent, the nose, while the vertical component yields more information about the edges of the face and the nose.
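The two components of Eqs. (2)–(3) can be computed directly from the labelled neighborhood above. The following is a minimal sketch, assuming the neighborhood is supplied as a nested list; the function name and array layout are our own, not part of the paper:

```python
# Sobel components per Eqs. (2)-(3), for the 3x3 neighbourhood labelled
#   G6 G7 G8
#   G5 G0 G1
#   G4 G3 G2
def sobel_components(n):
    """n is a 3x3 list of grey values indexed n[row][col]."""
    G6, G7, G8 = n[0]
    G5, G0, G1 = n[1]
    G4, G3, G2 = n[2]
    X = G4 + 2 * G3 + G2 - G6 - 2 * G7 - G8   # Eq. (2)
    Y = G4 + 2 * G5 + G6 - G2 - 2 * G1 - G8   # Eq. (3)
    return X, Y
```

For a uniform neighborhood both components vanish; a bright bottom row against a dark top row yields a large X and zero Y, as expected from the difference structure of Eq. (2).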

Before we introduce the algorithm for the extraction of the object, a few simple parameters and sets for a feature f are defined.

(1) Closeness Parameter Cf(·)

The Closeness Parameter Cf(Pi, Pj) for a certain point Pi defines an integer value that represents how close a point Pj is to a given point Pi in the object space. It describes the cost of movement and the level of discontinuity in a feature. The further away the point Pj is from Pi, the larger the discontinuity and the cost Cf(Pi, Pj).

(2) Pixel Proximity rf(·)

The Pixel Proximity rf(·) is used to determine whether a pixel is to be considered a part of the given object. It is simply defined by rf(Pi) = |B̃(Pi) − c2|.

B̃(Pi) is defined as

B̃(Pi) = { c1 if S(Pi) ≤ Th
        { c2 if S(Pi) > Th,    (4)

where S(Pi) is the grey scale at the point Pi in the image obtained as a result of the Sobel operation described above, Th represents the threshold value determined iteratively (the implementation of this iterative method is described in Section V), and c2 > c1.
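Eq. (4) and the Pixel Proximity can be sketched in a few lines. The values c1 = 0 and c2 = 255 below are the ones used in the paper's experiments (Section V); the function names are our own:

```python
# Binarisation of Eq. (4) and the Pixel Proximity r_f of Section II.
C1, C2 = 0, 255          # values used in the paper's experiments (Section V)

def b_tilde(s, th, c1=C1, c2=C2):
    """Eq. (4): binarise a Sobel magnitude s against threshold th."""
    return c1 if s <= th else c2

def pixel_proximity(s, th, c1=C1, c2=C2):
    """r_f(P) = |B~(P) - c2|: zero exactly when the pixel is 'on'."""
    return abs(b_tilde(s, th, c1, c2) - c2)
```

A pixel above the threshold has zero proximity cost and can be absorbed into a feature; a pixel below it carries the maximum cost c2 − c1.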

(3) Constraint Set Yf(·)

The Constraint Set Yf(Pi) is the set that defines the points near the pixel Pi which are to be examined to ascertain whether they belong to the feature f.

(4) Minimization Set Mf(·)

The Minimization Set Mf(Pi) consists of all points through which a given feature f runs. The set expands when a given point Pi is found to be a part of the object. It is recursively

defined as

Mf(Pi) = Mf(Pi−1) ∪ {Pi}.    (5)

A point Pi is said to be a part of the object if it is close physically to the previously extracted minimization set and has a zero Pixel Proximity value, indicating the possibility of the existence of a pattern.

(5) Length of Feature Lf

The length of a certain feature f is defined as

Lf = Σ_i Xf(i)    (6)

where

Xf(i) = { 1 if Pi ∈ Mf(Pi)
        { 0 otherwise

Lf can be decomposed as

Lf = Σ_{φi∈Φf} l_{fφi}    (7)

where l_{fφi} is the sub-length of the feature f in the direction φi, and Φf represents the direction constraints for the feature f.

The path taken pf is expressed as

pf = sf × df × φf    (8)

where df refers to the f th component of the set D, φf ∈ Φf, and sf is the starting point of the feature f that bears a spatial relationship with the corresponding point sf+1 on the feature f + 1. D and Φ are the sets that describe the above displacement and angular relationships, respectively (see Section V for a detailed description).

(6) Travel Set Tf(·)

The Travel Set Tf(Pi) defines the set of all points that lie in a certain direction and at a given distance from the given pixel Pi. This set is important as it relates all the features spatially. When a particular feature is detected, the algorithm will look for the other features in a certain direction and thus correlate them to get the correct set of features enclosing an object.

A set {sf, sf+1} ∈ Tf if the following three conditions are satisfied:

i) sf ∈ Mf(P_{Lf}),

ii) sf+1 ∈ Mf+1(P_{Lf+1}),

iii) sf+1 lies on the path pf which is found as given in (8).

In the present paper, we assume that an object is describable by F features and the overall constraints imposed among them. More specifically, we describe an object in terms of constraint sets which describe the structure of each feature, travel sets that determine the path taken during the search for a particular feature, and certain overall constraints imposed on them corresponding to each of the F features that are part of the object. Overall constraints are further discussed in Sections III and V.

III. Methodology

The objective of the detection algorithm is to find the points for a given point Pi that minimize the distance with respect to Pi and are closest to its pixel value, i.e. our purpose is to minimize the expression for a feature f:

Qf(Pi) = min_{Pj∈Yf(Pi)} [Cf(Pj, Pi) + rf(Pi) + Qf(Pj)].    (9)

At each step of minimization, the minimization set is obtained, i.e.

Mf(Pi) = Mf(Pj) ∪ {Pi}    (10)

This is equivalent to solving a sequence of 2Lf + 1 equations recursively, starting with P1 = sf and ending when the minimization set for the point P_{Lf} is found. Once this set of computations for the feature f is completed, we perform the computations for the feature f + 1, starting with the point sf+1 which is found from Tf. The process continues until all the features have been formally covered. However, no feature f is pursued further once it violates the description of the feature.
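The growth of a minimization set per Eqs. (9)–(10) can be illustrated with a much-simplified greedy sketch: starting from sf, repeatedly absorb the neighboring "on" pixel with the lowest closeness cost. The closeness values follow Eq. (15) of Section V; everything else (the point-set representation, the stopping rule, the absence of backtracking) is an illustrative assumption, not the paper's implementation:

```python
# Greedy sketch of the minimisation of Eqs. (9)-(10).
def closeness(p, q):
    """Closeness parameter C_f of Eq. (15) on integer grid points."""
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    if dx + dy == 1:
        return 0
    if dx == 1 and dy == 1:
        return 1
    if (dx, dy) in ((2, 0), (0, 2)):
        return 2
    return float("inf")            # too far: cannot continue the feature

def trace_feature(on_pixels, start, max_len):
    """Grow M_f from `start` over the set of 'on' (edge) pixels."""
    m = [start]
    current = start
    for _ in range(max_len - 1):
        candidates = [(closeness(current, q), q)
                      for q in on_pixels if q not in m]
        candidates = [c for c in candidates if c[0] != float("inf")]
        if not candidates:
            break                  # feature ends: no close continuation
        cost, nxt = min(candidates)
        m.append(nxt)
        current = nxt
    return m
```

A horizontal run of edge pixels is traced in order, while an isolated distant pixel is rejected by the infinite closeness cost.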

Essentially, Mf(Pi) represents a set of points which are close spatially, in terms of pixel value and continuity, to the point Pi. In conjunction with the search process of the minimization set, we impose an additional constraint called the overall constraint which defines the relation among the features. This constraint rejects objects within the image whose features do not correspond to the object we are looking for, even though the individual feature descriptions and the orientation with respect to each other as described in the travel set are satisfied. To be more specific, consider an example. If we select the human face as an object of interest in a portrait image, it may be characterized by defining features (such as facial edges, eyes, nose, and mouth) and their spatial correlation through the constraint and travel sets, respectively. The overall constraints may then be specified: for example, the direction of the nose must be within a certain angle from the direction of the mouth; the nose should be positioned within the triangle formed by the limits of the two eyes and the mouth; the distance between the two eyes should be within a certain range of the distance between the nose and the mouth. The overall constraint exerts syntactic control over the features within the object space, such as the relative positioning of the coherent features in relation to each other, while the minimization sets (candidates of features) define the local spatial relations with respect to the formal description. The consequence of imposing the overall constraints is that they not only help eliminate candidates that do not fit the given descriptions, but

also reduce computational and storage requirements. The implementation details of the overall constraints are discussed in Section V.

The description of a region of interest (ROI) that encloses the desired object must be expressed in an efficient manner in order to minimize the overhead during application of the algorithm. To accomplish this, a simple geometrical structure may be used. For example, an object can be enclosed by a tight rectangle or an ellipse. Each requires only four values of data for its description: the opposite vertices for the rectangle, or the center with major and minor axes for the ellipse. In our experimental work, we chose the former description, although the latter performs equally well. Let f be the feature index; then the Total Set G is the set that includes all minimization sets of the object, viz., the finally selected sets M1 through MF:

G = ∪_f Mf.    (11)

The boundary of an object is thus computed as an enclosure of Pn and Pm, where

Pn = (xn, yn), with xn = min_{Pi∈G}(xi), yn = min_{Pj∈G}(yj), and    (12)

Pm = (xm, ym), with xm = max_{Pi∈G}(xi), ym = max_{Pj∈G}(yj).    (13)

Finally, the desired ROI is described by

SG = {Pi = xi + j yi | xn ≤ xi ≤ xm, yn ≤ yi ≤ ym}    (14)

where Pn and Pm satisfy equations (12) and (13) and f refers to the f th feature.

Figure 1 summarizes the overall structure of the proposed methodology through a block diagram.
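The bounding-box construction of Eqs. (12)–(14) reduces to taking coordinate extremes over the total set G. A minimal sketch, with point tuples and helper names of our own choosing:

```python
# ROI as the tight axis-aligned rectangle of Eqs. (12)-(14).
def roi_bounds(g):
    """g: iterable of (x, y) points; returns (Pn, Pm) of Eqs. (12)-(13)."""
    xs = [p[0] for p in g]
    ys = [p[1] for p in g]
    pn = (min(xs), min(ys))        # Eq. (12)
    pm = (max(xs), max(ys))        # Eq. (13)
    return pn, pm

def in_roi(p, pn, pm):
    """Membership test of Eq. (14)."""
    return pn[0] <= p[0] <= pm[0] and pn[1] <= p[1] <= pm[1]
```

Only the two corner points need to be stored, which is exactly the four-value description discussed above.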

IV. Experimental

A. Pattern Recognition and ROI Detection Results

To evaluate the performance of the algorithm, we demonstrate the following variations in facial images using three test images:

1) Scale: Faces may be of varying sizes in the image.
2) Angle: Faces may be toward the camera with the head slightly up or down and right or left.
3) Noise: Noise can manifest itself in the form of other objects and curves that might even submerge the facial objects completely in the Sobel operated image.
4) Stray Variations: The images may include glasses, hats, moustaches, and hair style variations.

Figure 1. Block diagram of overall methodology.

First, we show the results of the detection algorithm applied to the image “Lena”. The “Lena” test image is a 256 × 256 image with 256 grey scales per pixel; the image is shown in Figure 2a. Notice that this image includes stray variations, such as a hat, shoulder, and other objects in the background that the algorithm must discriminate from the main facial objects of interest. Moreover, the face is not completely straight; it has been photographed at an angle. The objects we specified for the expression of the ROI are the eyes, nose, and mouth. As shown in Figures 2b–2f, the proposed algorithm successfully discriminates the user specified facial objects and defines the desired ROIs.

Next, the algorithm was applied to the image “Lady” (Figure 3a). This test image also has 256 × 256 pixels with 256 grey scales per pixel. The image consists of flowers and other objects in addition to the face. The face in this image differs in scale and dimensions from the “Lena” image and is photographed from the opposite angle to that of “Lena”. The ROIs searched for are again the eye, nose, and mouth regions. As shown in Figures 3b, 3c and 3d, the algorithm successfully discriminates the facial objects clearly from the rest and extracts the desired ROIs. These two examples collectively demonstrate that the algorithm can detect the object of interest and can properly locate ROIs even if the images differ in scale, have been photographed at different angles, and include stray variations.

The third image we chose for testing the algorithm is the image of a man with glasses and a face much smaller in dimension compared to the “Lena” and “Lady” images. The background is very noisy in terms of undesirable lines. The Sobel operation yields numerous straight lines that completely submerge the face, as shown in Figure 4a. Moreover, the facial objects to be determined are blurred. The results of the detected facial-object ROIs are shown in Figures 4b, 4c and 4d, respectively. In spite of the background noise, additional curves, scale variation, glasses, face angle with respect to the camera, etc., the algorithm successfully recognizes the desired ROIs.

B. Application Examples

We now demonstrate a selective image compression technique which is an immediate application of the proposed ROI search method. In selective image compression, the ROIs, which are more important visually from the human perspective, are compressed less, thereby conveying more information compared to the rest of the area. By contrast, in a uniform compression, both the ROIs and the rest of the image are compressed to the same extent. Hence, selective compression is much more efficient in terms of conveying the true information for image communication where the bandwidth is limited. Using the detected ROIs (eyes, nose, and mouth), a selective image compression is easily achieved. The “Lady” image shown in Figure 3a was used as a test image. For comparison, the result of a uniform compression on the “Lady” image is shown in Figure 5a; the PPC technique [11] along with Arithmetic Coding [12] were used. The compression rate of this image, which includes all overhead information, is 17:1. In Figure 5b, we show a selectively compressed image using the ROIs of the eyes, nose, and mouth. The overall compression rate of this image is 17:1, the same as in the uniform compression case. Notice that the selectively compressed image has much clearer facial objects than the uniformly compressed image, due to the fact that more information is conveyed in the selected objects without sacrificing the compression rate.
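The idea can be illustrated without the PPC/arithmetic-coding pipeline used in the experiments: the following stand-in sketch simply quantises grey levels coarsely outside the detected ROIs and finely inside them, so that the regions found by the search keep more information. The dictionary image representation and all names here are illustrative assumptions, not the paper's coder:

```python
# Illustrative stand-in for selective compression (NOT the paper's
# PPC + arithmetic coder): coarse quantisation outside the ROIs,
# fine quantisation inside.
def quantise(v, step):
    return (v // step) * step

def selective_compress(image, rois, fine=4, coarse=32):
    """image: dict {(x, y): grey}; rois: list of ((xn, yn), (xm, ym))."""
    out = {}
    for (x, y), v in image.items():
        inside = any(xn <= x <= xm and yn <= y <= ym
                     for (xn, yn), (xm, ym) in rois)
        out[(x, y)] = quantise(v, fine if inside else coarse)
    return out
```

Pixels inside an ROI survive nearly unchanged, while background pixels lose their low-order grey-level detail.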

Figure 2. ROI search results of Lena image. a) original image; b) detected right-eye region; c) detected left-eye region; d) detected nose region; e) detected mouth region; f) detected face region.

Figure 3. ROI search results of Lady image. a) original image; b) detected eye region; c) detected nose region; d) detected mouth region.

There are perhaps many other applications for which the proposed ROI search algorithm would be useful. For example, this technique could easily be applied to pattern identification purposes in robotic applications by specifying the desired objects. Other applications include space variant image restoration, in which more specifically tuned algorithms can be used for each individual region.

V. Implementation Details

Figure 4. ROI search results of Man image. a) Sobel image; b) detected eye region; c) detected nose region; d) detected mouth region.

The closeness parameter Cf(·) is defined as 0, 1 or 2 for points in the vicinity of Pi and infinity otherwise, i.e.

Cf(Pj, Pi) = 0 for |xi − xj| + |yi − yj| = 1,
             1 for |xi − xj| = 1 and |yi − yj| = 1,
             2 for |xi − xj| = 2 and |yi − yj| = 0, or |xi − xj| = 0 and |yi − yj| = 2,
             ∞ otherwise    (15)

where f is the feature index.

The value of ∞ provides for computational speed-up by eliminating all points too far away to be considered a part of the continuous object.

The Pixel Proximity rf(·) defines how close a pixel is to the grey scale of c2. In all of our experiments, c1 and c2 were set to 0 and 255, respectively.

Figure 5. Image compression using the ROI regions found by the proposed algorithm. a) traditional uniform compression (compression rate 17:1); b) selective compression using the found ROIs (compression rate 17:1).

The constraint set Yf(·) describes the structure of the feature to be detected in terms of an approximate contour likely to yield information about the feature. The constraint set can also be described in terms of an approximation polynomial, in terms of some constraint on the curvature of the object at any point, or in terms of some other loose mathematical definition. One method of representing the curves and finding their curvature is the method of cubic B-splines [13]. Another method of defining a curve is through the splitting method [14], where an arbitrary curve is broken into line segments and the positions of the angles are found.

It is this aspect of the constraint set that makes our algorithm a general one that can accommodate the descriptions used in other algorithms.

There are overall constraints, as discussed before, that are imposed in order to further constrain the description of the object of interest. In effect, this constraint works as a criterion for quick rejection of features that are incompatible with the given description and reduces the computational time and transient storage requirements. The first overall constraint we imposed in our implementation is the scaling relation between feature lengths, i.e.,

Lf = hf−m(Lf−m), 1 ≤ m ≤ N(Q)    (16)

where hf−m is the scaling factor between features f and f − m and is defined by a bound hf−m = [ff−m,1, ff−m,2]. N(Q) is the dimensionality of the cost expression Qf(·) (defined physically as the number of features yet to be searched, connected spatially to an extracted feature in the description of an ROI). An alternative way of describing the overall constraints is to use the sub-lengths as defined in Eq. (17), i.e.,

l_{fφi} = t_{(f−m)φj}(l_{(f−m)φj}), φi ∈ Φf, φj ∈ Φf−m, 1 ≤ m ≤ N(Q),    (17)

where t_{(f−m)φj} is the scaling factor associated with the sub-lengths, described by a bound similarly to hf−m.

There might be additional constraints imposed on Qf(·).

The next overall constraint we imposed in our implementation is the relationship between the starting points of features sf and sf+1:

|sf − sf+1| = frf · l_{jφn}, φn ∈ Φj    (18)

where frf = [b1f, b2f] (b1f and b2f are the lower and upper bounds of frf) and j is the last feature such that its sub-length along the direction φn, l_{jφn}, is of the order of |sf − sf+1|, where 1 ≤ j ≤ f. This relation comes from the definition of the object structure itself. The idea behind such a relationship is to avoid any relation in terms of distances that are not suitable for comparison.

Implementation of the set Φ = {φ1, φ2, . . . , φF} is chosen such that

tan φi = im(si − si+1) / re(si − si+1)    (19)

where re(·) and im(·) return the real and imaginary components of their complex argument, respectively. We chose φi = mπ/4, m = 1, 2, . . . , 8.
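The angle of Eq. (19), snapped to the eight directions φi = mπ/4, can be sketched as follows; atan2 handles the quadrant that a bare tan ratio would lose, and the helper name is our own:

```python
# Eq. (19) with the quantisation phi_i = m*pi/4: the direction between
# consecutive starting points, snapped to the eight compass directions.
import math

def quantised_direction(si, si1):
    """Angle of s_i - s_{i+1}, snapped to multiples of pi/4."""
    d = complex(si[0], si[1]) - complex(si1[0], si1[1])
    angle = math.atan2(d.imag, d.real)      # tan(phi) = im/re, Eq. (19)
    m = round(angle / (math.pi / 4))        # nearest of the 8 directions
    return (m % 8) * math.pi / 4
```

For example, a displacement of (1, 1) maps to π/4 and a purely vertical displacement to π/2.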

The performance of the algorithm is dependent on the threshold used to digitize the image after the Sobel operation is performed. Since the presence or absence of the object in the image and its location are not known a priori, it is difficult to find a suitable threshold with complete accuracy. Hence an approximate threshold is used initially. When the procedure is unable to find an object fitting its description, the threshold is lowered and the procedure is reapplied. The process is repeated until the object is found or the threshold reaches a certain minimum value. If the procedure does not locate the object, the algorithm concludes that there is no object fitting the given description in the image. In most cases, the process is repeated fewer than three times with a reasonable step size of 20 or 30. The purpose is to avoid as many unnecessary curves in the image as possible in order to speed up the recognition process.
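The iterative threshold search just described amounts to a simple loop. In this sketch, `detect` stands for the entire feature-search procedure applied at a given threshold and is an assumed callback; the default numeric values are placeholders, not the paper's settings:

```python
# Iterative threshold search of Section V: start high, lower by a fixed
# step until the detector succeeds or a floor is reached.
def search_with_threshold(detect, th_start=200, th_min=50, step=25):
    th = th_start
    while th >= th_min:
        roi = detect(th)
        if roi is not None:
            return roi             # object fitting the description found
        th -= step                 # relax the threshold and retry
    return None                    # conclude: no such object in the image
```

Starting high keeps the binarised Sobel image sparse, so most runs succeed within a few iterations, matching the behaviour reported above.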

VI. Discussion and Conclusion

In this paper we introduced a new ROI search technique for still images which can identify the desired ROI based on a few set and parameter descriptions of the object space. Unlike many conventional pattern recognition techniques which use some sort of template matching, the present algorithm recognizes or searches for a pattern based on the existence or absence of certain features and the overall constraints imposed on them. The notable advantages of this approach are the lack of storage and computational requirements for templates and robustness to the numerous idiosyncrasies of the object such as angle, scale, and stray variations. The robustness of the algorithm is empirically shown using three portrait images which include numerous variations.

For future research, the proposed technique can be taken one step further to accommodate the concept of self-definition, in which the constraint and travel sets are defined by the algorithm itself if a physical model of the ROI is given. Moreover, we can introduce dynamic sets, where the sets can be modified automatically to account for deviations from the reference model or for too rigid initial set definitions. This may have many potential applications in the future.

References

1. G. Cottrell and M. Fleming, “Face recognition using unsupervised feature extraction.” In Proc. Int. Neural Network Conf., 1990.

2. J. Buhmann, J. Lange, and C. von der Malsburg, “Distortion invariant object recognition by matching hierarchically labeled graphs.” In Proc. IJCNN’89, 1989, pp. 151–159.

3. A. L. Yuille, “Deformable Templates for Face Recognition.” J. Cognitive Neuroscience, vol. 3, no. 1, 1991, pp. 59–70.

4. Zi-Quan Hong, “Algebraic Feature Extraction of Image for Recognition.” Pattern Recognition, vol. 24, no. 3, 1991, pp. 211–219.

5. O. Nakamura, S. Mathur, and T. Minami, “Identification of Human Faces Based on Isodensity Maps.” Pattern Recognition, vol. 24, no. 3, 1991, pp. 263–272.

6. Chyuan J. Wu and J. S. Huang, “Human Face Profile Recognition by Computer.” Pattern Recognition, vol. 23, no. 3/4, 1990, pp. 255–259.

7. H. Okada, O. Nakamura, and T. Minami, “A Consideration for Automatic Face Identification.” In IECE National Convention, 1985, p. 104.

8. T. Sakai, M. Nagao, and T. Kanade, “Computer Analysis and Classification of Photographs of Human Faces.” In Proc. 1st USA-Japan Computer Conference, AFIPS Press, 1972, pp. 55–62.

9. P. Gerken, “Object-Based Analysis-Synthesis Coding of Image Sequences at Very Low Bit Rates.” IEEE Trans. on Circuits and Systems for Video Technology, vol. 4, June 1994, pp. 228–235.

10. W. K. Pratt, Digital Image Processing, New York: John Wiley & Sons, 1979.

11. Y. Huang, H. M. Dreizen, and N. P. Galatsanos, “Prioritized DCT for Compression and Progressive Transmission of Images.” IEEE Trans. on Image Processing, vol. 1, no. 4, Oct. 1992, pp. 477–487.

12. G. G. Langdon, Jr., “Compression of Black-White Images with Arithmetic Coding.” IEEE Trans. on Communications, vol. 29, no. 6, June 1981.

13. G. Medioni and Y. Yasumoto, “Corner Detection and Curve Representation Using Cubic B-splines.” Computer Vision, Graphics, and Image Processing, vol. 39, 1987, pp. 267–278.

14. Y. Lin, J. Dou, and H. Wang, “Contour Shape Description Based on an Arch Height Function.” Pattern Recognition, vol. 25, no. 1, 1992, pp. 17–23.