
    Feature-Based Volume Metamorphosis

Apostolos Lerios, Chase D. Garfinkle, Marc Levoy

Computer Science Department, Stanford University

    Abstract

Image metamorphosis, or image morphing, is a popular technique for creating a smooth transition between two images. For synthetic images, transforming and rendering the underlying three-dimensional (3D) models has a number of advantages over morphing between two pre-rendered images. In this paper we consider 3D metamorphosis applied to volume-based representations of objects. We discuss the issues which arise in volume morphing and present a method for creating morphs. Our morphing method has two components: first a warping of the two input volumes, then a blending of the resulting warped volumes. The warping component, an extension of Beier and Neely's image warping technique to 3D, is feature-based and allows fine user control, thus ensuring realistic looking intermediate objects. In addition, our warping method is amenable to an efficient approximation which gives a 50 times speedup and is computable to arbitrary accuracy. Also, our technique corrects the ghosting problem present in Beier and Neely's technique. The second component of the morphing process, blending, is also under user control; this guarantees smooth transitions in the renderings.

CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.

Additional Keywords: Volume morphing, warping, rendering, sculpting, shape interpolation, transformation, blending, computer animation.

    1 Introduction

    1.1 Image Morphing versus 3D Morphing

Image morphing, the construction of an image sequence depicting a gradual transition between two images, has been extensively investigated [21] [2] [6] [16]. For images generated from 3D models, there is an alternative to morphing the images themselves: 3D morphing generates intermediate 3D models, the morphs, directly from the given models; the morphs are then rendered to produce an image sequence depicting the transformation. 3D morphing overcomes the following shortcomings of 2D morphing as applied to images generated from 3D models:

    Center for Integrated Systems, Stanford University, Stanford, CA 94305

{lerios, cgar, levoy}@cs.stanford.edu
http://www-graphics.stanford.edu/

In 3D morphing, creating the morphs is independent of the viewing and lighting parameters. Hence, we can create a morph sequence once, and then experiment with various camera angles and lighting conditions during rendering. In 2D morphing, a new morph must be recomputed every time we wish to alter our viewpoint or the illumination of the 3D model.

2D techniques, lacking information on the model's spatial configuration, are unable to correctly handle changes in illumination and visibility. Two examples of this type of artifact are: (i) Shadows and highlights fail to match shape changes occurring in the morph. (ii) When a feature of the 3D object is not visible in the original 2D image, this feature cannot be made to appear during the morph; for example, when a singing actor needs to open her mouth during a morph, pulling her lips apart thickens the lips instead of revealing her teeth.

1.2 Geometric versus Volumetric 3D Models

The models subjected to 3D morphing can be described either by geometric primitives or by volumes (volumetric data sets). Each representation requires different morphing algorithms. This dichotomy parallels the separation of 2D morphing techniques into those that operate on raster images [21] [2] [6], and those that assume vector-based image representations [16]. We believe that volume-based descriptions are more appropriate for 3D morphing for the following reasons:

The quality and applicability of geometric 3D morphing techniques [12] is highly dependent on the models' geometric primitives and their topological properties. Volume morphing is independent of object geometries and topologies, and thus imposes no such restrictions on the objects which can be successfully morphed.

Volume morphing may be applied to objects represented either by geometric primitives or by volumes. Geometric descriptions can be easily converted to high-quality volume representations, as we will see in section 2. The reverse process produces topologically complex objects, usually inappropriate for geometric morphing.

1.3 Volume Morphing

The 3D volume morphing problem can be stated as follows. Given two volumes S and T, henceforth called the source and target volumes, we must produce a sequence of intermediate volumes, the morphs, meeting the following two conditions:

Realism: The morphs should be realistic objects which have plausible 3D geometry and which retain the essential features of the source and target.

Smoothness: The renderings of the morphs must depict a smooth transition from S to T.

From the former condition stems the major challenge in designing a 3D morphing system: as automatic feature recognition and matching have yet to equal human perception, user input is crucial in defining the transformation of the objects.


Figure 1: Data flow in a morphing system. Editing comprises retouching and aligning the volumes for cosmetic reasons. (Diagram: the source and target volumes are acquired by scanning followed by segmentation and classification, by scan conversion of geometric data, by interactive sculpting, or by procedural definition; the volumes are then edited and feature elements defined. These steps are manual, but done once. Warping the source and target volumes, blending the warped volumes into the morphed volume, and rendering it to an image are automatic, repeated for each frame of the morph.)

The challenge for the designer of a 3D morphing technique is two-fold: the morphing algorithm must permit fine user control and the accompanying user interface (UI) should be intuitive.

Our solution to 3D morphing attempts to meet both conditions of the morphing problem, while allowing a simple, yet powerful UI. To this end, we create each morph in two steps (see figure 1):

Warping: S and T are warped to obtain volumes S′ and T′. Our warping technique allows the animator to define quickly the exact shape of objects represented in S′ and T′, thus meeting the realism condition.

Blending: S′ and T′ are combined into one volume, the morph. Our blending technique provides the user with sufficient control to create a smooth morph.

1.4 Prior Work

Prior work on feature-based 2D morphing [2] will be discussed in section 3.

Prior work in volume morphing comprises [9], [8], and [5]. These approaches can be summarized in terms of our warping/blending framework.

[5] and [8] have both presented warping techniques. [5] examined the theory of extending selected 2D warping techniques into 3D. A UI was not presented, however, and only morphs of simple objects were shown. [8] presents an algorithm which attempts to automatically identify correspondences between the volumes, without the aid of user input.

[9] and [8] have suggested using a frequency or wavelet representation of the volumes to perform the blending, allowing different interpolation schedules across subbands. In addition, they have observed that isosurfaces of the morphs may move abruptly, or even completely disappear and reappear as the morph progresses, destroying its continuity. This suggests that volume rendering may be superior to isosurface extraction for rendering the morphs.

Our paper is partitioned into the following sections: Section 2 covers volume acquisition methods. Sections 3 and 4 present the warping and blending steps of our morphing algorithm. Section 5 describes an efficient implementation of our warping method and section 6 discusses our results. We conclude with suggestions for future work and applications in section 7.

2 Volume Acquisition

Volume data may be acquired in several ways, the most common of which are listed below.

Scanned volumes: Some scanning technologies, such as Computerized Tomography (CT) or Magnetic Resonance Imaging (MRI), generate volume data. Figures 5(a) and 5(c) show CT scans of a human and an orangutan head, respectively.

Scan converted geometric models: A geometric model can be voxelized [10], preferably with antialiasing [20], generating a volume-based representation of the model. Figures 6(a), 6(b), 7(a), and 7(b) show examples of scan-converted volumes.

Interactive sculpting: Interactive modeling, or sculpting [19] [7], can generate volume data directly.

Procedural definition: Hypertexture volumes [15] can be defined procedurally by functions over 3D space.

3 Warping

The first step in the volume morphing pipeline is warping the source and target volumes S and T. Volume warping has been the subject of several investigations in computer graphics, computer vision, and medicine. Warping techniques can be coarsely classified into two groups: (i) Techniques that allow only minimal user control, consisting of at most a few scalar parameters. These algorithms automatically determine similarities between two volumes, and then seek the warp which transforms the first volume to the second one [18]. (ii) Techniques in which user control consists of manually specifying the warp for a collection of points in the volume. The rest of the volume is then warped by interpolating the warping function. This group of algorithms includes free-form deformations [17], as well as semi-automatic medical data alignment [18].

As stated in section 1.3, user control over the warps is crucial in designing good morphs. Point-to-point mapping methods [21], in the form of either regular lattices or scattered points [13], have worked in 2D. However, regular grids provide a cumbersome interface in 2D; in 3D they would likely become unmanageable. Also, prohibitively many scattered points are needed to adequately specify a 3D warp.

Our solution is a feature-based approach extending the work of [2] into the 3D domain. The next two sections will introduce our feature-based 3D warping and discuss the UI to feature specification.

3.1 Feature-Based 3D Warping using Fields

The purpose of a feature element is to identify a feature of an object. For example, consider the X-29 plane of figure 6(b); an element can be used to delineate the nose of the plane. In feature-based morphing, elements come in pairs: one element in the source volume S, and its counterpart in the target volume T. A pair of elements identifies corresponding features in the two volumes, i.e. features that should be transformed to one another during the morph. For instance, when morphing the dart of figure 6(a) to the X-29 plane, the tip of the dart should turn into the nose of the plane. In order to obtain good morphs, we need to specify a collection of element pairs which define the overall correspondence of the two objects. These element pairs interact like magnets shaping a pliable volume:


Figure 2: 2D warp artifacts (not to scale). (a) shows the result of squeezing a circle using two feature lines placed on opposite sides of the circle. The warped circle spills outside the corresponding, closely spaced, lines. Similarly, in (b), the narrow ellipsoid with two lines on either side does not expand to a circle when the lines are drawn apart; we get instead three copies of the ellipsoid.

while a single magnet can only move, turn, and stretch the volume, multiple magnets generate interacting fields, termed influence fields, which combine to shape the volume in complex ways. Sculpting with multiple magnets becomes easier if we have magnets of various kinds in our toolbox, each magnet generating a differently shaped influence field. The elements in our toolkit are points, line segments, rectangles, and boxes.

In the following presentation, we first describe individual elements, and discuss how they identify features. We then show how a pair of elements guarantees that corresponding features are transformed to one another during the morph. Finally, we discuss how multiple element pairs interact.

Individual Feature Elements

Individual feature elements should be designed in a manner such that they can delineate any feature an object may possess. However, expressiveness should not sacrifice simplicity, as complex features can still be matched by a group of simple elements. Hence, the defining attributes of our elements encode only the essential characteristics of features:

Spatial configuration: The feature's position and orientation are encoded in an element's local coordinate system, comprising four vectors: the position vector of its origin c, and three mutually perpendicular unit vectors x, y, and z, defining the directions of the coordinate axes. The element's scaling factors s_x, s_y, and s_z define a feature's extent along each of the principal axes.

Dimensionality: The dimensionality of a feature depends on the subjective perception of a feature's relative size in each dimension: the tip of the plane's nose is perceived as a point, the edge of the plane's wing as a line, the dart's fin as a surface, and the dart's shaft as a volume. Accordingly, our simplified elements have a type, which can be a point, segment, rectangle, or box. In our magnetic sculpting analogy, the element type determines the shape of its influence field. For example, a box magnet defines the path of points within and near the box; points further from the box are influenced less as their distance increases.

The reader familiar with the 2D technique of [2] will notice two differences between our 3D elements and a direct extension of 2D feature lines into 3D; in fact, these are the only differences as far as the warping algorithm is concerned.

First, in the 2D technique, the shape of a feature line's influence field is controlled by two manually specified parameters. Instead, we provide four simple types of influence fields (point, segment, rectangle, and box), thus allowing for a more intuitive, yet equally powerful, UI.

Second, our feature elements encode the 3D extent of a 3D feature via the scaling factors s_x, s_y, and s_z; by contrast, feature lines in [2] capture only the 1D extent of a 2D feature, in the direction of each feature line. These scaling factors introduce additional degrees of freedom for each feature element. In the majority of situations, these extra degrees have a minor effect on the warp and may thus be ignored. However, under extreme warps, they permit the user to solve the ghosting problem, documented in [2] and illustrated in figure 2. For instance, in part (b) of this example, the ellipsoid is replicated because each feature line requires that an unscaled ellipsoid appear by its side: the feature lines in [2] cannot specify any stretching in the perpendicular direction. However, in a 2D analogue of our technique, the user would use the lines' scaling factors to stretch the ellipsoid. First, the user would encode the ellipsoid's width in the scaling factors of the original feature lines. Then, in order to stretch the ellipsoid into a circle, the user would not only move the feature lines apart, but would also make the lines' scaling factors encode the desired new width of the ellipsoid. In fact, using our technique, a single feature line suffices to turn the ellipsoid into a circle.

Element Pairs

As in the 2D morphing system of [2], the animator identifies two corresponding features in S and T, by defining a pair of elements (e_s, e_t). These features should be transformed to one another during the morph. Such a transformation requires that the feature of S be moved, turned, and stretched to match respectively the position, orientation, and size of the corresponding feature of T. Consequently, for each frame of the morph, our warp should generate a volume S′ from S with the following property: the feature of S should possess an intermediate position, orientation and size in S′. This is achieved by computing the warp in two steps:

Interpolation: We interpolate the local coordinate systems and scaling factors of elements e_s and e_t to produce an interpolated element e′. This element encodes the spatial configuration of the feature in S′.

Inverse mapping: For every point p′ of S′, we find the corresponding point p in S in two simple steps (see figure 3): (i) We find the coordinates of p′ in the scaled local system of element e′:

    p_x = ((p′ − c) · x) / s_x
    p_y = ((p′ − c) · y) / s_y
    p_z = ((p′ − c) · z) / s_z

where c, x, y, z, and s_x, s_y, s_z denote the origin, axes, and scaling factors of e′. (ii) p is the point with coordinates (p_x, p_y, p_z) in the scaled local system of element e_s, i.e. the point

    p = c + p_x s_x x + p_y s_y y + p_z s_z z

where the attributes are now those of e_s.
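To make these two steps concrete, the following minimal C++ sketch warps a single point under one element pair. All names (Vec3, FeatureElement, interpolate, warpPoint) are ours, not the paper's implementation, and the axes are blended by a plain normalized lerp rather than the spherical interpolation used in the paper:

    #include <cmath>

    struct Vec3 {
        double x, y, z;
        Vec3 operator+(const Vec3& v) const { return {x + v.x, y + v.y, z + v.z}; }
        Vec3 operator-(const Vec3& v) const { return {x - v.x, y - v.y, z - v.z}; }
        Vec3 operator*(double k) const { return {k * x, k * y, k * z}; }
        double dot(const Vec3& v) const { return x * v.x + y * v.y + z * v.z; }
    };

    // A feature element: local frame (origin c; unit axes x, y, z) plus
    // scaling factors (sx, sy, sz) giving the feature's extent per axis.
    struct FeatureElement {
        Vec3 c, x, y, z;
        double sx, sy, sz;
    };

    static Vec3 normalize(Vec3 v) { return v * (1.0 / std::sqrt(v.dot(v))); }

    // Interpolation step: blend the frames and scales of the source and
    // target elements at morph time t in [0,1].
    FeatureElement interpolate(const FeatureElement& es,
                               const FeatureElement& et, double t) {
        FeatureElement e;
        e.c  = es.c * (1 - t) + et.c * t;
        e.x  = normalize(es.x * (1 - t) + et.x * t);
        e.y  = normalize(es.y * (1 - t) + et.y * t);
        e.z  = normalize(es.z * (1 - t) + et.z * t);
        e.sx = (1 - t) * es.sx + t * et.sx;
        e.sy = (1 - t) * es.sy + t * et.sy;
        e.sz = (1 - t) * es.sz + t * et.sz;
        return e;
    }

    // Inverse mapping step: given a point pw of the warped volume S',
    // recover its scaled local coordinates in the interpolated element ei,
    // then emit the point with those coordinates in the local system of es.
    Vec3 warpPoint(const Vec3& pw, const FeatureElement& ei,
                   const FeatureElement& es) {
        Vec3 d = pw - ei.c;
        double px = d.dot(ei.x) / ei.sx;   // step (i)
        double py = d.dot(ei.y) / ei.sy;
        double pz = d.dot(ei.z) / ei.sz;
        return es.c + es.x * (px * es.sx)  // step (ii)
                    + es.y * (py * es.sy)
                    + es.z * (pz * es.sz);
    }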

Collections of Element Pairs

In extending the warping algorithm of the previous paragraph to multiple element pairs, we adhere to the intuitive mental model of magnetic sculpting used in [2]. Each pair of elements defines a field that extends throughout the volume. A collection of element pairs defines a collection of fields, all of which influence each point in the volume. We therefore use a weighted averaging scheme to determine the point p in S that corresponds to each point p′ of S′. That is, we first compute to what point p_i each element pair i would map p′ in the absence of all other pairs; then, we average the p_i's using a weighting function that depends on the distance of p′ to the interpolated elements e′_i.

Our weighting scheme uses an inverse square law: p_i is weighted by 1/(d_i + ε)², where d_i is the distance of p′ from the element e′_i, and ε is a small constant used to avoid division by zero.

(Footnote: The axes directions x, y, and z are interpolated in spherical coordinates to ensure smooth rotations.)

(Footnote: T is warped into T′ in a similar way, the only difference being that e_t is used in this last step in place of e_s.)


Figure 3: Single element warp. In order to find the point p in volume S that corresponds to p′ in S′, we first find the coordinates (p_x, p_y, p_z) of p′ in the scaled local system of element e′; p is then the point with coordinates (p_x, p_y, p_z) in the scaled local system of element e_s. To simplify the figure, we have assumed unity scaling factors for all elements.

The type of element e′_i determines how d_i is calculated:

Points: d_i is the distance between p′ and the origin of the local coordinate system of element e′_i. This definition is identical to [21].

Segments: The element is treated as a line segment centered at the origin c, aligned with the local x-axis and having length s_x; d_i is the distance of p′ from this line segment. This definition is identical to [2].

Rectangles: Rectangles have the same center and extent as segments, but also extend into a second dimension, having width s_y along the local y-axis. d_i is zero if p′ is on the rectangle; otherwise it is the distance of p′ from the rectangle. This definition extends segments to area elements.

Boxes: Boxes add depth to rectangles, thus extending for s_z units along the local z-axis. d_i is zero if p′ is within the box; otherwise it is the distance of p′ from the box's surface.

The reader will notice that the point, segment, and rectangle element types are redundant, as far as the mathematical formulation of our warp is concerned. However, a variety of element types maintains best the intuitive conceptual analogy to magnetic sculpting.
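The distance rules and the inverse square weighting combine as in the sketch below, which reuses the Vec3, FeatureElement, and warpPoint definitions from the earlier sketch (again, all names are ours; the clamped-distance computation is one way to realize the rules above):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    enum class ElementType { Point, Segment, Rectangle, Box };

    // Distance from p to an element: a point element uses its origin; a
    // segment spans sx along the local x axis; a rectangle adds sy along y;
    // a box adds sz along z. Distance is zero on/inside the element.
    double dist(const Vec3& p, const FeatureElement& e, ElementType type) {
        Vec3 d = p - e.c;
        double lx = d.dot(e.x), ly = d.dot(e.y), lz = d.dot(e.z);
        double hx = (type == ElementType::Point) ? 0.0 : e.sx / 2;
        double hy = (type == ElementType::Rectangle ||
                     type == ElementType::Box) ? e.sy / 2 : 0.0;
        double hz = (type == ElementType::Box) ? e.sz / 2 : 0.0;
        double ex = std::max(std::abs(lx) - hx, 0.0);  // offset beyond extent
        double ey = std::max(std::abs(ly) - hy, 0.0);
        double ez = std::max(std::abs(lz) - hz, 0.0);
        return std::sqrt(ex * ex + ey * ey + ez * ez);
    }

    struct ElementPair {
        FeatureElement es, ei;  // source element and interpolated element
        ElementType type;
    };

    // Weighted multi-element warp: average the single-pair mappings of pw,
    // each weighted by the inverse square of pw's distance to the pair's
    // interpolated element; eps avoids division by zero.
    Vec3 warpAll(const Vec3& pw, const std::vector<ElementPair>& pairs,
                 double eps = 1e-3) {
        Vec3 sum{0, 0, 0};
        double wsum = 0;
        for (const ElementPair& ep : pairs) {
            double d = dist(pw, ep.ei, ep.type);
            double w = 1.0 / ((d + eps) * (d + eps));
            sum = sum + warpPoint(pw, ep.ei, ep.es) * w;
            wsum += w;
        }
        return sum * (1.0 / wsum);
    }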

3.2 User Interface

The UI to the warping algorithm has to depict the source and target volumes, in conjunction with the feature elements. Hardware-assisted volume rendering [4] makes possible a UI solely based on direct visualization of the volumes, with the embedded elements interactively scan-converted. Using a low-end rendering pipeline, however, the UI has to resort to geometric representations of the models embedded in the volumes. These geometric representations can be obtained in either of two ways:

Pre-existing volumes are visualized by isosurface extraction via marching cubes [14]. Several different isosurfaces can be extracted to visualize all prominent features of the volume, a volume rendering guiding the extraction process.

    For volumes that were obtained by scan converting geometricmodels, the original model can be used.

Once geometric representations of the models are available, the animator can use the commercial modeler of his/her choice to specify the elements. Our system, shown in figure 6(d), is based on Inventor, the Silicon Graphics (SGI) 3D programming environment.

Models are drawn in user-defined materials, usually translucent, in order to distinguish them from the feature elements. These, in turn, are drawn in such a way that their attributes (local coordinate system, scaling factors, and dimensionality) are graphically depicted and altered using a minimal set of widgets.

(Footnote: Distance measurements postulate cubical volumes of unit side length. Also, ε is always set to the same small constant.)

    4 Blending

The warping step has produced two warped volumes S′ and T′ from the source and target volumes S and T. Any practical warp is likely to misalign some features of S′ and T′, possibly because these were not specifically delineated by feature elements. Even if perfectly aligned, matching features may have different opacities. These areas of the morph, collectively called mismatches, will have to be smoothly faded in/out in the rendered sequence, in order to maintain the illusion of a smooth transformation. This is the goal of blending.

We have two alternatives for performing this blending step. It may either be done by cross-dissolving images rendered from S′ and T′, which we call 2.5D morphing, or by cross-dissolving the volumes themselves, and rendering the result, i.e. a full 3D morph. The 2.5D approach produces smooth image sequences and provides the view and lighting independence of 3D morphing discussed in section 1.1; however, some disadvantages of 2D morphing are reintroduced, such as incorrect lighting and occlusions. Consequently, 2.5D morphs do not look as realistic as 3D morphs. For example, the missing link of figure 5(f) lacks distinct teeth, and the base of the skull appears unrealistically transparent.

For this reason, we decided to investigate full 3D morphing, whereby we blend the warped volumes by interpolating their voxel values. The interpolation weight is a function w(t) that varies over time, where time t is the normalized frame number. We have the option of using either a linear or non-linear w(t).
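As an illustration, the sketch below cross-dissolves two warped volumes voxel by voxel (hypothetical names; the logistic form of sigmoidWeight is our assumption for illustration, since sections 4.1 and 4.2 below specify only a linear weight and a sigmoid steered by a blending parameter k, respectively):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // One voxel of a classified volume: color plus opacity.
    struct Voxel { float r, g, b, a; };

    // Linear weight of section 4.1: w(t) = t.
    float linearWeight(float t) { return t; }

    // An assumed sigmoid weight steered by blending parameter k
    // (section 4.2): near 0 early in the morph, near 1 late.
    float sigmoidWeight(float t, float k) {
        return 1.0f / (1.0f + std::exp(k * (1.0f - 2.0f * t)));
    }

    // Cross-dissolve warped source S' and warped target T'. Color and
    // opacity may use different weights: linear interpolation works well
    // for color, while opacity benefits from the sigmoid (section 4.2).
    std::vector<Voxel> blend(const std::vector<Voxel>& warpedS,
                             const std::vector<Voxel>& warpedT,
                             float wColor, float wAlpha) {
        std::vector<Voxel> morph(warpedS.size());
        for (std::size_t i = 0; i < warpedS.size(); ++i) {
            const Voxel& s = warpedS[i];
            const Voxel& t = warpedT[i];
            morph[i].r = (1 - wColor) * s.r + wColor * t.r;
            morph[i].g = (1 - wColor) * s.g + wColor * t.g;
            morph[i].b = (1 - wColor) * s.b + wColor * t.b;
            morph[i].a = (1 - wAlpha) * s.a + wAlpha * t.a;
        }
        return morph;
    }

For a plain linear dissolve both weights are t; for the compensated dissolve of section 4.2 the color weight can stay linear while the opacity weight follows the sigmoid.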

    4.1 Linear Cross-Dissolving

The pixel cross-dissolving of 2D morphing suggests a linear w(t). Indeed, it works well for blending the color information of S′ and T′. However, it fails to interpolate opacities in a manner such that the rendered morph sequence appears smooth. This is due to the exponential dependence of the color of a ray cast through the volume on the opacities of the voxels it encounters. This phenomenon is illustrated in the morph of figure 5. In particular, the morph abruptly snaps into the source and target volumes if a linear w(t) is used: figure 5(g) shows that at time t = 0.06, very early in the morph, the empty space towards the front of the human head has already been filled in by the warped orangutan volume.

    4.2 Non-Linear Cross-Dissolving

In order to obtain smoothly progressing renderings, we would like to compensate for the exponential dependence of rendered color on opacity as we blend S′ and T′. This can be done by devising an appropriate w(t). In principle, there cannot exist an ideal compensating w(t): the exact relationship between rendered color and opacity depends on the distance the ray travels through voxels with this opacity, hence a globally applied w(t) cannot compensate at once for all mismatches, since they have different thicknesses. Even a locally chosen w(t) cannot work, as different viewpoints cast different rays through the morph.

In practice, the mismatches between S′ and T′ are small in number and extent. Hence, the above theoretical objections do not prevent us from empirically deriving a successful w(t).

(Footnote: In other words, time t is a real number linearly increasing from 0 to 1 as the morph unfolds.)


Figure 4: 2D analogue of piecewise linear warping. A warped image I′ is first subdivided by an adaptive grid of squares, here marked by solid lines. Then, each square vertex is warped into I. Finally, pixels in the interior of each grid cell are warped by bilinearly interpolating the warped positions of the vertices. The dashed arrows demonstrate how the interior of the bottom right square is warped. The dotted rectangles mark image buffer borders.

Our design goal is to compensate for the exponential relation of rendered color to opacity by interpolating opacities at the rate of an inverse exponential. A sigmoid curve w(t) satisfies this requirement: it suppresses the contribution of T′'s opacity in the early part of the morph, the degree of suppression controlled by the blending parameter k; symmetrically, that contribution is enhanced in the latter part of the morph. Figure 5(h) illustrates the application of compensated interpolation to the morph of figure 5: in contrast to figure 5(g), figure 5(h) looks very much like the human head, as an early frame in the morph sequence should.

5 Performance and Optimization

A performance drawback of our feature-based warping technique is that each point in the warped volume is influenced by all elements, since the influence fields never decay to zero. It follows that the time to warp a volume is proportional to the number of element pairs. An efficient C++ implementation, using incremental calculations, needs 160 minutes to warp a single volume with 30 element pairs on an SGI Indigo 2.

We have implemented two optimizations which greatly accelerate the computation of the warped volume V′, where we henceforth use V to denote either S or T. First, we approximate the spatially non-linear warping function with a piecewise linear warp [13]. Second, we introduce an octree subdivision over V.

5.1 Piecewise Linear Approximation

The 2D form of this optimization, shown in figure 4, illustrates its key steps within the familiar framework of image warping. In 3D, piecewise linear warping begins by subdividing V′ into a coarse, 3D, regular grid, and warping the grid vertices into V, using the algorithm of section 3.1. The voxels in the interior of each cubic grid cell are then warped by trilinearly interpolating the warped positions of the cube's vertices. Using this method, V′ can be computed by scan-converting each cube in turn. Essentially, we treat V as a solid texture, with the warped grid specifying the mapping into texture space. The expensive computation of section 3.1 is now performed only for a small fraction of the voxels, and scan-conversion dominates the warping time.
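In outline, the optimization might look as follows; this is a hypothetical sketch reusing Vec3, ElementPair, and warpAll from the earlier sketches, and it shows a uniform grid, with the adaptive refinement discussed next:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Exactly warp only the vertices of a coarse (cells x cells x cells)
    // grid over the warped volume, then place every voxel by trilinear
    // interpolation of its cell's eight warped corners. Returns, for each
    // voxel of the dim^3 warped volume, the position to resample in V.
    std::vector<Vec3> piecewiseLinearWarp(
            int dim, int cells, const std::vector<ElementPair>& pairs) {
        int n = cells + 1;
        auto cidx = [n](int i, int j, int k) { return (k * n + j) * n + i; };
        double step = double(dim - 1) / cells;
        std::vector<Vec3> corner(n * n * n);
        for (int k = 0; k < n; ++k)           // exact warp, vertices only
            for (int j = 0; j < n; ++j)
                for (int i = 0; i < n; ++i)
                    corner[cidx(i, j, k)] =
                        warpAll({i * step, j * step, k * step}, pairs);

        std::vector<Vec3> map(std::size_t(dim) * dim * dim);
        for (int z = 0; z < dim; ++z)
            for (int y = 0; y < dim; ++y)
                for (int x = 0; x < dim; ++x) {
                    double fx = x / step, fy = y / step, fz = z / step;
                    int i = std::min(int(fx), cells - 1);  // containing cell
                    int j = std::min(int(fy), cells - 1);
                    int k = std::min(int(fz), cells - 1);
                    double u = fx - i, v = fy - j, w = fz - k;
                    Vec3 p{0, 0, 0};          // trilinear blend of corners
                    for (int dk = 0; dk < 2; ++dk)
                        for (int dj = 0; dj < 2; ++dj)
                            for (int di = 0; di < 2; ++di)
                                p = p + corner[cidx(i + di, j + dj, k + dk)]
                                      * ((di ? u : 1 - u) * (dj ? v : 1 - v)
                                         * (dk ? w : 1 - w));
                    map[(std::size_t(z) * dim + y) * dim + x] = p;
                }
        return map;
    }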

This piecewise linear approximation will not accurately capture the warp in highly non-linear regions, unless we use a very fine grid. However, computing a uniformly fine sampling of the warp defeats the efficiency gain of this approach. Hence, we use an adaptive grid which is subdivided more finely in regions where the warp is highly non-linear. To determine whether a grid cell requires subdivision, we compare the exact and approximated warped positions of several points within the cell. If the error is above a user-specified threshold, the cell is subdivided further. In order to reduce computation, we use the vertices of the next-higher resolution grid as the points at which to measure the error. Using this technique, the non-linear warp can be approximated to arbitrary accuracy.

(Footnote: We always use an error tolerance of a single voxel width and an initial subdivision of … cells.)
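The subdivision test amounts to comparing the exact warp with its trilinear approximation; the sketch below (our own simplification: only the cell center is checked, whereas the paper samples the vertices of the next-finer grid) shows the idea:

    #include <cmath>
    #include <vector>

    // Given the eight exactly warped corners of a cell spanning [lo, hi],
    // subdivide if the trilinear approximation at the cell center strays
    // from the exact warp by more than tol (e.g. one voxel width).
    bool needsSubdivision(const Vec3 corner[8], const Vec3& lo,
                          const Vec3& hi,
                          const std::vector<ElementPair>& pairs,
                          double tol) {
        Vec3 center{(lo.x + hi.x) / 2, (lo.y + hi.y) / 2, (lo.z + hi.z) / 2};
        Vec3 approx{0, 0, 0};           // trilinear value at the center is
        for (int i = 0; i < 8; ++i)     // the mean of the eight corners
            approx = approx + corner[i] * 0.125;
        Vec3 diff = warpAll(center, pairs) - approx;
        return std::sqrt(diff.dot(diff)) > tol;
    }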

Since we are subsampling the warping function, it is possible that this algorithm will fail to subdivide non-linear regions. Analytically bounding the variance of the warping function would guarantee conservative subdivision. However, this is unnecessary in practice, as the warps used in generating morphs generally do not possess large high-frequency components.

This optimization has been applied to 2D morphing systems as well; by using common texture-mapping hardware to warp the images, 2D morphs can be generated at interactive rates [1].

5.2 Octree Subdivision

V usually contains large empty regions, that is, regions which are completely transparent. The warp will map many parts of V′ into these empty regions of V. Scan conversion, as described above, need not take place when a warped grid cell is wholly contained within such a region. By constructing an octree over V, we can identify many such cells, and thus avoid scan converting them.
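A sketch of the empty-space test follows (hypothetical structure reusing Vec3 from the earlier sketches; a real implementation would build the octree bottom-up from the opacities of V and query it with each warped grid cell's bounding box):

    // Octree node over V: empty means the whole region is transparent;
    // leaves with content have no children.
    struct OctreeNode {
        bool empty = false;
        OctreeNode* child[8] = {};
    };

    // True if the axis-aligned box [lo, hi] lies entirely in transparent
    // space. nlo/nhi are the bounds of the current node (the root call
    // passes the bounds of V, which must contain the query box).
    bool boxIsEmpty(const OctreeNode* node, Vec3 nlo, Vec3 nhi,
                    const Vec3& lo, const Vec3& hi) {
        if (node->empty) return true;
        if (!node->child[0]) return false;  // non-empty leaf: be conservative
        Vec3 mid{(nlo.x + nhi.x) / 2, (nlo.y + nhi.y) / 2, (nlo.z + nhi.z) / 2};
        for (int i = 0; i < 8; ++i) {
            Vec3 clo{i & 1 ? mid.x : nlo.x, i & 2 ? mid.y : nlo.y,
                     i & 4 ? mid.z : nlo.z};
            Vec3 chi{i & 1 ? nhi.x : mid.x, i & 2 ? nhi.y : mid.y,
                     i & 4 ? nhi.z : mid.z};
            bool overlaps = clo.x < hi.x && chi.x > lo.x &&
                            clo.y < hi.y && chi.y > lo.y &&
                            clo.z < hi.z && chi.z > lo.z;
            // every child overlapping the box must itself be empty
            if (overlaps && !boxIsEmpty(node->child[i], clo, chi, lo, hi))
                return false;
        }
        return true;
    }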

5.3 Implementation

Our optimized warping method warps a volume in approximately 3 minutes per frame on an SGI Indigo 2. This represents a speedup of 50 over the unoptimized algorithm, without noticeable loss of quality. The running time is still dominated by scan-conversion and resampling, both of which can be accelerated by the use of 3D texture-mapping hardware.

6 Results and Conclusions

Our color figures show the source volumes, target volumes, and halfway morphs for three morph sequences we have created.

The human and orangutan volumes shown in figures 5(a) and 5(c) were warped using 26 element pairs to produce the volumes of figures 5(b) and 5(d) at the midpoint of the morph. The blended middle morph appears in figure 5(e).

Figures 6 and 7 show two examples of color morphs, requiring 37 and 29 element pairs, respectively. The UI, displaying the elements used to control the morph of figure 6, is shown in 6(d).

The total time it takes to compose a 50-frame morph sequence comprises all the steps shown in figure 1. Our experience is that about 24 hours are necessary to turn design into reality on an SGI Indigo 2:

Hours   Task
10      CT scan segmentation, classification, retouching
1       Scan conversion of geometric model
8       Feature element definition (novice)
3       Feature element definition (expert)
5       Warping
3       Blending: 1 hour for each k = 2, 4, 6; retain best
4       Hi-res volume rendering (monochrome)
12      Hi-res volume rendering (color)

We have presented a two-step feature-based technique for realistic and smooth metamorphosis between two 3D models represented by volumes. In the first step, our feature-based warping algorithm allows fine user control, and thus ensures realistic morphs. In addition, our warping method is amenable to an efficient, adaptive approximation which gives a 50 times speedup. Also, our technique corrects the ghosting problem of [2]. In the second step, our user-controlled blending ensures that the rendered morph sequence appears smooth.



7 Future Work and Applications

We see the potential for improving 3D morphing in three primary aspects:

Warping Techniques: Improved warping methods could allow for finer user control, as well as smoother, possibly spline-based, interpolation of the warping function across the volume. More complex, but more expressive feature elements [11] may also be designed.

User Interface: We envision improving our UI by adding computer-assisted feature identification: the computer suggesting features by landmark data extraction [18], 3D edge identification, or, as in 2D morphing, by motion estimation [6]. Also, we are considering giving the user more flexible control over the movement of feature elements during the morph, i.e. the rule by which interpolated elements are constructed, perhaps by key-framed or spline-path motion.

Blending: Blending can be improved by allowing local definition of the blending rate, associating an interpolation schedule with each feature element.

Morphing's primary application has been in the entertainment industry. However, it can also be used as a general visualization tool for illustration and teaching purposes [3]; for example, our orangutan to human morph could be used as a means of visualizing Darwinian evolution. Finally, our feature-based warping technique can be used in modeling and sculpting.

Acknowledgments

Philippe Lacroute helped render our morphs, and designed part of the dart to X-29 fly-by movie shown on our video. We used the horse mesh courtesy of Rhythm & Hues, the color added by Greg Turk. John W. Rick provided the plastic cast of the orangutan head and Paul F. Hemler arranged the CT scan. Jonathan J. Chew and David Ofelt helped keep our computer resources in operation.

References

[1] T. Beier and S. Neely. Pacific Data Images. Personal communication.

[2] T. Beier and S. Neely. Feature-based image metamorphosis. In Computer Graphics, vol 26(2), pp 35-42, New York, NY, July 1992. Proceedings of SIGGRAPH 92.

[3] B. P. Bergeron. Morphing as a means of generating variation in visual medical teaching materials. Computers in Biology and Medicine, 24(1):11-18, Jan. 1994.

[4] B. Cabral, N. Cam, and J. Foran. Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In A. Kaufman and W. Krueger, editors, Proceedings of the 1994 Symposium on Volume Visualization, pp 91-98, New York, NY, Oct. 1994. ACM SIGGRAPH and IEEE Computer Society.

[5] M. Chen, M. W. Jones, and P. Townsend. Methods for volume metamorphosis. To appear in Image Processing for Broadcast and Video Production, Y. Paker and S. Wilbur, editors, Springer-Verlag, London, 1995.

[6] M. Covell and M. Withgott. Spanning the gap between motion estimation and morphing. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, vol 5, pp 213-216, New York, NY, 1994. IEEE.

[7] T. A. Galyean and J. F. Hughes. Sculpting: An interactive volumetric modeling technique. In Computer Graphics, vol 25(4), pp 267-274, New York, NY, July 1991. Proceedings of SIGGRAPH 91.

[8] T. He, S. Wang, and A. Kaufman. Wavelet-based volume morphing. In D. Bergeron and A. Kaufman, editors, Proceedings of Visualization 94, pp 85-91, Los Alamitos, CA, Oct. 1994. IEEE Computer Society and ACM SIGGRAPH.

[9] J. F. Hughes. Scheduled Fourier volume morphing. In Computer Graphics, vol 26(2), pp 43-46, New York, NY, July 1992. Proceedings of SIGGRAPH 92.

[10] A. Kaufman, D. Cohen, and R. Yagel. Volume graphics. Computer, 26(7):51-64, July 1993.

[11] A. Kaul and J. Rossignac. Solid-interpolating deformations: Construction and animation of PIPs. In F. H. Post and W. Barth, editors, Eurographics 91, pp 493-505, Amsterdam, The Netherlands, Sept. 1991. Eurographics Association, North-Holland.

[12] J. R. Kent, W. E. Carlson, and R. E. Parent. Shape transformation for polyhedral objects. In Computer Graphics, vol 26(2), pp 47-54, New York, NY, July 1992. Proceedings of SIGGRAPH 92.

[13] P. Litwinowicz. Efficient techniques for interactive texture placement. In Computer Graphics Proceedings, Annual Conference Series, pp 119-122, New York, NY, July 1994. Conference Proceedings of SIGGRAPH 94.

[14] W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3-D surface construction algorithm. In Computer Graphics, vol 21(4), pp 163-169, New York, NY, July 1987. Proceedings of SIGGRAPH 87.

[15] K. Perlin and E. M. Hoffert. Hypertexture. In Computer Graphics, vol 23(3), pp 253-262, New York, NY, July 1989. Proceedings of SIGGRAPH 89.

[16] T. W. Sederberg, P. Gao, G. Wang, and H. Mu. 2-D shape blending: An intrinsic solution to the vertex path problem. In Computer Graphics Proceedings, Annual Conference Series, pp 15-18, New York, NY, Aug. 1993. Conference Proceedings of SIGGRAPH 93.

[17] T. W. Sederberg and S. R. Parry. Free-form deformations of solid geometric models. In Computer Graphics, vol 20(4), pp 151-160, New York, NY, Aug. 1986. Proceedings of SIGGRAPH 86.

[18] P. A. van den Elsen, E.-J. D. Pol, and M. A. Viergever. Medical image matching: a review with classification. IEEE Engineering in Medicine and Biology Magazine, 12(1):26-39, Mar. 1993.

[19] S. W. Wang and A. Kaufman. Volume sculpting. In Proceedings of 1995 Symposium on Interactive 3D Graphics, pp 151-156, 214, New York, NY, Apr. 1995. ACM SIGGRAPH.

[20] S. W. Wang and A. E. Kaufman. Volume sampled voxelization of geometric primitives. In G. M. Nielson and D. Bergeron, editors, Proceedings of Visualization 93, pp 78-84, Los Alamitos, CA, Oct. 1993. IEEE Computer Society and ACM SIGGRAPH.

[21] G. Wolberg. Digital Image Warping. IEEE Computer Society Press, Los Alamitos, CA, 1990.


Figure 5: Human to orangutan morph.

(a) Original CT human head.

(b) Human head warped to midpoint of morph.

(c) Original CT orangutan head.

(d) Orangutan head warped to midpoint of morph.

(e) 3D volume morph halfway between human head and orangutan head.

(f) Cross-dissolve of figures 5(b) and 5(d) illustrating a drawback of 2.5D morphing. The base of the skull (indicated by red arrows) appears unrealistically transparent, and the teeth are indistinct, compared to the full 3D morph shown in figure 5(e).

(g) Volume morph at time 0.06 using linear interpolation of warped volumes. Due to the exponential dependence of rendered color on opacity, the empty space towards the front of the human head has already been filled in by the warped orangutan volume (red arrows).

(h) Volume morph at time 0.06 using nonlinear interpolation of warped volumes to correct for the exponential dependence of color on opacity. The result is now nearly identical to the human (see section 4.2).


Figure 6: Dart to X-29 morph.

(a) Dart volume from scan-converted polygon mesh.

(b) X-29 volume from scan-converted polygon mesh.

(c) Volume morph halfway between dart and X-29.

(d) User interface showing elements used to establish correspondences between models. Points (not shown), segments, rectangles, and boxes are respectively drawn as pink spheres, green cylinders, narrow blue slabs, and yellow boxes. The x, y, and z axes of each element are shown only when the user clicks on an element in order to change its attributes; otherwise, they remain invisible to prevent cluttering the work area (see section 3.2).

Figure 7: Lion to leopard-horse morph.

(a) Lion volume from scan-converted polygon mesh.

(b) Leopard-horse volume from scan-converted polygon mesh.

(c) Volume morph halfway between lion and leopard-horse.