
Simulation of postoperative 3D facial morphology using a physics-based head model



Yoshimitsu Aoki¹, Shuji Hashimoto¹, Masahiko Terajima², Akihiko Nakasima²

¹ Department of Applied Physics, School of Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan
e-mail: aoki, [email protected]
² Department of Orthodontics, Faculty of Dentistry, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
e-mail: [email protected]; [email protected]

We propose a prototype of a facial surgery simulation system for surgical planning and the prediction of facial deformation. We use a physics-based human head model. Our head model has a 3D hierarchical structure that consists of soft tissue and the skull, constructed from exact 3D CT patient data. Anatomic points measured on X-ray images from both frontal and side views are used to fit the model to the patient's head. The purposes of this research are to analyze the relationship between changes of mandibular position and facial morphology after orthognathic surgery, and to simulate the exact postoperative 3D facial shape. In the experiment, we used our model to predict the facial shape after surgery for patients with mandibular prognathism. Comparing the simulation results and the actual facial images after the surgery shows that the proposed method is practical.

Key words: Head modeling – Physics-based deformation – Surgical simulation – Orthodontics – X-ray image

The Visual Computer (2001) 17:121–131. © Springer-Verlag 2001

1 Introduction

In recent years, computer-based 3D human representation has come into wide use: in the scientific community, in the entertainment field, in film production, and in the games industry. For a natural and realistic representation of humans, the quality of facial animation is most important because the face carries information about the individual's personality, affectivity, and so on. Computer facial modeling is an essential issue for the generation of facial images and facial animations. The techniques for creating facial models and manipulating facial attributes have been important and challenging topics in face-related research over the past 20 years.

A variety of facial modeling techniques, based on many aspects and methodologies, have appeared since the parameterized facial model produced by Parke in the early 1970s. He produced several segments of realistic facial animation by collecting facial action data derived from real faces. He used photogrammetric techniques and interpolated expressions to create animation (Parke 1974). This is still the basis of parameterized facial models for facial animation production.

In order to synthesize facial expressions on facial models, the Facial Action Coding System (FACS), developed by Ekman and Friesen (1978), is frequently used because it defines complex facial actions by combining several action units (AUs). Each AU represents the corresponding muscle action and a typical facial posture such as the "Brow Raiser" (AU1). With the AUs, facial actions can be quickly implemented on geometric facial models. Morishima and his colleagues (1989) applied a FACS-based facial model to video teleconferencing. This arose from the concept of model-based coding schemes.

Another facial modeling technique is found in the muscle model approach (Waters 1987). This method creates flexible facial expressions by manipulating the underlying musculature of the face in an anatomical facial model. Similarly, Hashimoto et al. (1989) propose a 2D spring frame model for facial image modification. Thalmann et al. (1988) also present a muscle action facial model, and Sera et al. (1996) report techniques for synthesizing speech animation in a physics-based muscle model.

Recently, the development of optical range scanners has greatly improved the accuracy of facial shape data for facial animations. The Cyberware laser scanner is typical of optical range scanners; it can obtain high-resolution 3D data for facial shape and its corresponding texture. Lee et al. (1995) report techniques to reconstruct an individual physics-based model from data obtained by an optical laser scanner. In this decade, more detailed facial models from CT or MRI data have been reported for medical applications such as visualization and surgical simulation of the face. In the next section, we review conventional and current work on facial modeling in medicine.

2 Facial modeling in medicine

2.1 Previous works

Computer graphics techniques in virtual reality play an important role in medicine. Visualization of human body information and surgical simulation are the most applicable topics among them. By combining medical image processing and scanning technology, we can visualize highly realistic medical images in two or three dimensions for medical treatment and planning.

Regarding medical applications related to facial animation, two main aspects are of interest: the surgical simulation of facial tissue and craniofacial surgical planning. In both cases, the facial model of the patient before surgery is reconstructed for preoperative surgical simulation.

The main purpose of facial tissue simulation is to estimate the response of skin and muscle after they have been cut, removed, or rearranged (Larrabee 1986). Deng (1988) proposes a finite-element model of skin tissue to simulate skin response after incisions. More recently, Mita et al. (1996) present an elaborate model including underlying facial tissue from CT scan data for a facial paralysis simulator.

Craniofacial surgical planning and simulation involve the rearrangement of the facial bones in order to set them in the correct position. The surgical procedure includes cut-and-paste operations on the bones, and the objective is to predict the postoperative facial morphology caused by the operation. In this case, models require not only facial surface data, but also shape data for the skull. The typical way of modeling is to use CT or MRI scans of the head. Koch et al. (1996) develop a prototype system for surgical planning and prediction of the facial shape after craniofacial and maxillofacial surgery for patients with facial deformities. Finite-element models from MRI data sets are used for this purpose.

2.2 Our motivation

In medical applications such as the surgical simulation just mentioned, the main points for facial models are summarized as follows. First, we extract shape features from input resources such as CT, MRI, and X-ray images for the head modeling; second, we reconstruct an anatomical head model that has the ability to estimate facial tissue dynamics for facial surgical planning and simulation.

In this context, we propose a head model reconstruction method using the shape features extracted from two-directional X-ray images and a frontal facial image. We also describe the facial surgery simulation system that uses this anatomical and physics-based head model. The main feature of our head model is that it can analyze facial dynamics by calculating the kinetic equation of the entire head model.

As a practical example of medical applications, we focus on a surgical simulation for orthodontics. Concretely, we investigate the relationship between changes of the mandibular position and those of the facial morphology after orthognathic surgery, and we simulate the exact postoperative 3D facial deformation with the physics-based head model. In terms of the actual treatment and dental surgery, predicting the surgical result in advance is extremely important for both dentists and patients; this facilitates "informed consent". Therefore, we aim at a facial surgical simulation system that assists surgeons visually in surgery planning and in actual surgical procedures. We do this with 3D displays of realistic and accurate predicted results. Computer graphics techniques provide this visualization.

We organize this paper as follows. In Sect. 3, we summarize the orthognathic surgery concerned and its treatment procedures to make the role of our work clear. In Sect. 4, we present the proposed head model and its dynamic mechanism. In Sect. 5, we describe our feature extraction method with the help of X-ray images and a facial image, then explain how to deform the generic head model to match the patient's head. Next, we describe our surgical simulation system in Sect. 6. Then we show some simulation results and compare them with actual surgical cases in Sect. 7. Finally, we conclude this paper with some discussion and mention prospective future work.


3 Overview of orthognathic surgery

This section explains mandibular prognathism, which is a typical case of jaw deformity. Next, we review the actual treatment process in order to explain how our system can be helpful.

3.1 Mandibular prognathism

Mandibular prognathism is one of the jaw deformities that cause functional and aesthetic disorders. In serious cases, it hampers chewing and pronunciation, so that orthognathic treatment and surgery are required. Figure 1a illustrates an example of a patient with mandibular prognathism. In this case, the mandible was transposed posteriorly, with the result that the functional and aesthetic disharmony of the facial morphology was corrected, as shown in Fig. 1b.

3.2 Treatment process

Figure 2 shows the flow diagram of a standard treatment process for orthognathic surgery, which consists of three treatment steps. We describe each step below.

Check-up before treatment. Before treatment, a doctor of dentistry checks the condition of the patient and obtains certain information for the treatment, such as facial images and X-ray images of the frontal and side views. In addition, shape data of the dentition are acquired from a plaster cast, called a set-up model in the field of orthodontics.

Treatment planning. Using the data from the previous step, the doctor determines the necessary treatment and does the surgical planning. Typically, the surgeon overdraws the predicted 2D profile of the patient's facial contour on the X-ray images. This prediction is called "paper surgery". Additionally, a set-up model is often used for the orthodontic planning.

Actual treatment. Orthodontic treatment and surgery are carried out according to the surgical plan.

As already stated, the paper surgery method, which is the typical way of predicting at present, provides only 2D information (the facial contour) about the facial morphology after surgery. Thus, we attempt to simulate 3D changes in facial soft tissue following orthognathic surgery with our physics-based head modeling method. In the next section, the head model is introduced in detail.

[Fig. 2 diagram labels: Check-up (images in the oral cavity, normalized facial images, shape of dentition); Planning (paper surgery, set-up model); Treatment (correction before surgery, orthognathic surgery, correction after surgery)]

Fig. 1a,b. Example of a patient with mandibular prognathism: a before treatment; b after treatment
Fig. 2. Flow diagram of the orthognathic treatment

4 Initial head model

4.1 Modeling method

We constructed a hierarchical head model that consists of a skin layer, a muscle layer, and a skull layer. The skin and skull layers are segmented from 3D CT data by thresholding, for which we used the segmentation software "Mimics" (Fig. 3a).


Fig. 3a,b. Extraction of data from a 3D CT scan: a segmentation software "Mimics"; b segmented face and skull
Fig. 4a–d. a Skull; b muscle; c wire-frame skull; d integrated head model

The skin layer is a wire-frame model constructed from the CT-scanned data, which is regarded as an elastic body. All frames composing the skin are simulated by nonlinear springs whose elastic parameters represent the skin elasticity. The facial muscles are also modeled by nonlinear springs, which start from the skull layer and are attached to the facial surface, just like the actual facial structure. The muscles are grouped by their positions, and act in harmony with the connected facial tissues. At present, 14 kinds of facial muscles are implemented; they are mainly concerned with facial expressions. Each muscle has a contraction parameter to generate facial expressions. For the skull layer, the polygon model from ViewPoint is used. The jaw part of this model can simulate realistic jaw movement with six degrees of freedom; it is located under the facial tissues and reflects the relationship between the face and the skull. The three layers composing the hierarchical head model and the integrated head model are illustrated in Fig. 4. This generic head model is used as the initial head model for the head reconstruction process.
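The layered structure described above can be sketched in code. This is purely illustrative, not the authors' implementation; the class names, fields, and the muscle group used in the usage line are our own inventions:

```python
from dataclasses import dataclass, field

@dataclass
class Spring:
    i: int                     # index of first attached node
    j: int                     # index of second attached node
    k: float                   # base stiffness (made nonlinear via contraction)
    contraction: float = 1.0   # per-muscle contraction parameter

@dataclass
class HeadModel:
    skin_nodes: list = field(default_factory=list)      # 3D points on the face
    skin_springs: list = field(default_factory=list)    # elastic skin frames
    muscles: dict = field(default_factory=dict)         # group name -> springs
    skull_vertices: list = field(default_factory=list)  # polygonal skull layer

# Hypothetical usage: register one muscle group attached between two nodes.
model = HeadModel()
model.muscles["zygomaticus"] = [Spring(0, 1, k=2.0)]
```

Grouping the muscle springs by name mirrors the paper's statement that the muscles are grouped by position and act in harmony with the connected tissues.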

4.2 Dynamic facial mechanism

In generating facial expressions, points on the skin model are moved in order to obtain the modified face. The energy of the spring system can be changed by the contraction of the muscles and by the jaw action. The new position of each point on the facial surface is obtained by calculating the energy equilibrium point of the entire spring system. The kinetic equation for feature point i on the facial surface is

m_i d²r_i/dt² = −Σ_{i,j} k_ij(C_r)(r_i − r_j) + m_i g − R dr_i/dt,   (1)

where r_i represents the 3D coordinates of the feature point i. The first and second terms represent the total elastic force acting on point i and the attraction of gravity, respectively. The value of k_ij is changed gradually by the contraction parameter C_r of each spring so that it can represent the nonlinear property of the skin and muscles. The third term is the viscous term of the facial model.

Fig. 5a,b. Facial action synthesizer: a generated image; b facial action editor
Fig. 6a,b. Facial expression: a muscle action; b facial modification
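As a minimal sketch of how the equilibrium of (1) could be found numerically (this is our illustration, not the paper's solver; the node positions, spring table, and parameter values are hypothetical), one can integrate the damped spring system with semi-implicit Euler steps until it settles:

```python
import numpy as np

def relax(r, springs, m, g, R, dt=0.01, steps=2000):
    """Integrate m_i r_i'' = -sum_j k_ij (r_i - r_j) + m_i g - R r_i'
    (Eq. 1) with semi-implicit Euler until the spring system settles.
    r       : (N, 3) float array of node positions, updated in place
    springs : iterable of (i, j, k_ij) entries; k_ij may already include
              the contraction-dependent stiffness k_ij(C_r)
    m       : (N,) node masses;  g : (3,) gravity;  R : viscous coefficient
    """
    v = np.zeros_like(r)
    for _ in range(steps):
        f = m[:, None] * g                  # gravity term m_i g
        for i, j, k in springs:             # elastic term -k_ij (r_i - r_j)
            d = r[i] - r[j]
            f[i] -= k * d
            f[j] += k * d
        f -= R * v                          # viscous term -R dr_i/dt
        v += dt * f / m[:, None]            # update velocity first ...
        r += dt * v                         # ... then position (semi-implicit)
    return r
```

Semi-implicit Euler is chosen here because the plain explicit variant is unstable for undamped spring oscillations; the paper does not state which integration scheme was used.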

4.3 Facial action synthesizer

Using this model, we developed the facial action synthesizer, which enables the user to create facial expressions derived from facial actions (Fig. 5). The facial parameters required for facial modification are the contraction rates of the muscles and the jaw action parameters, which can be controlled through the GUI. Facial actions are parameterized relative to typical facial expressions as indicated by anatomical analysis. Figure 6 shows the result of generating the typical facial expression "surprise", which is caused by muscle contraction around the forehead and jaw. Thus, our head model can create flexible facial expressions derived from given parameters, such as the contraction rates of the muscles and the action parameters of the jaw, so that facial dynamics can be simulated. We use this model as the initial head model for the surgery simulation system.

5 Reconstruction from X-ray images

In this section, we describe the head modeling method that uses X-ray images more precisely. To obtain 3D shape information of the face and the skull, two-directional X-ray images and a frontal facial image are used in this method.

5.1 Measurement points extraction

Measurement points. Anatomical measurement points on a face and a skull are usually utilized in order to analyze their shapes quantitatively in orthodontics. Based on this analysis, orthodontists plan how to treat each patient by reconstructing 3D shapes from images. Figure 7 shows the plotted measurement points on the sketches of X-ray images (cephalograms) from the frontal and side views. The position of each measurement point is defined in detail, especially any point on a contour that has a large curvature. We manually plot the two-directional cephalograms with a mouse, and use them as the typical control points of the head. The number of measurement points is 22 for the skull model, and 21 for the face.

Cephalogram measurement. The 3D coordinates can be calculated by integrating the 2D coordinates of each point on the two-directional cephalograms. However, a target object is usually projected onto the image planes as expanded and rotated images (Figs. 8, 9). Thus, we have to adjust them for more accurate head modeling.

In capturing X-ray images, the head is fixed by ear rods. The two X-ray sources and cameras are also bound, so we can obtain both normalized facial images and X-ray images of the frontal and side views. The head is projected onto the picture planes as an expanded image of the actual measurement. Therefore, an expansion adjustment should be applied to obtain an accurate 3D shape of the head. As shown in Fig. 8, the point on the subject Ao(xo, yo, zo) is enlarged and projected as the point ALAT(yLAT, zLAT) on the picture plane (the yz plane). The coordinates of the measurement point Ao can be computed by the following adjustment (2). The value of k is the distance from the y axis, so it depends on each measurement point:

yo = yLAT × (L1 + k) / L2,
zo = zLAT × (L1 + k) / L2.   (2)

Although the head is fixed by the ear rods, slight head rotation around the horizontal axis (Fig. 9) is unavoidable in capturing images, so the measurement point on the subject is projected to an incorrect position (xPA, yPA) on the xy plane. The rotation adjustment can be calculated by (3). Then, the line y = yPA is drawn on the frontal cephalogram, and the exact value of xPA is obtained from its cross-section with the skull and facial contour using (4). Thus, accurate 3D coordinates of the measurement point (xo, yo, zo) can be prepared for the face reconstruction process:

yPA = L2(L1 + k)(zLAT sin θ + yLAT cos θ) / [L1 L2 + (L1 + k)(zLAT cos θ − yLAT sin θ)],   (3)

xo = xPA × (L1 + zo) / L2.   (4)

Fig. 7a,b. Measurement points plotted on the cephalograms: a frontal view; b side view
Fig. 8. X-ray projection onto the profile image
Fig. 9. Head rotation around the horizontal axis
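The adjustments (2)–(4) are direct arithmetic, so they transcribe readily into code. The sketch below is our own transcription (function names are invented; L1, L2, and θ are the geometry constants of Figs. 8 and 9, and the numeric values in any usage are made up):

```python
from math import sin, cos

def expansion_adjust(y_lat, z_lat, k, L1, L2):
    """Eq. (2): undo the X-ray magnification for a point at distance k
    from the y axis, projected on the profile (yz) picture plane."""
    scale = (L1 + k) / L2
    return y_lat * scale, z_lat * scale

def rotation_adjust(y_lat, z_lat, k, L1, L2, theta):
    """Eq. (3): corrected y coordinate on the frontal (xy) plane for
    a small head rotation theta around the horizontal axis."""
    num = L2 * (L1 + k) * (z_lat * sin(theta) + y_lat * cos(theta))
    den = L1 * L2 + (L1 + k) * (z_lat * cos(theta) - y_lat * sin(theta))
    return num / den

def frontal_adjust(x_pa, z_o, L1, L2):
    """Eq. (4): undo the magnification of the frontal projection,
    using the depth z_o recovered from the profile view."""
    return x_pa * (L1 + z_o) / L2
```

With θ = 0 and z_lat = 0, the rotation adjustment reduces to a pure scaling, which is a quick sanity check on the transcription.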


5.2 Facial feature points extraction

From the two-directional X-ray images, rough profiles of the head can be extracted. For a more detailed fitting of the face shape, we use facial feature points on the contours of the various facial parts (eyes, nose, mouth). The contours of the eyes, nose, mouth, and the facial outline are automatically detected to locate the facial parts of the model. The automatic recognition is realized by combining popular techniques developed by IPA (Yokoyama 1999): skin color extraction, template matching, and the active contour model. Figure 10 shows the results of facial feature-point extraction from normalized frontal images.

5.3 Deformation of the head model

Corresponding points. In order to deform the generic head model according to the 3D coordinates of the measurement points, we also have to set the corresponding points on the head model. We selected the corresponding points as shown in Fig. 11.

Deformation method. Using the widths measured on the frontal image (x1, x2) and the corresponding widths of the initial head model (x1(std), x2(std)), the width ratios are calculated, so that the x coordinate of point i is computed by the linear interpolation (5) (see Fig. 12). Sn represents the number of steps between points 1 and 2. After width and height fitting, the coordinates (yo, zo) are used for fitting in the depth and height directions. The whole head is divided into three parts (the back part and the upper and lower frontal parts; Fig. 13) to take the anatomical structure of the head into account. The movement vector of each measurement point is calculated, and the rest of the points between two measurement points are computed by the linear interpolation (6):

xri = [i · xr2 + (Sn − i) · xr1] / Sn,   (5)

∆ri = [i · p2 + (Sn − i) · p1] / Sn,   (6)

where ∆ri represents the movement vector of point i, and p1, p2 the movement vectors of the measurement points.

Through these steps, the target head model can be reconstructed with shape features extracted from two X-ray images and a frontal facial image. Figure 14 shows an example of the fitting result for an actual patient; Fig. 14a is the result deformed only by information from the X-ray images; Fig. 14b is the final result of detailed fitting using both X-ray images and a facial image. The improvement in quality can be seen from this result.

Fig. 10. Detected feature points on the frontal facial images
Fig. 11a,b. Measurement points set on the model: a skull; b face
Fig. 12. Fitting of the width
Fig. 13a,b. Height and depth fitting: a area division; b vector interpolation
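Both (5) and (6) are the same linear interpolation over Sn steps, applied to a scalar width ratio in one case and to a 3D movement vector in the other. A short sketch (ours, under that reading of the equations):

```python
import numpy as np

def interpolate_steps(p1, p2, Sn):
    """Eqs. (5)/(6): linearly interpolate a value (a width ratio or a
    3D movement vector) over Sn steps between two measurement points.
    Returns Sn + 1 values, from p1 (i = 0) to p2 (i = Sn)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    return [(i * p2 + (Sn - i) * p1) / Sn for i in range(Sn + 1)]
```

For the depth and height fitting, p1 and p2 would be the movement vectors of two adjacent measurement points, and the returned values would displace the skin points lying between them.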

6 Simulation system

Figure 15 shows an overview of our simulation system. Using the control panel, we can set the parameter values easily and interactively. These parameters are the stiffness of the facial muscles and the jaw action parameters. For the simulation of orthognathic surgery, the parameters for the jaw treatment are essential; they are the backward movement and the rotation angle of the jaw needed to bring it into the desired position. The system takes surgical procedures such as cutting and shifting the bone into account. The changes of facial morphology caused by them are automatically computed and shown in the model view window. Our system enables surgeons to plan surgery in advance and to display the postoperative 3D facial images clearly. The proposed model also generates facial expressions caused by muscle contractions dynamically.
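The jaw-treatment parameters amount to a rigid transform of the mandible vertices. The sketch below is a hedged illustration of that idea, not the system's code; the axis conventions (y as the depth axis, rotation about x), the function name, and the pivot handling are all our assumptions:

```python
import numpy as np

def transpose_jaw(jaw_vertices, setback, angle_deg, pivot):
    """Apply the two GUI parameters to the mandible: rotate by
    angle_deg around the x axis through `pivot`, then move the jaw
    backward by `setback` along the (assumed) y depth axis.
    jaw_vertices : (N, 3) float array;  pivot : (3,) rotation center."""
    a = np.radians(angle_deg)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(a), -np.sin(a)],
                    [0.0, np.sin(a),  np.cos(a)]])
    v = (jaw_vertices - pivot) @ rot.T + pivot   # rotate about the pivot
    v[:, 1] -= setback                           # backward movement
    return v
```

After such a transform, the spring system of the head model would be relaxed again so that the soft tissue follows the displaced bone, which is the prediction step the section describes.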

7 Surgery simulation

In this section, we simulate the predicted facial shape after orthognathic surgery for three patients with mandibular prognathism. We use the proposed head model for this purpose.

7.1 Surgery method

In the experiment, sagittal splitting ramus osteotomy (the Obwegeser-Dal Pont method), which is the most popular surgical method for mandibular prognathism, was selected to treat the patients. First, the jaw is cut along certain lines and divided into three parts (Fig. 16). Then the projecting part of the jaw is set back in order to normalize its position. Our surgery system realizes these surgical procedures for surgeons.

7.2 Simulation results

The changes of facial morphology following orthognathic surgery were simulated for three patients with mandibular prognathism. In each case, we input surgical parameters such as the backward distance of the lower jaw, the rotation angle around each axis, and so on. The values of the surgical parameters are determined by the results of the dentists' surgical planning. Figure 17 shows the X-ray images of the actual surgery result and the results predicted by the model. Figure 18 illustrates the 3D facial shape predicted before surgery and the shape after surgery.

Fig. 14a,b. Reconstruction result: a rough fitting; b detailed fitting
Fig. 15a,b. Surgery simulation system: a model overview; b GUI for surgical parameters
Fig. 16. Obwegeser-Dal Pont method
Fig. 17a–d. Jaw part of the skull: a, b before and after surgery (actual results); c, d before and after surgery (simulation results)
Fig. 18a,b. Prediction of facial morphology: a before surgery; b after surgery
Fig. 19a–c. Evaluation of simulation results for three patients: a patient A; b patient B; c patient C
Fig. 20. Evaluation points on the facial surface
Fig. 21. "Smiling" face after surgery

8 Evaluation and discussion

In order to evaluate the accuracy of the experimental results more clearly, the predicted profiles of the three patients are compared in layers with the actual profiles extracted from X-ray images after surgery, as shown in Fig. 19. Figure 20 illustrates the points for evaluating the simulated facial surface in comparison with the actual tracing of the X-ray images by dental surgeons.

As shown in Fig. 19, the predicted facial profiles are very close to the actual ones after surgery. Considering the results evaluated by the surgeons, the simulated positions of the points around the lower jaw (Bs, Ps) are close to the correct positions, although the points around the lower lip (Li) include some error. The main reason for this error is probably that the quality of the mouth of our standard facial model is not good enough for this simulation. However, these experimental results were qualitatively evaluated by experienced dental surgeons to be usable for surgical simulation and planning.

In addition to the prediction of facial morphology, the simulation of facial expressions is strongly desired by both doctors and patients. Our system has the ability to generate facial expressions with dynamic motion, so it can, for instance, simulate a "smiling" face after surgery, as shown in Fig. 21. This image shows a smiling face generated by contracting the muscles around the cheeks.
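The layered comparison in Fig. 19 is qualitative; a simple quantitative companion would be the per-landmark Euclidean error. This sketch is ours: the landmark names follow the evaluation points of Fig. 20 (Bs, Ps, Li), while the coordinates in any example input are invented:

```python
import numpy as np

def landmark_errors(predicted, actual):
    """Euclidean distance between predicted and traced positions for
    each evaluation landmark (e.g. Bs, Ps around the lower jaw, Li on
    the lower lip). Both arguments map landmark name -> coordinates."""
    return {name: float(np.linalg.norm(np.asarray(predicted[name], float) -
                                       np.asarray(actual[name], float)))
            for name in predicted}
```

Such per-point errors would make the planned quantitative evaluation mentioned in the conclusion directly comparable across patients.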


9 Conclusion and future work

In this paper we have proposed a prototype of a facial surgery simulation system for surgical planning and the prediction of facial deformation using a physics-based human head model. In the experiment, we predicted the 3D facial shape after orthognathic surgery for patients with mandibular prognathism. By comparing the simulation results with the actual postoperative facial images, the proposed method can be seen to be practically useful.

As future work, we should investigate in detail how to fit the parameters of skin and muscle stiffness for each patient, in order to improve the accuracy of the modeling. In the near future, we will attempt to evaluate the difference between the predicted results and the actual facial shapes quantitatively. We will use 3D CT or MRI data of the patient's head in order to make the system more practical.

References1. Aoki Y, Hashimoto S (1998) Physical facial model based

om 3D CT data for facial image analysis and synthe-sis. Third IEEE International Conference on AutomaticFace and Gesture Recognition (FG ’98), IEEE Press, Nara,pp 448–453

2. Aoki Y, Hashimoto S (1999) Physical modeling of face us-ing spring frame based on anatomical data. Trans IEICE (A)82-A:573–582

3. Aoki Y, Hashimoto S, Terajima M, Nakasima A (1999)Simulation of postoperative 3D facial morphology usingphysics-base head model. Proceedings of the InternationalConference on Multi Media Modeling (MMM ’99), WorldScientific, Ottawa, pp 377–387

4. Deng XQ (1988) A finite element analysis of surgery of thehuman facial issue. PhD Thesis, Columbia University, NewYork

5. Ekman P, Friesen WV (1978) Manual for facial action cod-ing system. Consulting Phycologists Press, Palo Alto, Calif

6. Hashimoto S, Sato Y, Oda H (1989) Modification of facial expression using spring frame model. Proceedings of IASTED '89, Geneva, Acta Press, pp 9–15

7. Koch RM, Gross MH, Carls FR, von Büren DF, Fankhauser G, Parish YIH (1996) Simulating facial surgery using finite element models. Proceedings of SIGGRAPH '96, ACM SIGGRAPH 30:421–428

8. Larrabee W (1986) A finite element model of skin deformation. I. Biomechanics of skin and soft tissue: a review. Laryngoscope 96:399–405

9. Lee Y, Terzopoulos D, Waters K (1995) Realistic modeling for facial animation. Proceedings of SIGGRAPH '95, ACM SIGGRAPH 29:55–62

10. Magnenat-Thalmann N, Primeau NE, Thalmann D (1988) Abstract muscle action procedures for human face animation. Visual Comput 3:290–297

11. Mita H, Konno T (1996) A facial paralysis simulator for surgery planning. J Japan Soc Comput Aided Surg 4:1–5

12. Morishima S, Aizawa K, Harashima H (1989) An intelligent facial image coding driven by speech and phoneme. Proceedings of IEEE ICASSP '89, IEEE Computer Society Press, Los Alamitos, pp 1795–1798

13. Parke FI (1974) A parametric model for human faces. PhD Thesis, University of Utah, Salt Lake City, UT, UTEC-CSc-75-047

14. Sera H, Iwasawa S, Morishima S (1996) Mouth shape control with physics based facial muscle model. Technical Report of IEICE, IEICE Press MVE95-61:9–16

15. Waters K (1987) A muscle model for animating three-dimensional facial expressions. Proceedings of SIGGRAPH '87, ACM SIGGRAPH 21:17–24

16. Yokoyama T, Tanaka K, Hisatomi K, Yagi Y, Yachida M, Hara F, Hashimoto S (1999) Extracting contours and features from frontal face images. J Inst Image Inform Television Eng 53:1605–1614

Photographs of the authors and their biographies are given on the next page.



YOSHIMITSU AOKI is a Research Assistant and a PhD candidate in the Department of Applied Physics, School of Science and Engineering, Waseda University. His current research interests are image processing techniques such as 3D computer graphics related to medical applications, and facial image analysis and synthesis. He is also interested in man-machine interaction and human interfaces. He received his BE and ME degrees in Applied Physics from Waseda University in 1995 and 1997, respectively. He is a member of the Computer Society of the IEEE, IEICE, IPSJ, and IIEEJ.

SHUJI HASHIMOTO is a Professor in the Department of Applied Physics, Waseda University. He was with the Faculty of Science, Toho University, Funabashi, Japan, from 1979 to 1991. His main research interests are image processing, music, and humanoid robots. He received his BE, ME, and Dr Eng degrees in Applied Physics from Waseda University in 1970, 1973, and 1977, respectively. He is a member of the ICMA, SICE, ISCIE, IPSJ, IEICE, and the Robotics Society of Japan.

MASAHIKO TERAJIMA is currently a resident of the Department of Orthodontics, Faculty of Dentistry, Kyushu University, Japan. He is a member of the Japanese Orthodontic Society and the Japanese Society of Jaw Deformities. His current research concerns the construction of a 3D computer graphics image of the human maxillofacial skeleton and soft tissue with cephalometric radiography. He is also interested in developing 3D computer graphics simulation of the surgical treatment of jaw deformities.

AKIHIKO NAKASIMA has been a Professor and Chairman of the Department of Orthodontics, Faculty of Dentistry, Kyushu University since 1992. In 1976 he was awarded a PhD in Dental Science. He is a certified orthodontist of the Japanese Orthodontic Society, a member of the Board of Internal Affairs of the Japanese Orthodontic Society, and a Councilor of the Japanese Society for Jaw Deformity. He is also a member of the screening committee for research of the Japanese Ministry of Education. His interests include the diagnosis and treatment of jaw deformities, and morphogenetic analysis.