

TECHNO BYTES

Methods for managing 3-dimensional volumes

Asem Awaad Othman,a Amr Ragab El-Beialy,b Sahar Ali Fawzy,c Ahmed Hisham Kandil,c

Ahmed Mohammed El-Bialy,c and Yehya Ahmed Mostafad

Cairo, Egypt

The introduction of 3-dimensional (3D) volumetric technology and the massive amount of information that can be obtained from it compels the introduction of new methods and new technology for orthodontic diagnosis and treatment planning. In this article, methods and tools are introduced for managing 3D images of orthodontic patients. These tools enable the creation of a virtual model and automatic localization of landmarks on the 3D volumes. They allow the user to isolate a targeted region such as the mandible or the maxilla, manipulate it, and then reattach it to the 3D model. For an integrated protocol, these procedures are followed by registration of the 3D volumes to evaluate the amount of work accomplished. This paves the way for the prospective treatment analysis approach, analysis of the end result, subtraction analysis, and treatment analysis. (Am J Orthod Dentofacial Orthop 2010;137:266-73)

With the introduction of 3-dimensional (3D) technology, producing a patient's 3D virtual image became an achievable reality. However, to make the 3D volume versatile and usable, we need diagnostic tools that enable us to detect defective skeletal and dental areas. We also need tools that allow us to detach, manipulate, and adjust various parts of the dentofacial skeleton, and then reattach them. In this article, a technique for these tasks is introduced.

MATERIAL AND METHODS

Acquisition of the patient's 3D virtual model is the foundation. Computed tomography (CT) slices of the patient's head (soft and hard tissues) are obtained in digital imaging and communications in medicine (DICOM) format. These slices are then compiled to create a 3D model. By using a ray-casting volume-rendering technique, a digital 3D replica is built.1 This volume-rendering method provides more information about the anatomic details of the dentofacial skeleton for better visualization of the 3D model of the head (Fig 1). Surface-rendering methods are available for additional manipulation.

For automatic separation of the mandible from the skull, a consistent interocclusal clearance is essential throughout the arch length to facilitate the training of the artificial intelligence. Hence, an important prerequisite of the imaging procedure is to acquire the CT images with the teeth in disclusion. This dental separation should be within the interocclusal freeway space, where the condyles experience pure rotation around the condylar hinge axis. In this position, the condyles are functionally centered in the glenoid fossae (centric relation); hence, the facial pattern of the patient is preserved, and a reproducible posture is obtained, in addition to elimination of functional occlusal shifts due to premature occlusal interferences. Subsequent localization of the condylar hinge axis allows for mandibular rerotation into maximum interdigitation when necessary.

From Cairo University, Cairo, Egypt.
aTeaching assistant, Systems and Biomedical Engineering Department, Faculty of Engineering.
bAssistant lecturer, Department of Orthodontics and Dentofacial Orthopedics, Faculty of Oral and Dental Medicine.
cAssistant professor, Systems and Biomedical Engineering Department, Faculty of Engineering.
dProfessor and chair, Department of Orthodontics and Dentofacial Orthopedics, Faculty of Oral and Dental Medicine.
The authors report no commercial, proprietary, or financial interest in the products or companies described in this article.
Reprint requests to: Yehya Ahmed Mostafa, Department of Orthodontics and Dentofacial Orthopedics, Faculty of Oral and Dental Medicine, Cairo University, 52 Arab League St, Mohandesseen, Giza, Egypt; e-mail, [email protected].
Submitted, June 2008; revised and accepted, January 2009.
0889-5406/$36.00
Copyright © 2010 by the American Association of Orthodontists.
doi:10.1016/j.ajodo.2009.01.024
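The ray-casting volume rendering used to build the 3D replica (Fig 1) can be sketched minimally as front-to-back compositing of parallel rays. This is an illustrative sketch, not the authors' implementation; the function name and the `opacity_scale` parameter are assumptions, and intensity is reused as both color and opacity for simplicity:

```python
import numpy as np

def render_volume(volume, opacity_scale=0.05):
    """Front-to-back compositing of one parallel ray per (y, x) pixel,
    marching along the z-axis (axis 0) of the volume."""
    v = (volume - volume.min()) / max(np.ptp(volume), 1e-9)  # normalize to [0, 1]
    alpha = v * opacity_scale             # per-sample opacity
    image = np.zeros(volume.shape[1:])    # accumulated brightness per ray
    transmit = np.ones(volume.shape[1:])  # remaining transmittance per ray
    for z in range(volume.shape[0]):      # step every ray forward together
        image += transmit * alpha[z] * v[z]
        transmit *= 1.0 - alpha[z]
    return image

# A bright block inside an empty volume should appear in the rendering.
vol = np.zeros((32, 32, 32))
vol[10:20, 12:24, 12:24] = 1.0
img = render_volume(vol)
```

A production renderer would add a transfer function, trilinear sampling, and arbitrary ray directions; the accumulation loop above is the core idea.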

On the contrary, capturing the CT images without interocclusal separation produces slices with blended maxillary and mandibular teeth. This results in loss of anatomic details and, in turn, jeopardizes the accuracy of the dental measurements. Through occlusal separation, occlusal details are visible, and the maxillary and mandibular separation is technically precise.

For occlusal separation, the patient wears a custom-made mandibular splint during radiographic image acquisition. The splint is fabricated with a vacuum-pressing machine. A 2-mm hard plastic sheet is custom made on the patient's mandibular model. The splint is then tried in the patient's mouth. The patient is instructed to occlude on articulating paper to mark the points of initial contact. Marks on the splint are accurately and mildly ground to guide the maxillary teeth into their shallow grooves and avoid eccentric occlusions. Such manipulations of the splint will reduce its thickness to 1 mm. Hence, a minimal posterior dental separation is obtainable with the mandible maintained in centric relation.

Fig 1. Three-dimensional head volume.

Fig 2. Automatic 2D cephalometric landmark identification.

Fig 3. A, Lateral cephalogram; B, posteroanterior cephalogram; C, automatic 2D landmark identification of the lateral cephalogram; D, automatic 2D landmark identification of the posteroanterior cephalogram.

Fig 4. Three-dimensional representation of cephalometric landmarks.

Fig 5. Automatic 3D cephalometry.

The algorithm used for automatic mandibular separation stems from a previous approach for fully automatic identification of 2-dimensional (2D) cephalometric landmarks.2-4 This approach was developed by unifying an active appearance model and simulated annealing for automatic localization of cephalometric landmarks on 2D lateral radiographic images (Fig 2).5-8 The results showed that the active appearance model followed by simulated annealing can give more accurate results than an active shape model. This technique was extended to obtain landmarks for both lateral and frontal images.
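The annealing component of that landmark-localization scheme can be sketched generically. The cost function, neighbor function, and cooling schedule below are illustrative assumptions, not the published ones; the point is only how accepting occasionally worse candidates lets the search escape local minima:

```python
import math
import random

def anneal(cost, start, neighbor, t0=1.0, cooling=0.95, steps=200, seed=0):
    """Generic simulated annealing: accept a worse candidate with
    probability exp(-delta / T), with T shrinking each step, as when
    refining a model-based landmark estimate against an image-match cost."""
    rng = random.Random(seed)
    x, fx = start, cost(start)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = neighbor(x, rng)
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc          # move, even uphill sometimes
            if fx < fbest:
                best, fbest = x, fx   # remember the best state seen
        t *= cooling
    return best, fbest

# Toy 2D "landmark" cost with its minimum at (3, 4); perturb by up to 0.5.
cost = lambda p: (p[0] - 3.0) ** 2 + (p[1] - 4.0) ** 2
neighbor = lambda p, rng: (p[0] + rng.uniform(-0.5, 0.5),
                           p[1] + rng.uniform(-0.5, 0.5))
best, fbest = anneal(cost, (0.0, 0.0), neighbor)
```

In the article's setting, the cost would measure how well the appearance model matches the radiograph at a candidate landmark position.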

Our approach depends on determining the landmarks on the 2D model and then processing them to generate their corresponding landmarks on the 3D model.9 This procedure starts by building digitally reconstructed radiographs from the patient's 3D images: the lateral and posteroanterior cephalograms are generated from the 3D model of the patient's head (Fig 3, A and B). These cephalograms are fed into the computer for automatic 2D landmark identification with the previously mentioned technique (Fig 3, C and D). Identical automatically detected landmarks on the lateral and frontal cephalograms are processed to generate their equivalent lines on the 3D model of the patient's head (Fig 4). Because point landmarks on the 2D images are represented as lines on the 3D model, the generated lines will be perpendicular to the y-z and x-z planes, respectively. Hence, the intersection of the two generated lines is a point in the 3D image. Thus each landmark generated from equivalent frontal and lateral landmarks is represented as a corresponding landmark on the 3D skull model (Fig 5).
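With the projections axis-aligned as described, the line-intersection step reduces to combining coordinates from the two views. The coordinate convention below is an illustrative assumption: the lateral cephalogram supplies (y, z), the frontal supplies (x, z), and the two z estimates are averaged to absorb small detection error:

```python
def landmark_3d(lateral_pt, frontal_pt):
    """The lateral view fixes a landmark's (y, z); the frontal view fixes
    its (x, z). Each 2D point corresponds to a projection line in 3D, and
    the intersection of the two lines is the 3D landmark. The two z
    estimates should agree; averaging absorbs small detection error."""
    y, z_lateral = lateral_pt
    x, z_frontal = frontal_pt
    return (x, y, (z_lateral + z_frontal) / 2.0)

# Hypothetical landmark: (y=40, z=12) on the lateral view, (x=64, z=14) frontal.
point = landmark_3d((40.0, 12.0), (64.0, 14.0))  # → (64.0, 40.0, 13.0)
```

A large disagreement between the two z values would flag a mismatched or misdetected landmark pair.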

Using this technique, the mandibular landmarks from 3D cephalometry were automatically selected. These represent the initial data to accomplish the mandibular separation. In addition, a faster operation is guaranteed by automatically determining an imaginary bounding box for the mandible from the 3D cephalometric landmarks. Hence, a fully automated approach is developed for mandibular separation (Fig 6).9
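The imaginary bounding box can be sketched as an axis-aligned crop around the detected landmarks. The function name and the margin parameter are illustrative assumptions; restricting later processing to this subvolume is what speeds the separation up:

```python
import numpy as np

def bounding_box_crop(volume, landmarks, margin=5):
    """Crop a volume to the axis-aligned bounding box of a set of 3D
    landmark coordinates, padded by a voxel margin and clipped to the
    volume extent."""
    pts = np.asarray(landmarks, dtype=int)
    lo = np.maximum(pts.min(axis=0) - margin, 0)                 # lower corner
    hi = np.minimum(pts.max(axis=0) + margin + 1, volume.shape)  # upper corner
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

# Two hypothetical mandibular landmarks inside a 64^3 volume.
vol = np.zeros((64, 64, 64))
roi = bounding_box_crop(vol, [(10, 20, 30), (40, 25, 35)], margin=5)
```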

Likewise, the fully automatic parting of the maxilla with its attached dental structures begins with the training of the artificial intelligence. In this approach, the landmarks allocated on the 3D image, together with the boundary points of the maxilla, are used to separate the maxilla in the 3D model.9 The boundary points of the maxilla are learned from the lateral cephalometric image generated from the 3D image. The generated maxillary contour, fed into the computer, makes it possible to trace the maxillary border with subsequent maxillary separation (Fig 7).

Both techniques for automatic separation of the mandible and the maxilla can be used for symmetric and asymmetric patients, since they are boundary oriented.

Separation of the dentition from the adjoined skeletal base facilitates dental manipulation. The difficulty of extracting the teeth from CT images is due to their similarity in intensity to the surrounding bone.

Fig 6. Automatic mandibular separation.

Fig 7. Automatic maxillary separation.

The technique depends on the fact that the dental enamel can be allocated easily because of its maximum intensity in the image; it is automatically extracted by the threshold segmentation technique.9,10 The crowns are used to complete the segmentation of the roots by checking the connectivity of each pixel in the root data with the crown pixels in the tooth data. The segmentation technique depends on manually assigning a centroid for each tooth. Using K-means clustering (a method for clustering objects into an arbitrary number of classes [K], where classes are defined by their means) and connected-component algorithms, tooth boundaries are identified. By using region-growing techniques, the whole dentition can be separated from the rest of the skeletal base (Fig 8).1

Fig 8. Separated dentition.

Fig 9. Individual tooth separation and coloring.

Subsequent separation of individual teeth and the ability to color each tooth separately facilitate implementation and simulation of the various orthodontic applications (Fig 9). Further maneuvering of the dental units separately, simulation of the extraction procedure, and virtual aligning of the teeth are the beginning of the virtual digital computer-based 3D diagnostic setup (Fig 10).
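The crown-to-root connectivity check used in the segmentation above behaves like seeded region growing: a high-intensity enamel voxel seeds a search that collects connected voxels down into the root. This toy sketch (pure-Python breadth-first growth with assumed intensity thresholds, not the published pipeline) illustrates the idea:

```python
from collections import deque
import numpy as np

def grow_tooth(volume, seed, low_thresh):
    """Seeded region growing: starting from a crown voxel (found by
    thresholding the bright enamel), collect all 6-connected voxels whose
    intensity stays above low_thresh, so the root is recovered through its
    connectivity to the crown."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and not mask[n] and volume[n] >= low_thresh:
                mask[n] = True
                queue.append(n)
    return mask

# Toy volume: a "tooth" of intensity 0.6 with a brighter enamel cap on top.
vol = np.zeros((20, 10, 10))
vol[5:15, 4:7, 4:7] = 0.6                   # dentin/root
vol[5:7, 4:7, 4:7] = 1.0                    # enamel crown
seed = tuple(np.argwhere(vol >= 0.9)[0])    # crown voxel via enamel threshold
tooth = grow_tooth(vol, seed, low_thresh=0.5)
```

On real CT data the low threshold cannot by itself separate teeth from bone; it is the seeding from unambiguous enamel that keeps the growth inside the tooth.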

Fig 10. Virtual simulation of the diagnostic setup.

Fig 11. Knife-like tool.

For virtual orthognathic surgical planning, a cutting tool is constructed. This knife-like tool allows tailored cutting in the 3D volume (Fig 11).10 This procedure permits the disconnection of any skeletal unit from the rest of the skull along the desired customized osteotomy lines (Fig 12). Subsequent spatial handling and manipulation of the separated 3D skeletal unit are feasible. Reattachment of the skeletal unit to the rest of the skull in the desired spatial position, simulating the orthognathic surgical protocol, is then possible. This regimen could be applied to simulate many orthognathic surgical procedures.
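In its simplest form, such a cut classifies voxels by their signed distance to an osteotomy plane. The sketch below handles only a planar cut (the actual tool allows tailored, free-form cuts) and splits a volume into the two sides of a plane given by a point and a normal; all names are illustrative:

```python
import numpy as np

def cut_volume(volume, point, normal):
    """Split a volume along a plane through `point` with normal `normal`:
    voxels with non-negative signed distance go to one piece, the rest to
    the other, so each piece can be repositioned independently."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in volume.shape],
                                indexing="ij"), axis=-1)      # voxel coordinates
    side = (grid - np.asarray(point)) @ np.asarray(normal, dtype=float)
    keep = side >= 0
    return np.where(keep, volume, 0), np.where(~keep, volume, 0)

# Cut a uniform 8^3 volume in half across axis 0.
vol = np.ones((8, 8, 8))
upper, lower = cut_volume(vol, point=(4, 0, 0), normal=(1, 0, 0))
```

Chaining several such cuts approximates a customized osteotomy line.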

The superimposition of the 3D volumes is called registration. Because separation and manipulation of the maxilla, mandible, and dentition are possible with the above-mentioned protocols, simulation of the orthodontic treatment is completed on the separated areas. Reattachment of the previously separated and manipulated skeletal and dental regions permits visualization of the final outcome of the treatment.11 Registration of the corrected 3D volume on the original unmodified 3D volume of the patient is then done. The 3D skull volumes will automatically fit on most skull regions that have not been manipulated (occipital bone, frontal bone, and anterior cranial base), irrespective of the dental misfit in orthodontic patients or the skeletal and dental misfit in orthognathic patients.12 An attempt to use the unmanipulated regions of the skull in the registration procedure was made, comparing various registration protocols. The principal-curvatures technique yielded the best automatic registration (Fig 13).12 The difference between the modified and unmodified virtual models shows the amount of work the orthodontist must complete (subtraction analysis).11 Moreover, the capability of superimposing the 3D volumes before and after treatment is used to assess the treatment results.
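Once the corrected and original volumes are registered into the same coordinate frame, subtraction analysis is a voxelwise difference. A minimal sketch (assuming registration has already been done; the function name and tolerance are illustrative):

```python
import numpy as np

def subtraction_analysis(original, modified, tol=1e-6):
    """Voxelwise subtraction of two registered volumes: unmanipulated
    regions (cranial base, frontal and occipital bones) cancel out, and
    the remaining nonzero voxels mark the planned changes."""
    diff = modified.astype(float) - original.astype(float)
    changed = np.abs(diff) > tol
    return diff, changed.sum() / changed.size  # difference map, changed fraction

# Toy case: the "treatment plan" moves material into a small 2x2x2 region.
orig = np.zeros((16, 16, 16))
mod = orig.copy()
mod[2:4, 2:4, 2:4] = 1.0
diff, frac = subtraction_analysis(orig, mod)
```

The signed difference map distinguishes added from removed material, which is what the orthodontist reads as work remaining.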

DISCUSSION

The introduction of virtual 3D volumes and the massive amount of information that can be extracted from them impose the necessity for new vision and new technology for orthodontic diagnosis. Because treatment of the 3D virtual images of orthodontic patients is possible, introducing new 3D handling tools is timely.

We described many methods that have been developed; the ultimate aim is to create an integrated process for virtual orthodontic treatment. The process enables the creation of a virtual 3D model and automatic 3D landmark identification, thus identifying the problems in each area. In addition, this diversity of tools enables, through various algorithms, the separation of the maxillary, mandibular, and dental units. Accordingly, the defective regions are extracted from the 3D volume. Correction of the defect of the skeletal or dental units is executed until the best orthodontic or orthognathic outcome is achieved. Reattachment of the skeletal unit to the virtual 3D model of the patient in the desired position rebuilds the volume. This facilitates execution of the treatment plan.

Fig 12. A, Separation and coloring of the mandibular body; B, reattachment of the mandible in the modified position.

Fig 13. Registration procedure.

The treatment of the patient's virtual 3D image allows building up a view of the final treatment result rather than pondering the outcome. Hence, the treatment decision is based on correcting the defect within the limits of the surrounding environment (alveolar bone, soft tissue, and muscular boundaries). Therefore, the orthodontist's expectations can be virtual reality before treatment. The time-saving benefit of this procedure is priceless.

The capability to visualize the end result beforehand in a 3D format paves the way for a focused approach to use current orthodontic tools to achieve the planned result. With the end result in mind, communication between the orthodontist and the patient, especially before orthodontic treatment, is much simpler, not to mention the gain in patient cooperation.

Previous attempts have been made to use 3D CT images to simulate orthognathic surgical procedures. However, the consensus of opinion is that the CT image, with its inherent limited spatial resolution and partial-volume averaging effects, produces distorted occlusal surface details, occlusal configuration, and intercuspation. This limitation motivated the introduction of 3D surface scanning of orthodontic models with a slit laser surface-scanning system to obtain occlusal surface details.13 This procedure resulted in separate skull and dental models and started a chain of technical difficulties. A problem of image fusion of different 3D modalities emerged. An accurate but complicated approach has been applied for intermodality registration to register the virtual dentition model to the virtual skull model. However, this approach is confined to mandibular jaw surgeries, with the maxilla used as the reference for registration. An untested modification of the technique is advocated when maxillary surgery is needed, with no fixed reference for orientation of the fiducial markers.13 Even though this protocol is efficient and essential for research work, it is inapplicable for routine clinical implementation.

As a solution to this limitation, we offer a simpler approach with a minimal intermaxillary separation of 1 mm. We believe that this modification does not disrupt facial esthetics or condylar position. Moreover, maximum intercuspation can be achieved by localizing the condylar hinge axis and subsequently rerotating the mandible into the intercuspal position, a research point beyond the scope of this article, not to mention the elimination of functional occlusal shifts and the ease of automatic mandibular separation. However, limitations of our approach are the need for massive training of the artificial intelligence to refine the results and the fewer occlusal details acquired in comparison with laser scanning.
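Geometrically, the hinge-axis rerotation is a rigid rotation of the mandibular voxels or landmarks about the localized axis. A hedged sketch using the Rodrigues rotation formula (the function and its inputs are illustrative; the article's axis-localization step is not shown):

```python
import numpy as np

def rotate_about_axis(points, axis_point, axis_dir, angle_rad):
    """Rodrigues rotation: rotate 3D points by angle_rad about an axis
    through axis_point with direction axis_dir (e.g., the condylar
    hinge axis)."""
    k = np.asarray(axis_dir, dtype=float)
    k /= np.linalg.norm(k)                       # unit axis direction
    p = np.asarray(points, dtype=float) - axis_point
    cos, sin = np.cos(angle_rad), np.sin(angle_rad)
    # v_rot = v cos(t) + (k x v) sin(t) + k (k . v)(1 - cos(t))
    rotated = p * cos + np.cross(k, p) * sin + np.outer(p @ k, k) * (1 - cos)
    return rotated + axis_point

# Rotate one point 90 degrees about the z-axis through the origin.
pt = rotate_about_axis([[1.0, 0.0, 0.0]], (0, 0, 0), (0, 0, 1), np.pi / 2)
```

In practice the angle would be chosen so that the rerotated mandible reaches maximum interdigitation with the maxillary occlusal surface.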

In an analogous registration approach, 3D model superimposition was performed by Cevidanes et al14 to evaluate condylar position after 1-jaw and 2-jaw surgeries. The surface of the cranial base was used as the registration guide, since it is unaltered by these surgeries, unlike the maxilla and the mandible. Alterations in the 3D positions of the mandibular rami and condyles were measured. Color-map tools on the 3D display were used to facilitate eyeballing the differential magnitude and direction of mandibular displacement.

Similarly, Terajima et al15 used the 3D models of 10 normal Japanese women to establish 3D standard values of maxillofacial skeletal and facial soft-tissue morphology. A 3D spatial coordinate system was defined on the 3D CT image. They created a 3D analysis system based on linear measurements to compare preoperative and postoperative patient coordinates with the standard values. The superimposed images were registered by matching parts that were not altered by the surgery (supraorbital and forehead regions).

There is a significant benefit in sharing visual andquantitative 3D information from this simulation sys-tem among orthodontists and surgeons.

CONCLUSIONS

The proposed process involves detaching, manipulating, and reattaching targeted regions of a 3D image. The registration techniques show the changes needed to arrive at the best outcome, in addition to allowing evaluation of the treatment outcome. Achieving this will enable optimal use of the prospective treatment analysis approach, the end-result analysis, subtraction analysis, and treatment analysis.11

We thank Earlene Gentry, technical editor and freelance writer (Cairo Foreign Press Association) and former editor of the Fulbright Chronicle (Egypt), for her assistance in revising and editing this manuscript.

REFERENCES

1. Hansen CD, Johnson CR. The visualization handbook. San Diego: Academic Press; 2005.
2. Saad AA, El-Bialy AM, Kandil AH, Ahmed AS. Active appearance model and simulated annealing for automatic cephalometric analysis. Proceedings of the 2nd Cairo International Biomedical Engineering Conference; December 28-29, 2004; Cairo, Egypt. CD-ROM.
3. Saad AA. Developing a complete automatic 2D and 3D cephalometric analysis system [thesis]. Cairo, Egypt: Cairo University; 2005.
4. Saad AA, El-Bialy AM, Kandil AH, Ahmed AS. Automatic cephalometric analysis using active appearance model and simulated annealing. Proceedings of the International Conference on Graphics, Vision and Image Processing; December 19-21, 2005; Cairo, Egypt. ICGST-AMC, Internet Journal.
5. Cootes TF, Edwards GJ, Taylor CJ. Active appearance models. Proceedings of the 5th European Conference on Computer Vision (ECCV '98); June 2-6, 1998; Freiburg, Germany. Vol 2. p. 484-98.
6. Cootes TF, Taylor CJ. Diffeomorphic statistical shape models. Proceedings of the British Machine Vision Conference (BMVC); September 7-9, 2004; Kingston University, London.
7. Cootes TF, Taylor CJ. Statistical models of appearance for medical image analysis and computer vision. Proceedings of SPIE Medical Imaging; February 18-20, 2001; San Diego, Calif.
8. Stegmann MB. Active appearance models: theory, extensions and cases [master's thesis]. Informatics and Mathematical Modelling, Technical University of Denmark; 2000.
9. Othman AA. A novel approach for developing a complete automatic 3D cephalometric analysis system and a mandible, maxilla and dentition separation system [thesis]. Cairo, Egypt: Systems and Biomedical Engineering Department, Faculty of Engineering, Cairo University; 2007.
10. Omran LN. 3D computerized system for simulating dental treatment [thesis]. Cairo, Egypt: Cairo University; 2007.
11. Mostafa YA. Before we continue: une pause. Proceedings of the Congress of the WFO; 2005; Paris, France. (Suppl):27-30.
12. El-Bakry OM, El-Bialy AM, Kandil AH, Fawzy SA, Mostafa YA, El-Beialy AR. Registration of 3D pre-post operative skulls. Proceedings of the 3rd Cairo International Biomedical Engineering Conference; December 21-24, 2006. CD-ROM.
13. Uechi J, Okayama M, Shibata T, Muguruma T, Hayashi K, Endo K, et al. A novel approach for the 3-dimensional simulation of orthognathic surgery by using a multimodal image-fusion technique. Am J Orthod Dentofacial Orthop 2006;130:786-98.
14. Cevidanes LH, Bailey LJ, Tucker SF, Styner MA, Mol A, Phillips CL, et al. Three-dimensional cone-beam computed tomography for assessment of mandibular changes after orthognathic surgery. Am J Orthod Dentofacial Orthop 2007;131:44-50.
15. Terajima M, Yanagita N, Ozeki K, Hoshino Y, Mori N, Goto TK, et al. Three-dimensional analysis system for orthognathic surgery patients with jaw deformities. Am J Orthod Dentofacial Orthop 2008;134:100-11.