
Medical Visualization with ITK
How to integrate the Insight Toolkit

into Visualization Applications

Lydia Ng, Josh Cates,

Yarden Livnat, Luis Ibanez,

and the Insight Consortium

August 18, 2003

http://www.itk.org
Email: [email protected]


The purpose of computing is Insight, not numbers.

Richard Hamming


CONTENTS

1 Welcome
   1.1 Organization
   1.2 How to Learn ITK
   1.3 Downloading ITK
      1.3.1 Downloading Packaged Releases
      1.3.2 Downloading from CVS
      1.3.3 Join the Mailing List
      1.3.4 Directory Structure
      1.3.5 Documentation
      1.3.6 Data
      1.3.7 Additional Resources
   1.4 A Brief History of ITK
   1.5 Software Process
      1.5.1 CVS Source Code Repository
      1.5.2 DART Regression Testing System
      1.5.3 Working The Process
      1.5.4 The Effectiveness of the Process
   1.6 Installation
      1.6.1 Configuring ITK
         Preparing CMake
         Configuring ITK
      1.6.2 Getting Started With ITK
         Hello World !

2 Segmentation
   2.1 Introduction
   2.2 Region Growing
      2.2.1 Confidence Connected
   2.3 Watershed Segmentation
      2.3.1 Using the Insight Watershed Filter
   2.4 Level-Set Methods
      2.4.1 Threshold Level Set Segmentation
      2.4.2 Geodesic Active Contours Segmentation

3 Registration
   3.1 Introduction
   3.2 Registration Framework
   3.3 Hello World Registration
   3.4 Monitoring Registration
   3.5 Transforms
      3.5.1 Transform General Properties
      3.5.2 Identity Transform
      3.5.3 Translation Transform
      3.5.4 Scale Transform
      3.5.5 Euler2DTransform
      3.5.6 CenteredRigid2DTransform
      3.5.7 Similarity2DTransform
      3.5.8 QuaternionRigidTransform
      3.5.9 VersorTransform
      3.5.10 VersorRigid3DTransform
      3.5.11 AffineTransform
   3.6 Interpolators
   3.7 Metric
      3.7.1 Mean Squares Metric
      3.7.2 Normalized Correlation Metric
      3.7.3 Mean Reciprocal Square Differences
      3.7.4 Mutual Information Metric
         Parzen Windowing
         Viola and Wells Implementation
         Mattes et al. Implementation
   3.8 Optimizer
   3.9 Medical Imaging Examples
      3.9.1 Multi-modality Multi-resolution Example
      3.9.2 Deformable Registration Example

4 Integrating ITK with GUI Toolkits
   4.1 FLTK
      4.1.1 Installing the software
      4.1.2 Configuring with CMake
      4.1.3 Writing a simple example
   4.2 Qt
      4.2.1 Installing the Software
      4.2.2 Configuring with CMake
      4.2.3 Writing a simple example

5 Case Study I: Integrating ITK with VolView
   5.1 Overview
   5.2 VolView Plugins Use Cases
   5.3 Plugin Data Flow
   5.4 Plugin Life Cycle
   5.5 Writing a Plugin
      5.5.1 Define the plugin name
      5.5.2 The initialization function
      5.5.3 The ProcessData function
      5.5.4 Refreshing the GUI

6 ITK Integration with SCIRun
   6.1 Introduction
   6.2 Aims
   6.3 SCIRun
   6.4 Approach
   6.5 The XML description
   6.6 Wrapping ITK filters in SCIRun
   6.7 Example
      6.7.1 itk ReflectImageFilter.xml
      6.7.2 sci ReflectImageFilter.xml
      6.7.3 Configuring SCIRun
      6.7.4 Adding a Specific GUI
   6.8 Conclusions


LIST OF FIGURES

1.1 Dart Quality Dashboard
1.2 CMake user interface
2.1 ConnectedThreshold segmentation results
2.2 ConfidenceConnected segmentation results
2.3 Watershed Catchment Basins
2.4 Watersheds Hierarchy of Regions
2.5 Watershed segmentation output
2.6 Zero Set Concept
2.7 Grid position of the embedded level-set surface
2.8 ThresholdSegmentationLevelSetImageFilter collaboration diagram
2.9 Propagation term for threshold-based level-set segmentation
2.10 ThresholdSegmentationLevelSet segmentations
2.11 GeodesicActiveContourLevelSetImageFilter collaboration diagram
2.12 GeodesicActiveContourLevelSetImageFilter intermediate output
2.13 GeodesicActiveContourImageFilter segmentations
3.1 Registration Framework Components
3.2 Fixed and Moving images in registration framework
3.3 Pipeline structure of the registration example
3.4 HelloWorld registration output images
3.5 Trace of translations and metrics during registration
3.6 Command/Observer and the Registration Framework
3.7 Parzen windowing in Mutual Information
3.8 3D CT image of the head
3.9 3D MR-T1 image of the head
3.10 Pyramid of downsampled versions of a CT image of the head
3.11 Pyramid of downsampled versions of a MR-T1 image of the head
3.12 Multi-modality registration initialization
3.13 Multi-modality registration results
3.14 Joint intensity histogram before and after registration
3.15 3D contrast-enhanced breast MRI images
3.16 Contrast-enhanced breast MRI deformable registration results
4.1 Command-Observer configuration
4.2 Command-Event-Object communication
4.3 FLTK ProgressBar and ITK Command
4.4 Qt-ITK Adaptor
4.5 Qt-ITK Signal Adaptor
4.6 Qt-ITK Slot Adaptor
5.1 VolView plugin data flow
5.2 VolView plugin life cycle
5.3 VolView screen shot
6.1 The SCIRun problem solving environment
6.2 SCIRun network
6.3 SCIRun XML
6.4 A SCIRun network with an ITK ReflectImageFilter and a default GUI
6.5 An updated version of the ReflectImage GUI


LIST OF TABLES

2.1 ConnectedThreshold example parameters
2.2 ThresholdSegmentationLevelSet segmentation parameters
2.3 GeodesicActiveContour segmentation example parameters
3.1 Multi-resolution registration parameters


CHAPTER

ONE

Welcome

Welcome to the course Medical Visualization with ITK. The objective of this course is to introduce you to the techniques used for integrating the segmentation and registration methods of the Insight Toolkit into your visualization applications.

ITK is an open-source, object-oriented software system for image processing, segmentation, and registration. Although it is large and complex, ITK is designed to be easy to use once you learn its basic object-oriented and implementation methodology. The purpose of this course is to help you become familiar with important algorithms and data representations found throughout the toolkit.

ITK is a large system. As a result, it is not possible to completely document all ITK objects and their methods in this text. Instead, we will focus on the most commonly used segmentation and registration methods, and on the issues involved in integrating ITK with visualization techniques, with the aim of building a complete medical image application.

The Insight Toolkit is an open-source software system, which means that the community of ITK users and developers has a great impact on the evolution of the software. Users and developers can make significant contributions to ITK by providing bug reports, bug fixes, tests, new classes, and other feedback. Please feel free to contribute your ideas to the community (the ITK mailing list is the preferred method).

1.1 Organization

This document is divided into six chapters. Following this introductory chapter, Chapter 2 is a general introduction to segmentation methods in ITK, Chapter 3 describes the framework for performing image registration, Chapter 4 describes the mechanisms for using ITK along with GUI libraries, Chapter 5 presents a case study of the integration of ITK segmentation methods into VolView, a visualization application developed by Kitware, Inc.1, and Chapter 6 presents a case study of integrating ITK into SCIRun, a visualization application developed at the SCI Institute at the University of Utah.

1.2 How to Learn ITK

The key to learning how to use ITK is to become familiar with its palette of objects and the ways of combining them. If you are a new Insight Toolkit user, begin by installing the software. Then, download the ITK Software Guide from www.itk.org/ItkSoftwareGuide.pdf. The Software Guide will walk you through the essential elements of the toolkit and uses practical examples to introduce its main functionality. All the coding examples illustrated in the Software Guide are available as part of the ITK source tree.

1. A free version of VolView is now available as part of a project sponsored by the National Library of Medicine (NLM).


1.3 Downloading ITK

ITK can be downloaded without cost from the following web site:

http://www.itk.org/HTML/Download.php

In order to track the kinds of applications for which ITK is being used, you will be asked to complete a form prior to downloading the software. The information you provide in this form will help developers get a better idea of the interests and skills of the toolkit users. It also assists in future funding requests to sponsoring agencies.

Once you fill out this form you will have access to the download page, where two options for obtaining the software are offered. (This page can be bookmarked to facilitate subsequent visits to the download site without having to complete any form again.) You can get the tarball of a stable release or you can get the development version through CVS. The release version is stable and dependable but may lack the latest features of the toolkit. The CVS version will have the latest additions but is inherently unstable and may contain components with work in progress. The following sections describe the details of each of these two alternatives.

1.3.1 Downloading Packaged Releases

Please read the GettingStarted.txt2 document first. It will give you an overview of the download and installation processes. Then choose the tarball that best fits your system. The options are .zip and .tgz files. The first type is better suited for MS-Windows while the second is the preferred format for UNIX systems.

Once you unzip or untar the file, a directory called Insight will be created on your disk and you will be ready to start the configuration process described in Section 1.6.1.

1.3.2 Downloading from CVS

The Concurrent Versions System (CVS) is a tool for software version control. Generally only developers should be using CVS, so here we assume that you know what CVS is and how to use it. For more information about CVS please see Section 1.5.1. (Note: please make sure that you access the software via CVS only when the ITK Quality Dashboard indicates that the code is stable. Learn more about the Quality Dashboard in Section 1.5.2.)

Access ITK via CVS using the following commands (under UNIX and Cygwin):

cvs -d :pserver:[email protected]:/cvsroot/Insight login
(respond with password "insight")

cvs -d :pserver:[email protected]:/cvsroot/Insight co Insight

This will trigger the download of the software into a directory named Insight. Any time you want to update your version, it is enough to change into this Insight directory and type:

cvs update -d -P

Once you obtain the software you are ready to configure and compile it (see Section 1.6.1). First, however, we recommend that you join the mailing list and read the following sections describing the organization of the software.

1.3.3 Join the Mailing List

It is strongly recommended that you join the users mailing list. This is one of the primary resources for guidance and help regarding the use of the toolkit. You can subscribe to the users list on-line at

http://www.itk.org/HTML/MailingLists.htm

2. http://www.itk.org/HTML/GettingStarted.txt

The insight-users mailing list is also the best mechanism for expressing your opinions about the toolkit and for letting developers know about features that you find useful, desirable or even unnecessary. ITK developers are committed to creating a self-sustained open-source ITK community. Feedback from users is fundamental to achieving this goal.

1.3.4 Directory Structure

To begin your ITK odyssey, you will first need to know something about ITK's software organization and directory structure. Even if you are installing pre-compiled binaries, it is helpful to know enough to navigate through the code base to find examples, code, and documentation.

ITK is organized into several different modules, or CVS checkouts. If you are using an official release or CD release, you will see three important modules: the Insight, InsightDocuments and InsightApplications modules. The source code, examples and applications are found in the Insight module; documentation, tutorials, and material related to the design and marketing of ITK are found in InsightDocuments; and fairly complex applications using ITK (and other systems such as VTK, Qt, and FLTK) are available from InsightApplications. Usually you will work with the Insight module unless you are a developer, are teaching a course, or are looking at the details of various design documents. The InsightApplications module should only be downloaded and compiled once the Insight module is functioning properly.

The Insight module contains the following subdirectories:

• Insight/Auxiliary — code that interfaces packages to ITK.
• Insight/Code — the heart of the software; the location of the majority of the source code.
• Insight/Documentation — a compact subset of documentation to get ITK users started.
• Insight/Examples — a suite of simple, well-documented examples used by this Software Guide and to illustrate important ITK concepts.
• Insight/Testing — a large number of small programs used to test ITK. These examples tend to be minimally documented but may be useful to demonstrate various system concepts. These tests are used by DART to produce the ITK Quality Dashboard (see Section 1.5.2).
• Insight/Utilities — supporting software for the ITK source code. For example, DART and Doxygen support, as well as libraries such as png and zlib.
• Insight/Validation — a series of validation case studies including the source code used to produce the results.
• Insight/Wrapping — support for the CABLE wrapping tool. CABLE is used by ITK to build interfaces between the C++ library and various interpreted languages (currently Tcl is supported).

The source code directory structure—found in Insight/Code—is important to understand since other directory structures (such as the Testing and Wrapping directories) shadow the structure of the Insight/Code directory.

• Insight/Code/Common — core classes, macro definitions, typedefs, and other software constructs central to ITK.
• Insight/Code/Numerics — mathematical library and supporting classes. (Note: ITK's mathematical library is based on the VXL/VNL software package, http://vxl.sourceforge.net.)
• Insight/Code/BasicFilters — basic image processing filters.
• Insight/Code/IO — classes that support the reading and writing of data.
• Insight/Code/Algorithms — the location of most segmentation and registration algorithms.


• Insight/Code/SpatialObject — classes that represent and organize data using spatial relationships (e.g., the leg bone is connected to the hip bone, etc.)
• Insight/Code/Patented — any patented algorithms are placed here. Using this code in commercial applications requires a patent license.
• Insight/Code/Local — an empty directory used by developers and users to experiment with new code.

The InsightDocuments module contains the following subdirectories:

• InsightDocuments/CourseWare — material related to teaching ITK.
• InsightDocuments/Developer — historical documents covering the design and creation of ITK, including progress reports and design documents.
• InsightDocuments/Latex — LaTeX styles to produce this User's Guide as well as other documents.
• InsightDocuments/Marketing — marketing flyers and literature used to succinctly describe ITK.
• InsightDocuments/Papers — papers related to the many algorithms, data representations, and software tools used in ITK.
• InsightDocuments/SoftwareGuide — LaTeX files used to create this Software Guide. (Note that the code found in Insight/Examples is used in conjunction with these LaTeX files.)
• InsightDocuments/Validation — validation case studies using ITK.
• InsightDocuments/Web — the source HTML and other material used to produce the Web pages found at http://www.itk.org.

Similar to the Insight module, access to the InsightDocuments module is also available via CVS using the following commands (under UNIX and Cygwin):

cvs -d :pserver:[email protected]:/cvsroot/Insight co InsightDocuments

The InsightApplications module contains large, relatively complex examples of ITK usage. See the web pages at http://www.itk.org/HTML/Applications.htm for a description. Some of these applications require GUI toolkits such as Qt and FLTK and use other packages such as VTK (The Visualization Toolkit, http://www.vtk.org). Do not attempt to compile and build this module until you have successfully built the core Insight module.

Similar to the Insight and InsightDocuments modules, access to the InsightApplications module is also available via CVS using the following commands (under UNIX and Cygwin):

cvs -d :pserver:[email protected]:/cvsroot/Insight co InsightApplications

1.3.5 Documentation

Besides the Software Guide, there are other documentation resources that you should be aware of.

Doxygen Documentation. The Doxygen documentation is an essential resource when working with ITK. These extensive Web pages describe in detail every class and method in the system. The documentation also contains inheritance and collaboration diagrams, listings of event invocations, and data members. The documentation is heavily hyper-linked to other classes and to the source code. The Doxygen documentation is available on-line at http://www.itk.org. Make sure that you have the right documentation for your version of the source code.

Header Files. Each ITK class is implemented with a .h and .cxx/.txx file (.txx files for templated classes). All methods found in the .h header files are documented and provide a quick way to find documentation for a particular method. (Indeed, Doxygen uses the header documentation to produce its output.)


1.3.6 Data

The Insight Toolkit was designed to support the Visible Human Project and its associated data. This data is available from the National Library of Medicine at http://www.nlm.nih.gov/research/visible/visible_human.html.

Another source of data is the ITK Web site at http://www.itk.org/HTML/Data.htm, or via FTP from ftp://public.kitware.com/pub/itk/Data/.

1.3.7 Additional Resources

For more information about the Insight Toolkit we recommend the following resources.

• The Web pages at http://www.itk.org contain pointers to many other resources such as on-line manual pages, a FAQ, and an archive of the insight-users mailing list (see below). In particular, the Doxygen manual pages are absolutely wonderful. They are available online at http://www.itk.org/Doxygen/html/index.html.
• The insight-users mailing list allows users and developers to ask questions and receive answers; post updates, bug fixes, and improvements; and offer suggestions for improving the system. There are instructions at http://www.itk.org/mailman/listinfo/insight-users describing how to join this list.
• Research partnerships with members of the Insight Software Consortium are encouraged. Please see the web pages for more information.
• Commercial support and consulting are available from Kitware at http://www.kitware.com.

1.4 A Brief History of ITK

In 1999 the US National Library of Medicine of the National Institutes of Health awarded a three-year contract to develop an open-source registration and segmentation toolkit that eventually came to be known as the Insight Toolkit (ITK). ITK's NLM Project Manager was Dr. Terry Yoo, who coordinated the six prime contractors who made up the Insight consortium. These consortium members included three commercial partners—GE Corporate R&D, Kitware, Inc., and MathSoft (the company name is now Insightful)—and three academic partners—University of North Carolina (UNC), University of Tennessee (UT) (Ross Whitaker subsequently moved to the University of Utah), and University of Pennsylvania (UPenn). The Principal Investigators for these partners were, respectively, Bill Lorensen at GE CRD, Will Schroeder at Kitware, Vikram Chalana at Insightful, Stephen Aylward with Luis Ibanez at UNC (Luis is now at Kitware), Ross Whitaker with Josh Cates at UT (both now at Utah), and Dimitri Metaxas at UPenn (now at Rutgers). In addition, several subcontractors rounded out the consortium, including Peter Raitu at Brigham & Women's Hospital, Celina Imielinska and Pat Molholt at Columbia University, Jim Gee at UPenn's Grasp Lab, and George Stetton at the University of Pittsburgh.

In 2002 the first official public release of ITK was made available. In addition, the National Library of Medicine awarded thirteen contracts to several organizations to extend ITK's capabilities. NLM funding of Insight Toolkit development is continuing through 2003, with additional application and maintenance support anticipated beyond 2003. If you are interested in potential funding opportunities, we suggest that you contact Dr. Terry Yoo at the National Library of Medicine for more information.

1.5 Software Process

An outstanding feature of ITK is the software process used to develop, maintain, and test the toolkit. The Insight Toolkit software continues to evolve rapidly due to the efforts of developers and users located around the world, so the software process is essential to maintaining its quality. If you are planning to contribute to ITK, or to use the CVS source code repository or the daily releases, you need to know something about this process (see Section 1.3.2 to


learn more about obtaining ITK using CVS). This information will help you know when and how to update and work with the software as it changes. The following sections describe key elements of the process.

1.5.1 CVS Source Code Repository

The Concurrent Versions System (CVS) is a tool for version control. It is a very valuable resource for software projects involving multiple developers. The primary purpose of CVS is to keep track of changes to software. CVS date- and version-stamps every addition to the repository—also providing for special user-specified tags—so that it is possible to return to a particular state or point in time whenever desired. The difference between any two points is represented by a "diff" file, a compact, incremental representation of change. CVS supports concurrent development so that two developers can edit the same file at the same time; their changes are then (usually) merged together without incident (and marked if there is a conflict). In addition, branches off the main development trunk allow parallel development of the software.

Developers and users can check out the software from the CVS repository. When developers introduce changes into the system, CVS makes it easy for other developers and users to update their local copies by downloading only the differences between the local copy and the version in the repository. This is an important advantage for those who are interested in keeping up to date with the leading edge of the toolkit. Bug fixes can be obtained in this way as soon as they have been checked into the system.

ITK source code, data, and examples are maintained in a CVS repository. The principal advantage of a system like CVS is that it frees developers to try new ideas and changes without fear of losing a previous working version of the software. It also provides a simple way to incrementally update code as new features are added to the repository.

1.5.2 DART Regression Testing System

One of the unique features of the ITK software process is its use of the DART regression testing system (http://public.kitware.com/Dart). In a nutshell, DART provides quantifiable feedback to developers as they check in new code and make changes. The feedback consists of the results of a variety of tests, and the results are posted on a publicly accessible Web page (which we refer to as a dashboard), as shown in Figure 1.1 and accessible from http://www.itk.org/Testing/Dashboard/MostRecentResults-Nightly/Dashboard.html. All users and developers of ITK can view the dashboard, which produces considerable peer pressure on developers who check in code with problems. The DART dashboard serves as the vehicle for developer communication, and should be viewed whenever you consider updating software via CVS or a daily release.

Note that DART is independent of ITK and can be used to manage quality control for any software project. It is itself an open-source package and can be obtained from

http://public.kitware.com/Dart/HTML/Index.shtml

DART supports a variety of test types. These include the following.

Compilation. All source code is compiled and linked. Any resulting errors and warnings are reported.

Regression. Most ITK tests produce images as output. Testing requires comparing each test's output against a valid baseline image. If the images match then the test passes. The comparison must be performed carefully since many 3D graphics systems (e.g., OpenGL) produce slightly different results on different platforms. (A sketch of such a tolerance-based comparison appears after this list.)

Memory. Among the nastiest problems to find in any computer program are those related to memory. Memory leaks, uninitialized memory, and reads and writes beyond allocated space are all examples of this sort of problem. ITK checks memory using Purify, a commercial package produced by Rational. (Other memory checking programs will be added in the future.)

PrintSelf. All classes in ITK are expected to print out all their instance variables correctly. This test checks to make sure that this is the case.


Figure 1.1: On-line presentation of the Quality Dashboard generated by DART.

SetGet. Often developers make assumptions about the values of instance variables; i.e., they assume that they are non-NULL, etc. The SetGet tests perform a Get on every instance variable with a Get() method, followed by a Set on that instance variable with the value returned from the Get() method. It is surprising how many times this test identifies problems.

TestEmptyInput. This deceptively simple test catches many problems due to developers assuming that the input to a process object is non-NULL, or that the input data object contains some data. TestEmptyInput simply exercises these two conditions on each subclass of itkProcessObject and reports problems if encountered.

Coverage. There is a saying among ITK developers: If it isn't covered, then it's broke. What this means is that code that is not executed during testing is likely to be wrong. The coverage tests identify lines that are not executed in the Insight Toolkit test suite, reporting a total percentage covered at the end of the test. While it is nearly impossible to bring the coverage to 100% because of error handling code and similar constructs that are rarely encountered in practice, the coverage numbers should be 75% or higher. Code that is not covered well enough requires additional tests.
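To make the regression test type above more concrete, the following is a minimal sketch of a tolerance-based image comparison. It is not the actual DART/ITK testing code; the image type, file names, and tolerance value are illustrative assumptions, and a real test would also verify that both images have the same size.

#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageRegionConstIterator.h"
#include <cmath>
#include <iostream>

int main()
{
  // Hypothetical 2D, 8-bit setup; a real regression test would receive the
  // file names as command-line arguments.
  typedef itk::Image< unsigned char, 2 >              ImageType;
  typedef itk::ImageFileReader< ImageType >           ReaderType;
  typedef itk::ImageRegionConstIterator< ImageType >  IteratorType;

  ReaderType::Pointer testReader     = ReaderType::New();
  ReaderType::Pointer baselineReader = ReaderType::New();
  testReader->SetFileName( "TestOutput.png" );
  baselineReader->SetFileName( "Baseline.png" );
  testReader->Update();
  baselineReader->Update();

  IteratorType itTest( testReader->GetOutput(),
                       testReader->GetOutput()->GetLargestPossibleRegion() );
  IteratorType itBase( baselineReader->GetOutput(),
                       baselineReader->GetOutput()->GetLargestPossibleRegion() );

  const double tolerance = 2.0;   // allowed per-pixel intensity difference
  unsigned long failures = 0;
  for ( ; !itTest.IsAtEnd(); ++itTest, ++itBase )
    {
    const double difference = double( itTest.Get() ) - double( itBase.Get() );
    if ( std::fabs( difference ) > tolerance )
      {
      ++failures;
      }
    }

  std::cout << failures << " pixels differ beyond the tolerance" << std::endl;
  return ( failures == 0 ) ? 0 : 1;   // non-zero exit status marks the test as failed
}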

Figure 1.1 shows the top-level Dashboard Web page. Each row in the Dashboard corresponds to a particular platform (hardware + operating system + compiler). The data on the row indicates the number of compile errors and warnings as well as the results of running hundreds of small test programs. In this way the toolkit is tested both at compile time and at run time.

When users decide to download a daily release or a CVS version of ITK, it is important for them to first verify that the current dashboard is in good shape. This can be rapidly judged by the general coloration of the dashboard. A green state means that the software is building correctly and it is a good day to start with ITK or to get an upgrade. A red state, on the other hand, is an indication of instability in the system, and users should refrain from checking out or upgrading.


Another nice feature of DART is that it maintains a history of changes to the source code (by coordinating with CVS) and summarizes the changes as part of the dashboard. This is useful for tracking problems and keeping up to date with new additions to ITK.

1.5.3 Working The Process

The ITK software process functions across three cycles—the continuous cycle, the daily cycle, and the release cycle.

The continuous cycle revolves around the actions of developers as they check code into CVS. When changed or new code is checked into CVS, the DART continuous testing process kicks in. A small number of tests are performed (including compilation), and if something breaks, email is sent to all developers who checked code in during the continuous cycle. Developers are expected to fix the problem immediately.

The daily cycle occurs over a 24-hour period. Changes to the source base made during the day are extensively tested by the nightly DART regression testing sequence. These tests occur on different combinations of computers and operating systems located around the world, and the results are posted every day to the DART dashboard. Developers who checked in code are expected to visit the dashboard and ensure their changes are acceptable—that is, they do not introduce compilation errors or warnings, or break any other tests including regression, memory, print self, and Set/Get. Developers are expected to fix problems immediately.

The release cycle occurs a small number of times a year. It requires tagging and branching the CVS repository, updating documentation, and producing new release packages. Although additional testing is performed to ensure the consistency of the package, keeping the daily releases error free minimizes the work required to cut a release.

ITK users typically work with releases, since they are the most stable. Developers work with the CVS repository, or sometimes with periodic snapshots (a particular daily release), in order to take advantage of a newly added feature. It is extremely important that developers watch the dashboard carefully, and update their software (via CVS or a new daily release install) only when the dashboard is in good condition (i.e., is "green"). Failure to do so can cause significant disruption if a particular day's software release is unstable.

1.5.4 The Effectiveness of the Process

The effectiveness of this process is profound. By providing immediate feedback to developers through email and Web pages (e.g., the dashboard), the quality of ITK is exceptionally high, especially considering the complexity of the algorithms and system. Errors, when accidentally introduced, are caught quickly, compared to catching them at the point of release. Waiting until the point of release is waiting too long, since the causal relationship between a code change or addition and a bug is lost. The process is so powerful that it routinely catches errors in vendors' graphics drivers (e.g., OpenGL drivers) or changes to external subsystems such as the Mesa OpenGL software library. All of the tools that make up the process (CMake, CVS, and DART) are open-source. Many large and small systems such as VTK (The Visualization Toolkit, http://www.vtk.org) use the same process with similar results. We encourage the adoption of the process in your environment.

1.6 Installation

This section describes the process for installing ITK on your system. Keep in mind that ITK is a toolkit; as such, once it is installed on your computer there will be no application to run. Rather, you will use ITK to build your own applications. What ITK does provide—besides the toolkit proper—is a large set of test files and examples that will introduce you to ITK concepts and will show you how to use ITK in your own projects.

Some of the examples distributed with ITK require third party libraries that you may have to download. For an initial installation of ITK you may want to ignore these extra libraries and just build the toolkit itself. In the past, a large fraction of the traffic on the insight-users mailing list has originated from difficulties in getting third party libraries compiled and installed rather than from actual problems building ITK.


ITK has been developed and tested across different combinations of operating systems, compilers, and hardware platforms including MS-Windows, Linux on Intel-compatible hardware, Solaris, IRIX, and recently the Mac. Popular compilers such as Visual Studio 6.0, Visual Studio 7.0, gcc 2.95.2, gcc 2.96, gcc 3.04, gcc 3.1, gcc 3.2, gcc 3.3, Borland 5.5 and SGI-CC 6.5 are currently supported. Given the advanced usage of C++ features in the toolkit, some compilers may have difficulties processing the code. If you are currently using an outdated compiler this may be an excellent excuse for upgrading this old piece of software!

1.6.1 Configuring ITK

The challenge of supporting ITK across platforms has been solved through the use of CMake, a cross-platform, open-source build system. CMake is used to control the software compilation process using simple platform- and compiler-independent configuration files. CMake generates native makefiles and workspaces that can be used in the compiler environment of your choice. CMake is quite sophisticated: it supports complex environments requiring system configuration, code pre-processing, code generation, and template instantiation.

CMake generates Makefiles under UNIX and Cygwin systems and generates Visual Studio workspaces under Windows (and appropriate build files for other compilers like Borland). The information used by CMake is provided by CMakeLists.txt files that are present in every directory of the ITK source tree. These files contain information that the user provides to CMake at configuration time. Typical information includes paths to utilities in the system and the selection of software options specified by the user.

Preparing CMake

CMake can be downloaded at no cost from

http://www.cmake.org

ITK requires the latest release of CMake.3 You can download binary versions for most of the popular platforms including Windows, Solaris, IRIX, HP, Mac and Linux. Alternatively you can download the source code and build CMake on your system. It is very important to avoid having several different versions of CMake installed simultaneously. The reason is that CMake searches your system in order to find its components, and mixing components from different versions will produce inconsistent executables. Follow the instructions in the CMake Web page for downloading and installing the software.

Running CMake initially requires that you provide two pieces of information: where the source code directory is located (ITK_SOURCE_DIR), and where the object code is to be produced (ITK_BINARY_DIR). These are referred to as the source directory and the binary directory. On UNIX, the binary directory is created by the user and CMake is invoked with the path to the source directory. For example:

mkdir Insight-binary
cd Insight-binary
ccmake ../Insight

Note that the source directory and the build directory can be the same—this is known as an in-source build. On Windows, the CMake GUI is used to specify the source and build directories (Figure 1.2).

CMake runs in an interactive mode in which you iteratively select options and configure according to these options. The iteration proceeds until no new options appear. At this point, a generation step produces the appropriate build files for your configuration.

This interactive configuration process can be better understood if you imagine that you are walking through a decision tree. Every option that you select introduces the possibility that new, dependent options may become relevant. These

3. The current version at the time of writing this document is CMake 1.8.


new options are presented by CMake at the top of the options list in its interface. Only when no new options appear after a configuration iteration can you be sure that the necessary decisions have all been made. At this point build files are generated for the current configuration.

Configuring ITK

Figure 1.2: CMake interface. Left: ccmake, the UNIX version based on curses. Right: CMakeSetup, the MS-Windows version based on MFC.

Figure 1.2 shows the CMake interface for UNIX and MS-Windows. In order to speed up the build process you may want to disable the compilation of the testing and examples. This is done with the variables BUILD_TESTING=OFF and BUILD_EXAMPLES=OFF. The examples distributed with the toolkit are a helpful resource for learning how to use ITK components but are not essential for the use of the toolkit itself. The testing section includes a large number of small programs that exercise the capabilities of ITK classes. Due to the large number of tests, enabling the testing option will considerably increase the build time. It is not desirable to enable this option for a first build of the toolkit.

An additional resource is available in the InsightApplications module, which contains multiple applications incorporating GUIs and different levels of visualization. However, due to the large number of applications and the fact that some of them rely on third party libraries, building this module should be postponed until you are familiar with the basic structure of the toolkit and the build process.

Begin running CMake by using ccmake on UNIX and CMakeSetup on Windows. Remember to run ccmake from the binary directory on UNIX. On Windows, specify the source and binary directories in the GUI, then begin to set the build variables in the GUI as necessary. Most variables should have sensible default values. Each time you change a set of variables in CMake, it is necessary to proceed to another configuration step. In the Windows version this is done by clicking on the "Configure" button. In the UNIX version this is done in a curses interface where you trigger the configuration by hitting the "c" key.

When no new options appear in CMake, you can proceed to generate Makefiles or Visual Studio projects. This is done in Windows by clicking on the "Ok" button. In the UNIX version this is done by hitting the "g" key. After the generation process CMake will quit silently. To initiate the build process on UNIX, simply type make in the binary directory. Under Windows, load the workspace named ITK.dsw from the binary directory you specified in the CMake GUI.

The build process will typically take anywhere from 15 to 30 minutes depending on the performance of your system. If you decide to enable testing as part of the normal build process, about 500 small test programs will be compiled. This will verify that the basic components of ITK have been correctly built on your system.


1.6.2 Getting Started With ITK

The simplest way to create a new project with ITK is to create a new directory somewhere on your disk and create two files in it. The first one is a CMakeLists.txt file that will be used by CMake to generate a Makefile (if you are using UNIX) or a Visual Studio workspace (if you are using MS-Windows). The second file is an actual C++ program that will exercise some of the large number of classes available in ITK. The details of these files are described in the following section.

Once both files are in your directory you can run CMake in order to configure your project. Under UNIX, you can cd to your newly created directory and type "ccmake .". Note the "." in the command line, indicating that the CMakeLists.txt file is in the current directory. The curses interface will require you to provide the directory where ITK was built. This is the same path that you indicated for the ITK_BINARY_DIR variable at the time of configuring ITK. Under Windows you can run CMakeSetup and provide your newly created directory as being both the source directory and the binary directory for your new project (i.e., an in-source build). Then CMake will require you to provide the path to the binary directory where ITK was built. The ITK binary directory will contain a file named UseITK.cmake generated during the configuration process at the time ITK was built. From this file, CMake will recover all the information required to configure your new ITK project.

Hello World !

Here is the content of the two files to write in your new project. These two files can be found in the Insight/Examples/Installation directory. The CMakeLists.txt file contains the following lines:

PROJECT(HelloWorld)

FIND_PACKAGE(ITK)
IF(ITK_FOUND)
  INCLUDE(${ITK_USE_FILE})
ELSE(ITK_FOUND)
  MESSAGE(FATAL_ERROR "ITK not found. Please set ITK_DIR.")
ENDIF(ITK_FOUND)

ADD_EXECUTABLE(HelloWorld HelloWorld.cxx)

TARGET_LINK_LIBRARIES(HelloWorld ITKCommon)

The first line defines the name of your project as it appears in Visual Studio (it will have no effect under UNIX). The FIND_PACKAGE(ITK) line loads a CMake file with a predefined strategy for finding ITK.4 If the strategy for finding ITK fails, CMake will prompt you for the directory where ITK is installed on your system. In that case you will write this information in the ITK_DIR variable. The line INCLUDE(${ITK_USE_FILE}) loads the UseITK.cmake file containing all the configuration information from ITK. The ADD_EXECUTABLE line defines as its first argument the name of the executable that will be produced as a result of this project. The remaining arguments of ADD_EXECUTABLE are the names of the source files to be compiled and linked. Finally, the TARGET_LINK_LIBRARIES line specifies which ITK libraries will be linked against this project.

The source code of this section can be found in the file Examples/Installation/HelloWorld.cxx.

The following code is an implementation of a small Insight program. It tests including header files and linking with ITK libraries.

#include "itkImage.h"

4. Similar files are provided with CMake for other commonly used libraries, all of them named Find*.cmake.


#include <iostream>

int main()
{

typedef itk::Image< unsigned short, 3 > ImageType;

ImageType::Pointer image = ImageType::New();

std::cout << "ITK Hello World !" << std::endl;

  return 0;
}

This code instantiates a 3D image5 whose pixels are represented with type unsigned short. The image is then constructed and assigned to an itk::SmartPointer.
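The image created above has no pixel buffer allocated yet. As a minimal sketch of the usual next step (continuing inside main(); the region size of 128 voxels per side is an arbitrary choice for illustration), a region can be defined and the pixel memory allocated:

  // Define a region starting at index (0,0,0) with 128 voxels along each dimension.
  ImageType::IndexType start;
  start.Fill( 0 );

  ImageType::SizeType size;
  size.Fill( 128 );

  ImageType::RegionType region;
  region.SetIndex( start );
  region.SetSize( size );

  // Allocate the pixel buffer and initialize every voxel to zero.
  image->SetRegions( region );
  image->Allocate();
  image->FillBuffer( 0 );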

At this point you have successfully installed and compiled ITK, and created your first simple program. If you have difficulties, please join the insight-users mailing list (Section 1.3.3) and pose questions there.

5. Also known as a volume.


CHAPTER

TWO

Segmentation

2.1 Introduction

Segmentation of medical images is a challenging task. A myriad of different methods have been proposed and implemented in recent years. In spite of the huge effort invested in this problem, there is no single approach that can generally solve the problem of segmentation for the large variety of image modalities existing today.

The most effective segmentation algorithms are obtained by carefully customizing combinations of components. The parameters of these components are tuned for the characteristics of the image modality used as input and the features of the anatomical structure to be segmented. A graphical interface and visualization system that provides the user with real-time feedback can be a very effective tool for exploring the parameter space of a segmentation algorithm.

The Insight Toolkit provides a basic set of algorithms that can be used to develop and customize a full segmentation application. To develop application code, it is important to understand how to interpret and visualize the output of the segmentation filters, how to set the filter parameters, and how to properly format the input to the filters. Some of the more commonly used segmentation components are described in the following sections. Note that there are many more segmentation algorithms in the Insight Toolkit than are described here. Refer to the complete ITK Software Guide online at http://www.itk.org for more information. All source code referred to in this text can be found in the Insight code repository, freely downloadable from the same URL.

2.2 Region Growing

Region growing algorithms have proven to be a very effective approach to image segmentation. The basic region growing algorithm is to first select a set of seed pixels contained in the target object and then iteratively add neighboring pixels to the set according to similarity criteria. Probably the simplest criterion used for including pixels in a growing region is to check whether those pixels have intensity values within a specific interval; a conceptual sketch of this idea follows. This section describes two such intensity-based region growing algorithms that are found in ITK.
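In C++-flavored pseudocode, the basic idea can be sketched as follows. This is a conceptual illustration only, not the ITK implementation; the pixel type, the image accessor, and the 4-neighborhood are simplified assumptions.

#include <queue>
#include <set>

// Simplified placeholder pixel coordinate for the sketch.
struct Pixel { int x, y; };
bool operator<( const Pixel & a, const Pixel & b )
{ return ( a.x < b.x ) || ( a.x == b.x && a.y < b.y ); }

// Grow a region from a seed, accepting pixels whose intensity lies in
// [lower, upper]. 'image(x,y)' stands for any intensity lookup you provide.
template < typename ImageFunction >
std::set< Pixel > GrowRegion( const ImageFunction & image, Pixel seed,
                              double lower, double upper,
                              int width, int height )
{
  std::set< Pixel >   region;      // pixels accepted so far
  std::queue< Pixel > candidates;  // pixels waiting to be visited
  candidates.push( seed );

  while ( !candidates.empty() )
    {
    Pixel p = candidates.front();
    candidates.pop();
    if ( p.x < 0 || p.y < 0 || p.x >= width || p.y >= height )  continue;
    if ( region.count( p ) )                                    continue;
    const double value = image( p.x, p.y );
    if ( value < lower || value > upper )                       continue;

    region.insert( p );  // accept the pixel and schedule its 4-neighbors
    Pixel n[4] = { { p.x + 1, p.y }, { p.x - 1, p.y },
                   { p.x, p.y + 1 }, { p.x, p.y - 1 } };
    for ( int i = 0; i < 4; ++i )
      {
      candidates.push( n[i] );
      }
    }
  return region;
}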

The source code for this section can be found in the file Examples/Segmentation/ConnectedThresholdImageFilter.cxx.

The following example illustrates the use of the itk::ConnectedThresholdImageFilter. This filter is based on the flood fill iterator. Most of the algorithmic complexity of a region growing method comes from the strategy used for visiting the neighbor pixels. The flood fill iterator assumes this responsibility and greatly simplifies the implementation of a region growing approach. The work left to the algorithm is to establish a criterion for deciding whether a particular pixel should be included in the current region or not.

The criterion used by the itk::ConnectedThresholdImageFilter is based on an interval of intensity values provided by the user. Values for the lower and upper thresholds should be provided. The region growing algorithm will then


include in the region only those pixels whose intensities are inside the interval.

I(\mathbf{X}) \in [\text{lower}, \text{upper}]     (2.1)

Let's look at the minimal code required to use this algorithm. First, the following header defining the itk::ConnectedThresholdImageFilter class must be included.

#include "itkConnectedThresholdImageFilter.h"

Noise present in the image can reduce the capacity of this filter to grow large regions. When faced with noisy images, it is usually convenient to pre-process the image with an edge-preserving smoothing filter. Any of the edge-preserving smoothing filters available in ITK could be used to this end. In this particular example we use the itk::CurvatureFlowImageFilter, hence we need to include its header file.

#include "itkCurvatureFlowImageFilter.h"

We now declare the image type using a pixel type and a particular dimension. In this case the float type is used for the pixels due to the requirements of the smoothing filter.

typedef float InternalPixelType;
const unsigned int Dimension = 2;

typedef itk::Image< InternalPixelType, Dimension > InternalImageType;

The smoothing filter type is instantiated using the image type as a template parameter.

typedef itk::CurvatureFlowImageFilter< InternalImageType, InternalImageType >
  CurvatureFlowImageFilterType;

Then, the filter is created by invoking the New() method and assigning the result to an itk::SmartPointer.

CurvatureFlowImageFilterType::Pointer smoothing = CurvatureFlowImageFilterType::New();

We now declare the type of the region growing filter. In this case it is the itk::ConnectedThresholdImageFilter.

typedef itk::ConnectedThresholdImageFilter< InternalImageType, InternalImageType >
  ConnectedFilterType;

Then, we construct one filter of this class using the New() method.

ConnectedFilterType::Pointer connectedThreshold = ConnectedFilterType::New();

Now it is time to connect the pipeline. This is pretty linear in our example. A file reader is added at the beginning of the pipeline and a caster filter and writer are added at the end. The caster filter is required here to convert float pixel types to integer types since only a few image file formats support float types.
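The reader, caster, and writer objects used below are not declared in the excerpts shown here. As a minimal sketch, assuming an unsigned char output image and file names held in local variables (all of these choices are illustrative rather than prescribed by the example), they could be set up as follows.

#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkCastImageFilter.h"

// Sketch of the pipeline endpoints assumed below; the output pixel type
// and file name variables are illustrative.
typedef unsigned char                                    OutputPixelType;
typedef itk::Image< OutputPixelType, Dimension >         OutputImageType;
typedef itk::ImageFileReader< InternalImageType >        ReaderType;
typedef itk::ImageFileWriter< OutputImageType >          WriterType;
typedef itk::CastImageFilter< InternalImageType, OutputImageType > CastingFilterType;

ReaderType::Pointer reader = ReaderType::New();
reader->SetFileName( inputFileName );    // e.g. argv[1]

CastingFilterType::Pointer caster = CastingFilterType::New();

WriterType::Pointer writer = WriterType::New();
writer->SetFileName( outputFileName );   // e.g. argv[2]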


smoothing->SetInput( reader->GetOutput() );

connectedThreshold->SetInput( smoothing->GetOutput() );

caster->SetInput( connectedThreshold->GetOutput() );

writer->SetInput( caster->GetOutput() );

The itk::CurvatureFlowImageFilter requires a couple of parameters to be defined. The following are typical values for 2D images. However, they may have to be adjusted depending on the amount of noise present in the input image.

smoothing->SetNumberOfIterations( 5 );

smoothing->SetTimeStep( 0.125 );

The itk::ConnectedThresholdImageFilter has two main parameters to be defined. They are the lower and upper thresholds of the interval in which intensity values should fall in order to be included in the region. Setting these two values too close will not allow enough flexibility for the region to grow. Setting them too far apart will result in a region that engulfs the image.

connectedThreshold->SetLower( lowerThreshold );
connectedThreshold->SetUpper( upperThreshold );

The output of this filter is a binary image with zero-value pixels everywhere except on the extracted region. The intensity value to be put inside the region is selected with the method SetReplaceValue().

connectedThreshold->SetReplaceValue( 255 );

The initialization of the algorithm requires the user to provide a seed point. It is convenient to select this point to be placed in a typical region of the anatomical structure to be segmented. The seed is passed in the form of an itk::Index to the SetSeed() method.
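For illustration, the seed index could be filled from command-line arguments; the argument positions and coordinate values below are assumptions, not fixed by the example.

// Hypothetical seed setup; argument positions and values are illustrative.
InternalImageType::IndexType index;
index[0] = atoi( argv[3] );   // seed X coordinate, e.g. 60
index[1] = atoi( argv[4] );   // seed Y coordinate, e.g. 116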

connectedThreshold->SetSeed( index );

The invocation of the Update() method on the writer triggers the execution of the pipeline. It is usually wise to put update calls in a try/catch block in case errors occur and exceptions are thrown.

try
  {
  writer->Update();
  }
catch( itk::ExceptionObject & excep )
  {
  std::cerr << "Exception caught !" << std::endl;
  std::cerr << excep << std::endl;
  }

Let's now run this example using as input the image BrainProtonDensitySlice.png provided in the directory Examples/Data. We can easily segment the major anatomical structures by providing seeds in the appropriate locations and defining values for the lower and upper thresholds. Figure 2.1 illustrates several examples of segmentation. The parameters used are presented in Table 2.1.


Structure      Seed Index   Lower   Upper   Output Image
White matter   (60, 116)    150     180     Second from left in Figure 2.1
Ventricle      (81, 112)    210     250     Third from left in Figure 2.1
Gray matter    (107, 69)    180     210     Fourth from left in Figure 2.1

Table 2.1: Parameters used for segmenting some brain structures shown in Figure 2.1 using the filter itk::ConnectedThresholdImageFilter.

Figure 2.1: Segmentation results of the ConnectedThreshold filter for various seed points.

It can be noticed that the gray matter is not being completely segmented. This illustrates the vulnerability of region growing methods when the anatomical structures to be segmented do not have a homogeneous statistical distribution over the image space. You may want to experiment with different values of the lower and upper thresholds to verify how the accepted region will extend.

Another option for completing regions is to take advantage of the functionality provided by the itk::ConnectedThresholdImageFilter for managing multiple seeds. The seeds can be passed one by one to the filter using the AddSeed() method. You could imagine a user interface in which an operator clicks on multiple points of the object to be segmented and each selected point is passed as a seed to this filter, as in the sketch below.
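A minimal sketch of that multi-seed usage follows; the seed coordinates are illustrative only.

// Hypothetical multi-seed setup using AddSeed(); coordinates are illustrative.
InternalImageType::IndexType seed1, seed2;
seed1[0] = 60;   seed1[1] = 116;   // first click inside the structure
seed2[0] = 65;   seed2[1] = 120;   // second click inside the same structure

connectedThreshold->AddSeed( seed1 );
connectedThreshold->AddSeed( seed2 );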

2.2.1 Confidence Connected

The source code for this section can be found in the file Examples/Segmentation/ConfidenceConnected.cxx.

The following example illustrates the use of the itk::ConfidenceConnectedImageFilter. This filter is based on the use of the flood fill iterator. Most of the algorithmic complexity of a region growing method comes from the strategy used for visiting the neighbor pixels. The flood fill iterator assumes this responsibility and greatly simplifies the implementation of a region growing approach. The work left to the algorithm is to establish a criterion for deciding whether a particular pixel should be included in the current region or not.

The criterion used by the itk::ConfidenceConnectedImageFilter is based on simple statistics of the current region. First, the algorithm computes the mean and standard deviation of intensity values for all the pixels currently included in the region. A user-provided factor is used to multiply the standard deviation and define a range around the mean. Neighbor pixels whose intensity values fall inside the range are accepted to be included in the region. When no more neighbor pixels are found that can satisfy the criterion, the algorithm is considered to have finished its first iteration. At that point, the mean and standard deviation of the intensity levels are recomputed using all the pixels currently included in the region. This mean and standard deviation define a new intensity range that is used for visiting the current neighbors in search of pixels whose intensity falls inside the range. This iterative process is repeated a


number of times as defined by the user. The following equation illustrates the inclusion criterion used by this filter,

I(\mathbf{X}) \in [\,m - f\sigma,\; m + f\sigma\,] \qquad (2.2)

where m and σ are the mean and standard deviation of the region intensities, f is a factor defined by the user, I() is the image and X is the position of the particular neighbor pixel being considered for inclusion in the region.
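As a plain illustration of this criterion (a standalone sketch, not the filter's internal code), the test applied to each candidate pixel amounts to the following.

// Illustrative inclusion test for the ConfidenceConnected criterion of
// equation 2.2; standalone helper, not part of the ITK filter itself.
bool IsInsideConfidenceInterval( float intensity, float mean,
                                 float stdDev, float multiplier )
{
  const float lower = mean - multiplier * stdDev;
  const float upper = mean + multiplier * stdDev;
  return ( intensity >= lower ) && ( intensity <= upper );
}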

Let's look at the minimal code required to use this algorithm. First, the following header defining the itk::ConfidenceConnectedImageFilter class must be included.

#include "itkConfidenceConnectedImageFilter.h"

Noise present in the image can reduce the capacity of this filter to grow large regions. When faced with noisy images, it is usually convenient to pre-process the image by using an edge-preserving smoothing filter. Any of the filters discussed in section ?? could be used to this end. In this particular example we use the itk::CurvatureFlowImageFilter, hence we need to include its header file.

#include "itkCurvatureFlowImageFilter.h"

We now declare the image type using a pixel type and a particular dimension. In this case the float type is used for the pixels due to the requirements of the smoothing filter.

typedef float InternalPixelType;
const unsigned int Dimension = 2;

typedef itk::Image< InternalPixelType, Dimension > InternalImageType;

The smoothing filter type is instantiated using the image type as a template parameter.

typedef itk::CurvatureFlowImageFilter< InternalImageType, InternalImageType >
  CurvatureFlowImageFilterType;

Then, the filter is created by invoking the New() method and assigning the result to an itk::SmartPointer.

CurvatureFlowImageFilterType::Pointer smoothing = CurvatureFlowImageFilterType::New();

We now declare the type of the region growing filter. In this case it is the itk::ConfidenceConnectedImageFilter.

typedef itk::ConfidenceConnectedImageFilter< InternalImageType, InternalImageType >
  ConnectedFilterType;

Then, we construct one filter of this class using the New() method.

ConnectedFilterType::Pointer confidenceConnected = ConnectedFilterType::New();

Now it is time to connect the pipeline. This is pretty linear in our example. A file reader is added at the beginning of the pipeline and a caster filter and writer are added at the end. The caster filter is required here to convert float pixel types to integer types since only a few image file formats support float types.


smoothing->SetInput( reader->GetOutput() );

confidenceConnected->SetInput( smoothing->GetOutput() );

caster->SetInput( confidenceConnected->GetOutput() );

writer->SetInput( caster->GetOutput() );

The itk::CurvatureFlowImageFilter requires a couple of parameters to be defined. The following are typical values for 2D images. However, they may have to be adjusted depending on the amount of noise present in the input image.

smoothing->SetNumberOfIterations( 5 );

smoothing->SetTimeStep( 0.125 );

The itk::ConfidenceConnectedImageFilter has two parameters to be defined. The first is the factor f that defines how large the range of intensities will be. Small values of the multiplier will restrict the inclusion of pixels to those having intensities very similar to those already in the current region. Larger values of the multiplier will relax the accepting condition and will result in more generous growth of the region. Values that are too large will make the region ingest neighbor regions in the image that may actually belong to separate anatomical structures.

confidenceConnected->SetMultiplier( 2.5 );

The number of iterations may be decided based on the homogeneity of the intensities of the anatomical structure to be segmented. Highly homogeneous regions may only require a couple of iterations. Regions with ramp effects, like MRI images with inhomogeneous fields, may require more iterations. In practice, it seems to be more relevant to carefully select the multiplier factor than the number of iterations. However, keep in mind that there is no reason to assume that this algorithm should converge to a stable region. It is possible that by letting the algorithm run for more iterations the region will end up engulfing the entire image.

confidenceConnected->SetNumberOfIterations( 5 );

The output of this filter is a binary image with zero-value pixels everywhere except on the extracted region. The intensity value to be put inside the region is selected with the method SetReplaceValue().

confidenceConnected->SetReplaceValue( 255 );

The initialization of the algorithm requires the user to provide a seed point. It is convenient to select this point to be placed in a typical region of the anatomical structure to be segmented. A small neighborhood around the seed point will be used to compute the initial mean and standard deviation for the inclusion criterion. The seed is passed in the form of an itk::Index to the SetSeed() method.

confidenceConnected->SetSeed( index );

The size of the initial neighborhood around the seed is defined with the method SetInitialNeighborhoodRadius(). The neighborhood will be defined as an N-dimensional rectangular region with 2r + 1 pixels on a side, where r is the value passed as the initial neighborhood radius.

confidenceConnected->SetInitialNeighborhoodRadius( 2 );

The invocation of the Update() method on the writer triggers the execution of the pipeline. It is usually wise to put update calls in a try/catch block in case errors occur and exceptions are thrown.


Figure 2.2: Segmentation results of the ConfidenceConnected filter for various seed points.

try
  {
  writer->Update();
  }
catch( itk::ExceptionObject & excep )
  {
  std::cerr << "Exception caught !" << std::endl;
  std::cerr << excep << std::endl;
  }

Let's now run this example using as input the image BrainProtonDensitySlice.png provided in the directory Examples/Data. We can easily segment the major anatomical structures by providing seeds in the appropriate locations. For example:

Structure      Seed Index   Output Image
White matter   (60, 116)    Second from left in Figure 2.2
Ventricle      (81, 112)    Third from left in Figure 2.2
Gray matter    (107, 69)    Fourth from left in Figure 2.2

It can be noticed that the gray matter is not being completely segmented. This illustrates the vulnerability of region growing methods when the anatomical structures to be segmented do not have a homogeneous statistical distribution over the image space. You may want to experiment with different numbers of iterations to verify how the accepted region will extend.

2.3 Watershed Segmentation

Watershed segmentation classifies pixels into regions using gradient descent on image features and analysis of weak points along region boundaries. Imagine water raining onto a landscape topology and flowing with gravity to collect in low basins. The size of those basins will grow with increasing amounts of precipitation until they spill into one another, causing small basins to merge together into larger basins. Regions (catchment basins) are formed by using local geometric structure to associate points in the image domain with local extrema in some feature measurement such as curvature or gradient magnitude. This technique is less sensitive to user-defined thresholds than classic region-growing methods, and may be better suited for fusing different types of features from different data sets. The watershed technique is also more flexible in that it does not produce a single image segmentation, but rather a hierarchy of segmentations from which a single region or set of regions can be extracted a priori, using a threshold, or interactively, with the help of a graphical user interface [?, ?].


Figure 2.3: A fuzzy-valued boundary map, from an image or set of images, is segmented using local minima and catchment basins. (Panels, left to right: intensity profile of the input image, intensity profile of the filtered image, watershed segmentation; the vertical extent indicates watershed depth.)

Figure 2.4: A watershed segmentation combined with a saliency measure (watershed depth) produces a hierarchy of regions. Structures can be derived from images by either thresholding the saliency measure or combining subtrees within the hierarchy (e.g., through Boolean operations on sub-trees driven by user interaction).

The strategy of watershed segmentation is to treat an image f as a height function, i.e., the surface formed by graphing f as a function of its independent parameters, x ∈ U. The image f is often not the original input data, but is derived from that data through some filtering, graded (or fuzzy) feature extraction, or fusion of feature maps from different sources. The assumption is that higher values of f (or -f) indicate the presence of boundaries in the original data. Watersheds may therefore be considered as a final or intermediate step in a hybrid segmentation method, where the initial segmentation is the generation of the edge feature map.

Gradient descent associates regions with local minima of f (clearly interior points) using the watersheds of the graph of f, as in Figure 2.3. That is, a segment consists of all points in U whose paths of steepest descent on the graph of f terminate at the same minimum in f. Thus, there are as many segments in an image as there are minima in f. The segment boundaries are "ridges" [?, ?, ?] in the graph of f. In the 1D case (U ⊂ ℜ), the watershed boundaries are the local maxima of f, and the results of the watershed segmentation are trivial. For higher-dimensional image domains, the watershed boundaries are not simply local phenomena; they depend on the shape of the entire watershed.

The drawback of watershed segmentation is that it produces a region for each local minimum; that is often, in practice, too many regions: an over-segmentation. To alleviate this, we can establish a minimum watershed depth. The watershed depth is the difference in height between the watershed minimum and the lowest boundary point. In other words, it is the maximum depth of water a region could hold without flowing into any of its neighbors. Thus, a watershed segmentation algorithm can sequentially combine watersheds whose depths fall below the minimum until all of the watersheds are of sufficient depth. This depth measurement can be combined with other saliency measurements, such as size. The result is a segmentation containing regions whose boundaries and size are significant. Because the merging process is sequential, it produces a hierarchy of regions, as shown in Figure 2.4. Previous work has shown the benefit of a user-assisted approach that provides a graphical interface to this hierarchy, so that a technician can quickly


move from the small regions that lie within an area of interest to the union of regions that correspond to the anatomical structure [?].

In order to interpret the output of the Insight watersheds algorithm, it is important to understand what the output represents and how it is formatted. The itk::WatershedImageFilter produces an image of unsigned long integers. Each integer number is a label for a unique segmented region (catchment basin) from the original input. The output is the same size and dimensionality as the input.

Because the segmented image may have potentially many thousands of labels, some care must be taken when visualizing the data or information may be lost. One effective way to visualize the output is to map the integer labels into distinct RGB colors. Because labels close in value tend to also be close spatially in the image, it is helpful to spread sequential label values far apart in the RGB range. A hashing scheme that puts more weight on the least-significant integer bits is a good way to accomplish this. The following example makes use of a special ITK color mapping object to convert labels in the segmentation to RGB pixels that can be visualized directly as a color image.

2.3.1 Using the Insight Watershed Filter

The source code for this section can be found in the file Examples/Segmentation/WatershedSegmentation1.cxx.

The following example illustrates how to preprocess and segment images using the itk::WatershedImageFilter. Note that the care with which the data is prepared will greatly affect the quality of your result. Typically, the best results are obtained by preprocessing the original image with an edge-preserving diffusion filter, such as one of the anisotropic diffusion filters, or with the bilateral image filter. As noted in Section ??, the height function used as input should be created such that higher positive values correspond to object boundaries. A suitable height function for many applications can be generated as the gradient magnitude of the image to be segmented.

The itk::VectorGradientAnisotropicDiffusionImageFilter class is used to smooth the image and the itk::VectorGradientMagnitudeImageFilter to generate the height function. We begin by including all preprocessing filter header files and the header file for the itk::WatershedImageFilter. We use the vector versions of these filters because the input data is a color image.

#include "itkVectorGradientAnisotropicDiffusionImage Filter.h"#include "itkVectorGradientMagnitudeImageFilter.h"#include "itkWatershedImageFilter.h"

We now declare the image and pixel types to use for instantiation of the filters. All of these filters expect real-valuedpixel types in order to work properly. The preprocessing stages are done directly on the vector-valued data and thesegmentation is done using floating point scalar data. Images are converted from RGB pixel type to numerical vectortype usingitk::VectorCastImageFilter .

typedef itk::RGBPixel<unsigned char> RGBPixelType;typedef itk::Image<RGBPixelType, 2> RGBImageType;typedef itk::Vector<float, 3> VectorPixelType;typedef itk::Image<VectorPixelType, 2> VectorImageType ;typedef itk::Image<unsigned long, 2> LabeledImageType;typedef itk::Image<float, 2> ScalarImageType;

The various image processing filters are declared using the types created above in the order that they will be used inthe pipeline.

typedef itk::ImageFileReader<RGBImageType> FileReader Type;typedef itk::VectorCastImageFilter<RGBImageType, Vect orImageType> CastFilterType;typedef itk::VectorGradientAnisotropicDiffusionImage Filter<VectorImageType,

VectorImageType> DiffusionFilterType;


typedef itk::VectorGradientMagnitudeImageFilter<VectorImageType>
  GradientMagnitudeFilterType;

typedef itk::WatershedImageFilter<ScalarImageType> WatershedFilterType;

Next we instantiate the filters and set their parameters. The first step in the image processing pipeline is diffusion of the color input image using an anisotropic diffusion filter. For this class of filters, the CFL condition requires that the time step be no more than 0.25 for two-dimensional images, and no more than 0.125 for three-dimensional images. The number of iterations and the conductance term will be taken from the command line. See Section ?? for more information on the ITK anisotropic diffusion filters.

DiffusionFilterType::Pointer diffusion = DiffusionFilterType::New();
diffusion->SetNumberOfIterations( atoi(argv[4]) );
diffusion->SetConductanceParameter( atof(argv[3]) );
diffusion->SetTimeStep(0.125);

The ITK gradient magnitude filter for vector-valued images can optionally take several parameters. Here we allow only enabling or disabling of principle component analysis.

GradientMagnitudeFilterType::Pointer gradient = GradientMagnitudeFilterType::New();
gradient->SetUsePrincipleComponents(atoi(argv[7]));

Finally we set up the watershed filter. There are two parameters. "Level" controls watershed depth, and "Threshold" controls the lower thresholding of the input. Both parameters are set as a percentage (0.0 - 1.0) of the maximum depth in the input image.

WatershedFilterType::Pointer watershed = WatershedFilterType::New();
watershed->SetLevel( atof(argv[6]) );
watershed->SetThreshold( atof(argv[5]) );

The output of itk::WatershedImageFilter is an image of unsigned long integer labels, where a label denotes membership of a pixel in a particular segmented region. This format is not practical for visualization, so for the purposes of this example, we will convert it to RGB pixels. RGB images have the advantage that they can be saved as a simple png file and viewed using any standard image viewer software. The itk::Functor::ScalarToRGBPixelFunctor class is a special function object designed to hash a scalar value into an itk::RGBPixel. Plugging this functor into the itk::UnaryFunctorImageFilter creates an image-to-image filter for converting scalar images to RGB images.

typedef itk::Functor::ScalarToRGBPixelFunctor<unsigned long>
  ColorMapFunctorType;

typedef itk::UnaryFunctorImageFilter<LabeledImageType,
  RGBImageType, ColorMapFunctorType> ColorMapFilterType;

ColorMapFilterType::Pointer colormapper = ColorMapFilterType::New();

The filters are connected into a single pipeline, with readers and writers at each end.

caster->SetInput(reader->GetOutput());
diffusion->SetInput(caster->GetOutput());
gradient->SetInput(diffusion->GetOutput());
watershed->SetInput(gradient->GetOutput());
colormapper->SetInput(watershed->GetOutput());
writer->SetInput(colormapper->GetOutput());
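The reader, caster, and writer objects at the ends of this pipeline are not shown in the excerpts above. As a minimal sketch, assuming the input and output file names come from the command line (the argument positions are illustrative), they could be set up as follows.

#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"

// Sketch of the pipeline endpoints assumed above; file name argument
// positions are illustrative.
typedef itk::ImageFileWriter<RGBImageType> FileWriterType;

FileReaderType::Pointer reader = FileReaderType::New();
reader->SetFileName( argv[1] );

CastFilterType::Pointer caster = CastFilterType::New();

FileWriterType::Pointer writer = FileWriterType::New();
writer->SetFileName( argv[2] );
writer->Update();   // triggers execution of the whole pipeline once connected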

Tuning the filter parameters for any particular application is a process of trial and error. The threshold parameter can be used to great effect in controlling oversegmentation of the image. Raising the threshold will generally reduce


Figure 2.5: Segmented section of Visible Human female head and neck cryosection data. At left is the original image. The image in the middle was generated with parameters: conductance = 2.0, iterations = 10, threshold = 0.0, level = 0.05, principle components = on. The image on the right was generated with parameters: conductance = 2.0, iterations = 10, threshold = 0.001, level = 0.15, principle components = off.

computation time and produce output with fewer and larger regions. The trick in tuning parameters is to consider the scale level of the objects you are trying to segment in the image. The best time/quality trade-off will be achieved when the image is smoothed and thresholded to eliminate features just below the desired scale.

Figure 2.5 shows output from the example code. The input image is taken from the Visible Human female data around the right eye. The images on the right are colorized watershed segmentations with parameters set to capture objects such as the optic nerve and lateral rectus muscles, which can be seen just above and to the left and right of the eyeball. Note that a critical difference between the two segmentations is the mode of the gradient magnitude calculation.

A note on the computational complexity of the watershed algorithm is warranted. Most of the complexity of the ITK implementation lies in generating the hierarchy. Processing times for this stage are non-linear with respect to the number of catchment basins in the initial segmentation. This means that the amount of information contained in an image is more significant than the number of pixels in the image. A very large but very flat input takes less time to segment than a very small but very detailed input.

For volumetric data, it is often interesting to create a surface rendering of one or more regions in the output. This can be done by thresholding the region(s) of interest from the output image and exporting the result to a visualization package capable of isosurface rendering. Thresholding can be done either by explicit manipulation of the image values through an ITK image iterator, or by using one of the several Insight image thresholding filters, as in the sketch below.
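One possible sketch of that thresholding step follows; the 3D image types and the label value are assumptions made for illustration.

#include "itkBinaryThresholdImageFilter.h"

// Sketch: extract a single watershed region from a 3D label image as a
// binary mask suitable for isosurface rendering. The label value is assumed.
typedef itk::Image<unsigned long, 3>   LabelVolumeType;
typedef itk::Image<unsigned char, 3>   MaskVolumeType;
typedef itk::BinaryThresholdImageFilter<LabelVolumeType, MaskVolumeType> MaskFilterType;

MaskFilterType::Pointer maskFilter = MaskFilterType::New();
maskFilter->SetInput( labelVolume );             // output of a 3D watershed run (hypothetical)
const unsigned long regionLabel = 42;            // label of the region of interest (hypothetical)
maskFilter->SetLowerThreshold( regionLabel );
maskFilter->SetUpperThreshold( regionLabel );
maskFilter->SetInsideValue( 255 );
maskFilter->SetOutsideValue( 0 );
maskFilter->Update();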


2.4 Level-Set Methods

Figure 2.6: Concept of the zero set in a level set. (Interior: f(x,y) > 0; exterior: f(x,y) < 0; zero set: f(x,y) = 0.)

Level-set techniques are numerical methods for tracking the evolution of contours and surfaces. Instead of manipulating the contour directly, the contour is embedded as the zero level set of a higher-dimensional function called the level-set function, ψ(X, t). The level-set function is then evolved under the control of a differential equation. At any time, the evolving contour can be obtained by extracting the zero level set Γ(X, t) = {ψ(X, t) = 0} from the output. The main advantages of using level sets are that arbitrarily complex shapes can be modeled and that topological changes such as merging and splitting are handled implicitly.

Level sets can be used for image segmentation by using image-based features such as mean intensity, gradient and edges in the governing differential equation. In a typical approach, a contour is initialized by a user and is then evolved until it fits the form of an anatomical structure in the image. Many different implementations and variants of this basic concept have been published in the literature. An overview of the field has been made by Sethian [?].

Most level-set segmentation filters in ITK derive from a common framework developed for solving partial differential equations. The remainder of this section describes the relevant details of this framework.

Each filter makes use of a generic level-set equation to compute the update to the solution ψ of the partial differential equation,

\frac{d}{dt}\psi = -\alpha\, \mathbf{A}(\mathbf{x}) \cdot \nabla\psi \;-\; \beta\, P(\mathbf{x})\, |\nabla\psi| \;+\; \gamma\, Z(\mathbf{x})\, \kappa\, |\nabla\psi| \qquad (2.3)

where A is an advection term, P is a propagation (expansion) term, and Z is a spatial modifier term for the mean curvature κ. The scalar constants α, β, and γ weight the relative influence of each of the terms on the movement of the interface. A segmentation filter may use all of these terms in its calculations, or it may omit one or more terms. If a term is left out of the equation, then setting the corresponding scalar constant weighting will have no effect.

All of the level-set based segmentation filters must operate with floating point precision to produce valid results. The third, optional template parameter is the numerical type used for calculations and as the output image pixel type. The numerical type is float by default, but can be changed to double for extra precision. A user-defined, signed floating point type that defines all of the necessary arithmetic operators and has sufficient precision is also a valid choice. You should not use types such as int or unsigned char for the numerical parameter. If the input image pixel types do not match the numerical type, those inputs will be cast to an image of the appropriate type when the filter is executed.

Most filters require two images as input: an initial model ψ(X, t = 0), and a feature image, which is either the image you wish to segment or some preprocessed version thereof. You must specify the isovalue that represents the surface Γ in your initial model. The single image output of each filter is the function ψ at the final time step. It is important to note that the contour representing the surface Γ is the zero level set of the output image, and not the isovalue you specified for the initial model. To represent Γ using the original isovalue, simply add that value back to the output.

The solution Γ is calculated to subpixel precision. The best discrete approximation of the surface is therefore the set of grid positions closest to the zero-crossings in the image, as shown in Figure 2.7. The itk::ZeroCrossingImageFilter operates by finding exactly those grid positions and can be used to extract the surface, as in the sketch below.
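A minimal sketch of that extraction step, assuming the float image type used in the earlier examples and a placeholder levelSetFilter object standing in for whichever level-set segmentation filter produced the output:

#include "itkZeroCrossingImageFilter.h"

// Sketch: mark the grid positions nearest the zero crossings of the
// level-set output; levelSetFilter is a placeholder for any of the
// segmentation filters discussed below.
typedef itk::ZeroCrossingImageFilter<InternalImageType, InternalImageType>
  ZeroCrossingFilterType;

ZeroCrossingFilterType::Pointer zeroCrossings = ZeroCrossingFilterType::New();
zeroCrossings->SetInput( levelSetFilter->GetOutput() );
zeroCrossings->SetForegroundValue( 255 );
zeroCrossings->SetBackgroundValue( 0 );
zeroCrossings->Update();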

There are two important considerations when analyzing the processing time for any particular level-set segmentation task: the surface area of the evolving interface and the total distance that the surface must travel. Because the level-set


Figure 2.7: The implicit level set surface Γ is the black line superimposed over the image grid of ψ(x, t) values. The location of the surface is interpolated by the image pixel values. The grid pixels closest to the implicit surface are shown in gray.

equations are usually solved only at pixels near the surface (fast marching methods are an exception), the time taken at each iteration depends on the number of points on the surface. This means that as the surface grows, the solver will slow down proportionally. Because the surface must evolve slowly to prevent numerical instabilities in the solution, the distance the surface must travel in the image dictates the total number of iterations required.

Some level-set techniques are relatively insensitive to initial conditions and are therefore suitable for region-growing segmentation. Other techniques, like the itk::LaplacianSegmentationLevelSetImageFilter, can easily become "stuck" on image features close to their initialization and should be used only when a reasonable prior segmentation is available as the initialization. For best efficiency, your initial model of the surface should be the best guess possible for the solution. When extending the example applications given here to higher-dimensional images, for example, you can improve results and dramatically decrease processing time by using a multi-scale approach. Start with a downsampled volume and work back to the full resolution using the results at each intermediate scale as the initialization for the next scale.

Finally, it is important to note that, unless otherwise documented in a particular filter, the solver and equations used in the level-set segmentation filters do not factor the image spacing into distance transform or derivative calculations. In general, you should resample data to isotropic pixels before passing it to these filters.

The following sections introduce two of the many level-set based segmentation filters available in ITK. See the ITK Software Guide for more information.

2.4.1 Threshold Level Set Segmentation

The source code for this section can be found in the file Examples/Segmentation/ThresholdSegmentationLevelSetImageFilter.cxx.

The itk::ThresholdSegmentationLevelSetImageFilter is an extension of threshold connected-component segmentation to the level-set framework. The goal is to define a range of intensity values that classify the tissue type of interest and then base the propagation term of the level-set equation on that intensity range. Using the level-set approach, the smoothness of the evolving surface can be constrained to prevent some of the "leaking" that is common in connected-component schemes.

The propagation term P from equation 2.3 is calculated from the FeatureImage input g with UpperThreshold U and


Fast

MarchingInput

LevelSetDistance

Seeds

Binary

Threshold

Binary

Image

Input

itk::Image

LevelSet

OutputThreshold

Level−set

Segmentation

Weight

Curvature

Weight

Feature

Figure 2.8:Collaboration diagram for the ThresholdSegmentationLevelSetImageFilter applied to a segmentation task.

LowerThreshold L according to the following formula.

P(\mathbf{x}) = \begin{cases} g(\mathbf{x}) - L & \text{if } g(\mathbf{x}) < (U - L)/2 + L \\ U - g(\mathbf{x}) & \text{otherwise} \end{cases} \qquad (2.4)

Figure 2.9 illustrates the propagation term function. Intensity values in g between L and U yield positive values in P, while outside intensities yield negative values in P.
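Written out as a standalone helper (purely illustrative, not the filter's own code), the propagation term of equation 2.4 is:

// Illustrative evaluation of the propagation term P(x) from equation 2.4.
float PropagationTerm( float g, float L, float U )
{
  const float midpoint = ( U - L ) / 2.0f + L;
  return ( g < midpoint ) ? ( g - L ) : ( U - g );
}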

Figure 2.9: Propagation term for threshold-based level-set segmentation, from equation 2.4. The model expands where g(x) lies between L and U (P > 0) and contracts outside that interval (P < 0).

The ThresholdSegmentation filter expects two inputs. The first is an initial level set in the form of an itk::Image. The second input is the feature image g. For many applications, this filter requires little or no preprocessing of its input. Smoothing the input image is not usually required to produce reasonable solutions, though it may still be warranted in some cases.

Figure 2.8 shows how the image processing pipeline is constructed. The initial surface is generated using the fast marching filter. The output of the segmentation filter is passed to an itk::BinaryThresholdImageFilter to create a binary representation of the segmented object. Let's start by including the appropriate header file.

#include "itkThresholdSegmentationLevelSetImageFilter.h"

We declare the image type using a pixel type and a particular dimension. In this case we will use 2D float images.

typedef float InternalPixelType;
const unsigned int Dimension = 2;

typedef itk::Image< InternalPixelType, Dimension > InternalImageType;

The following lines instantiate an itk::ThresholdSegmentationLevelSetImageFilter and create an object of this type using the New() method.

typedef itk::ThresholdSegmentationLevelSetImageFilter< InternalImageType,
  InternalImageType > ThresholdSegmentationLevelSetImageFilterType;

ThresholdSegmentationLevelSetImageFilterType::Pointer thresholdSegmentation =
  ThresholdSegmentationLevelSetImageFilterType::New();

For the itk::ThresholdSegmentationLevelSetImageFilter, scaling parameters are used to balance the influence of the propagation (inflation) and the curvature (surface smoothing) terms from equation 2.3. The advection term is not used in this filter. Set the terms with the methods SetPropagationScaling() and SetCurvatureScaling(). Both terms are set to 1.0 in this example.

thresholdSegmentation->SetPropagationScaling( 1.0 );
if ( argc > 8 )
  {
  thresholdSegmentation->SetCurvatureScaling( atof(argv[8]) );
  }
else
  {
  thresholdSegmentation->SetCurvatureScaling( 1.0 );
  }

The convergence criteria MaximumRMSError and MaximumIterations are set as in previous examples. We now set the upper and lower threshold values U and L, and the isosurface value to use in the initial model.

thresholdSegmentation->SetUpperThreshold( ::atof(argv[7]) );
thresholdSegmentation->SetLowerThreshold( ::atof(argv[6]) );
thresholdSegmentation->SetIsoSurfaceValue(0.0);

The filters are now connected in the pipeline indicated in Figure 2.8. Remember that before calling Update() on the file writer object, the fast marching filter must be initialized with the seed points and the output from the reader object; a sketch follows the pipeline code below. See previous examples and the source code for this section for details.

thresholdSegmentation->SetInput( fastMarching->GetOutput() );
thresholdSegmentation->SetFeatureImage( reader->GetOutput() );
thresholder->SetInput( thresholdSegmentation->GetOutput() );
writer->SetInput( thresholder->GetOutput() );
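For reference, a sketch of the fast marching initialization mentioned above, along the lines of the other level-set examples in ITK; the seed position and initial distance are illustrative values.

#include "itkFastMarchingImageFilter.h"

// Sketch of the fast marching seed setup assumed by the pipeline above;
// the seed index and initial distance are illustrative.
typedef itk::FastMarchingImageFilter<InternalImageType, InternalImageType>
  FastMarchingFilterType;
typedef FastMarchingFilterType::NodeContainer  NodeContainer;
typedef FastMarchingFilterType::NodeType       NodeType;

FastMarchingFilterType::Pointer fastMarching = FastMarchingFilterType::New();

InternalImageType::IndexType seedPosition;
seedPosition[0] = 81;
seedPosition[1] = 114;

const double initialDistance = 5.0;
NodeType node;
node.SetValue( -initialDistance );   // negative value places the zero set at that distance
node.SetIndex( seedPosition );

NodeContainer::Pointer seeds = NodeContainer::New();
seeds->Initialize();
seeds->InsertElement( 0, node );

fastMarching->SetTrialPoints( seeds );
fastMarching->SetSpeedConstant( 1.0 );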

Invoking the Update() method on the writer triggers the execution of the pipeline. As usual, the call is placed in a try/catch block should any errors occur or exceptions be thrown.

try
  {
  reader->Update();
  fastMarching->SetOutputSize(
    reader->GetOutput()->GetBufferedRegion().GetSize() );
  writer->Update();
  }
catch( itk::ExceptionObject & excep )
  {
  std::cerr << "Exception caught !" << std::endl;
  std::cerr << excep << std::endl;
  }


Figure 2.10: Images generated by the segmentation process based on the ThresholdSegmentationLevelSet filter. From left to right: segmentation of the left ventricle, segmentation of the right ventricle, segmentation of the white matter, attempted segmentation of the gray matter. The parameters used in these segmentations are presented in Table 2.2.

Structure      Seed Index   Lower   Upper   Output Image
White matter   (60, 116)    150     180     Second from left
Ventricle      (81, 112)    210     250     Third from left
Gray matter    (107, 69)    180     210     Fourth from left

Table 2.2: Segmentation results of the ThresholdSegmentationLevelSetImageFilter for various seed points. The resulting images are shown in Figure 2.10.

Let's run this application with the same data and parameters as the example given for itk::ConnectedThreshold in section 2.2. We will use a value of 5 as the initial distance of the surface from the seed points. The algorithm is relatively insensitive to this initialization. Compare the results in Figure 2.10 with those in Figure 2.1. Notice how the smoothness constraint on the surface prevents leakage of the segmentation into both ventricles, but also localizes the segmentation to a smaller portion of the gray matter.

2.4.2 Geodesic Active Contours Segmentation

The source code for this section can be found in the file Examples/Segmentation/GeodesicActiveContourImageFilter.cxx.

The use of the itk::GeodesicActiveContourLevelSetImageFilter is illustrated in the following example. The implementation of this filter in ITK is based on the paper by Caselles [?]. This implementation extends the functionality of the itk::ShapeDetectionLevelSetImageFilter by the addition of a third, advection term which attracts the level set to the object boundaries.

itk::GeodesicActiveContourLevelSetImageFilter expects two inputs. The first is an initial level set in the form of an itk::Image. The second input is a feature image. For this algorithm, the feature image is an edge potential image that basically follows the same rules used for the itk::ShapeDetectionLevelSetImageFilter discussed in section ??. The configuration of this example is quite similar to the example on the use of the itk::ShapeDetectionLevelSetImageFilter. We omit most of the redundant description. A look at the code will reveal the great degree of similarity between both examples.

Figure 2.11 shows the major components involved in the application of the itk::GeodesicActiveContourLevelSetImageFilter to a segmentation task. This pipeline is quite similar to the one used by the itk::ShapeDetectionLevelSetImageFilter in section ??.

The pipeline involves a first stage of smoothing using the itk::CurvatureAnisotropicDiffusionImageFilter. The smoothed image is passed as the input to the itk::GradientMagnitudeRecursiveGaussianImageFilter and


Figure 2.11: Collaboration diagram for the GeodesicActiveContourLevelSetImageFilter applied to a segmentation task.


then to the itk::SigmoidImageFilter in order to produce the edge potential image. A set of user-provided seeds is passed to an itk::FastMarchingImageFilter in order to compute the distance map. A constant value is subtracted from this map in order to obtain a level set in which the zero set represents the initial contour. This level set is also passed as input to the itk::GeodesicActiveContourLevelSetImageFilter.

Finally, the level set generated by the itk::GeodesicActiveContourLevelSetImageFilter is passed to an itk::BinaryThresholdImageFilter in order to produce a binary mask representing the segmented object.
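For reference, a sketch of the preprocessing stages just described, roughly as they are configured in the ITK examples; the smoothing settings are assumptions, and the sigma, alpha, and beta values are taken from the ventricle rows of Table 2.3.

#include "itkCurvatureAnisotropicDiffusionImageFilter.h"
#include "itkGradientMagnitudeRecursiveGaussianImageFilter.h"
#include "itkSigmoidImageFilter.h"

// Sketch of the edge-potential preprocessing chain; numeric values are
// illustrative placeholders.
typedef itk::CurvatureAnisotropicDiffusionImageFilter<
  InternalImageType, InternalImageType >  SmoothingFilterType;
typedef itk::GradientMagnitudeRecursiveGaussianImageFilter<
  InternalImageType, InternalImageType >  GradientFilterType;
typedef itk::SigmoidImageFilter<
  InternalImageType, InternalImageType >  SigmoidFilterType;

SmoothingFilterType::Pointer smoothing = SmoothingFilterType::New();
smoothing->SetTimeStep( 0.125 );
smoothing->SetNumberOfIterations( 5 );
smoothing->SetConductanceParameter( 9.0 );   // assumed value

GradientFilterType::Pointer gradientMagnitude = GradientFilterType::New();
gradientMagnitude->SetSigma( 1.0 );          // sigma column of Table 2.3

SigmoidFilterType::Pointer sigmoid = SigmoidFilterType::New();
sigmoid->SetOutputMinimum( 0.0 );
sigmoid->SetOutputMaximum( 1.0 );
sigmoid->SetAlpha( -0.5 );                   // alpha column of Table 2.3
sigmoid->SetBeta( 3.0 );                     // beta column of Table 2.3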

Let’s start by including the headers of the main filters involved in the preprocessing.

#include "itkImage.h"#include "itkGeodesicActiveContourLevelSetImageFilte r.h"

We now declare the image type using a pixel type and a particular dimension. In this case thefloat type is used forthe pixels due to the requirements of the smoothing filter.

typedef float InternalPixelType;const unsigned int Dimension = 2;

typedef itk::Image< InternalPixelType, Dimension > Inter nalImageType;

In the following lines we instantiate the type of theitk::GeodesicActiveContourLevelSetImageFilter andcreate an object of this type using theNew() method.

typedef itk::GeodesicActiveContourLevelSetImageFilte r<InternalImageType,InternalImageType > GeodesicActiveContourFilterType;

GeodesicActiveContourFilterType::Pointer geodesicAct iveContour =GeodesicActiveContourFilterType::New();

For the itk::GeodesicActiveContourLevelSetImageFilter , scaling parameters are used to trade off between thepropagation (inflation), the curvature (smoothing) and theadvection terms. These parameters are set using methodsSetPropagationScaling() , SetCurvatureScaling() andSetAdvectionScaling() . In this example, we will setthe curvature and advection scales to one and let the propagation scale be a command-line argument.

geodesicActiveContour->SetPropagationScaling( propag ationScaling );geodesicActiveContour->SetCurvatureScaling( 1.0 );geodesicActiveContour->SetAdvectionScaling( 1.0 );

The filters are now connected in a pipeline indicated in Figure2.11using the following lines:

smoothing->SetInput( reader->GetOutput() );

gradientMagnitude->SetInput( smoothing->GetOutput() );

sigmoid->SetInput( gradientMagnitude->GetOutput() );

geodesicActiveContour->SetInput( fastMarching->GetOutput() );
geodesicActiveContour->SetFeatureImage( sigmoid->GetOutput() );

thresholder->SetInput( geodesicActiveContour->GetOutput() );

writer->SetInput( thresholder->GetOutput() );


Structure        Seed Index   Distance   σ     α      β     Propag.   Output Image
Left ventricle   (81, 114)    5.0        1.0   -0.5   3.0   2.0       First
Right ventricle  (99, 114)    5.0        1.0   -0.5   3.0   2.0       Second
White matter     (56, 92)     5.0        1.0   -0.3   2.0   10.0      Third
Gray matter      (40, 90)     5.0        0.5   -0.3   2.0   10.0      Fourth

Table 2.3: Parameters used for segmenting some brain structures shown in Figure 2.13 using the filter itk::GeodesicActiveContourLevelSetImageFilter.

The invocation of the Update() method on the writer triggers the execution of the pipeline. As usual, the call is placed in a try/catch block should any errors occur or exceptions be thrown.

try
  {
  writer->Update();
  }
catch( itk::ExceptionObject & excep )
  {
  std::cerr << "Exception caught !" << std::endl;
  std::cerr << excep << std::endl;
  }

Let's now run this example using as input the image BrainProtonDensitySlice.png provided in the directory Examples/Data. We can easily segment the major anatomical structures by providing seeds in the appropriate locations. Table 2.3 presents the parameters used for some structures.

Figure 2.12 presents the intermediate outputs of the pipeline illustrated in Figure 2.11. They are, from left to right: the output of the anisotropic diffusion filter, the gradient magnitude of the smoothed image and the sigmoid of the gradient magnitude, which is finally used as the edge potential for the itk::GeodesicActiveContourLevelSetImageFilter.

Segmentations of the main brain structures are presented in Figure 2.13. The results are quite similar to those obtained with the itk::ShapeDetectionLevelSetImageFilter in section ??.

Note that a relatively larger propagation scaling value was required to segment the white matter. This is due to two factors: the lower contrast at the border of the white matter and the complex shape of the structure. Unfortunately, the optimal values of these scaling parameters can only be determined by experimentation. In a real application we could imagine an interactive mechanism by which a user supervises the contour evolution and adjusts these parameters accordingly.


Figure 2.12: Images generated by the segmentation process based on the GeodesicActiveContourLevelSetImageFilter. From left to right and top to bottom: input image to be segmented, image smoothed with an edge-preserving smoothing filter, gradient magnitude of the smoothed image, sigmoid of the gradient magnitude. This last image, the sigmoid, is used to compute the speed term for the front propagation.


Figure 2.13: Images generated by the segmentation process based on the GeodesicActiveContourImageFilter. From left to right: segmentation of the left ventricle, segmentation of the right ventricle, segmentation of the white matter, attempted segmentation of the gray matter.


CHAPTER THREE: Registration

3.1 Introduction

Registration is the process of finding the spatial transform that maps points from one image to the corresponding points in another image. Medical image registration has many clinical and research applications [?, ?, ?, ?]. For example, repeated image acquisition of a subject is often used to obtain time series information that captures disease development, treatment progress and contrast bolus propagation. Although gross changes in the serial images can be detected by a visual comparison of the images at different time points, image registration enables the detection of subtle changes by eliminating the effect of patient placement and motion artifacts. Once the serial images have been aligned, subtraction can be used for visualization and quantification.

Registration can also be a valuable tool for correlating information obtained from different imaging modalities. For example, magnetic resonance (MR) images have good soft tissue discrimination for lesion identification, while CT images provide bone localization useful for surgical guidance. On the other hand, PET (positron emission tomography) and SPECT (single photon emission computed tomography) images provide functional information that can be used to locate abnormalities such as tumors. Examples of multi- (or intra-) modality applications include the study of brain tumors (PET/MR), radiation treatment planning (PET/planning X-ray CT), or a preprocessing step before performing multi-channel segmentation/classification.

This chapter introduces the functionality offered by the Insight toolkit for performing image-to-image registration. In the toolkit, registration is performed within a framework of pluggable components that can easily be interchanged. This flexibility means that a combinatorial variety of registration methods can be created, allowing the user to pick and choose the right tools for the application.

3.2 Registration Framework

The components of the registration framework and their interconnections are shown in Figure 3.1. The basic input data to the registration process are two images: one is defined as the Fixed image f(X) and the other is defined as the Moving image m(X). Registration is treated as an optimization problem with the goal of finding the spatial mapping that will bring the moving image into alignment with the fixed or target image.

The Transform component T(X) represents the spatial mapping of points from the fixed image space to points in the moving image space. Note that this definition is the inverse of the usual view of image transformation. Using the inverse transform is typical in registration as it avoids the potential problem of "holes" with the forward transform. The Interpolator is used to evaluate moving image intensities at non-grid positions. The Metric component S(f, m ∘ T) provides a measure of how well the fixed image is matched by the transformed moving image. This measure forms the quantitative criterion to be optimized by the Optimizer over the search space defined by the parameters of the Transform.

The various components available in the toolkit will be described briefly in later sections. First we begin with a simple registration example.


Figure 3.1: The basic components of the registration framework are two input images, a transform, a metric, an interpolator and an optimizer. (The fixed and moving image pixels feed the metric, the metric provides a fitness value to the optimizer, the optimizer updates the transform parameters, and the interpolator evaluates the moving image at transformed point positions.)

3.3 Hello World Registration

This simple example provides an overview of the typical elements involved in solving an image registration problem within the Insight registration framework.

A registration method requires the following components: two input images, a Transform, a Metric, an Interpolator and an Optimizer. Some of these components are parametrized by the image type for which the registration is intended. The following header files provide declarations for common types of these components.

#include "itkImageRegistrationMethod.h"
#include "itkTranslationTransform.h"
#include "itkMeanSquaresImageToImageMetric.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkRegularStepGradientDescentOptimizer.h"
#include "itkImage.h"

The types of each one of the components in the registration method should be instantiated. First we select the image dimension and the type for representing image pixels.

const unsigned int Dimension = 2;
typedef float PixelType;

The types of the input images are instantiated by the following lines.

typedef itk::Image< PixelType, Dimension > FixedImageType;
typedef itk::Image< PixelType, Dimension > MovingImageType;

The Transform that will map one image space into the other is defined below.

typedef itk::TranslationTransform< double, Dimension > TransformType;

An optimizer is required to explore the parameter space of the transform in search of optimal values of the metric.

typedef itk::RegularStepGradientDescentOptimizer OptimizerType;

The metric will compare how well the two images match each other. Metric types are usually parametrized by the image types, as can be seen in the following type declaration.

typedef itk::MeanSquaresImageToImageMetric< FixedImageType,
  MovingImageType > MetricType;


Finally, the type of the interpolator is declared. This interpolator will evaluate the moving image at non-grid positions.

typedef itk::LinearInterpolateImageFunction< MovingImageType,
  double > InterpolatorType;

The registration method type is instantiated using the types of the fixed and moving images. This class is responsible for interconnecting all the components we have described so far.

typedef itk::ImageRegistrationMethod< FixedImageType,
  MovingImageType > RegistrationType;

Each one of the registration components is created using its New() method and is assigned to its respective itk::SmartPointer.

MetricType::Pointer metric = MetricType::New();
TransformType::Pointer transform = TransformType::New();
OptimizerType::Pointer optimizer = OptimizerType::New();
InterpolatorType::Pointer interpolator = InterpolatorType::New();
RegistrationType::Pointer registration = RegistrationType::New();

The components are connected to the instance of the registration method.

registration->SetMetric( metric );
registration->SetOptimizer( optimizer );
registration->SetTransform( transform );
registration->SetInterpolator( interpolator );

In this example, the fixed and moving images are read from files. This requires the itk::ImageRegistrationMethod to connect its inputs to the outputs of the respective readers.

registration->SetFixedImage( fixedImageReader->GetOutput() );
registration->SetMovingImage( movingImageReader->GetOutput() );

The registration can be restricted to consider only a particular region of the fixed image as input to the metric computation. This region is defined by the SetFixedImageRegion() method. You could use this feature to reduce the computational time of the registration or to avoid unwanted objects present in the image affecting the registration outcome. In this example we use the full available content of the image. This region is identified by the BufferedRegion of the fixed image. Note that for this region to be valid the reader must first invoke its Update() method.

fixedImageReader->Update();

registration->SetFixedImageRegion(
  fixedImageReader->GetOutput()->GetBufferedRegion() );

The parameters of the transform are initialized by an array of floating point numbers. This initialization can be used to set up an initial known correction to the misalignment. In this particular case, a translation transform is being used for the registration. The array of parameters for this transform is simply composed of the values of the translation along each dimension. Setting the values of the parameters to zero initializes the transform as an identity transform. Note that the array constructor requires the number of elements as an argument.


typedef RegistrationType::ParametersType ParametersType;
ParametersType initialParameters( transform->GetNumberOfParameters() );

initialParameters[0] = 0.0;  // Initial offset in mm along X
initialParameters[1] = 0.0;  // Initial offset in mm along Y

registration->SetInitialTransformParameters( initialParameters );

At this point the registration method is ready to be executed. The optimizer is the component that drives the executionof the registration. However, theitk::ImageRegistrationMethod class orchestrates the ensemble in order to makesure that everything is in place before the control is passedto the optimizer.

It is usually desirable to fine tune the parameters of the optimizer. Each optimizer has particular parameters that mustbe interpreted in the context of the optimization strategy it implements. The optimizer used in this example is a variantof gradient descent that attempts to prevent it from taken steps which are too large. At each iteration this optimizerwill take a step along the direction of theitk::ImageToImageMetric derivative. The initial length of the step isdefined by the user. Each time that the direction of the derivative changes abruptly, the optimizer assumes that a localextrema has been passed and reacts by reducing the step length by a half. After several reductions of the step lengththe optimizer may be moving in a very restricted area of the transform parameters space. The user can define howsmall the step length should be to consider convergence has been reached. This is equivalent to defining the precisionwith which the final transform is to be known.

The initial step length is defined with the methodSetMaximumStepLength() , while the tolerance for convergence isdefined with the methodSetMinimumStepLength() .

optimizer->SetMaximumStepLength( 4.00 );optimizer->SetMinimumStepLength( 0.01 );

In case the optimizer never succeed in reaching the desired precision tolerance it is prudent to establish alimit on the number of iterations to be performed. This maximum number is defined with the methodSetNumberOfIterations() .

optimizer->SetNumberOfIterations( 200 );

The registration process is triggered by an invokation of theStartRegistration() method. If something goes wrongduring the initialization or execution of the registrationan exception will be thrown. We should henceforth place theStartRegistration() method in atry/catch block as illustrated in the following lines.

try
{
  registration->StartRegistration();
}
catch( itk::ExceptionObject & err )
{
  std::cout << "ExceptionObject caught !" << std::endl;
  std::cout << err << std::endl;
  return -1;
}

In a real application you may attempt to recover from the error inside the catch block. Here we simply print out a message and then terminate the execution of the program.

The result of the registration process is an array of parameters that defines the spatial transformation in a unique way. This final result is obtained using the GetLastTransformParameters() method.

ParametersType finalParameters = registration->GetLastTransformParameters();


Figure 3.2: Fixed and moving images provided as input to the registration method.

In the case of the itk::TranslationTransform, there is a straightforward interpretation of the parameters. Each element of the array corresponds to a translation along one of the dimensions of space.

const double TranslationAlongX = finalParameters[0];
const double TranslationAlongY = finalParameters[1];

The optimizer can be queried for the actual number of iterations performed to reach convergence. The GetCurrentIteration() method returns this value. A large number of iterations may be an indication that the maximum step length has been set too small, which is undesirable since it results in long computational times.

const unsigned int numberOfIterations = optimizer->GetCurrentIteration();

The value of the image metric corresponding to the last set of parameters can be obtained with the GetValue() method of the optimizer.

const double bestValue = optimizer->GetValue();

Let's execute this example over some of the images provided in Insight/Examples/Data, for example:

• BrainProtonDensitySliceBorder20.png
• BrainProtonDensitySliceShifted13x17y.png

The second image is the result of intentionally translating the first image by (13,17) millimeters. Both images have unit spacing and are shown in Figure 3.2. The registration takes 18 iterations and produces the following parameters:

Translation X = 12.9903
Translation Y = 17.0001

As expected, these values closely match the misalignment intentionally introduced in the moving image.

It is common, as the last step of a registration task, to use the resulting transform to map the moving image into the fixed image space. This is achieved with an itk::ResampleImageFilter. The fixed image and the transformed moving


Figure 3.3: Pipeline structure of the registration example, showing the Fixed Image and Moving Image readers feeding the Registration Method (with its Metric, Transform, Interpolator, Optimizer and Parameters components), and the Resample filter, SquaredDifferences filters and Writers used to map and compare the images.

Figure 3.4: Mapped moving image and its difference with the fixed image before and after registration.


Figure 3.5: Sequence of translations and metric values at each iteration of the optimizer. The left plot traces the Y translation (mm) against the X translation (mm); the right plot shows the Mean Squares metric value, and its logarithm, against the iteration number.

image can easily be compared using the SquaredDifferenceImageFilter, which computes the squared value of the difference between homologous pixels of its input images.
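The following fragment is a minimal sketch of this resampling step. It assumes the image types, readers and the finalParameters array defined in the example above; all variable names are illustrative.

#include "itkResampleImageFilter.h"

typedef itk::ResampleImageFilter< MovingImageType, FixedImageType > ResampleFilterType;
ResampleFilterType::Pointer resampler = ResampleFilterType::New();

// Transform initialized with the parameters found by the registration.
TransformType::Pointer finalTransform = TransformType::New();
finalTransform->SetParameters( finalParameters );

resampler->SetTransform( finalTransform );
resampler->SetInput( movingImageReader->GetOutput() );

// The output grid is taken from the fixed image so that the mapped moving
// image can be compared with it pixel by pixel.
FixedImageType::Pointer fixedImage = fixedImageReader->GetOutput();
resampler->SetSize( fixedImage->GetLargestPossibleRegion().GetSize() );
resampler->SetOutputOrigin( fixedImage->GetOrigin() );
resampler->SetOutputSpacing( fixedImage->GetSpacing() );
resampler->SetDefaultPixelValue( 0 );  // value assigned to pixels mapped outside the moving image

resampler->Update();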

The complete pipeline structure of the current example is presented in Figure 3.3, where the components of the registration method are exposed as well. The left image of Figure 3.4 shows the result of resampling the moving image in order to map it onto the fixed image space. The center image shows the squared differences between the fixed image and the moving image. The right image shows the squared differences between the fixed image and the transformed moving image. Both difference images are displayed negated in order to accentuate the pixels where differences exist.

It is always useful to keep in mind that registration is essentially an optimization problem. Figure 3.5 helps to reinforce this notion by showing the trace of translations and the values of the image metric at each iteration of the optimizer. The left plot shows that the step length is progressively reduced as the optimizer gets closer to the metric extremum. The right plot clearly shows how the metric value decreases as the optimization advances. The log plot helps to highlight the normal oscillations of the optimizer around the extremum value.

3.4 Monitoring Registration

Given the numerous parameters involved in tuning a registration method for a particular application, it is common to be faced with a registration process that runs for several minutes and ends up with a useless result. In order to avoid this situation it is quite helpful to track the evolution of the registration as it progresses. The following section illustrates the mechanisms provided in ITK for monitoring the activity of the itk::ImageRegistrationMethod class.

Insight implements the Observer/Command design pattern [?]. The classes involved in this implementation are the itk::Object, itk::Command and itk::EventObject classes. The itk::Object class is the base class of most ITK objects. This class holds a linked list of pointers to event observers. The role of observers is played by the itk::Command class. Observers register themselves with an itk::Object, declaring that they are interested in receiving notice when a particular event happens. The set of events is represented by the hierarchy of the itk::EventObject class. Typical events are Start, End, Progress and Iteration.

Registration is controlled by an itk::Optimizer, which in general executes an iterative process. Most itk::Optimizer classes invoke an itk::IterationEvent at the end of each iteration. When an event is invoked by an object, this object goes through its list of registered observers (itk::Commands) and checks whether any one of them has declared an interest in the current event type. Whenever such an observer is found, its corresponding Execute() method is invoked. In this context, Execute() methods should be considered as callbacks. As such, some of the common-sense rules of callbacks should be respected. For example, Execute() methods should not perform heavy computational tasks. They are supposed to execute short, fast pieces of code, such as printing out a message or updating a value in a GUI.


Figure 3.6: Interaction between the Command/Observer and the Registration Method. The command is attached to the optimizer with AddObserver(); the optimizer calls Invoke( IterationEvent ) at each iteration, which triggers the Execute() method of the itk::Command.

A newly created command is registered as an observer on the optimizer. The method AddObserver() is used to that end. Figure 3.6 illustrates the interaction between the Command/Observer class and the registration method.
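As an illustration, the following is a minimal sketch of such a command. The class name CommandIterationUpdate and the choice of itk::RegularStepGradientDescentOptimizer are assumptions consistent with the example of the previous section; the command simply prints the iteration number, the metric value and the current transform parameters whenever the optimizer invokes an IterationEvent.

#include "itkCommand.h"
#include "itkRegularStepGradientDescentOptimizer.h"
#include <iostream>

class CommandIterationUpdate : public itk::Command
{
public:
  typedef CommandIterationUpdate    Self;
  typedef itk::Command              Superclass;
  typedef itk::SmartPointer< Self > Pointer;
  itkNewMacro( Self );

  typedef const itk::RegularStepGradientDescentOptimizer * OptimizerPointer;

  // The non-const version simply delegates to the const version.
  void Execute( itk::Object * caller, const itk::EventObject & event )
    {
    Execute( (const itk::Object *) caller, event );
    }

  void Execute( const itk::Object * object, const itk::EventObject & event )
    {
    if( ! itk::IterationEvent().CheckEvent( &event ) )
      {
      return;  // we only react to iteration events
      }
    OptimizerPointer optimizer = dynamic_cast< OptimizerPointer >( object );
    std::cout << optimizer->GetCurrentIteration() << " : "
              << optimizer->GetValue() << " : "
              << optimizer->GetCurrentPosition() << std::endl;
    }

protected:
  CommandIterationUpdate() {}
};

The command is then attached to the optimizer as follows:

CommandIterationUpdate::Pointer observer = CommandIterationUpdate::New();
optimizer->AddObserver( itk::IterationEvent(), observer );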

Let’s now review the main characteristics of the registration framework components.

3.5 Transforms

In the toolkit, itk::Transform objects encapsulate the mapping of points and vectors from an input space to an output space. If a transform is invertible, back transform methods are also provided. Currently, ITK provides a variety of transforms from simple translation, rotation and scaling to general affine and kernel transforms. Note that, although in this section we discuss transforms in the context of registration, transforms are general and can be used for other applications.

3.5.1 Transform General Properties

Typically each transform class has several methods for setting its parameters. For example, itk::Euler2DTransform provides methods for separately setting the offset, the angle, and the entire rotation matrix. However, for use in the registration framework, the parameters must also be represented by a flat itk::Array of doubles to allow communication with generic optimizers. In the case of itk::Euler2DTransform, the transform is also defined by three doubles: the first representing the angle and the last two the offset. The flat array of parameters is set using SetParameters(). A description of the parameters and their ordering is documented in the following sections.
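As a brief illustration (the numerical values below are arbitrary assumptions), the flat parameter array of an itk::Euler2DTransform could be filled as follows:

#include "itkEuler2DTransform.h"

typedef itk::Euler2DTransform< double > TransformType;
TransformType::Pointer transform = TransformType::New();

TransformType::ParametersType parameters( transform->GetNumberOfParameters() );
parameters[0] = 0.1745;   // rotation angle in radians (about 10 degrees)
parameters[1] = 12.0;     // translation along X, in millimeters
parameters[2] = 17.0;     // translation along Y, in millimeters

transform->SetParameters( parameters );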

In the context of registration, the transform parameters define the search space for optimizers. That is, the goal of the optimization is to find the set of parameters defining a transform that results in the best possible value of an image metric. The more parameters a transform has, the longer its computational time will be when used in a registration method, since the dimension of the search space will be equal to the number of transform parameters.

Another requirement that the registration framework imposes on the transform classes is the computation of their Jacobians. In general, metrics require knowledge of the Jacobian in order to compute the metric derivatives. The Jacobian is a matrix whose elements are the partial derivatives of the output point with respect to the array of parameters that defines the transform¹:

\[
J = \begin{bmatrix}
\frac{\partial x_1}{\partial p_1} & \frac{\partial x_1}{\partial p_2} & \cdots & \frac{\partial x_1}{\partial p_m} \\
\frac{\partial x_2}{\partial p_1} & \frac{\partial x_2}{\partial p_2} & \cdots & \frac{\partial x_2}{\partial p_m} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial x_n}{\partial p_1} & \frac{\partial x_n}{\partial p_2} & \cdots & \frac{\partial x_n}{\partial p_m}
\end{bmatrix}
\tag{3.1}
\]

¹ Note that the term Jacobian is also commonly used for the matrix representing the derivatives of the output point coordinates with respect to the input point coordinates. Sometimes the term is loosely used to refer to the determinant of such a matrix.


where p_i are the transform parameters and x_i are the coordinates of the output point. Within this framework, the Jacobian is represented by an itk::Array2D of doubles and is obtained from the transform with the method GetJacobian(). The Jacobian can be interpreted as a matrix that indicates, for a point in the input space, how much its mapping onto the output space will change in response to a small variation in one of the transform parameters. Note that the values of the Jacobian matrix depend on the point in the input space, so the Jacobian is more precisely written as J(X). The use of transform Jacobians allows metric derivatives to be computed very efficiently. When Jacobians are not available, metric derivatives have to be computed using finite differences, at a price of 2M evaluations of the metric value, where M is the number of transform parameters.
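A sketch of how the Jacobian can be queried from a transform is shown below; the transform type and the point coordinates are illustrative assumptions.

#include "itkTranslationTransform.h"
#include <iostream>

typedef itk::TranslationTransform< double, 2 > TransformType;
TransformType::Pointer transform = TransformType::New();

TransformType::InputPointType point;
point[0] = 10.0;
point[1] = 20.0;

// One row per output dimension, one column per transform parameter.
const TransformType::JacobianType & jacobian = transform->GetJacobian( point );
std::cout << jacobian << std::endl;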

The following sections describe the main characteristics of the transform classes available in ITK.

3.5.2 Identity Transform

Behavior: Maps every point to itself, every vector to itself and every covariant vector to itself.
Number of parameters: 0
Parameter ordering: (none)
Restrictions: Only defined when the input and output spaces have the same number of dimensions.

3.5.3 Translation Transform

Behavior: Represents a simple translation of points in the input space and has no effect on vectors or covariant vectors.
Number of parameters: Same as the input space dimension.
Parameter ordering: The i-th parameter represents the translation in the i-th dimension.
Restrictions: Only defined when the input and output spaces have the same number of dimensions.

3.5.4 Scale Transform

Behavior: Points are transformed by multiplying each one of their coordinates by the corresponding scale factor for the dimension. Vectors are transformed as points. Covariant vectors are transformed by dividing their components by the scale factor of the corresponding dimension.
Number of parameters: Same as the input space dimension.
Parameter ordering: The i-th parameter represents the scaling in the i-th dimension.
Restrictions: Only defined when the input and output spaces have the same number of dimensions.


3.5.5 Euler2DTransform

Behavior: Represents a 2D rotation and a 2D translation. Note that the translation component has no effect on the transformation of vectors and covariant vectors.
Number of parameters: 3
Parameter ordering: The first parameter is the angle in radians and the last two parameters are the translation in each dimension.
Restrictions: Only defined for two-dimensional input and output spaces.

Euler2DTransform implements a rigid transformation in 2D. It is composed of a plane rotation and a two-dimensional translation. The rotation is applied first, followed by the translation.

The most common difficulty with this transform is the difference in units used for rotations and translations. Rotations are measured in radians, hence their values are in the range [-π, π]. Translations are measured in millimeters and their actual values vary depending on the image modality being considered. In practice, translations have values on the order of 10 to 100. This scale difference between the rotation and translation parameters is undesirable for gradient-descent optimizers because it distorts the trajectories of descent, making optimization slower and more unstable. In order to compensate for these differences, ITK optimizers accept an array of scale values that are used to normalize the parameter space.

Registrations involving angles and translations should take advantage of the scale normalization functionality in order to get the best performance out of the optimizers.
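A minimal sketch of this normalization for the three parameters of an Euler2DTransform is shown below; the weight values are illustrative assumptions whose appropriate magnitude depends on the image size and units.

#include "itkRegularStepGradientDescentOptimizer.h"

typedef itk::RegularStepGradientDescentOptimizer OptimizerType;
OptimizerType::Pointer optimizer = OptimizerType::New();

OptimizerType::ScalesType optimizerScales( 3 );  // 3 parameters: angle, Tx, Ty

const double translationScale = 1.0 / 1000.0;  // compensates for radians versus millimeters
optimizerScales[0] = 1.0;               // rotation angle
optimizerScales[1] = translationScale;  // translation along X
optimizerScales[2] = translationScale;  // translation along Y

optimizer->SetScales( optimizerScales );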

3.5.6 CenteredRigid2DTransform

Behavior: Represents a 2D rotation around an arbitrary center followed by a 2D translation.
Number of parameters: 5
Parameter ordering: The first parameter is the angle in radians. The second and third are the coordinates of the center of rotation and the last two parameters are the translation in each dimension.
Restrictions: Only defined for two-dimensional input and output spaces.

CenteredRigid2DTransform implements a rigid transformation in 2D. The main difference between this transform and the itk::Euler2DTransform is that here we can specify an arbitrary center of rotation, while the itk::Euler2DTransform always uses the origin of the coordinate system as the rotation center. This distinction is quite important in image registration since ITK images usually have their origin at the corner of the image rather than in the middle. Rotational mis-registrations, however, usually appear as rotations around the center of the image, or around a point in the middle of the anatomical structure captured by the image. Using gradient-descent optimizers, it is almost impossible to solve non-origin rotations using origin rotations, since the deep basin of the real solution lies across a high ridge in the topography of the cost function.


3.5.7 Similarity2DTransform

Behavior: Represents a 2D rotation, homogeneous scaling and a 2D translation. Note that the translation component has no effect on the transformation of vectors and covariant vectors.
Number of parameters: 4
Parameter ordering: The first parameter is the angle in radians, the second is the scaling factor for all dimensions and the last two parameters are the translation in each dimension.
Restrictions: Only defined for two-dimensional input and output spaces.

3.5.8 QuaternionRigidTransform

Behavior: Represents a 3D rotation and a 3D translation. The rotation is specified as a quaternion, defined by a set of four numbers q. The relationship between the quaternion and a rotation about the vector n by the angle θ is q = ( n·sin(θ/2), cos(θ/2) ). Note that if the quaternion is not of unit length, scaling will also result.
Number of parameters: 7
Parameter ordering: The first four parameters define the quaternion and the last three parameters the translation in each dimension.
Restrictions: Only defined for three-dimensional input and output spaces.

This class implements a rigid transformation in 3D space. The rotational part of the transform is represented using a quaternion while the translation is represented with a vector. Quaternion components do not form a vector space, and hence issues arise when they are used with gradient-descent optimizers.

As a solution, the itk::QuaternionRigidTransformGradientDescentOptimizer was introduced in the toolkit. This specialized optimizer implements a variation of the gradient-descent algorithm adapted for a quaternion space. This class makes sure that, after advancing in any direction of the parameter space, the resulting new set of transform parameters is mapped back to the permissible set of parameters. In practice this comes down to normalizing the newly computed quaternion to make sure that the transformation remains rigid and no scaling is applied.

3.5.9 VersorTransform

Behavior: Represents a 3D rotation. The rotation is specified by a versor or unit quaternion.
Number of parameters: 3
Parameter ordering: The three parameters define the versor.
Restrictions: Only defined for three-dimensional input and output spaces.

A Versor is by definition the rotational part of a quaternion. It can also be defined as a unit-quaternion [?, ?]. Versors only have three independent components since they are restricted to reside in the space of unit-quaternions. The implementation of Versors in the toolkit uses a set of three numbers. These three numbers correspond to the first three


components of a quaternion. The fourth component of the quaternion is computed internally such that the quaternion is of unit length.

The space formed by versor parameters is not a vector space. Standard gradient-descent algorithms are not appropriate for exploring this parameter space. An optimizer specialized for the versor space is available in the toolkit under the name of itk::VersorTransformOptimizer. This optimizer implements versor derivatives as originally defined by Hamilton [?].

3.5.10 VersorRigid3DTransform

Behavior: Represents a 3D rotation and a 3D translation. The rotation is specified by a versor or unit quaternion, while the translation is represented by a vector.
Number of parameters: 6
Parameter ordering: The first three parameters define the versor and the last three parameters the translation in each dimension.
Restrictions: Only defined for three-dimensional input and output spaces.

This transform is a close variant of the itk::QuaternionRigidTransform. It can be seen as an itk::VersorTransform plus a translation defined by a vector. In general terms, it implements a rigid transform in 3D. The advantage of this class with respect to the itk::QuaternionRigidTransform is that it exposes only 6 parameters: three for the versor components and three for the translational components. This reduces the search space of the optimizer to dimension 6, instead of the dimension 7 used by the itk::QuaternionRigidTransform.

3.5.11 AffineTransform

Behavior: Represents an affine transform composed of rotation, scaling, shearing and translation. The transform is specified by an N×N matrix and an N×1 vector, where N is the space dimension.
Number of parameters: (N+1)×N
Parameter ordering: The first N×N parameters define the matrix in column-major order (where the column index varies the fastest). The last N parameters define the translation for each dimension.
Restrictions: Only defined when the input and output spaces have the same dimension.

The coefficients of the N×N matrix can represent rotations, anisotropic scaling and shearing. These coefficients usually have a very different dynamic range from the translation coefficients. Coefficients in the matrix tend to be in the range [-1:1], but are not restricted to this interval. Translation coefficients, on the other hand, can be on the order of 10 to 100 and are basically related to the image size and pixel spacing.

This difference in scale again makes it necessary to take advantage of the functionality offered by the optimizers for rescaling the parameter space. This is particularly relevant for optimizers based on gradient-descent approaches.

3.6 Interpolators

During the registration process, a metric typically compares the intensity values in the fixed image with the corresponding values in the transformed moving image. When a point is mapped from one image space to another, it will generally be mapped to a non-grid position. Thus, an interpolation method is needed to obtain an intensity value for the mapped point using the information from the neighboring grid positions. Within the registration software framework,


the interpolator component has two functions: to compute the interpolated intensity value at a requested position and to detect whether or not a requested position lies within the moving image domain.

In the context of registration, the interpolation method affects the smoothness of the metric space. However, interpolations are evaluated thousands of times in a single optimization cycle, so the user has to trade off the efficiency of computation against the ease of optimization when selecting the interpolation scheme. Three of the most popular interpolation methods are available in ITK: nearest-neighbor, linear and B-spline.
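For example, a linear interpolator can be instantiated and connected to the registration method as follows; this is only a sketch, where the image type and the registration object are assumed to be those of the earlier example.

#include "itkLinearInterpolateImageFunction.h"

typedef itk::LinearInterpolateImageFunction< MovingImageType, double > InterpolatorType;
InterpolatorType::Pointer interpolator = InterpolatorType::New();

registration->SetInterpolator( interpolator );

// itk::NearestNeighborInterpolateImageFunction and itk::BSplineInterpolateImageFunction
// can be substituted here to trade computation time against smoothness of the metric.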

3.7 Metric

In ITK, itk::ImageToImageMetric objects measure quantitatively how well the transformed moving image fits the fixed image by comparing the gray-scale intensity of the images. These metrics are very flexible and can work with any transform or interpolation method, and they do not require the reduction of the gray-scale images to sparse extracted information such as edges.

The metric component is perhaps the most critical element of the registration framework. The selection of which metric to use is highly dependent on the registration problem to be solved. For example, some metrics have a large capture range while others require initialization close to the optimal position; some metrics are only suitable for comparing images obtained from the same imaging modality while others can handle inter-modality comparisons. Unfortunately, there are no clear-cut rules as to how to choose a metric.

The basic inputs to a metric are: the fixed and moving images, a transform and an interpolator. The method GetValue() can then be used to evaluate the quantitative criterion at the transform parameters specified in the argument. Typically, the metric samples points within a defined region of the fixed image. For each point, the corresponding moving image position is computed using the transform with the specified parameters, then the interpolator is used to compute the moving image intensity at the mapped position.

As well as the measure value, gradient-based optimization schemes also require derivatives of the measure with respect to each transform parameter. The methods GetDerivatives() and GetValueAndDerivatives() can be used to obtain the gradient information.
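The following sketch evaluates a metric directly, outside of the registration method. The image types and readers are assumed to be those of the earlier example; everything else is illustrative.

#include "itkMeanSquaresImageToImageMetric.h"
#include "itkTranslationTransform.h"
#include "itkLinearInterpolateImageFunction.h"

typedef itk::MeanSquaresImageToImageMetric< FixedImageType, MovingImageType > MetricType;
MetricType::Pointer metric = MetricType::New();

typedef itk::TranslationTransform< double, 2 > TransformType;
TransformType::Pointer transform = TransformType::New();

typedef itk::LinearInterpolateImageFunction< MovingImageType, double > InterpolatorType;
InterpolatorType::Pointer interpolator = InterpolatorType::New();

metric->SetFixedImage( fixedImageReader->GetOutput() );
metric->SetMovingImage( movingImageReader->GetOutput() );
metric->SetTransform( transform );
metric->SetInterpolator( interpolator );
metric->SetFixedImageRegion( fixedImageReader->GetOutput()->GetBufferedRegion() );
metric->Initialize();

// Evaluate the criterion for a null translation.
MetricType::TransformParametersType parameters( transform->GetNumberOfParameters() );
parameters.Fill( 0.0 );
const double value = metric->GetValue( parameters );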

In the following sections, we present an overview of the metrics available in ITK. For ease of notation, we will refer to the fixed image f(X) and the transformed moving image m(T(X)) as images A and B.

3.7.1 Mean Squares Metric

itk::MeanSquaresImageToImageMetric computes the mean squared pixel-wise difference in intensity between image A and B over a user-defined region:

\[ MS(A,B) = \frac{1}{N} \sum_{i=1}^{N} ( A_i - B_i )^2 \tag{3.2} \]

where A_i is the i-th pixel of image A, B_i is the i-th pixel of image B, and N is the number of pixels considered.

The optimal value of the metric is zero. Poor matches between images A and B result in large values of the metric. This metric is simple to compute and has a relatively large capture radius.

This metric relies on the assumption that intensities representing the same homologous point must be the same in both images. Hence, its use is restricted to images of the same modality. Additionally, any linear change in intensity results in a poor match value.


3.7.2 Normalized Correlation Metric

itk::NormalizedCorrelationImageToImageMetric computes the pixel-wise cross-correlation and normalizes it by the square root of the autocorrelation of the images:

\[ NC(A,B) = -1 \times \frac{\sum_{i=1}^{N} ( A_i \cdot B_i )}{\sqrt{ \sum_{i=1}^{N} A_i^2 \cdot \sum_{i=1}^{N} B_i^2 }} \tag{3.3} \]

where A_i is the i-th pixel of image A, B_i is the i-th pixel of image B, and N is the number of pixels considered.

Note the -1 factor in the metric computation. This factor is used to make the metric optimal when its minimum is reached. The optimal value of the metric is then minus one. Misalignment between the images results in small measure values. The use of this metric is limited to images obtained using the same imaging modality. The metric is insensitive to multiplicative factors between the two images. This metric produces a cost function with sharp peaks and well-defined minima. On the other hand, it has a relatively small capture radius.

3.7.3 Mean Reciprocal Square Differences

itk::MeanReciprocalSquareDifferenceImageToImageMetric computes pixel-wise differences and adds them after passing them through a bell-shaped function 1/(1+x²):

\[ PI(A,B) = \sum_{i=1}^{N} \frac{1}{1 + \frac{( A_i - B_i )^2}{\lambda^2}} \tag{3.4} \]

where A_i is the i-th pixel of image A, B_i is the i-th pixel of image B, N is the number of pixels considered, and λ controls the capture radius.

The optimal value is N, and poor matches result in small measure values. The characteristics of this metric have been studied by Penney and Holden [?][?].

This image metric has the advantage of producing poor values when few pixels are considered, which makes it consistent when its computation is subject to the size of the overlap region between the images. The capture radius of the metric can be regulated with the parameter λ. The profile of this metric is very peaked. The sharp peaks of the metric help to measure spatial misalignment with high precision. Note that the notion of capture radius is used here in terms of the intensity domain, not the spatial domain. In that regard, λ should be given in intensity units and be associated with the difference in intensity that will make the metric drop by 50%.

The metric is limited to images of the same imaging modality. The fact that its derivative is large at the central peak is a problem for some optimizers that rely on the derivative decreasing as the extremum is approached. This metric is also sensitive to linear changes in intensity.

3.7.4 Mutual Information Metric

This metric computes the mutual information between image A and image B. Mutual information (MI) measures how much information one random variable (image intensity in one image) tells about another random variable (image intensity in the other image). The major advantage of using MI is that the actual form of the dependency does not have to be specified. Therefore, complex mappings between two images can be modeled. This flexibility makes MI well suited as a criterion for multi-modality registration.


Figure 3.7: In Parzen windowing, a continuous density function is constructed by superimposing kernel functions (Gaussian functions of width sigma in this case) centered on the intensity samples (gray levels) obtained from the image.

Mutual information is defined in terms of entropy. Let

\[ H(A) = - \int p_A(a) \, \log p_A(a) \, da \tag{3.5} \]

be the entropy of random variable A, H(B) the entropy of random variable B, and

\[ H(A,B) = - \int p_{AB}(a,b) \, \log p_{AB}(a,b) \, da \, db \tag{3.6} \]

be the joint entropy of A and B. If A and B are independent, then

\[ p_{AB}(a,b) = p_A(a) \, p_B(b) \tag{3.7} \]

and

\[ H(A,B) = H(A) + H(B). \tag{3.8} \]

However, if there is any dependency, then

\[ H(A,B) < H(A) + H(B). \tag{3.9} \]

The difference is called Mutual Information, I(A,B):

\[ I(A,B) = H(A) + H(B) - H(A,B) \tag{3.10} \]

Parzen Windowing

In a typical registration problem, direct access to the marginal and joint probability densities is not available and hence the densities must be estimated from the image data. Parzen windows (also known as kernel density estimators) can be used for this purpose. In this scheme, the densities are constructed by taking intensity samples S from the image and superimposing kernel functions K(·) centered on the elements of S, as illustrated in Figure 3.7:

\[ p(a) \approx P^{*}(a) = \frac{1}{N} \sum_{s_j \in S} K( a - s_j ) \tag{3.11} \]

A variety of functions can be used as the smoothing kernel, with the requirement that it be smooth, symmetric, have zero mean and integrate to one. For example, boxcar, Gaussian and B-spline functions are suitable candidates. A smoothing parameter is used to scale the kernel function. The larger the smoothing parameter, the wider the kernel function used and hence the smoother the density estimate. If the parameter is too large, features such as modes in the density will get smoothed out. On the other hand, if the smoothing parameter is too small, the resulting density may be too noisy. Choosing the optimal smoothing parameter is a difficult research problem and beyond the scope of this software guide. Typically, the optimal value of the smoothing parameter will depend on the data and the number of samples used.


Viola and Wells Implementation

ITK has two mutual information metric implementations. The first follows the method specified by Viola and Wells in [?].

In this implementation, two separate intensity samples S and R are drawn from the image: the first to compute the density and the second to approximate the entropy as a sample mean:

\[ H(A) \approx - \frac{1}{N} \sum_{r_j \in R} \log P^{*}(r_j) \tag{3.12} \]

A Gaussian density is used as the smoothing kernel, where the standard deviation σ acts as the smoothing parameter.

The number of spatial samples used for the computation is defined using the method SetNumberOfSpatialSamples(). Typical values range from 50 to 100. Note that the computation involves an N × N loop and hence the computational burden becomes very expensive when a large number of samples is used.

The quality of the density estimates depends on the choice of the Gaussian kernel standard deviation. The optimal choice will depend on the content of the images. In our experience with the toolkit, we have found that a standard deviation of 0.4 works well for images that have been normalized to have a mean of zero and a standard deviation of 1.0. The standard deviations of the fixed image and moving image kernels can be set separately using the methods SetFixedImageStandardDeviation() and SetMovingImageStandardDeviation().
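A configuration sketch for this metric, using the parameter values suggested above, could look as follows; the image types are assumptions, and the images themselves are expected to have been normalized beforehand (e.g. with itk::NormalizeImageFilter).

#include "itkMutualInformationImageToImageMetric.h"

typedef itk::MutualInformationImageToImageMetric< FixedImageType, MovingImageType > MetricType;
MetricType::Pointer metric = MetricType::New();

metric->SetNumberOfSpatialSamples( 100 );        // typical range: 50 to 100
metric->SetFixedImageStandardDeviation( 0.4 );   // for images normalized to zero mean, unit variance
metric->SetMovingImageStandardDeviation( 0.4 );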

Mattes et al Implementation

The second form of mutual information metric available in the toolkit follows the method specified by Mattes et al. in [?].

In this implementation, only one set of intensity samples is drawn from the image. Using this set, the marginal and joint probability density functions (PDF) are evaluated at discrete positions, or bins, uniformly spread within the dynamic range of the images. Entropy values are then computed by summing over the bins.

The number of spatial samples used is set using the method SetNumberOfSpatialSamples(). The number of bins used to compute the entropy values is set via SetNumberOfHistogramBins().

Since the fixed image PDF does not contribute to the metric derivatives, it does not need to be smooth. Hence, a zero-order (boxcar) B-spline kernel is used for computing the PDF. On the other hand, to ensure smoothness, a third-order B-spline kernel is used to compute the moving image intensity PDF. The advantage of using a B-spline kernel over a Gaussian kernel is that the B-spline kernel has a finite support region. This is computationally attractive, as each intensity sample only affects a small number of bins and hence does not require an N × N loop to compute the metric value.

During the PDF calculations, the image intensity values are linearly scaled to have a minimum of zero and a maximum of one. This rescaling means that a fixed B-spline kernel bandwidth of one can be used to handle image data with arbitrary magnitude and dynamic range.
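A corresponding configuration sketch for the Mattes et al. metric is shown below; the number of spatial samples is an illustrative assumption, while the number of bins follows the typical value used later in this chapter.

#include "itkMattesMutualInformationImageToImageMetric.h"

typedef itk::MattesMutualInformationImageToImageMetric< FixedImageType, MovingImageType > MetricType;
MetricType::Pointer metric = MetricType::New();

metric->SetNumberOfHistogramBins( 50 );
metric->SetNumberOfSpatialSamples( 10000 );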

3.8 Optimizer

Within the registration framework, the role of the optimizer component is to optimize the quantitative measure provided by the metric with respect to the parameters of the transform component. Starting from an initial set of parameters, the optimization procedure iteratively searches for the optimal solution by evaluating the metric at different positions of the transform parameter search space. Optimization algorithms can be broadly divided into those which use derivative information and those which do not. In the registration framework, the metric derivative is also provided by the metric component.

Since transform parameters can have vastly different dynamic ranges, particular attention should be paid to the scale of the parameters when solving registration problems. For example, a 2D rigid transform is specified by a rotation (in


radians) and two translation parameters (in mm). A unit change in angle has a much greater impact on an image than a unit change in translation. This difference in scale appears as elongated valleys in the parameter search space, causing difficulties for some optimization algorithms. In some cases, it may be necessary to rescale the parameters to overcome this problem. A simple implementation of rescaling is to divide or multiply the metric gradient by weights chosen to balance the parameters.

Detailed discussion and analysis of optimization algorithms can be found in various monographs such as [?, ?].

Optimization algorithms are encapsulated as itk::Optimizer objects within ITK. Optimizers are generic and can be used for applications other than registration. The types of itk::SingleValuedNonLinearOptimizer currently available in ITK are:

• Amoeba: Nelder-Meade downhill simplex. This optimizer is actually implemented in the VxL/vnl numerics toolkit. The ITK class itk::AmoebaOptimizer is merely an adaptor class.
• Conjugate Gradient: Fletcher-Reeves form of conjugate gradient with or without preconditioning. Also an adaptor to an optimizer in vnl.
• Gradient Descent: Advances parameters in the direction of the gradient, where the step size is governed by a learning rate.
• Quaternion Rigid Transform Gradient Descent: A specialized version of GradientDescentOptimizer for QuaternionRigidTransform parameters, where the parameters representing the quaternion are normalized to a magnitude of one at each iteration to represent a pure rotation.
• LBFGS: Limited-memory Broyden, Fletcher, Goldfarb and Shannon minimization. It is an adaptor to an optimizer in vnl.
• One Plus One Evolutionary: Strategy that simulates the biological evolution of a set of samples in the search space. This optimizer is mainly used in the process of bias correction for MRI images.
• Regular Step Gradient Descent: Advances parameters in the direction of the gradient, where a bipartition scheme is used to compute the step size.
• Versor Transform Optimizer: A specialized version of RegularStepGradientDescentOptimizer for VersorTransform parameters, where the current rotation is composed with the gradient rotation to produce the new rotation vector. It follows the definition of versor gradients defined by Hamilton [?].

3.9 Medical Imaging Examples

3.9.1 Multi-modality Multi-resolution Example

In this example, the problem is to register the 3D CT image of the head in Figure 3.8 to the MR-T1 weighted image of the same subject in Figure 3.9². The results of this type of registration could be used to aid neurosurgery or radiotherapy planning.

The two images are of different size and resolution: the CT image is of size 512×512×44 with a pixel size of 0.41×0.41 mm, while the MR-T1 image is 256×256×52 with a pixel size of 0.78×0.78 mm. Both images have 3 mm slice thickness. There is also a small difference in the field of view, with the MR-T1 image extending further down past the nose than the CT image.

To solve this problem we need to select the underlying algorithm for each of the components in Figure 3.1. In this example, the MR-T1 image is the fixed image and the CT is the moving image. Since the CT and MR-T1 images have very different distributions, the use of simple similarity measures such as mean squares and normalized correlation

² The images were provided as part of the “Retrospective Image Registration Evaluation” project, National Institutes of Health, Project Number 8R01EB002124-03, Principal Investigator, J. Michael Fitzpatrick, Vanderbilt University, Nashville, TN.


Figure 3.8: 3D CT image of the head used in the multi-modality, multi-resolution registration example in section 3.9.1. The image is of size 512×512×44 with a pixel size of 0.41×0.41 mm and 3 mm slice thickness.

Figure 3.9: 3D MR-T1 image of the head used in the multi-modality, multi-resolution registration example in section 3.9.1. The image is of size 256×256×52 with a pixel size of 0.78×0.78 mm and 3 mm slice thickness.


Figure 3.10: Pyramid of downsampled images. The coarsest image (left) is generated by downsampling the original CT by factors of (8,8,1). The next two images are obtained using downsample factors of (4,4,1) and (2,2,1). The full resolution CT image is used at the finest level (right).

are not applicable. Mutual information, on the other hand, is well suited to this problem. In particular we will use the implementation in [?] as our metric component. Since the human skull prevents any non-rigid deformation, we will use a rigid transform. In particular we will use a quaternion to represent the 3D rotation. In this illustrative example, we will also use a tri-linear interpolator and simple steepest descent as the optimizer.

It is important to note that the transformation is rigid with respect to physical co-ordinates and not image index co-ordinates. The registration components should be implemented such that communication between the components is with respect to physical co-ordinates, allowing images of different resolution to be registered without resampling the images to a common isotropic resolution.

Performing image registration using a multi-resolution strategy helps to improve the computational speed, accuracy and robustness. In this example, we will perform registration in a coarse-to-fine manner where the transformation computed at one resolution level is used to initialize the registration at the next finer resolution level.

The first step is to generate a pyramid of downsampled images for each of the images to be registered. In this example, we will use four resolution levels. At the coarsest level, the CT image will be downsampled by a factor of 8 for each of the in-plane dimensions. Since the slice thickness is large, we will not downsample in the slice direction. At the coarsest level, the image is approximately isotropic and of size 64×64×44 pixels. The images at the subsequent two levels are formed by downsampling in-plane by factors of 4 and 2. The full resolution image is used at the last level. The coarsest level of the MR-T1 image pyramid is such that the image matches the resolution of the coarsest CT image, that is, downsampled by a factor of 4 in-plane. The next level is obtained by downsampling by a factor of 2. The full resolution MR-T1 image is used for the last two levels. Note that before downsampling, the images are first blurred using a Gaussian kernel with a variance of 0.5 times the shrink factor. The downsampled CT images are shown in Figure 3.10 and the downsampled MR-T1 images in Figure 3.11.
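One way such a pyramid could be built for the CT image is sketched below using itk::MultiResolutionPyramidImageFilter; the image type and reader names are assumptions. Starting shrink factors of (8,8,1) are halved at each level (with a minimum of 1), which reproduces the schedule described above.

#include "itkMultiResolutionPyramidImageFilter.h"

typedef itk::MultiResolutionPyramidImageFilter< CTImageType, CTImageType > PyramidType;
PyramidType::Pointer ctPyramid = PyramidType::New();

unsigned int startingFactors[3] = { 8, 8, 1 };

ctPyramid->SetInput( ctImageReader->GetOutput() );
ctPyramid->SetNumberOfLevels( 4 );
ctPyramid->SetStartingShrinkFactors( startingFactors );
ctPyramid->Update();

// ctPyramid->GetOutput( level ) returns the image at the requested resolution level,
// level 0 being the coarsest; the filter smooths each level before downsampling.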

The location of the center of rotation has a great impact on the performance of the optimization procedure. In practice, the center of rotation will be closer to the center of the image than to the image origin (the center of the first pixel). With this in mind, we will place the center of rotation at the center of the image and start the registration process with the two images aligned at their centers. This initialization is shown in Figure 3.12, where the center of the CT image has been aligned with the center of the MR-T1 image and resampled to the same resolution as the MR-T1 image. The CT image is displayed as a color overlay on top of a gray-scale MR-T1 image. The yellow color represents the bright part of the


Figure 3.11: Pyramid of downsampled images. The coarsest image (left) is generated by downsampling the original MR-T1 image by factors of (4,4,1). The next image is obtained using downsample factors of (2,2,1). The full resolution MR-T1 image is used for the last two resolution levels.

Level                    1        2        3        4
CT Shrink Factors        8,8,1    4,4,1    2,2,1    1,1,1
MR-T1 Shrink Factors     4,4,1    2,2,1    1,1,1    1,1,1
No. of Spatial Samples   1000     8000     64000    512000
No. of Histogram Bins    50       50       50       50
No. of Iterations        250      250      250      250
Step Size                2×10^-3  4×10^-4  8×10^-5  1.6×10^-5

Table 3.1: Parameters used for performing the multi-modality, multi-resolution registration example.

CT image in Figure 3.8 corresponding to the skull. From the figure, the initial misalignment is quite clear.

In this example, the 3D rigid transformation is defined by seven parameters: the first four representing the quaternion and the last three the 3D translation. As previously discussed, it is important to address the scale differences between the quaternion and translation parameters. For this example, we scale the metric derivative for the translation parameters by a factor of 4×10^4.

Using the parameters listed in Table 3.1, registration was performed in a coarse-to-fine manner and the final result of the registration is shown in Figure 3.13. In the sagittal view the sinus cavity is clearly aligned.

The joint intensity histograms of the CT and MR-T1 images before and after registration are shown in Figure 3.14. The effect of maximizing the mutual information is to concentrate the histogram to have fewer non-zero elements and to have a small number of elements with very high values. If the joint histogram is interpreted as the joint probability density function, then the effect of registration is equivalent to minimizing the joint entropy.

3.9.2 Deformable Registration Example

The problem addressed by this example is the registration of the contrast-enhanced breast MRI time series images in Figure 3.15. Dynamic contrast MRI has become a valuable tool for early detection of breast cancer [?]. During the MRI exam, a contrast agent is injected via a catheter. A 3D MRI scan is acquired prior to the injection, followed by a series of scans post-injection. The contrast uptake curves of malignant lesions behave differently from those corresponding to benign lesions, and thus characteristics of the uptake curves can be used to discriminate cancerous lesions. A typical exam can take 15 minutes, and any patient movement during the course of the exam will result in misalignment of the MRI time series images; as a result, accurate registration of the MRI time series will facilitate the extraction of the contrast uptake curves.

The two challenging factors in registering dynamic contrast breast MRI images are (1) the deformable motion of the breast tissue and (2) the non-uniform intensity change due to the contrast uptake. These factors must be taken into account when choosing components for the registration framework. Following the approach taken in [?], we will model the deformable transform using a regular grid of B-spline control points and use mutual information as the


Figure 3.12: Registration starts with the center of the CT image initially aligned with the center of the MR-T1 image. The CT image is shown as a color overlay on top of a gray-scale MR-T1 image. The yellow color corresponds with the bright intensity regions of the CT image.


Figure 3.13: Results of the multi-resolution registration process described in section 3.9.1. The CT image is shown as a color overlay on top of a gray-scale MR-T1 image. The yellow color corresponds with the bright intensity regions of the CT image.

Figure 3.14: Joint intensity histogram of the CT and MR-T1 images of the head before (left) and after (right) registration. In the histogram images, the intensities range from white (zero value) to black (highest value). Registration has the effect of concentrating the histogram into fewer non-zero elements.


Figure 3.15: 3D contrast-enhanced breast MRI time series images. Each 3D image is of size 192×192×13 with a pixel size of 0.9375×0.9375 mm and 8 mm slice thickness. Images courtesy of University of Washington.

similarity metric. For this example, we will use the mutual information implementation described in [?], using 50×50 histogram bins and sampling 20% of the image to create the joint pdf. We will also use a tri-linear interpolator and an LBFGS [?] optimizer.

The top-left image of Figure 3.16 shows the absolute difference between the images before registration. The large differences result from contrast uptake (in the middle of the breast tissue) and motion artifacts (e.g. at the boundary of the breast tissue). It can be observed that the scale of misalignment is small and hence a multi-resolution approach is not needed for this problem.

For this example, we will model the deformation using an 11×11×5 grid of B-spline control points. It should be noted that for a cubic B-spline, 27 neighboring control points are required to evaluate the deformation at any one position. As a result, the grid should be extended beyond the image domain so that it is always possible to evaluate the deformation at any point inside the image.
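The deformable transform could be set up along the following lines. This is only a rough sketch assuming the itk::BSplineDeformableTransform class: the grid size follows the example, while the grid spacing and origin, which must be chosen so that the control point grid extends beyond the image domain, are only indicated in comments.

#include "itkBSplineDeformableTransform.h"

typedef itk::BSplineDeformableTransform< double, 3, 3 > DeformableTransformType;
DeformableTransformType::Pointer bsplineTransform = DeformableTransformType::New();

DeformableTransformType::RegionType gridRegion;
DeformableTransformType::RegionType::SizeType gridSize;
gridSize[0] = 11;
gridSize[1] = 11;
gridSize[2] = 5;
gridRegion.SetSize( gridSize );

bsplineTransform->SetGridRegion( gridRegion );
// SetGridSpacing() and SetGridOrigin() must also be called so that the grid
// covers and extends beyond the image domain.

DeformableTransformType::ParametersType parameters( bsplineTransform->GetNumberOfParameters() );
parameters.Fill( 0.0 );                     // start from the identity deformation
bsplineTransform->SetParameters( parameters );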

The top-right image of Figure 3.16 shows the result of the registration. It can be seen that the registration was able to remove a majority of the motion artifacts while differences due to contrast uptake were left unaltered. For comparison, we repeated the experiment using a different metric and a different transform model. In the bottom-left image, we used a mean squares metric instead of mutual information. It can be observed that in this case, the focus of the registration is to reduce the large difference due to the contrast uptake, which is not appropriate for the dynamic contrast MRI application. In the bottom-right image, we used a simple rigid transform instead of the B-spline deformable model. Although some of the motion artifact has been removed, it is clear that a rigid transform is not sufficient to model the deformable motion of the breast tissue.


Figure 3.16: Absolute difference between unregistered (top left) and registered contrast-enhanced breast MRI images. The top right image corresponds to deformable registration using mutual information as the basis of registration, the bottom left corresponds to deformable registration using mean squares and the bottom right to rigid registration using mutual information.


CHAPTER FOUR

Integrating ITK with GUI Toolkits

ITK is focused on providing image segmentation and registration functionalities. There is no direct support for performing visualization or building graphical user interfaces in ITK. In a full-fledged application these additional functionalities must be provided by other software packages or toolkits.

This section describes the integration of ITK with two popular open-source GUI toolkits. The first one is FLTK (The Fast Light Toolkit) and the second one is Qt. Both toolkits are written in C++ and offer a similar basic structure. FLTK has a rapid learning curve but also a reduced number of widgets. Qt offers a very comprehensive set of widgets, deep documentation and, as a consequence, its learning curve is steeper. When little time is available for development, FLTK may be a better option. When a refined presentation is desired and abundant development time is available, Qt would probably be the right choice.¹

4.1 FLTK

FLTK is an open-source toolkit for supporting graphical user interfaces. It is multi-platform and has been tested on systems such as UNIX/Linux (X11), Microsoft Windows, and MacOS X. The source code of the toolkit as well as its documentation is available at http://www.fltk.org. The toolkit is currently maintained by a small group of developers across the world with a central repository on SourceForge.

4.1.1 Installing the software

The first step for using FLTK in your project is to download the source code and build it on your system following the instructions provided at http://www.fltk.org. When building FLTK under Unix systems it is important to perform the final installation step (e.g. make install). This installation can be done in any directory, hence no administrator rights are required. FLTK headers are renamed during the installation process in order to compensate for an inconsistency in the upper/lower case convention for file naming. When building FLTK under MS-Windows it is very important to use the same build configuration that you plan to use for your project, for example Release or Debug. All the libraries used in your project must be built in the same configuration.

FLTK provides a tool called fluid for interactive design of the GUI. This tool allows you to create C++ classes, add methods to them, as well as instantiate widgets and define their multiple parameters. fluid stores this information in files with extension .fl or .fld. From these files, fluid can generate C++ code in the form of .cxx and .h files. You must avoid modifying the .h and .cxx files generated by fluid, since any changes will be lost the next time fluid is run. Given that FLTK supports object-oriented programming, the best way to program with it is to create classes that encapsulate different levels of the GUI. This enables developers to write the GUI as a base class with virtual methods in the fluid file, then derive another class from it and overload its virtual methods. In this way, by taking advantage of the natural mechanism of polymorphism in C++, you will be able to add functionality to the GUI code without having to modify in any way the files generated by fluid.

¹ These observations should be taken with caution. They reflect the personal opinion of the speaker rather than an established fact. Very different opinions will be found on this topic among ITK developers.


4.1.2 Configuring with CMake

CMake is the multi-platform build configuration tool used by ITK. In order to combine FLTK and ITK in your project, the following lines must be included in the CMakeLists.txt file of the project.

PROJECT( myFltkItkProject )

FIND_PACKAGE(ITK)
IF(ITK_FOUND)
  INCLUDE(${ITK_USE_FILE})
ELSE(ITK_FOUND)
  MESSAGE(FATAL_ERROR "ITK not found. Please set ITK_DIR.")
ENDIF(ITK_FOUND)

FIND_PACKAGE(FLTK)
IF(FLTK_FOUND)
  INCLUDE_DIRECTORIES(${FLTK_INCLUDE_DIR})
ELSE(FLTK_FOUND)
  MESSAGE(FATAL_ERROR "This application requires FLTK.")
ENDIF(FLTK_FOUND)

The first group of lines searches for ITK while the second group of lines searches for FLTK. When CMake is run over this CMakeLists.txt file, the following CMake variables must be set:

• FLTK_FLUID_EXECUTABLE = full path to the tool fluid.
• FLTK_INCLUDE_DIR = path to the headers directory, without /FL.
• FLTK_BASE_LIBRARY = the full path to fltk.lib.
• FLTK_GL_LIBRARY = the full path to fltk_gl.lib.
• FLTK_FORMS_LIBRARY = the full path to fltk_forms.lib.
• FLTK_IMAGES_LIBRARY = the full path to fltk_images.lib.

Typically, if you provide the first two variables, CMake will be able to figure out the location of the remaining components.

A minimal project using FLTK will be composed of two files: one C++ file and a fluid file with extension .fl or .fld. The fluid file is intended to be processed by fluid in order to generate .h and .cxx files. The CMakeLists.txt file should also add the project directory to the path for searching header files. This is done with the following CMake line

INCLUDE_DIRECTORIES(${myFltkItkProject_SOURCE_DIR})

Note the use of myFltkItkProject, which is the name given to the project by the PROJECT CMake command. The name of the executable is defined with the following CMake command

ADD_EXECUTABLE( ApplicationPrototype main.cxx )

where ApplicationPrototype will be the executable name, and main.cxx is the single C++ file of this minimal application. CMake knows how to transform a fluid file into .cxx and .h files. The only thing that the developer has to do is to indicate which build target will use the corresponding .cxx and .h files. This information is provided to CMake by the following line


Figure 4.1: Class diagram of the classes involved in the Command/Observer pattern: user classes (such as itk::ClassX, itk::CommandY and itk::EventObjectZ) derive from itk::Object, itk::Command and itk::EventObject.

FLTK_WRAP_UI(ApplicationPrototype ApplicationGUI.fl)

where ApplicationPrototype is the name of the executable given above, and ApplicationGUI.fl is the name of the fluid file. CMake will arrange for fluid to be invoked at build time to generate the C++ files. These files will have the names

• ApplicationGUI.h
• ApplicationGUI.cxx

CMake will also add these two files to the dependency list of the executable, in this case ApplicationPrototype. Note that because these files are produced as part of the build process, they will be created in the binary directory of the project and not in the source directory.

It is a common error to fail to provide one or several of the FLTK components required by the FLTK_* variables in CMake. When that happens, the FLTK_WRAP_UI command is not enabled and a configuration error is produced by CMake. Should that happen, you must verify the paths provided for the FLTK components and fix any inconsistency.

Finally, the CMakeLists.txt should provide the list of libraries to which the executable should be linked. This is done with the following lines

TARGET_LINK_LIBRARIES( ApplicationPrototype
  ITKIO ITKBasicFilters ITKNumerics ITKCommon
  ${FLTK_LIBRARIES} )

This configuration is all that CMake needs to integrate FLTK and ITK in a single project.

4.1.3 Writing a simple example

The mechanism built into ITK for communicating with a GUI is based on the itk::Command class. This class and its derived classes implement a combination of the Observer and Command design patterns [?].

A Command (read Observer) is connected to an ITK class in order to watch for events sent from that class. A hierarchy of events is available in ITK. The corresponding classes can be found deriving from itk::EventObject. Figure 4.1 shows the class diagram of the main actors in the implementation of the Observer/Command pattern.

The communication is in practice implemented between derived classes of the itk::Object, derived classes of the itk::Command and derived classes of the itk::EventObject. Figure 4.2 illustrates the collaboration diagram between instances of these three classes.

A Command class is registered with a specific Object class and manifests its interest in a particular class of Events. This is done in the code with the following invocation


Figure 4.2: Collaboration diagram between the itk::Object, the itk::Command and the itk::EventObject.

Figure 4.3: Collaboration diagram of an ITK Command and an FLTK widget: the fltkProgressBar derives from Fl_Slider and holds an itk::MemberCommand< fltkProgressBar > that observes an itk::ProcessObject such as an image filter.

CommandKType::Pointer myCommand = CommandKType::New();
ObjectXType::Pointer myObject = ObjectXType::New();

myObject->AddObserver( itk::ProgressEvent(), myCommand );

In this case the instance myCommand of the observer is registered with myObject, declaring its interest in the ProgressEvent. Note the use of the constructor of the event class. This is required since events use the Run Time Type Information (RTTI) mechanism to make it possible for derived classes of EventX to be recognized as being EventX and henceforth trigger responses from any Command/Observer object that has declared to be interested in the occurrence of EventX.

In a GUI, a command is configured to be connected to a particular widget and at the same time be registered with an Object, watching for the occurrence of an event. The goal is to be able to modify the GUI when the event in question is invoked from the observed object. The typical case is to set up a command observer to watch the ProgressEvent of an image processing filter.

One easy way to attach an observer to a widget is to derive a new widget and add the observer as a member variable of the new widget class. The following code illustrates how this can be done for defining an FLTK Fl_Slider as a Progress Bar widget. Figure 4.3 illustrates the classes involved in the linkage between an ITK filter and an FLTK widget.

First, the class is derived from the Fl_Slider class in FLTK.

class ProgressBar : public Fl_Slider
{

then the itk::MemberCommand class is used. This class derives from the itk::Command class and is designed to be templated over the type of its host. In this way, the command is capable of invoking one of the member methods of the host class without violating the encapsulation of the host.


typedef itk::MemberCommand< ProgressBar > RedrawCommand Type;

The command class is defined to be a member variable of the newly createdProgressBar class.

RedrawCommandType::Pointer m_RedrawCommand;

A convenience method is added for connecting the observer command of theProgressBar to any object deriving fromitk::Object . In this case the internal command registers its interest inthe itk::ProgressEvent of the object thatis being observed.

void Observe( itk::Object *caller )
{
  caller->AddObserver( itk::ProgressEvent(), m_RedrawCommand.GetPointer() );
}

The constructor of the new widget class invokes the constructor of the superclass, and also instantiates the internal command/observer. Once the command is created, two member methods of the widget class are defined as callbacks for the itk::MemberCommand.

ProgressBar(int x, int y, int w, int h, char * label):
  Fl_Slider( x, y, w, h, label )
{
  m_RedrawCommand = RedrawCommandType::New();
  m_RedrawCommand->SetCallbackFunction( this, &ProgressBar::ProcessEvent );
  m_RedrawCommand->SetCallbackFunction( this, &ProgressBar::ConstProcessEvent );
}

These two methods must be defined in the ProgressBar class, with the following specific signatures

void ProcessEvent( itk::Object * caller, const itk::EventObject & event )
void ConstProcessEvent( const itk::Object * caller, const itk::EventObject & event )

Two different implementations are required in order to support const-correctness. In the case where the observer is connected to a const object, this double set of callback methods makes it possible to respect the constness of the object when an event is invoked.

The function body of the non-const method is as follows

void ProcessEvent( itk::Object * caller, const itk::EventObject & event )
{
  ::itk::ProcessObject::Pointer process =
    dynamic_cast< itk::ProcessObject *>( caller );

  this->value( process->GetProgress() );
  this->redraw();
  Fl::check();
}

The first line downcasts the itk::Object into an itk::ProcessObject. This is reasonable here since we plan to connect the progress bar to image processing filters, which all derive from itk::ProcessObject.

The body of the const method is quite similar.


void ConstProcessEvent( const itk::Object * caller, const itk::EventObject & event )
{
  itk::ProcessObject::ConstPointer process =
    dynamic_cast< const itk::ProcessObject *>( caller );

  this->value( process->GetProgress() );
  this->redraw();
  Fl::check();
}

In this particular case the methods invoked by the const and non-const callbacks are the same, but we can imagine situations in which the reactions to the const and non-const cases should be different.

In our particular example, the callback methods of the ProgressBar obtain the progress percentage from the caller object using the GetProgress() method, and use it to set the level on the slider bar. Next, they force FLTK to redraw the widget so the new level of the bar is presented on the screen. The redraw is forced by calling redraw() followed by the Fl::check() function.

The use of this new ProgressBar class is as follows

ProgressBar * bar = new ProgressBar( 10, 10, 280, 25, "Progress" ); // FLTK derived class (geometry values are arbitrary)
bar->Observe( gaussianFilter );                                     // connecting the observer

Under these circumstances, whenever the gaussianFilter object invokes progress events, the ProcessEvent method of the ProgressBar will be executed and the appearance of the bar will be updated on the GUI.

This covers the communication from ITK to the FLTK GUI. Communication in the other direction is trivial, since fluid allows callbacks to be created for every method defined in the class in the .fl file.

Similar mechanisms can be used for linking any FLTK widget to an ITK class capable of producing events.
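
For convenience, the fragments above can be assembled into a single class. The following is a consolidated sketch, not the exact source of any distributed example; the FLTK and ITK headers listed are assumptions about what a standalone translation unit would need.

#include <FL/Fl.H>
#include <FL/Fl_Slider.H>
#include "itkCommand.h"
#include "itkProcessObject.h"

class ProgressBar : public Fl_Slider
{
public:
  typedef itk::MemberCommand< ProgressBar > RedrawCommandType;

  ProgressBar( int x, int y, int w, int h, char * label = 0 )
    : Fl_Slider( x, y, w, h, label )
  {
    m_RedrawCommand = RedrawCommandType::New();
    m_RedrawCommand->SetCallbackFunction( this, &ProgressBar::ProcessEvent );
    m_RedrawCommand->SetCallbackFunction( this, &ProgressBar::ConstProcessEvent );
  }

  // Register this widget as an observer of the ProgressEvent of any ITK object.
  void Observe( itk::Object * caller )
  {
    caller->AddObserver( itk::ProgressEvent(), m_RedrawCommand.GetPointer() );
  }

  void ProcessEvent( itk::Object * caller, const itk::EventObject & )
  {
    itk::ProcessObject * process = dynamic_cast< itk::ProcessObject * >( caller );
    this->value( process->GetProgress() ); // GetProgress() is in [0,1], the default slider range
    this->redraw();
    Fl::check();
  }

  void ConstProcessEvent( const itk::Object * caller, const itk::EventObject & )
  {
    const itk::ProcessObject * process =
      dynamic_cast< const itk::ProcessObject * >( caller );
    this->value( process->GetProgress() );
    this->redraw();
    Fl::check();
  }

private:
  RedrawCommandType::Pointer m_RedrawCommand;
};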

4.2 Qt

Qt is a C++ open-source multi-platform library for application development. It can be obtained from http://www.trolltech.com/products/qt/index.html. Qt is a commercial product with a particular licensing mechanism that provides a good balance between the benefits of open-source and proprietary code. You may want to look closer at the various licensing possibilities available for different platforms. In particular, a set of non-commercial GPL versions is available for X11, Mac and Windows. Although mostly known as a GUI library, Qt has a broader scope and provides a good number of extra functionalities that make it interesting for simplifying the design and implementation of final applications.

4.2.1 Installing the Software

The Qt package is available at http://www.trolltech.com/download/qt/x11.html. The latest version that can be used for free on Windows systems is Qt 2.3. The most recent release is Qt 3.2. This document describes the use of Qt 2.3 in order to cover all platforms.

Qt installation is quite different between Unix and MS-Windows systems. On Unix systems the source code is distributed and can be compiled locally. Under Windows, the distribution is made in binary form. Many Linux systems also have binary distributions in the form of packages; however, attention should be paid to making sure that the compiler used for creating the binary distribution is the same one that you intend to use for developing your project. For example, a conflict will appear if you take Qt binaries compiled with gcc 2.95 and try to use them in a project compiled with gcc 3.3. The simple solution in such cases is to download the source code distribution and build the Qt libraries on your system using exactly the same compiler that you intend to use for your project.


4.2.2 Configuring with CMake

CMake offers native support for configuring a project that requires Qt. The minimal CMakeLists.txt file required for using ITK along with Qt is shown below.

PROJECT( myQtItkProject )

FIND_PACKAGE(ITK)
IF(ITK_FOUND)
  INCLUDE(${ITK_USE_FILE})
ELSE(ITK_FOUND)
  MESSAGE(FATAL_ERROR "ITK not found. Please set ITK_DIR.")
ENDIF(ITK_FOUND)

INCLUDE (${CMAKE_ROOT}/Modules/FindQt.cmake)

The first group of lines searches for ITK, while the last line searches for Qt. The heuristics for locating the Qt components are defined in the FindQt.cmake file of the CMake distribution. When CMake is run over this CMakeLists.txt file, the following CMake variables must be set.

- QT_MOC_EXECUTABLE = full path to the tool moc.
- QT_UIC_EXECUTABLE = full path to the tool uic.
- QT_INCLUDE_DIR = path to the headers directory, without /qt
- QT_QT_LIBRARY = the full path to qt.lib
- QT_GL_LIBRARY = the full path to qt_gl.lib

Under normal circumstances CMake will be able to figure out all of them for you, and the configuration will proceed silently. However, if your Qt installation is atypical, you may have to enable the Advanced mode in CMake and manually set any undefined variable from the list above.

The rest of the CMakeLists.txt file should specify how to combine the Qt components together. For example, the following lines show how the include directories must be specified.

INCLUDE_DIRECTORIES(
  ${QT_INCLUDE_DIR}
  ${myQtItkProject_BINARY_DIR}
  ${myQtItkProject_SOURCE_DIR}
)

The first directory indicates where the Qt headers are located. The other two directories are the source directory of the current project and the binary directory where the project is being built. The reason for adding the binary directory to the include path is that Qt generates C++ code from the geometrical specification of the GUI. The generated .cxx and .h files are written in the binary directory, given that they are considered to be intermediate results of the build process.

Next in the CMakeLists.txt file, we specify the libraries that should be linked with our application. This is illustrated in the following lines.

LINK_LIBRARIES(
  ${ITK_LIBRARIES}
  ${QT_QT_LIBRARY}
  ${QT_GL_LIBRARY}
  ${OPENGL_glu_LIBRARY}
  ${OPENGL_LIBRARY}
)

The variable ITK_LIBRARIES contains the basic libraries from the Insight Toolkit. The variables QT_QT_LIBRARY and QT_GL_LIBRARY are the main Qt library and its OpenGL auxiliary library. The OpenGL auxiliary library is only needed with Qt 2.3; this organization of libraries changed between Qt version 2.3 and version 3.2, and in the latest release a single library is used. The last two entries on the list are the native OpenGL libraries.

Qt implements an event management mechanism that allows software components to be decoupled. This mechanism is based on the use of Signals and Slots. Since these two concepts are not directly supported by C++, Qt has built an abstraction layer on top of C++ in order to allow the use of the two additional keywords signals and slots. The abstraction layer requires a pre-compiler to process all C++ files containing mention of the Qt-specific keywords. This pre-compiler is called moc and is invoked by CMake before the C++ compiler is called. The developer must specify which files among the C++ sources should be pre-processed with moc, and which ones should simply be passed to the native C++ compiler. This is done in CMake by creating two lists of source files: the first one containing all the source files of the application, and the second one containing only those files requiring moc pre-processing.

The first list is identified by the name of the project and the string SRCS, as in the following lines.

SET(myQtItkProject_SRCS
  main.cxx
  source1.cxx
  source2.cxx
)

The second list can be identified with any name, and will contain the list of C++ files that make use of the Qt-specific keywords "signals:" and "slots:". It will look like the following.

SET(myQtItkProject_MOC_SRCS
  sourceWithSignalsAndSlots.h
)

Qt also has a tool called designer that allows the GUI composition to be designed interactively. This tool saves the GUI description in files with extension .ui. C++ code is generated from these files by the Qt tool uic. CMake knows how to invoke this tool and what to do with the resulting files. The only thing left to the developer is to specify the list of .ui files that are relevant to the current application. This is done by constructing a list of source files as indicated in the following lines.

SET(myQtItkProject_GUI_SRCS
  WindowA.ui
  WindowB.ui
)

The list of GUI files to be processed by uic is passed to the QT_WRAP_UI command. This command can only be invoked if the uic tool has been found at configuration time. In the following lines we first test for the availability of the QT_WRAP_UI command, then we pass the name of the target (an executable in this case), the list of project headers where generated .h files should be added, the list of project source files where generated .cxx files should be added, and finally the list of files requiring pre-processing by uic.

IF(QT_WRAP_UI)
  QT_WRAP_UI(myQtItkExecutable
             myQtItkProject_HDRS
             myQtItkProject_SRCS
             myQtItkProject_GUI_SRCS)
ENDIF(QT_WRAP_UI)

Once the .ui files have been processed, we can specify the command for the files that must be pre-processed by moc. This is the CMake command QT_WRAP_CPP shown in the following lines. The first argument is the name of the target (an executable in this case). The next arguments are the lists of sources to be processed.

IF(QT_WRAP_CPP)
  QT_WRAP_CPP(myQtItkExecutable
              myQtItkProject_SRCS
              myQtItkProject_MOC_SRCS)
ENDIF(QT_WRAP_CPP)

The following two lines are added to specify the use of DLLs (on MS-Windows) and multi-threading support.

ADD_DEFINITIONS(-DQT_DLL)
ADD_DEFINITIONS(-DQT_THREAD_SUPPORT)

Finally, the target executable of the application is defined with the line

ADD_EXECUTABLE(myQtItkExecutable ${myQtItkProject_SRCS})

where the second argument is the list of source files. Note that the CMake-Qt specific commands have added source files to this list. The additional source files are those generated by intermediate processing during the build.

This concludes the minimal CMake configuration of a project using Qt and ITK.

4.2.3 Writing a simple example

The mechanism built into ITK for communicating with a GUI is based on the itk::Command and the itk::EventObject classes. This mechanism is conceptually very similar to the signals and slots defined in Qt. However, the implementation details of the two mechanisms are quite different.

One of the cleanest ways of interfacing ITK with Qt is to define Adaptors that allow ITK observers to be presented as Qt slots, and ITK events to be presented as Qt signals.

The basic structure of such adaptors is illustrated in the following class.

class QtAdaptor : public QObject
{
  Q_OBJECT
public:
  QtAdaptor() {}
  virtual ~QtAdaptor() {}

signals:
  void Signal();

public slots:
  virtual void Slot() {};
  virtual void Slot(int) {};
  virtual void Slot(double) {};
};

Figure 4.4: Class diagram of the Qt-ITK adaptors. These classes translate ITK events into Qt signals and Qt slots into ITK Command/Observers.

Note that Qt allows slots to be defined with any type of parameters. Here we have restricted ourselves to only three slot signatures; nothing prevents us from further extending this set. This class is derived in order to create a Slot adaptor and a Signal adaptor. Figure 4.4 shows the class diagram of the adaptor classes.

The Signal adaptor is presented below. This class contains an ITK Command/Observer that allows it to attend to events invoked from ITK classes. In response to such events, this class will generate Qt signals. In this way, the class behaves like a signal transducer that transforms ITK messages into Qt messages.

class QtSignalAdaptor : public QtAdaptor
{
  typedef SimpleMemberCommand< QtSignalAdaptor > CommandType;
public:
  QtSignalAdaptor()
  {
    m_Command = CommandType::New();
    m_Command->SetCallbackFunction( this, &QtSignalAdaptor::EmitSignal );
  }

  virtual ~QtSignalAdaptor() {}

  CommandType * GetCommand()
  {
    return m_Command;
  }

  void EmitSignal()
  {
    emit Signal();
  }

private:
  CommandType::Pointer m_Command;
};

Note that the class contains an itk::SimpleMemberCommand and defines its callback to be the EmitSignal() method. This method simply emits a Qt signal in response to the reception of an ITK event.

Figure 4.5 shows the interaction between an ITK filter, the itk::QtSignalAdaptor and a QWidget. The Signal adaptor is registered as an observer of the ITK filter. In response to events received from the filter, the adaptor emits Qt signals that have in turn been connected to slots in some other Qt widget.

New widgets can be derived from existing Qt widgets in order to customize their behavior in an application. The following code implements a button widget with three slots that change the color of the button in response to incoming signals.

class QtLightIndicator : public QButton
{
  Q_OBJECT
public:
  QtLightIndicator( QWidget *parent, char * name ):
    QButton( parent, name ) {}

public slots:
  void Start()
  {
    QColor yellow(255,255,0);
    this->setBackgroundColor( yellow );
  }

  void Modified()
  {
    QColor red(255,0,0);
    this->setBackgroundColor( red );
  }

  void End()
  {
    QColor green(0,255,0);
    this->setBackgroundColor( green );
  }
};

Figure 4.5: Collaboration diagram of the Qt-ITK signal adaptor. This class transduces ITK events into Qt signals. This mechanism allows a Qt-managed GUI to be notified about events happening in the image processing layer managed by ITK.

The following code creates an ITK filter, an itk::QtSignalAdaptor and one of the newly defined QtLightIndicator widgets. The signal adaptor is connected as an observer of the itk::StartEvent of the filter. The signal at the output of the signal adaptor is connected to one of the slots of the light indicator.

itk::QtLightIndicator indicator( &mainQtWidget, "State" );
indicator.setGeometry( horizontalPosition, 20, buttonWidth, buttonHeight );
indicator.Modified();

itk::QtSignalAdaptor signalAdaptor1;
filter->AddObserver( itk::StartEvent(), signalAdaptor1.GetCommand() );
QObject::connect( &signalAdaptor1, SIGNAL(Signal()), &indicator, SLOT(Start()) );

Additional connections can be made by adding more adaptors and registering them as observers of the filter. The following code adds two more adaptors, for the itk::ModifiedEvent and the itk::EndEvent respectively.

itk::QtSignalAdaptor signalAdaptor2;
filter->AddObserver( itk::ModifiedEvent(), signalAdaptor2.GetCommand() );
QObject::connect( &signalAdaptor2, SIGNAL(Signal()), &indicator, SLOT(Modified()) );

itk::QtSignalAdaptor signalAdaptor3;
filter->AddObserver( itk::EndEvent(), signalAdaptor3.GetCommand() );
QObject::connect( &signalAdaptor3, SIGNAL(Signal()), &indicator, SLOT(End()) );

The communication in the other direction is slightly more complex. In order to transduce Qt signals into ITK events, we create an itk::QtSlotAdaptor. This class exposes Qt slots suitable for being connected to Qt signals; these slots will invoke ITK events. Figure 4.6 illustrates the collaboration diagram of the itk::QtSlotAdaptor.

Figure 4.6: Collaboration diagram of the Qt-ITK slot adaptor. This class transduces Qt signals into ITK events by exposing a Qt slot and implementing internally an ITK command/observer. This mechanism allows Qt-originated signals to trigger actions in the image processing layer managed by ITK.

template <typename T>
class QtSlotAdaptor : public QtAdaptor
{
  typedef void (T::*TMemberFunctionVoidPointer)();
public:
  QtSlotAdaptor():m_MemberFunctionVoid(0) {}
  virtual ~QtSlotAdaptor() {}

  void SetCallbackFunction( T* object,
                            TMemberFunctionVoidPointer memberFunction )
  {
    m_This = object;
    m_MemberFunctionVoid = memberFunction;
  }

  void Slot()
  {
    if( m_MemberFunctionVoid )
      {
      ((*m_This).*(m_MemberFunctionVoid))();
      }
  }

protected:
  T* m_This;
  TMemberFunctionVoidPointer m_MemberFunctionVoid;
};

The class is templated over a type T in order to make it possible to customize the member method to be invoked. The following code illustrates the use of the itk::QtSlotAdaptor. First, we instantiate a filter type. Then we instantiate a slot adaptor type for this filter type and create one instance, slotAdaptor.

typedef itk::MedianImageFilter< ImageType, ImageType > FilterType;
typedef itk::QtSlotAdaptor< FilterType >               SlotAdaptorType;
SlotAdaptorType slotAdaptor;

The slotAdaptor must now be connected as a slot in order to receive signals from other Qt widgets. The callback of the ITK filter must also be specified to the adaptor. This is done in the following code.

QPushButton startButton( "Start", &mainQtWidget );
startButton.setGeometry( horizontalPosition, 20, buttonWidth, buttonHeight );

QObject::connect( &startButton, SIGNAL(clicked()), &slotAdaptor, SLOT(Slot()) );


slotAdaptor.SetCallbackFunction( filter, &FilterType::Update );

Here we instantiated a Qt button, connected its predefined clicked() signal to the slotAdaptor, and finally defined the Update() method of the filter to be the callback invoked during the execution of the slot. In this configuration, when the user clicks on the startButton in the GUI, the slot adaptor will trigger the Update() method of the ITK filter.

This is by no means the only way to connect the ITK and Qt notification mechanisms. Many other combinations are possible; for example, we could have made the transduction mechanism from Qt to ITK simply a slot listener that invokes ITK events. In that way, any ITK command could be indirectly connected as an observer of a Qt signal.
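
To summarize the Qt example, the following sketch shows one way the pieces described in this section could be assembled into a small program. It is illustrative rather than authoritative: it assumes that QtAdaptor, QtSignalAdaptor, QtSlotAdaptor and QtLightIndicator are available in headers processed by moc (the header names are hypothetical), it drops the itk:: prefix to match the class definitions shown above, and the image type, filter input and widget geometry are placeholders.

#include <qapplication.h>
#include <qpushbutton.h>
#include <qwidget.h>

#include "itkImage.h"
#include "itkMedianImageFilter.h"

#include "QtSignalAdaptor.h"    // hypothetical header names for the
#include "QtSlotAdaptor.h"      // adaptor and indicator classes
#include "QtLightIndicator.h"   // defined earlier in this section

int main( int argc, char ** argv )
{
  QApplication app( argc, argv );

  typedef itk::Image< float, 2 >                          ImageType;
  typedef itk::MedianImageFilter< ImageType, ImageType >  FilterType;

  FilterType::Pointer filter = FilterType::New();
  // ... connect a reader or another source to filter->SetInput() here ...

  QWidget mainQtWidget;

  QPushButton      startButton( "Start", &mainQtWidget );
  QtLightIndicator indicator( &mainQtWidget, "State" );

  // Qt --> ITK : a click on the button triggers filter->Update().
  typedef QtSlotAdaptor< FilterType > SlotAdaptorType;
  SlotAdaptorType slotAdaptor;
  slotAdaptor.SetCallbackFunction( filter, &FilterType::Update );
  QObject::connect( &startButton, SIGNAL(clicked()), &slotAdaptor, SLOT(Slot()) );

  // ITK --> Qt : filter events change the color of the indicator.
  QtSignalAdaptor startAdaptor;
  filter->AddObserver( itk::StartEvent(), startAdaptor.GetCommand() );
  QObject::connect( &startAdaptor, SIGNAL(Signal()), &indicator, SLOT(Start()) );

  QtSignalAdaptor endAdaptor;
  filter->AddObserver( itk::EndEvent(), endAdaptor.GetCommand() );
  QObject::connect( &endAdaptor, SIGNAL(Signal()), &indicator, SLOT(End()) );

  app.setMainWidget( &mainQtWidget );
  mainQtWidget.show();

  return app.exec();
}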


CHAPTER FIVE

Case Study I: Integrating ITK with VolView

5.1 Overview

This chapter describes the major issues involved in the integration of ITK algorithms into VolView, a visualization application designed for producing high quality volume rendering [1]. The most interesting aspect of the mechanism used for integrating ITK capabilities into VolView is the decoupling between the two systems. ITK methods do not have to be exposed to the internal structure of VolView, nor has VolView been exposed at all to the existence of the ITK methods.

[1] A free version of VolView is now available as part of a project sponsored by the National Library of Medicine (NLM).

A plain C-language interface has been defined in VolView that allows very generic methods to interface with the internal data representation.

Another interesting aspect of the integration is that it is implemented in the form of run-time plugins. This means that any number of ITK methods and procedures can be added dynamically. The only requirement is that the shared libraries in which the ITK methods are implemented must be placed in a very specific directory where VolView will look for them at start-up time.

5.2 VolView Plugins Use Cases

The dynamic configuration of the plugins, and their decoupling from VolView's internal structure, facilitates their use in the following scenarios.

- Experiment locally with ITK algorithms for which new plugins are written on-site. The number of plugins currently distributed is limited. However, all the elements required for creating new plugins are publicly available; this makes it possible for anyone to create new plugins as part of their in-house development.

- Create plugins for new image processing methods and send them to other sites for evaluation, without having to expose the source code and without requiring the other site to expose their data. In cases where local developments cannot be publicly shared, the plugins still make it possible to provide portions of executable code that can be evaluated by other sites.

- Provide implementations for methods being published. In order to facilitate the verification of reproducibility for work published in scientific and technical journals, authors would be able to post plugins implementing their published methods. In this way, their peers in the community would be able to rapidly reproduce their results without being confronted with the discouraging barrier of having to implement an algorithm described in pseudo-code in a paper.

- Facilitate deployment. New releases of algorithms can be deployed without having to wait for full releases of the application. The modular separation between VolView and the plugins makes it possible to update the plugins without having to make any changes in the application.

- Distribute specialized plugins as commercial products. The current plugins have been written for very generic image processing tasks. However, very specialized plugins could be written for specific applications and could be distributed in the form of commercial products. The great advantage of being VolView plugins is that all the visualization and user interaction functionalities are obtained for free; the plugin developer can then focus only on the image processing aspects of his/her work.

- Host plugin repositories. Specialized plugins that are in the public domain could be made available in binary form. In combination with adequate querying protocols they could serve as support resources for grid-like computation.

5.3 Plugin Data Flow

Figure 5.1 shows the data flow between VolView and the ITK plugin. VolView exports a 3D data set in the form of a plain C-language data structure. It also exports a set of GUI parameters intended to serve as parameters for the filters in the ITK plugin. The plugin receives both structures and imports the data set from the C-language structure. An ITK pipeline is instantiated in the plugin and the data is processed using the parameters provided by the GUI. During the processing, ITK events are translated into VolView refresh calls that allow messages to be displayed and a progress bar to be updated on the GUI. At the end of the processing, the resulting data set is exported into a C-language structure and passed back to VolView.

The plugin has an initialization stage in which it can specify what kind of GUI elements it requires for operating on the data. VolView instantiates widgets corresponding to such elements in the GUI, and when the user requests the plugin to be executed, the values from the GUI elements are collected and packaged in a C-language structure to be passed to the plugin. The plugin has a couple of mechanisms for displaying information in VolView's GUI. It can, for example, display a message on the status bar, update a progress bar, report errors and report final data on the execution of the plugin (e.g. the number of iterations of a filter). Plugin execution can also be aborted/cancelled from the GUI.

5.4 Plugin Life Cycle

The process of developing a VolView plugin is depicted in Figure 5.2. A single public header file defines the plain C-language data structures used for exporting and importing data, and for transferring GUI parameters between the VolView GUI and the plugin. A plugin developer must implement a set of C-language functions that will be invoked by VolView during its interactions with the plugin. The source code of the plugin is compiled and packaged in the form of a shared library. No libraries from VolView are required during this process. The shared library containing the plugin is finally copied into a specific directory of the VolView binary installation. This particular directory is searched by VolView at start-up time. Any shared libraries found in this directory will be dynamically loaded. The name of the library should conform to the name of the plugin initialization function.

The development cycle of a plugin is totally decoupled from VolView's internal code. The single communication bridge between the plugins and the application is the header file defining the data and GUI structures.

A developer of a new plugin must simply implement the methods defined in the public header file.


Figure 5.1: Data flow diagram of the VolView plugin. Data is exported from VolView in a plain C-language structure. This structure is imported into an ITK pipeline and processed using parameters transferred from VolView's GUI. The result is exported into the plain C-language structure and imported back into VolView.

Figure 5.2: Life cycle of a VolView plugin. A single header file defines the interface between VolView and the plugins. This header provides the plain C-language data structures for importing and exporting data, and defines the signature of the functions to be implemented by every plugin.


5.5 Writing a Plugin

This section describes the basic steps required for writing a new plugin. The example here illustrates the implementation of a simple filter having only one parameter to be set from the GUI.

5.5.1 Define the plugin name

The plugin name will determine the name of the shared library used for deployment. It will also determine the name of the initialization function. For example, a plugin named vvITKGradientMagnitude will be deployed in a shared library named libvvITKGradientMagnitude.so on Unix, and vvITKGradientMagnitude.dll on MS-Windows. Its initialization function will be called vvITKGradientMagnitudeInit().

5.5.2 The initialization function

The initialization function of the plugin must conform to the following API

extern "C" {void VV_PLUGIN_EXPORT vvITKGradientMagnitudeInit(vtkV VPluginInfo *info){}}

where the symbol VV_PLUGIN_EXPORT and the structure vtkVVPluginInfo are both defined in the public header file vtkVVPluginAPI.h.

This initialization function is invoked by VolView at start-up time, just after the shared library has been dynamically loaded.

The typical content of this function is shown below.

{
  vvPluginVersionCheck();

  // setup information that never changes
  info->ProcessData = ProcessData;
  info->UpdateGUI   = UpdateGUI;
  info->SetProperty(info, VVP_NAME, "Gradient Magnitude IIR (ITK)");
  info->SetProperty(info, VVP_GROUP, "Utility");
  info->SetProperty(info, VVP_TERSE_DOCUMENTATION,
                    "Gradient Magnitude Gaussian IIR");
  info->SetProperty(info, VVP_FULL_DOCUMENTATION,
    "This filter applies IIR filters to compute the equivalent of convolving "
    "the input image with the derivatives of a Gaussian kernel and then "
    "computing the magnitude of the resulting gradient.");

  info->SetProperty(info, VVP_SUPPORTS_IN_PLACE_PROCESSING, "0");
  info->SetProperty(info, VVP_SUPPORTS_PROCESSING_PIECES,   "0");
  info->SetProperty(info, VVP_NUMBER_OF_GUI_ITEMS,          "1");
  info->SetProperty(info, VVP_REQUIRED_Z_OVERLAP,           "0");
  info->SetProperty(info, VVP_PER_VOXEL_MEMORY_REQUIRED,    "8");
}

First, the macro vvPluginVersionCheck() must be called in order to verify that the plugin API conforms to the current version of VolView's binary distribution. When the versions do not match, an error message is reported to the user at run-time.


Then the info structure is initialized. ProcessData is set to a pointer to the function that will perform the computation on the input data. Using a function pointer allows a lot of freedom in the implementation of the function.

The UpdateGUI pointer in the info structure is also set to a function pointer. The role of this UpdateGUI function is to initialize the GUI parameters of the plugin. In this function we specify all the properties of all the GUI widgets related to the parameters of the plugin.

The function SetProperty() is used to define general properties of the plugin. Some of these properties are displayed on the GUI as informative text, for example the textual name of the plugin, a terse documentation and an extended documentation. The properties are identified by tags. This further enforces the decoupling between the internal representation of information in VolView and the structure of the code in the plugin. For example, the tag VVP_NAME specifies that the string passed as the third argument of the SetProperty() method should be used as the text label of the plugin in the GUI.

Other non-GUI properties are also set with this method. For example, we specify whether this filter is capable of performing in-place processing or not, and whether it allows processing data in pieces (streaming) or not. We also provide an estimate of the memory consumption that will result from the execution of the filter. This last piece of information is provided as an estimate of the number of bytes to be used per pixel of the input data set.

The function vvITKGradientMagnitudeInit() will be called only once, during the start-up process of VolView.
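
For completeness, a sketch of what the companion UpdateGUI() function could look like is shown below. The plugin exposes a single parameter named Sigma. The GUI property tags (VVP_GUI_LABEL, VVP_GUI_TYPE, VVP_GUI_SCALE, VVP_GUI_DEFAULT, VVP_GUI_HELP, VVP_GUI_HINTS) and the output-volume fields are recalled from the VolView plugin header and should be verified against the vtkVVPluginAPI.h you actually build against; inside ProcessData() the current value of the parameter can later be retrieved with info->GetGUIProperty(info, 0, VVP_GUI_VALUE).

static int UpdateGUI( void * inf )
{
  vtkVVPluginInfo * info = (vtkVVPluginInfo *)inf;

  // One GUI item: the standard deviation of the Gaussian (parameter index 0).
  info->SetGUIProperty( info, 0, VVP_GUI_LABEL,   "Sigma" );
  info->SetGUIProperty( info, 0, VVP_GUI_TYPE,    VVP_GUI_SCALE );
  info->SetGUIProperty( info, 0, VVP_GUI_DEFAULT, "1.0" );
  info->SetGUIProperty( info, 0, VVP_GUI_HELP,
    "Standard deviation of the Gaussian used for computing derivatives" );
  info->SetGUIProperty( info, 0, VVP_GUI_HINTS,   "0.1 10.0 0.1" ); // min max step

  // The output volume keeps the geometry and pixel type of the input.
  info->OutputVolumeScalarType         = info->InputVolumeScalarType;
  info->OutputVolumeNumberOfComponents = info->InputVolumeNumberOfComponents;
  for( unsigned int i = 0; i < 3; i++ )
    {
    info->OutputVolumeDimensions[i] = info->InputVolumeDimensions[i];
    info->OutputVolumeSpacing[i]    = info->InputVolumeSpacing[i];
    info->OutputVolumeOrigin[i]     = info->InputVolumeOrigin[i];
    }

  return 1;
}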

5.5.3 The ProcessData function

The function ProcessData() is the one that actually performs the computations on the data. The signature of this function is

static int ProcessData(void *inf, vtkVVProcessDataStruct *pds)

where the first argument is actually a pointer to a vtkVVPluginInfo structure and can be downcast to it by doing

vtkVVPluginInfo *info = (vtkVVPluginInfo *)inf;

The second argument to ProcessData() is the vtkVVProcessDataStruct. This structure carries the information on the data set to be processed, including the actual buffer of pixel data, the number of pixels along each dimension in space, the pixel spacing and the pixel type, among others. Of particular interest in this structure are the member inData, which is a pointer to the data buffer of the input data set, and outData, which is a pointer to the output data set buffer. The ProcessData() function is expected to extract the data from the inData pointer, process it and store the final results in the outData buffer.

The typical starting code of this function involves extracting the meta information about the data set. The following code shows, for example, how to extract the dimensions and spacing of the data set from the vtkVVProcessDataStruct and vtkVVPluginInfo structures.

SizeType  size;
IndexType start;

double origin[3];
double spacing[3];

size[0] = info->InputVolumeDimensions[0];
size[1] = info->InputVolumeDimensions[1];
size[2] = pds->NumberOfSlicesToProcess;

for(unsigned int i=0; i<3; i++)
  {
  origin[i]  = info->InputVolumeOrigin[i];
  spacing[i] = info->InputVolumeSpacing[i];
  start[i]   = 0;
  }
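
From this point the pixel buffer can be handed to the ITK pipeline. A minimal sketch using itk::ImportImageFilter is shown below. It assumes that the input volume was exported with float scalars (a real plugin would switch on info->InputVolumeScalarType) and that SizeType and IndexType above are the corresponding types of the import filter.

// requires #include "itkImportImageFilter.h"
typedef float                                  PixelType;   // assumed scalar type
typedef itk::ImportImageFilter< PixelType, 3 > ImportFilterType;
// SizeType and IndexType above correspond to
// ImportFilterType::SizeType and ImportFilterType::IndexType.

ImportFilterType::Pointer importFilter = ImportFilterType::New();

ImportFilterType::RegionType region;
region.SetIndex( start );
region.SetSize( size );

importFilter->SetRegion( region );
importFilter->SetOrigin( origin );
importFilter->SetSpacing( spacing );

const unsigned long numberOfPixels = size[0] * size[1] * size[2];
const bool importFilterWillOwnTheBuffer = false; // VolView keeps ownership of inData
importFilter->SetImportPointer( static_cast< PixelType * >( pds->inData ),
                                numberOfPixels, importFilterWillOwnTheBuffer );

// The importer now feeds the internal pipeline, e.g.
//   filter->SetInput( importFilter->GetOutput() );
// and after filter->Update() the result is copied into the pds->outData buffer.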

5.5.4 Refreshing the GUI

Given that the execution of most image processing algorithms takes considerable time on 3D data sets, it is important to provide some feedback to the user as to how the processing is progressing. This also gives the user a chance to cancel the operation if the expected total execution time is excessively long.

This notification can be done from ProcessData() by calling the UpdateProgress() method of the vtkVVPluginInfo structure.

float progress = 0.5; // 50% progress
info->UpdateProgress( info, progress, "half data set processed");

This function will update the progress bar on VolView's GUI and will set the last string as a message in the status bar. There is a balance to be found concerning the frequency with which this function should be invoked. If invoked too often, it will negatively impact the performance of the plugin, since a considerable amount of time will be spent on GUI refreshing. If not called often enough, it will give the impression that the processing is failing and that the application is no longer responding to user commands.
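
Rather than calling UpdateProgress() at hand-picked places, the progress reported by the internal ITK filter can be forwarded automatically with a Command/Observer, in the spirit of the previous chapter. The sketch below is illustrative only; the class name and the status message are placeholders, and it assumes the VolView plugin header has been included.

#include "itkCommand.h"
#include "itkObjectFactory.h"
#include "itkProcessObject.h"
#include "vtkVVPluginAPI.h"   // VolView plugin API header

// Hypothetical helper used only for illustration.
class vvPluginProgressObserver : public itk::Command
{
public:
  typedef vvPluginProgressObserver  Self;
  typedef itk::SmartPointer<Self>   Pointer;
  itkNewMacro( Self );

  void SetPluginInfo( vtkVVPluginInfo * info ) { m_Info = info; }

  void Execute( itk::Object * caller, const itk::EventObject & event )
  {
    this->Execute( static_cast< const itk::Object * >( caller ), event );
  }

  void Execute( const itk::Object * caller, const itk::EventObject & event )
  {
    if( m_Info && itk::ProgressEvent().CheckEvent( &event ) )
      {
      const itk::ProcessObject * process =
        dynamic_cast< const itk::ProcessObject * >( caller );
      if( process )
        {
        m_Info->UpdateProgress( m_Info, process->GetProgress(), "Processing..." );
        }
      }
  }

protected:
  vvPluginProgressObserver() : m_Info( 0 ) {}

private:
  vtkVVPluginInfo * m_Info;
};

// Inside ProcessData(), after the internal filter has been created:
//   vvPluginProgressObserver::Pointer observer = vvPluginProgressObserver::New();
//   observer->SetPluginInfo( info );
//   filter->AddObserver( itk::ProgressEvent(), observer );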

Figure 5.3 presents VolView's GUI. On the left side, the GUI elements of the ITK geodesic active contours plugin are visible. The data set shown is a segmentation of the gray matter from the Visible Woman cryogenic data set.

This concludes our description of the integration between ITK and VolView in the form of plugins. Although a good number of details have been omitted here, this description should provide a good starting point for guiding developers in the process of writing new plugins. You are always encouraged to post any questions to the ITK users list, where we anticipate that the topic of VolView plugins will become a common subject.


Figure 5.3: Screen shot from VolView. The data set loaded is a segmentation of the gray matter in the Visible Woman cryogenic data set. The GUI elements of the ITK geodesic active contours module are visible on the left.


CHAPTER SIX

ITK Integration with SCIRun

6.1 Introduction

ITK is by definition a toolkit, not a stand-alone system. An ITK user must therefore write and maintain additional code around the toolkit. In the following we present a possible solution by integrating the ITK toolkit with the SCIRun problem solving environment. The goals of this work are threefold. First, provide a seamless integration of ITK filters inside a particular third-party system, the SCIRun Problem Solving Environment. Second, provide ITK users with a dynamic, GUI-based front end. Third, provide a foundation upon which other external systems can build to incorporate ITK filters.

SCIRun is a Problem Solving Environment developed by the Scientific Computing and Imaging (SCI) Institute at the University of Utah. Its aims are to provide a dynamic, GUI-driven environment where a user can interactively compose and control simulation processes and employ visualization tools on various types of data. SCIRun is a generic computational steering framework; it is not designed to solve any particular problem. Rather, users create their own specific modules and use the framework to construct complex module networks, control module execution and set their parameters. SCIRun also provides a collection of modules for numerical analysis, finite elements and visualization, as well as some domain-specific packages such as biomedical forward and inverse problems.

For the SCIRun problem solving environment, the integration with ITK provides a robust and broad range of image processing algorithms. In return, the SCIRun framework provides ITK users with advanced visualization capabilities, GUI-based interaction and an advanced problem solving environment which allows dynamic design and steering of complex networks of ITK filters.

In the following, we define the aims and goals of this effort as well as our particular solution. The strategic decisions we made are described in section 6.2. Section 6.3 briefly presents SCIRun, and we present our approach in section 6.4. Section 6.5 lays out the proposed XML description for ITK filters, and wrapping the filters inside SCIRun is presented in section 6.6. A detailed example is given in section 6.7. We conclude in section 6.8.

6.2 Aims

The aims of this effort are to integrate ITK and SCIRun without introducing any changes to ITK proper, to minimize the changes to the SCIRun framework, and to allow easy integration of both old and new ITK filters.

The first requirement, not to impose changes on ITK, stems from the notion that ITK is intended to be used by many external systems. It is thus impractical to allow such systems to impose restrictions on ITK. On the other hand, we do not want to impose too many changes on SCIRun (or any other third-party system), as this would reduce the attraction for others to integrate their systems with ITK. Finally, we want to allow smooth integration and allow SCIRun to work with only a subset of the ITK filters while this subset grows to include more ITK filters, new and old.


Figure 6.1: The SCIRun problem solving environment.

6.3 SCIRun

This section briefly describes the SCIRun framework and compares it with ITK.

During the past decade, SCIRun has been actively developed by the Scientific Computing and Imaging (SCI) Institute at the University of Utah. The main premise of SCIRun is to allow a scientist to interactively steer a computation, change parameters, recompute, and then revisualize, all within the same programming environment (Figure 6.1). The tightly integrated modular environment created by SCIRun allows computational steering to be applied to a broad range of advanced scientific computations. Specific examples include medical imaging, inverse problems, geological exploration, chemical interactions, and pollution dispersion. SCIRun includes modules and libraries in a wide range of areas such as performance analysis, geometric modeling, numerical analysis, and scientific visualization. The SCIRun Problem Solving Environment (PSE), however, lacks support in the area of image processing. This deficiency is most prominent in areas such as medical imaging. The Insight Toolkit can complement SCIRun, as it provides image-processing capabilities ranging from fundamental algorithms to advanced segmentation and registration tools.

An ITK filter (a process object) can receive input (data objects) via input ports and forward results via output ports. In addition, an ITK filter can export access methods to a set of parameters that control the filter's behavior. In a PSE setting such as SCIRun, the framework needs to be aware of the number and types of input and output ports each filter type has. Using this information, the framework can ensure that only valid pipelines (connections from output to input ports) are created. The framework therefore needs a description of all the datatypes that are in use in the ITK system.

The parameters to an ITK filter pose an additional challenge. Some of the parameters to a filter may have default values, some may have restrictions based on the current input data, and some may not represent a particular value at all but rather an action the filter should perform. Furthermore, while a filter parameter may have a particular type, there can be several ways in which a user sets the value of this parameter. For example, a threshold filter can declare that it accepts a threshold value as a float, but the user can supply this value via a text entry, a slider, a knob or even a pixel selection from an image. In other words, parameters that are exposed to the PSE user via a GUI need two levels of information: the type of the data and the GUI representation.

SCIRun modules are similar to ITK modules and have input and output ports as well as parameters that are set by the user; see Figure 6.2. Currently, the GUI for SCIRun is written in Tcl/Tk and is mainly hand-written for each SCIRun module, although some of the code is reusable via itcl/itk (an object-oriented extension to Tcl/Tk).


Figure 6.2: SCIRun network.

6.4 Approach

The approach we employ is to augment ITK filters with meta data (using XML) that describes their capabilities, and to provide SCIRun with a set of tools to automatically wrap ITK filters as SCIRun modules using this meta data. This automation is achieved by processing the XML files via XSL (XML Stylesheet Language) translations to create C++ wrappers as well as Tcl/Tk Graphical User Interface (GUI) bindings suitable for use within the SCIRun framework.

There are several benefits to this approach. On the ITK side, no new code is required. In addition, the meta data is external to the filters, and thus its existence, or lack thereof, does not affect current or new ITK filters. Furthermore, the meta data can be used to help integrate ITK filters into other systems besides SCIRun. There are several benefits on the SCIRun side as well. First, the process is automated, reducing the chance of human errors. New ITK filters that are added to the ITK repository can be incorporated into SCIRun with little or no effort. Lastly, only ITK filters with associated meta data are incorporated into SCIRun. This allows a smooth integration of ITK into SCIRun, where only a small collection of filters is added at first and this base collection grows over time as more filters are augmented with meta data.

The approach calls for two separate stages: creating an XML description for ITK modules and providing the XSL translation for SCIRun. The XML description of ITK filters is a general one and thus can be used by other external packages. On the other hand, wrapping each filter into SCIRun is specific to SCIRun. However, the XSL mechanism we describe in section 6.6 is a general one and can be adopted by other systems. We note that SCIRun is going to use this mechanism to provide integration with other tools.

6.5 The XML description

The purpose of the XML meta data of an ITK filter is to describe the filter's inputs, outputs, parameters, templated types and other ITK-specific information. It is important to note that this XML description is independent of any external system and can be used by any third-party system that wishes to integrate with ITK. In the following we describe each of the XML fields. We use "italic" for a descriptive comment and "bold" to represent an example.

<filter-itk name=" itk::name"><description> describe the filter </description><templated> ... </templated><inputs> ... </inputs><outputs> ... </outputs><parameters> ... </parameters><includes> ... </includes>

</filter-itk>


<filter-itk>
Top-level tag for ITK filters. The name attribute requires a unique identifier for the filter.

<description>
A short description of the purpose of the filter. External packages may use this information to describe the filter to the user or include it in their documentation.

<templated>
Most ITK filters are templated on one or two types, which is part of the ITK generic programming paradigm. The advantage of this style of programming is that the code can be specialized for particular datatypes. The downside is that if the type of the data is not known at compile time, then one needs to instantiate an ITK filter for all possible types. In practice, most ITK filters have only one or two types for which the filter is most suitable.

The templated section lists the types the filter is templated over and specifies those types for which the filter is most suitable. Additionally, each declared template name becomes a valid type in the rest of the XML description, i.e., in the inputs and outputs sections.

<templated>
  <template> InputImageType </template>
  <template> OutputImageType </template>
  ...
  <defaults>
    <default> itk::Image&lt;float,2&gt; </default>
    <default> itk::Image&lt;float,2&gt; </default>
  </defaults>
  <defaults>
    <default> itk::Image&lt;int,2&gt; </default>
    <default> itk::Image&lt;int,2&gt; </default>
  </defaults>
</templated>

<inputs> and <outputs>
The <inputs> and <outputs> sections describe, for each input and output of the filter, how to call it (the function name) and the datatype it accepts. The structures of these sections are the same. Note that the datatype may be a templated type as specified in the <templated> section; see section 6.5.

<inputs><input name=" SourceImage">

<type> InputImageType </type><call> SetInput </call>

</input></inputs>

<parameters>
Parameters are similar to inputs and outputs; however, they are (as of ITK 1.4) treated differently by ITK. Inputs and outputs are built on top of the ITK dataflow mechanism, while parameters are just simple functions that change some values in the filter.

<parameters>
  <param>
    <name> upper_threshold </name>
    <type> float </type>
    <call> SetUpperThreshold </call>
  </param>
</parameters>

<includes>
The <includes> section lists the ITK include files which contain the filter and the associated datatype declarations.

<includes>
  <file> itkBinaryThresholdImageFilter.h </file>
</includes>

6.6 Wrapping ITK filters in SCIRun

In section 6.5 we presented the XML description for ITK filters. These meta data descriptions do not specify how the GUI for an ITK filter should look. There can be many different graphical representations for each filter parameter; e.g., for an isosurface extraction filter, an isovalue is just a double, but its GUI representation can be a text entry, a slider or a pixel in an image. As such, SCIRun represents ITK filters in a two-level XML hierarchy. A simple top-level XML file, sci_filter.xml (Figure 6.3), is associated with each ITK filter and contains references to the actual ITK XML description (itk_filter.xml) as well as a particular GUI description in XML (sci_gui_filter.xml). The file also describes several SCIRun-specific attributes, such as what type of instantiation should be used for a templated ITK filter and additional required include files.

Figure 6.3: SCIRun XML.

It should be emphasized that while there is a single XML file (itk_filter.xml) per ITK filter as far as the ITK toolkit is concerned, there are three such files per ITK filter in the SCIRun framework. These three files are sci_filter.xml and sci_gui_filter.xml, along with ITK's own itk_filter.xml.

The actual wrapping of an ITK filter into SCIRun is achieved via three XSL translation files which are SCIRun-specific. These XSL files create the appropriate SCIRun module C++ file, the module Tcl/Tk GUI code and SCIRun's own XML descriptions of the new modules.

These three XSL files contain all of the SCIRun-specific knowledge needed for automatically wrapping an ITK filter. As such, their implementation is beyond the scope of this presentation. Interested readers can download SCIRun and the additional Package/Insight from http://www.sci.utah.edu to view these files and use SCIRun with ITK.

Other third-party systems can implement their own XSL translations based on itk_filter.xml (and possibly additional system-dependent, per-filter XML descriptions).


6.7 Example

In this section we show how to create an XML description of an existing ITK filter and how to add such a filter to SCIRun. As described in section 6.4, the first part applies to any ITK filter and does not involve SCIRun. The second part is specific to the SCIRun system, yet it illustrates how one might go about doing the same for another external system.

For this example we look at the ITK ReflectImageFilter, which can reflect an image along a given direction. This is an existing filter and thus does not yet have an XML description, so we need to create it. Hopefully, there will eventually be an ITK repository for these XML description files; once such a repository exists, you may find there the XML for the filter you need. Once we have this XML file, we need to create a SCI XML description of it and, optionally, provide a GUI XML description as well.

6.7.1 itk_ReflectImageFilter.xml

We start with the XML for the ITK filter.

The header of the XML file declares the XML version and the DTD file. The itk_filter.dtd file, which is part of this system, provides a mechanism to validate the syntactic correctness of this XML file.

<?xml version="1.0"?><!DOCTYPE filter-itk SYSTEM "itk_filter.dtd">

The filter definition starts next. It requires a unique id ("itk::ReflectImageFilter"), followed by a short description of the filter's task.

<filter-itk name="itk::ReflectImageFilter"><description>

Implements a Reflection of an image along a selecteddirection. This class is parameterized over the typeof the input image and the type of the output image.

</description>

Most ITK filters use templated code. The next section defines the template parameters as well as default instantiations. Once templated types are defined, they can be used to define input, output or parameter types. Most filters also have a small set of possible instantiations, which are either the most common ones or the only ones which make sense for this particular filter. Note that in XML, the correct syntax for "itk::Image<float,2>" is "itk::Image&lt;float,2&gt;".

<templated>
  <template>InputImageType</template>
  <template>OutputImageType</template>
  <defaults>
    <default>itk::Image&lt;float, 2&gt;</default>
    <default>itk::Image&lt;float, 2&gt;</default>
  </defaults>
</templated>

Next come the definitions of the input and output ports, as well as the filter parameters. There is no standard naming in ITK for these ports, nor for their calling sequence. Therefore, the XML description needs to specify the type of each argument and the calling sequence.


<inputs><input name="InputImage">

<type>InputImageType</type><call>SetInput</call>

</input></inputs><outputs>

<output name="OutputImage"><type>OutputImageType</type><call>GetOutput</call>

</output></outputs>

<parameters><param>

<name>direction</name><type>int</type><call>SetDirection</call>

</param></parameters>

The last section describes which include files are needed by the filter.

<includes>
  <file>itkReflectImageFilter.h</file>
</includes>

The filter-itk description is now complete.

</filter-itk>

Remember that this XML file describes the filter only as far as ITK is concerned. As such, other systems can take advantage of this mechanism and parse the file without depending on the SCIRun framework.

6.7.2 sci_ReflectImageFilter.xml

We now look at the SCIRun side, i.e., how one integrates an ITK filter into the SCIRun framework via the XML mechanism. As explained in section 6.4, SCIRun uses a two-level system to represent an ITK filter. The top-level file provides SCIRun-dependent information about the filter as well as links to the ITK XML description and, optionally, an XML file describing the GUI.

First, we consider the case where a GUI file is not provided. The sci_ReflectImageFilter.xml file header is similar to that of the ITK XML, and must specify the DTD verification file.

<?xml version="1.0"?><!DOCTYPE filter SYSTEM "sci_filter.dtd">

The sci filter description must first provide a link to the ITK XML description.


Figure 6.4: A SCIRun network with an ITK ReflectImageFilter and a default GUI.

<filter name="ReflectImageFilter"><include href="ITK/itk_ReflectImageFilter.xml"/>

The following items are SCIRun-specific. They describe which SCIRun package the filter belongs to, and whether the default instantiations specified in the ITK XML are sufficient or whether SCIRun needs different ones. They also provide information on SCIRun-specific include files that are required.

<filter-sci name="ReflectImageFilter"><package>Insight</package><category>Filters</category><instantiations use-defaults="yes"/><includes>

<file>Packages/Insight/Dataflow/Ports/ITKDatatypePort.h

</file></includes>

</filter-sci></filter>

6.7.3 Configuring SCIRun

Once sci_ReflectImageFilter.xml is created, all that is left to do is add this file to the SCIRun repository (Packages/Insight/Dataflow/Modules/Filters/XML) and run "make". The SCIRun make system will pick up the sci_ReflectImageFilter.xml file automatically and run it through the XSL translation system to create the required C++, Tcl and XML files. Figure 6.4 shows SCIRun with a ReflectImage module and its default GUI.


Figure 6.5: An updated version of the ReflectImage GUI.

6.7.4 Adding a Specific GUI

The default GUI our system generates uses a text entry for every parameter. While this may be sufficient in some cases, in general one will probably want to make the GUI more intuitive. Two steps are required: first, creating an XML description file of the GUI, and second, adding the name of this file to sci_ReflectImageFilter.xml.

We choose to use a radio button for the direction parameter. The new gui_ReflectImageFilter.xml looks like:

<?xml version="1.0" ?><!DOCTYPE filter-gui SYSTEM "gui_filter.dtd">

<filter-gui name="ReflectImageFilter"><parameters>

<param name="direction"><gui>radiobutton</gui><values><val>0</val><val>1</val></values><default>1</default>

</param></parameters>

</filter-gui>

Finally, we update sci_ReflectImageFilter.xml and add a reference to the GUI description.

<filter name="ReflectImageFilter"><include href="ITK/itk_ReflectImageFilter.xml"/><include href="ITK/gui_ReflectImageFilter.xml"/> <!-- the additional line -->

The result is shown in Figure 6.5.

6.8 Conclusions

We presented a particular integration of the ITK toolkit with a third-party simulation and visualization framework (SCIRun). The approach is based on augmenting ITK filters with meta data in XML format. These XML descriptions are then converted by a set of XSL translators to produce application-specific C++, Tcl/Tk and XML code. Other systems can build on top of this scheme and provide their own XSL translators to automatically generate wrappers for ITK filters. Almost all of the details of the wrapping are shielded from the user inside the system's XSL files. The amount of work needed by an ITK developer and a SCIRun user is minimal, making the integration of new and old ITK filters simple and seamless.