  • Luc Bidaut et al., preprint to be published in CERN Yellow Reports

    3D IMAGE RECONSTRUCTION IN MEDICINE AND BEYOND

    L. Bidaut+, C. Morel+
    LFMI, Radiology and Surgery Departments, University Hospitals, Geneva, Switzerland
    Institute for High Energy Physics, University of Lausanne, Switzerland

    Abstract In the medical field, 3D volumes are reconstructed mainly by tomographic techniques. Transmission or emission projection data sets are acquired and processed to reconstruct slices across the volume of interest, e.g., the patient's body. Although initially based on 2D acquisitions, current reconstruction techniques for emission modalities (e.g., PET) use more sensitive 3D acquisition data sets processed through modified or entirely original algorithms. Beyond the simple 3D imaging that single tomographies permit, multimodality approaches and equipment actually help reconstruction algorithms to perform better. Mixing these technical developments with complex clinical imaging protocols provides the foundation for a refined and open-ended multidimensional and multisensor approach to either diagnosis or therapy planning and follow-up.

    1. INTRODUCTION

    Since the advent of X-ray-based Computed Tomography (CT) [1,2], other medical imaging modalities have been developed which at least initially used a similar acquisition and reconstruction principle.

    For CT, data are initially acquired by measuring the attenuation of an X-ray beam through the body at various locations around the body during the synchronized rotation of the X-ray tube and detecting equipment. Reconstruction is then a simple filtered back-projection of the acquired sinograms to best recreate a map of the attenuation coefficients (linked to the tissues' densities) inside the field of view (FOV).

    For emission tomography (e.g., Single Photon Emission CT (SPECT) or Positron Emission Tomography (PET)), the acquisition is based on the detection of the radioactive decay within the FOV. Similarly to CT, this detection is rearranged in sinograms which are later filtered and back-projected to estimate a map of the radioactive activity inside the FOV.

    Due to the scope of this report, we shall concentrate on emission tomographies, and even more so on PET. For such modalities, several factors affect the quality of the results at various stages.

    For example, in PET, acquisitions were initially performed in 2D, with mechanical septa to collimate the rays and prevent too many scattered events from being detected. Because such a design also discarded many useful events, the septa were eventually removed, which increased the S/N ratio by a factor of about 4 to 6 with the same detector design. Of course, the acquisition data sets were no longer 2D, and this new paradigm forced reconstruction techniques to be modified or even totally re-invented to accommodate the new dimensionality as well as the increased scatter and noise components.

    Another major factor affecting data quality for emission tomographies is the attenuation of the rays through the various objects they intersect before being detected. These obstacles not only include the table or various mechanical holders, but also the body itself, which can significantly attenuate the signal, even at the higher energies stemming from positron decay. Classically, attenuation, which is generally closely linked to the material's density, has been corrected prior to reconstruction by estimating the attenuation map through either direct measurement (e.g., with an external source of constant activity), or simple (e.g., ROI based) simulation of the various objects in the FOV. Multimodality techniques, and more recently even multimodality machines, now allow the attenuation to be estimated with greater accuracy and resolution directly from the actual morphology of the patient. The latest developments in iterative reconstruction techniques actually incorporate morphology in the non-linear fit, both as a parameter for correcting attenuation and other density-linked effects, and as an added constraint for the calculations.

    This report will attempt to succinctly present the rationale and evolution of tomography, and its extension through techniques such as multimodality and advanced clinical protocols. Magnetic Resonance Imaging (MRI) has been deliberately left aside because its acquisition and imaging principles are actually much closer to frequency spectrum analysis than transmission or emission tomographies are.

    2. TOMOGRAPHY IN MEDICINE

    2.1 Principles

    Medical imaging aims to obtain in vivo pictures of the interior of the body. However, direct imaging of a whole slice through a body using a regular camera is not possible, since visible light does not penetrate deeply into human tissues. Fortunately, outside the visible part of the spectrum, living matter is mostly transparent to electromagnetic radiation. Thus, medical images can be obtained by using sources of light at lower or higher energies than visible light, either transmitted through the body or directly emitted from the body. In both cases, when using either X-rays for transmission tomography or gamma rays for emission tomography, only those rays which escape the body can be detected. Consequently, the picture that a scanner using X-rays or gamma rays can see of a slice through a body is not actually represented in the direct space, but rather by its projections from all around the body.

    2.2 2D concepts

    The problem of image reconstruction is to obtain a representation of an object in the direct space from its representation in a projection space.

    Figure 1: Radon and Fourier transforms

    For simplicity, let us consider the case of parallel projections as they can be constructed from the detection of annihilation pairs in PET. In two dimensions (2D), the whole set of parallel projections which can be built for every projection angle around the object is a 2D representation of the object in a projection space called a sinogram, where, by convention, each line corresponds to a parallel projection of the slice at a different angle (Figs. 1 and 2).
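    As an illustration, the construction of a sinogram from a 2D object can be sketched numerically. The nearest-neighbour rotation scheme and the function names below are implementation choices of this sketch, not the paper's:

```python
import numpy as np

def parallel_projection(image, angle_deg):
    """Line integrals of a square image along the direction angle_deg."""
    n = image.shape[0]
    c = (n - 1) / 2.0
    th = np.deg2rad(angle_deg)
    yy, xx = np.mgrid[0:n, 0:n].astype(float)
    # sample the image on a grid rotated by angle_deg around its centre
    xs = np.cos(th) * (xx - c) + np.sin(th) * (yy - c) + c
    ys = -np.sin(th) * (xx - c) + np.cos(th) * (yy - c) + c
    xi, yi = np.rint(xs).astype(int), np.rint(ys).astype(int)
    inside = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
    rot = np.where(inside, image[np.clip(yi, 0, n - 1), np.clip(xi, 0, n - 1)], 0.0)
    return rot.sum(axis=0)          # integrate along one axis: one sinogram row

def sinogram(image, angles_deg):
    """Stack one parallel projection p(s, phi) per angle: the sinogram."""
    return np.stack([parallel_projection(image, a) for a in angles_deg])
```

    Each row of the returned array is one parallel projection p(s, φ), exactly as in the convention described above.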

    2/19

  • Luc Bidaut et al., preprint to be published in CERN Yellow Reports

    The analytic solution of the 2D image reconstruction problem has been known for nearly 80 years and was established by Radon [3], who gave his name to the mathematical transformation which gives the representation of an object in the sinogram space from its representation in the direct space.

    As indicated in Figure 1, the Radon Transform is equivalent in 2D to the X-Ray Transform, which accounts for the description of line integrals through the object represented in direct space. An object can also be described by its spatial frequencies. For this, a Fourier Transform is applied to its representation in direct space. There is a close relationship between the representation of an object in the sinogram space and its representation in a spatial frequency space, which permits, by inversion of the Fourier Transform, the reconstruction of a representation of an object in direct space f(x, y) from its projections p(s, φ). This relationship is expressed by the central slice theorem, which connects the 1D Fourier Transform P(ν_s, φ) of a parallel projection to the 2D Fourier Transform F(ν_x, ν_y) of the object image along an axis perpendicular to the projection direction:

    P(ν_s, φ) = F(ν_s cos φ, ν_s sin φ)    (1)
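    The theorem can be checked numerically with discrete Fourier transforms: for the angle-zero projection (a plain sum along one image axis), the 1D spectrum of the projection coincides exactly with the matching line of the 2D spectrum of the object. This minimal check assumes nothing beyond NumPy's FFT conventions:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((64, 64))            # an arbitrary "object" in direct space

p = f.sum(axis=0)                   # parallel projection at angle 0
P = np.fft.fft(p)                   # 1D Fourier transform of the projection

F = np.fft.fft2(f)                  # 2D Fourier transform of the object
slice_of_F = F[0, :]                # central slice perpendicular to the projection

print(np.allclose(P, slice_of_F))   # prints True
```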


    Figure 2: Central Slice Theorem: spatial projection (left) vs. frequency space (right)

    Consequently, as shown in Figure 2, measuring the projections all around the object is equivalent to measuring the 2D Fourier Transform of the object in a polar coordinate system. Thus, the representation of the object in direct space can be obtained by inverting this frequency space representation, keeping in mind that the inverse Fourier Transform has to be applied in a Cartesian coordinate system. Therefore, a Jacobian must be introduced to account for the change of variables in the frequency space from polar coordinates to Cartesian coordinates. This amounts to multiplying the frequency space representation of the object, obtained from the 1D Fourier Transform of the measured projections, by the absolute value of the frequency |ν_s|. In other words, a ramp filter is applied to the measured projections, and the representation of the object in the direct space is obtained after backprojection of these filtered projections onto the lines of projections.

    f(x, y) = ∫_0^π dφ ∫_−∞^+∞ dν_s |ν_s| P(ν_s, φ) e^(i 2π ν_s s),  with s = x cos φ + y sin φ    (2)

    This filtered backprojection (FBP) algorithm is a unique analytic solution to the problem of the inversion of the 2D Radon Transform and allows the image of a 2D object to be reconstructed from its projections. This solution is purely analytical in the sense that projections are assumed to be continuous functions measured with infinite accuracy.

    The ramp filter |ν_s| used in FBP is not at all related to sampling considerations. In order to reduce the amplification of high frequencies the ramp filter causes (statistical noise lies in the high frequencies), another low-pass filter has to be combined with it and windowed in the frequency space, to take into account the fact that projections are not continuously measured but sampled with a finite sampling step given by the scanner [4] (Fig. 3).

    Figure 3: Filters (or none) for 2D tomography
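    A discrete version of this filtering step can be sketched as follows; the Hamming apodization used here is one common windowing choice among many, not a prescription from the paper:

```python
import numpy as np

def filter_projection(p, window=True):
    """Apply the ramp filter |nu_s| of Eq. (2), optionally apodized,
    to one measured projection, in the discrete frequency domain."""
    nu = np.fft.fftfreq(p.size)          # discrete spatial frequencies
    ramp = np.abs(nu)                    # the ramp filter
    if window:                           # low-pass window taming high-frequency noise
        ramp *= 0.54 + 0.46 * np.cos(np.pi * nu / np.abs(nu).max())
    return np.real(np.fft.ifft(np.fft.fft(p) * ramp))
```

    Backprojecting such filtered projections over all angles then yields the FBP reconstruction. Note that the ramp filter removes the zero-frequency component, so a constant projection is filtered to zero.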

    2.3 Extension to 3D acquisition

    Eventually, 2D PET machines were extended to 3D acquisition by removing the septa (physical collimators) which were limiting the number of lines of response (LORs). This move produced a significant jump in the sensitivity of the machines to true (i.e., useful) coincidences, but also brought a few other features and drawbacks [5].

    In 3D, the situation gets more complex because the 3D Radon Transform is not equivalent to the 3D X-Ray Transform. In 3D PET, the representation of a volume in the direct space has to be reconstructed from its 2D parallel projections. However, the representation of the object volume in the projection space is no longer a 3D picture, but a 4D one. Therefore, the problem of the inversion of the 3D X-Ray Transform necessitates reducing the 4D projection representation to a 3D direct space representation. The central slice theorem still applies in 3D and is expressed by the relation F(ν) = P(ν, θ) for ν·θ = 0, where ν is a frequency space vector and θ a unit vector along the projection direction. The derivation of the FBP algorithm remains, but the transfer function H(ν, θ) of the filter is no longer unique and has to satisfy, for every frequency vector ν, a condition on the modulation transfer function (MTF) of the filter over all projection directions [6].

    T(ν) = ∫_Ω d²θ δ(θ·ν) H(ν, θ) = 1,  ν ∈ ℝ³    (3)

    A general solution for the filter H(ν, θ) is obtained by normalising an arbitrary function G(ν, θ) by its MTF, provided that G(ν, θ) is such that the normalisation does not vanish.


    H(ν, θ) = G(ν, θ) / ∫_Ω d²θ′ δ(θ′·ν) G(ν, θ′),  ν ∈ ℝ³    (4)

    Figure 4: 4π acceptance normalization (left) vs. cylindrical scanner (right)

    As shown in Figure 4 (left), for Ω = 4π the normalisation obtained for G(ν, θ) ≡ 1 is proportional to the circumference of the great circle perpendicular to ν.

    ∫_4π d²θ δ(θ·ν) = 2π / |ν|    (5)

    For a cylindrical scanner geometry, the acceptance Ω of the scanner is obviously limited by the maximum aperture angle θ₀ of the cylinder, so that Ω < 4π. The normalisation then has to be carefully calculated for each frequency ν as the arc length of the intersection of the great circle perpendicular to ν with the geometrical acceptance of the scanner (Fig. 4, right). For this setting, a filter was derived for G(ν, θ) ≡ 1 by Colsher in 1980 [7]:

    H_Colsher(ν) = |ν| / 2π,  if |cos ψ| ≥ cos θ₀
    H_Colsher(ν) = |ν| / (4 arcsin(sin θ₀ / sin ψ)),  if |cos ψ| < cos θ₀    (6)

    where ψ is the angle between the frequency vector ν and the scanner axis.

    Alternatively, the solution can also be constructed by adding an appropriate factor to G(ν, θ):

    H(ν, θ) = G(ν, θ) + w(θ) [1 − ∫_Ω d²θ′ δ(θ′·ν) G(ν, θ′)] / ∫_Ω d²θ′ δ(θ′·ν) w(θ′)    (7)

    where w(θ) is any integrable function of θ such that the denominator does not vanish. As an example, the FaVoR filter [8] is expressed following that type of solution.


    The large amount of calculation required by the exact analytic inversion of the 3D X-Ray Transform has motivated the development of fast approximate methods incorporating all the information measured by a volume scanner into the reconstructed image. Rebinning methods like single slice rebinning [9], multi-slice rebinning [10], and Fourier rebinning [11] have been derived to address this problem.

    The main drawback of analytic reconstruction methods comes from the fact that measured projections are only pale, noisy copies of true analytic projections. This is especially the case when low statistics data are used, resulting in streak artefacts in the reconstructed image which come from the backprojection of data with significant Poisson variances (Fig. 5, left and middle). To overcome this problem, iterative reconstruction methods have been investigated thoroughly during the last decade. In these methods, a discrete transition matrix H_ij is built to express the probability for an event occurring in voxel j of the object to be detected in bin i of the object's projections.

    p_i = Σ_j H_ij f_j    (8)

    This relation defines a system of linear equations which is inconsistent or ill posed. Relaxation methods using non-negative variables have to be used in order to invert this system and obtain a discrete representation of the image in direct space. These iterative algebraic methods do not depend only on the measured projection matrix, but also allow the measurements to be weighted in order to model physical properties of the detection system and to compensate for factors that affect the accuracy of the data, such as attenuation and scatter. Furthermore, additional weights can include constraints and penalty functions to ensure that the solution obtained has certain desirable properties. Because they are often related to densities or anatomical features, the definition of the weights and constraints can directly benefit at various levels from multimodality approaches (see Section 3).

    To take into account the stochastic nature of the detected events, a statistical Poisson model can be introduced. This allows a likelihood function to be defined for the discrete object representation in direct space:

    P(p | f) = Π_i (p̄_i^(p_i) / p_i!) e^(−p̄_i)    (9)

    where p̄_i = Σ_j H_ij f_j is the expected count in projection bin i.

    The maximum likelihood expectation maximisation (ML-EM) algorithm [12], as well as its accelerated variant OSEM (Ordered Subset Expectation Maximisation) [13], are iterative methods for computing the maximum likelihood estimate of the image based on the measured projections. Although ML algorithms have a tendency to develop noise artefacts with increasing iterations, it appears that they have the potential to produce images of a higher quality, both in terms of resolution and contrast, than 3D analytic algorithms [14]. As a remedy to the noise instability of ML algorithms, it is possible to maximise the a posteriori probability P(f | p) rather than the likelihood P(p | f). The Bayes theorem, which connects P(f | p) with the likelihood function, then permits prior information about the smoothness of the object [15,16] to be incorporated in the reconstruction process. Examples of 3D maximum a posteriori (MAP) reconstructions are given in Figures 5 and 6 (right) using a median root prior (MRP), where the most probable value of a voxel is assumed to be close to the local median [17]. 3D MRP reconstructions are compared to 2D (Fig. 5, left) and 3D (Fig. 5, middle, and Fig. 6, left) analytic reconstructions, and to an OSEM reconstruction (Fig. 6, middle) of the same data.
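    For reference, the standard ML-EM update for the model of Eq. (8) can be sketched compactly; this is the textbook form of the algorithm, with no normalisation, attenuation or scatter weighting included:

```python
import numpy as np

def mlem(H, p, n_iter=100):
    """ML-EM estimate of f from measured counts p, for the model p = H f.

    H : (bins, voxels) transition matrix, p : measured projection counts.
    Each iteration forward-projects the current estimate, compares it to
    the data, and applies the multiplicative EM update, which preserves
    non-negativity of the image.
    """
    f = np.ones(H.shape[1])              # flat, positive initial image
    sens = H.sum(axis=0)                 # detection sensitivity per voxel
    for _ in range(n_iter):
        pbar = H @ f                     # expected counts for current estimate
        ratio = np.divide(p, pbar, out=np.zeros_like(p, dtype=float),
                          where=pbar > 0)
        f *= (H.T @ ratio) / sens        # multiplicative EM update
    return f
```

    OSEM applies the same update to ordered subsets of the projections in turn, which accelerates convergence by roughly the number of subsets.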


    Figure 5: Chest slice: 2D FBP (left), 3D RP (middle), 3D MAP-MRP (right)

    Figure 6: Phantom: 3D RP (left), OSEM (middle), 3D MAP-MRP (right)

    2.4 Real life issues in medical imaging

    In real-life clinical imaging, several factors affect the quality of the detection and of the subsequent imaging.

    2.4.1 Quality control: normalization and calibration

    A significant issue in PET is the need for a very strict quality control protocol to make sure that all detectors on the gantry are working with the same (or at least a known) efficiency. This ensures that, for a given gantry geometry, all annihilation events occurring anywhere in an empty FOV will produce a coincidence signal of the same strength. To put it another way: normalization ensures the uniformity of the detection for all the LORs within the FOV. Traditional direct normalization uses the same source to illuminate and correct every LOR in the FOV. Although simple to handle, this method has some rather obvious drawbacks: it does not adequately differentiate between true coincidences and scatter (see later) [18], and it can only use low activity sources (i.e., very long acquisition times) in 3D mode [19]. Because of such limitations, and despite some promising techniques [20], an adequate normalization technique is still something of a Holy Grail.

    Besides normalization, calibration ensures that the gain of the tomograph is known and stays the same through time. Together, normalization and calibration procedures allow data from a camera to be accurately quantified and to stay comparable for similar protocols.

    2.4.2 False or mistaken coincidences

    In clinical PET, true coincidences which fall within a specific time/energy/geometry window for a given camera are the relevant measurements, but this index can suffer significantly from missed or false coincidence events, which stem mainly from random or multiple coincidences, or from scattered photons.

    2.4.2.1 Random and multiple coincidences

    Random coincidences are caused by the detection, within the coincidence time window and along the LORs of the camera, of two photons coming from two separate annihilations. These coincidences, which add statistical noise to the data, can be corrected for by assuming a direct link between single events (events without a coinciding one) and random coincidences [21], or through so-called delayed coincidences [22].
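    The singles-based estimate commonly takes the form R_ij = 2τ S_i S_j for a coincidence window of width 2τ; the window width and singles rates below are purely illustrative values, not figures from the paper:

```python
# Randoms-from-singles estimate for one LOR (a standard model; the
# numerical values below are illustrative assumptions).
tau = 6e-9               # half of the coincidence time window, in seconds
S_i, S_j = 2.0e5, 1.5e5  # singles rates of the two detectors, counts/s
R_ij = 2.0 * tau * S_i * S_j   # expected random-coincidence rate, counts/s
```

    For these values the expected randoms rate on the LOR is about 360 counts/s, which would then be subtracted from (or modeled within) the measured coincidences.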

    Multiple coincidences occur when more than two photons fall within the coincidence time/energy/geometry window of the camera. Because they cannot be assigned to only one LOR, such events are simply discarded and generally contribute to the single events' rate.

    2.4.2.2 Scatter

    Photons from positron-electron annihilations in PET can interact with an electron in the material (e.g., human tissues) they travel through before being detected. Such an interaction not only increases the energy of the electron but also changes the direction of the photon and its energy according to Equation 10 [23].

    E′ = E / (1 + (E / m₀c²)(1 − cos θ))    (10)

    with m₀c² the rest energy of the electron and θ the scattering angle.
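    Equation 10 can be evaluated directly; for example, a 511 keV annihilation photon scattered through 90° retains about 255.5 keV, which is why a tight energy window rejects part of the scatter:

```python
import math

def compton_energy(E_keV, theta_deg, m0c2=511.0):
    """Scattered-photon energy E' from Eq. (10); energies in keV."""
    theta = math.radians(theta_deg)
    return E_keV / (1.0 + (E_keV / m0c2) * (1.0 - math.cos(theta)))
```

    Small-angle scatters, however, lose very little energy and remain inside any practical window, which is why energy discrimination alone cannot remove scatter.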

    For the detection equipment and on final images, the scattered photons represent ghost annihilations and add a background level and statistical noise to the true coincidences' distribution. Such events cannot simply be discarded by energy windowing, which would also reject too much of the useful signal, or by reinstalling septa on machines of the latest designs, which would significantly decrease the S/N ratio.

    Because 3D machines no longer have septa, they are also much more sensitive to scattered events than 2D machines were [5]. Although scatter used to be mostly ignored in 2D, it has become a major effect to tackle in order to open the door to machines with even larger solid angles and total numbers of LORs. In such a context, many techniques have been proposed to correct scatter in 3D acquisitions [24,25,26,27,28]. Currently, because the amount of scattered events actually depends on the attenuation of the objects in the FOV and on the geometry of the tomograph, some of the most successful correction techniques use algorithms based on the modeling of scatter in the FOV [29,30]. Similarly to the correction of attenuation and the partial volume effect (see later), multimodality approaches have become mandatory for the implementation of such techniques.

    2.4.3 Attenuation

    Even the high energy photons of PET can interact with matter on their paths. The probability of this interaction increases with the atomic number of the atoms and with a decreasing energy of the photons. The linear attenuation coefficient μ of a substance is the probability that a photon of a given energy will interact per unit distance. This definition leads to Equation 11 [23], which links the intensity I(x) at a distance x inside a substance of attenuation μ to the initial intensity I₀.

    I(x) = I₀ e^(−∫₀ˣ μ(x′) dx′)    (11)


    For PET, attenuation occurs independently for both photons from an annihilation and the probability of coincidence detection is therefore:

    P_c(LOR) = P₁(LOR) · P₂(LOR) = e^(−∫_dist(LOR) μ(x) dx)

    with P₁(LOR) = e^(−∫_dist(1,LOR) μ(x) dx),  P₂(LOR) = e^(−∫_dist(2,LOR) μ(x) dx)    (12)

    Because it is independent of the location along the line of response (LOR), the attenuation factor (1-Pc) will be the same for any annihilation along that LOR (Fig. 7). This enables a simple correction of attenuation in PET (in SPECT, only one photon is detected and the attenuation is therefore not constant along a given LOR, or rather projection line).
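    Since the combined survival probability of Eq. (12) depends only on the total attenuation along the LOR, a single multiplicative correction per LOR suffices. A minimal sketch (the sampling step and the water-like μ value are illustrative assumptions):

```python
import numpy as np

def attenuation_correction_factor(mu_samples, step_cm):
    """Multiplicative correction for one LOR, from Eq. (12).

    mu_samples : linear attenuation coefficients (1/cm) sampled along the LOR
    step_cm    : sampling step along the LOR (cm)
    The factor is identical wherever the annihilation occurred on the LOR.
    """
    line_integral = np.sum(mu_samples) * step_cm   # integral of mu over the LOR
    p_coincidence = np.exp(-line_integral)         # both photons escape
    return 1.0 / p_coincidence

# Example: 20 cm of water-equivalent tissue at 511 keV (mu ~ 0.096 / cm)
acf = attenuation_correction_factor(np.full(200, 0.096), 0.1)
```

    For this example the correction factor is e^1.92, i.e. close to 7, which illustrates how strongly even soft tissue attenuates coincidences through a body-sized object.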

    Figure 7: PET attenuation principle: constant for a given LOR

    To correct for attenuation, a map of the attenuating substances within the FOV has to be estimated. For most machines (but also depending on the acquisition protocol), such a map can actually be directly measured by sources rotating on the edge of the FOV [31]. During the rotation, the detectors facing the sources directly measure the attenuation of their rays through the attenuating substances to create a transmission sinogram which is later used to correct the emission data.

    Figure 8: Various types of attenuation maps for the head: from a simple ellipse (or more, depending on the required level of detail) to an anatomical map based on CT

    Other approaches to estimating a realistic attenuation map can use a simplified synthetic model of the objects inside the field of view (such as a single ellipse for the head) or pre-segmented transmission data [32,33] (Fig. 8). Nowadays, and even more so in the wake of multimodality machines [34], attenuation, which is related to the tissues' density, is best estimated with morphological data sets (e.g., CT) of the same patient and surrounding equipment. This information (which can also be obtained through more traditional multimodality techniques; see later) can either be used in the traditional way, by projecting the map into a correction sinogram, or directly within more sophisticated reconstruction algorithms.
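    As a sketch of how a CT volume can be turned into a PET attenuation map, a piecewise-linear ("bilinear") mapping from CT numbers to 511 keV attenuation coefficients is commonly used; the break point and coefficient values below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ct_to_mu_511(hu, mu_water=0.096, mu_bone=0.172, hu_bone=1000.0):
    """Piecewise-linear CT(HU) -> mu(511 keV) conversion, in 1/cm.

    Below 0 HU the map interpolates between air and water; above 0 HU a
    shallower slope accounts for the different energy dependence of bone.
    All coefficients here are illustrative, not calibrated values.
    """
    hu = np.asarray(hu, dtype=float)
    soft = mu_water * (1.0 + hu / 1000.0)                    # air ... water
    bone = mu_water + (mu_bone - mu_water) * (hu / hu_bone)  # water ... bone
    return np.clip(np.where(hu <= 0.0, soft, bone), 0.0, None)
```

    The two slopes are needed because CT attenuation is measured at much lower photon energies than 511 keV, so a single scaling factor would overestimate μ for bone.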


    2.4.4 Low resolution effects

    Other than the effects previously mentioned, physical principles of PET scanners lead to low resolution data sets. The relatively large size of the detectors affects the exact localization of a detection, as does the delay (i.e., free travel) between a positron emission and its annihilation, or the time resolution of the machine. While smaller but still sizeable detectors and faster electronics will certainly be developed, the free travel of a positron is only linked to its energy and to the media it travels through. This latter characteristic represents the true resolution limit of PET imaging.

    2.4.4.1 Deconvolution

    The combination of the geometry of the tomograph (including the size of the individual detectors) and the final reconstruction acts as a low-pass filter on the data which are actually emitted and detected in the FOV. If this filter's transfer function is identified (either through direct measurements or through theoretical modeling and simulation of the tomograph's response), deconvolution can be applied to correct the reconstructed data and - hopefully - restore their true resolution [35].
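    A frequency-domain sketch of such a restoration is shown below, using a regularised (Wiener-like) division rather than a naive inverse so that noise-dominated frequencies are not amplified; the regularisation constant is an assumption of this sketch:

```python
import numpy as np

def deconvolve(measured, psf, k=1e-3):
    """Restore a 1D profile blurred by a known point-spread function.

    measured : reconstructed (low-pass filtered) profile
    psf      : sampled PSF of the tomograph, same length as `measured`
    k        : small regularisation constant preventing division blow-up
    """
    H = np.fft.fft(psf)
    M = np.fft.fft(measured)
    F = M * np.conj(H) / (np.abs(H) ** 2 + k)   # regularised inverse filter
    return np.real(np.fft.ifft(F))
```

    In practice the choice of k trades restored resolution against amplified noise, which is why deconvolution only "hopefully" restores the true resolution.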

    2.4.4.2 Partial volume effect

    Low resolution imaging systems blend close original space locations together in the final reconstructed volume. If these locations contain the same activity, their representation (and subsequent interpretation) is not affected by this partial volume effect (PVE). If the activities are different though, the final result shows only an average activity at the corresponding location in the reconstructed data set and therefore tends to decrease the original contrast. Because such spatially based differences can be relevant in a clinical scope, PVE correction schemes have been developed which use a priori information about the distribution of the activity (e.g., based on the functional anatomy) to correct the reconstructed data and virtually increase their resolution beyond what the tomograph alone could actually produce [36]. Obviously, things are never as simple as they should be, and as for all other effects, the correction of PVE is still a very open and active research field which also heavily relies on multimodality approaches.
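    The contrast loss can be reproduced with a toy 1D experiment; the lesion size, activity levels and box kernel below are arbitrary illustrations of the effect, not a model of any particular scanner:

```python
import numpy as np

# A 3-voxel "lesion" of activity 10 on a background of 1 ...
true = np.ones(64)
true[30:33] = 10.0

# ... blurred by a 5-voxel box kernel standing in for the scanner response.
kernel = np.ones(5) / 5.0
measured = np.convolve(true, kernel, mode="same")

# Apparent contrast over true contrast: the recovery coefficient.
recovery = (measured.max() - 1.0) / (true.max() - 1.0)   # ~ 0.6 here
```

    A PVE correction based on anatomical priors essentially divides the measured activity by such a recovery coefficient, estimated per structure from the known geometry and the scanner response.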

    3. MULTIMODALITY

    Nowadays, clinical medical imaging protocols routinely use several modalities, based on different physical principles, to provide complementary information (Fig. 9).

    CT images absolute density, as measured by the attenuation of X-rays, and is therefore mainly used for hard tissue imaging (i.e., bones, etc.). MR images the resonance/precession of protons from water molecules and, depending on the sequences used, can exhibit exquisite detail in the water-rich soft tissues (i.e., all but the bones). Both CT and MR produce data closely related to the anatomy/morphology of the patient, and they are used as anatomical references in most multimodality protocols.

    SPECT, through various radioactively labeled tracers, can show metabolic processes such as blood perfusion and tumor growth. Because of its more refined principle (i.e., the annihilation of a positron with an electron and the emission/detection of two opposite gammas of 511 keV each), PET, which also uses various tracers and protocols, produces data of a higher intrinsic quality and has the potential to be more sensitive and quantitative than SPECT will ever be, provided that its drawbacks (e.g., scatter, attenuation, non-anatomical modality, etc.) are adequately addressed.


    Figure 9: Various modalities: Top (from the Visible Human data set): true anatomy, CT, MRI proton density and T1 weighted images; Bottom (from a clinical data set): MRI, PET and SPECT.

    As mentioned several times before, the need for multimodality approaches in this context (i.e., mainly for the correction of detrimental effects) principally stems from the necessity to accurately estimate the actual morphology of the volume (e.g., the patient) where the reconstruction will be performed. This information can then be used to correct for attenuation, scatter and PVE, to constrain the reconstruction where it is most appropriate, and of course to localize the reconstructed data in relation to the actual anatomy of the patient. Multimodality approaches principally rely on registration techniques, and on cross-visualization or cross-processing techniques of varying complexity and/or creativity. The actual techniques used in a given protocol will depend on the modalities and the organ(s) of interest, as well as on the clinical or research paradigm to be addressed. A few illustrative examples follow.

    3.1 Registration techniques

    The purpose of registration is to provide the best way to unambiguously map the location of a voxel (volume element) in one data set to the location of the corresponding voxel in another data set. The goal of registration is therefore to estimate the most likely correspondence (i.e., geometric transformation) between two (or more) data sets.

    For the latest design of intrinsic multimodality machines [37], the registration technique will be a simple axial translation based on the table's motion between the two detectors (e.g., a Siamese PET and CT gantry).

    For (still) more standard and largely disconnected imaging equipment, the registration step has to account for the various positions and physiological states of the patient on the various machines (different tables and acquisition/reconstruction geometries) and at the various times of the individual examinations.

    In the course of many years, several approaches have been developed or refined which can now address most of the needs for rigid body transformation between modalities and data sets of various types [38,39,40]. Typically, these approaches range from manual alignment (e.g., based on simultaneous display and interactive transformation of image pairs) to the alignment of selected landmarks (artificial or anatomical fiducials), the registration of graphical primitives of higher dimensionality (lines, planes, surfaces), and, finally, the alignment of whole volumes through volume-based similarity measures (Fig. 10). The more complex techniques generally rely on iterative algorithms where a cost (analogous to a distance between the reference and the realigned primitives or volumes) is minimized until a threshold or an actual minimum is reached. The geometric transformation extracted from the final parameters is then applied to produce a data set aligned with the actual reference (e.g., the anatomy of the patient).
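    The cost-minimisation loop can be illustrated with the simplest possible case: an exhaustive search over 1D translations using a sum-of-squared-differences cost. Real registration problems optimise six or more rigid-body parameters with richer similarity measures, so this is only a didactic sketch:

```python
import numpy as np

def best_translation(reference, moving, max_shift=8):
    """Return the integer shift of `moving` minimising the SSD cost
    against `reference` (circular shifts, for simplicity)."""
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        cost = np.sum((reference - np.roll(moving, s)) ** 2)  # the "distance"
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift
```

    Iterative registration algorithms replace the exhaustive scan with a gradient-based or simplex search over the transformation parameters, but the structure (transform, evaluate cost, update) is the same.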

    Figure 10: Several ways of aligning multimodality (MR and SPECT) images: images, surfaces, volume correlation (as seen on the 2D histogram)

    3.2 Visualization

    Once put together, registered data sets make up a multimodality (or multisensor) world of extended dimensionality which can be navigated at will through various visualization paradigms. Although there is a tremendous and much needed potential for cross processing of complementary modalities, multimodality visualization is still the most obvious outcome for many.

    Visualization techniques for multimodality volumes need to merge the most relevant information from each data set in a manner comprehensible to various types of users and in the scope of various clinical and research-oriented paradigms.

    When only slices are needed, 2D cuts through each registered volume can be individually colored and mixed with a user specified mutual transparency ratio prior to rendering. Such a display can, for example, exhibit metabolism over anatomy and permit localizing a hot spot in relation to a suspected tumor.
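    Per pixel, such a fused display reduces to a weighted mix of the two registered slices. A minimal sketch with a single global transparency ratio (per-pixel opacity and colour maps are straightforward extensions):

```python
import numpy as np

def fuse_slices(anatomy, function, alpha=0.4):
    """Blend a functional slice over an anatomical one.

    Both inputs are float arrays scaled to [0, 1]; `alpha` is the
    user-specified transparency ratio of the functional overlay.
    """
    return (1.0 - alpha) * anatomy + alpha * function
```

    In practice each modality is first passed through its own colour map (e.g., grey for anatomy, a hot scale for metabolism) before blending, so the hot spot remains visually distinct over the anatomical background.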

    When whole structures need to be visualized in 3D (or used as an attenuation mask for attenuation correction), they first have to be segmented. The complexity of segmentation techniques ranges from simple thresholding of a data set (e.g., to extract bone in a CT) to manual editing (contouring, tagging, etc.), and on to more refined histogram-based, morphological, topological and statistical techniques [41]. Once segmented, a structure can be used as a mask or as the underlying object onto which other information from the registered data sets is projected. 3D visualization can then take the shape of either surface or volume rendering, or a combination of both [42] (see various figures in this report, particularly Fig. 14).
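The simple-thresholding case mentioned above (bone from CT) reduces to a one-line mask over the Hounsfield-unit image; the threshold value here is illustrative, not a clinical setting.

```python
import numpy as np

# Toy CT slice in Hounsfield units: air ~ -1000, soft tissue ~ +40, bone ~ +700.
ct = np.full((8, 8), -1000.0)
ct[2:6, 2:6] = 40.0       # soft tissue
ct[3:5, 3:5] = 700.0      # bone

BONE_HU = 300.0           # illustrative threshold
bone_mask = ct >= BONE_HU # boolean mask usable for surface extraction

print(int(bone_mask.sum()))   # 4 bone voxels in this toy slice
```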

    3.3 Cross processing: using morphology in PET reconstruction

    In the special case of refined reconstruction techniques which can use information from different sources, multimodality approaches can be used to inject morphology-derived (or other) knowledge at various levels into the reconstruction process (Fig. 11).

    Morphology can, for example, be used by the reconstruction algorithm to estimate the most likely map of attenuation coefficients for which the acquisition data need to be corrected. Although this correction can still be performed with standard PET transmission images, a registered CT from another modality or from a combined tomograph can preferably be used to derive the PET-oriented attenuation map.
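As a hedged sketch of how such an attenuation map enters the correction, assuming idealized horizontal lines of response and a mu map already expressed at 511 keV (function and variable names are illustrative):

```python
import numpy as np

def acf_along_rows(mu_map, voxel_cm=0.4):
    """Attenuation correction factor for each horizontal line of response:
    ACF = exp(line integral of mu), discretized here as a simple row sum.
    mu_map holds linear attenuation coefficients in 1/cm; a CT-derived map
    would first have to be rescaled from Hounsfield units to 511 keV."""
    line_integrals = mu_map.sum(axis=1) * voxel_cm
    return np.exp(line_integrals)

# 5 parallel LORs crossing 30 voxels (12 cm) of water-like material.
mu = np.zeros((5, 50))
mu[:, 10:40] = 0.096      # ~ linear attenuation of water at 511 keV, 1/cm
print(acf_along_rows(mu).round(2))
```

Multiplying the measured coincidences on each line by its ACF restores the activity that attenuation removed, which is why a good morphology-derived mu map matters.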

    When the acquired signal is known to have come exclusively from a specific organ or tissue class (such as a tumor, or gray matter in the brain), this added knowledge and its corresponding anatomical mask extracted from an anatomical data set can be used by the algorithm to constrain the reconstruction to only the relevant portion of the volume. This type of constraint is more traditionally (although seldom) handled by an a posteriori multimodality PVE (partial volume effect) correction [36] (Fig. 12). Incorporating this correction into the reconstruction algorithm has the potential to provide, in a single step, reconstructed data sets with both improved spatial resolution and quantitative accuracy.
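A toy sketch (not the authors' implementation) of constraining an iterative reconstruction to an anatomical mask: a plain MLEM loop on a made-up system matrix, where voxels outside the mask never receive updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: 12 lines of response viewing 9 voxels (matrix A is made up).
A = rng.random((12, 9))
mask = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0], dtype=float)  # "relevant tissue"
x_true = mask * rng.random(9)
y = A @ x_true                          # noise-free simulated measurements

# MLEM restricted to the anatomical mask.
x = mask.copy()                         # uniform start inside the mask
sens = A.T @ np.ones(12)                # sensitivity (back-projection of ones)
for _ in range(500):
    ratio = y / np.maximum(A @ x, 1e-12)
    x = mask * (x * (A.T @ ratio) / np.maximum(sens, 1e-12))

print(bool((x[mask == 0] == 0).all()))  # True: nothing outside the mask
```

Because the multiplicative update is masked, all counts are forced into the anatomically plausible voxels, which is the essence of the constraint described above.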

    Finally, and although there is still much work going on to reliably estimate scatter for a geometry as complex as the human body, morphology and tissue characteristics extracted from CT or MR can also be used within the reconstruction algorithm to (at least) partially correct for the estimated scatter, most often in parallel with attenuation correction.

    Figure 11: Multimodality in relation to reconstruction


    Figure 12: PVE correction on a multimodality brain data set: MRI (top left) is used to segment the brain into its various tissue components (top right, from dark to clear: white matter, gray matter, and CSF). Based on the estimated tomograph transfer function, this map is then used to correct the PET data set, where the relatively small size of the cortex (gray matter) affects its representation. When compared to the original data on the left, the PVE-corrected slice (middle row) exhibits restored gray-matter concentration, which also shows on a 3D MIP (bottom row) through the whole brain.

    4. EXAMPLES OF ADVANCED PROTOCOLS

    In medicine, tomographs are not developed merely for the beauty of the equipment or of the algorithms, but above all to provide the best possible knowledge about specific characteristics of pathological (or even normal) anatomy, metabolism and function. As such, any tomography (be it CT, MR, SPECT or PET) is often only one component of more complex multidisciplinary protocols whose final aim is to best understand, diagnose, plan therapy for and assess recovery from pathologies otherwise too complex to characterize. Two examples follow in which multimodality approaches have been mandatory for the proper staging of the patient.

    4.1 Liver tumor

    A patient was referred because of jaundice and a liver tumor was diagnosed. In the diagnostic work-up, a CT angiography was performed and showed a large tumor and thrombosis of the portal branches. Because of the thrombosis, one could not be sure whether the tubular structures seen at the hilus were only vessels or also dilated bile ducts. In order to clarify this question, which was of crucial importance for the choice of therapy, an MR cholangiography was performed and the images were fused with the CTA by registering the liver and other structures from both data sets.

    After registration, even the blood vessels which were not clearly exhibited on the CTA could be differentiated from the biliary ducts shown by the MR cholangiography. It was obvious on the registered images that the main left intrahepatic bile duct was dilated, which made a percutaneous biliary drainage a realistic therapeutic approach.


    After having driven a drain and a stent into the common bile duct, a second CT was performed to control the intervention. This CT was registered to the other data sets, again based on the anatomical landmarks of the liver. The resulting multi(3)-modality data set allowed the biliary drain and stent from the second CT to be shown in relationship to the liver tumor and the intrahepatic bile ducts (Fig. 13, bottom right).

    Figure 13: Multimodality imaging for planning a drain and a stent in the area of a liver tumor. Top: planning CT angiography and MR cholangiography; Bottom: control CT and 3D model incorporating all the information (from clear to dark: liver, tumor, drain+stent).

    4.2 Epilepsy

    Figure 14 summarizes the case of a patient for whom the non-invasive epilepsy investigation pointed at a large part of the left frontal cortex, a major area normally not to be tampered with. The top and middle rows of Figure 14 exhibit the results from the non-invasive phase of the investigation: MRI shows a normal brain, while FDG-PET shows a mild hypometabolism which correlates with the cortical mapping of an EEG-based EMT (Electro-Magnetic Tomography) of an interictal spike average, and also with the significant perfusion increase derived from ECD-SPECT data sets.

    In order to refine the Phase I findings, a CT-compatible grid and a few strips were surgically implanted through a scalp and bone flap, and the patient was then monitored for a few days. The control CT performed after the implantation was used to initially estimate the location of the grid's contacts. This estimate was then further refined by an automatic, model-based, multi-constrained fitting after registering the control CT to the Phase I MR anatomy [43]. At the end of the whole process, every individual grid contact could be projected into the Phase I multisensor world, and the results of the Phase II grid monitoring could then be directly compared to the Phase I findings (Fig. 14, middle and bottom left rows). Figure 14 shows very good agreement between the Phase I and Phase II results: the Phase II focus pinpointed by the grid (the 8 clearer contacts on the two leftmost/anterior grid columns) closely correlates with the Phase I evidence. Artificially stimulating the grid's contacts also allowed the surgeons to map the cortex directly under the grid, locating functional areas to be preserved during, or to serve as spatial references for, subsequent surgery.

    As an ultimate control of the whole diagnosis/treatment sequence for this patient, surgical slides taken during the final intervention were merged with the Phase I/II multisensor world (two rightmost pictures of the bottom row of Figure 14). This "real-life" information demonstrates that surgery actually took place where the Phase I and II data were all pointing. Reference [43] describes this case in more detail.

    Figure 14: Multisensor imaging for epilepsy: the top images were acquired non-invasively (MRI, FDG-PET, EEG-based EMT, parametric ECD-SPECT) and led to the implantation of the invasive electrodes (grid). After monitoring, and registration of the control CT with the other modalities, the focus identified with the electrodes was found to correlate with and refine the non-invasive findings. Surgery was finally performed based on all the data at hand, and a slide taken during the intervention was aligned to the all-digital world.

    5. CONCLUSION

    Tomography for clinical modalities is a field which has evolved very quickly in the last few years thanks to innovative designs and to advances in algorithms, data perception and computing power. The evolution of most medical imaging protocols, and of their users, toward the integration of obviously complementary equipment has also opened the door to using data produced in very different scopes to refine the processing of others. Multimodality approaches (both soft and hard) and the refined reconstruction and processing algorithms permitted by ever more powerful computer architectures and acquisition systems will eventually let us see through and inside the human body and its metabolism in ways few ever dared to dream of.


    ACKNOWLEDGEMENTS

    We thank the PARAPET consortium for OSEM and MAP-MRP images; the Department of Radiology for clinical MRI, SPECT and PET images; the Clinic of Neurology for clinical electrophysiology data (EEG and spherical EMT (Loreta)); the EEG and Epilepsy Unit for overseeing clinical epilepsy investigations; and the Clinic of Neurosurgery for all surgery (including invasive implantation). Data collection, merging and all other processing and visualization were performed in the Laboratory of Functional and Multidimensional Imaging.

    REFERENCES

    [01] G.N. Hounsfield, A method of and apparatus for examination of a body by radiation such as X or gamma radiation, (1972), British Patent Number 1283915.

    [02] G.N. Hounsfield, Computerized transverse axial scanning (tomography): Part 1: Description of system, British J of Radiology 46 (1973) 1016-1022.

    [03] J. Radon, Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten, Berichte der Sächsischen Akademie der Wissenschaften 69 (1917) 262-277.

    [04] R.E. Ziemer, W.H. Tranter and D.R. Fannin, Signals and systems: continuous and discrete (Macmillan Pub Co., 1989).

    [05] S.R. Cherry, M. Dahlbom, and E.J. Hoffman, 3D PET using a conventional multislice tomograph without septa, J. Comput. Assist. Tomogr. 15 (1991) 655-668.

    [06] M. Defrise, D.W. Townsend and R. Clack, Three-dimensional image reconstruction from complete projections, Phys. Med. Biol. 34 (1989) 573-587.

    [07] J.G. Colsher, Fully three-dimensional positron emission tomography, Phys. Med. Biol. 25 (1980) 103-115.

    [08] C. Comtat, C. Morel, M. Defrise and D.W. Townsend, The Favor algorithm for 3D PET data and its implementation using a network of transputers, Phys. Med. Biol. 38 (1993) 929-944.

    [09] M.E. Daube-Witherspoon and G. Muehllehner, Treatment of axial data in three-dimensional PET, J. Nucl. Med. 28 (1987) 1717-1724.

    [10] R.M. Lewitt, G. Muehllehner and J.S. Karp, Three-dimensional reconstruction for PET by multi-slice rebinning and axial image filtering, Phys. Med. Biol. 39 (1994) 321-340.

    [11] M. Defrise, P.E. Kinahan, D.W. Townsend, et al., Exact and approximate rebinning algorithms, IEEE Trans. Med. Imag. 16 (1997) 145-158.

    [12] L.A. Shepp and Y. Vardi, Maximum likelihood reconstruction for emission tomography, IEEE Trans. Med. Imag. 1 (1982) 113-122.

    [13] H.M. Hudson and R.S. Larkin, Accelerated image reconstruction using ordered subsets of projection data, IEEE Trans. Med. Imag. 13 (1994) 601-609.

    [14] A.J. Reader, A. Visvikis, K. Erlandsson, et al., Intercomparison of four reconstruction techniques for positron volume imaging with rotating planar detectors, Phys. Med. Biol. 43 (1998) 823-834.

    [15] S. Geman and D. Geman, Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images, IEEE Trans. Pattern Anal. Mach. Intell. 6 (1984) 721-741.

    [16] T. Hebert and R.M. Leahy, A generalised EM algorithm for 3-D Bayesian reconstruction from Poisson data using Gibbs priors, IEEE Trans. Med. Imag. 8 (1989) 194-202.


    [17] S. Alenius and U. Ruotsalainen, Bayesian image reconstruction for emission tomography based on median root prior, Eur. J. Nucl. Med. 24 (1997) 258-265.

    [18] J.M. Ollinger, Detector efficiency and Compton scatter in fully 3D PET, IEEE Trans. Nucl. Sci. 42(4) (1995) 1168-1173.

    [19] J.S. Liow and S.C. Strother, Normalisation using rotating rods for 3D PET, Proc. 3rd Int. Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine (1995).

    [20] E.J. Hoffman, T.M. Guerrero, G. Germano, et al., PET system calibrations and corrections for quantitative and spatially accurate images, IEEE Trans. Nucl. Sci. 36(1) (1989) 1108-1112.

    [21] B.E. Cooke, A.C. Evans, E.O. Fanthome, et al., Performance figure and images from the Therascan 3128 positron emission tomograph, IEEE Trans. Nucl. Sci. 31(1) (1984) 640-644.

    [22] M.E. Casey, and E.J. Hoffman, Quantitation in Positron Emission Computed Tomography: 7. A technique to reduce noise in accidental coincidence measurements and coincidence efficiency calibration, J. Comput. Assist. Tomogr. 10 (1986) 845-850.

    [23] R.D. Evans, The atomic nucleus (McGraw-Hill, 1955)

    [24] D.L. Bailey and S.R. Meikle, A convolution-subtraction scatter correction method for 3D PET, Phys. Med. Biol. 39 (1994) 411-424.

    [25] C.S. Levin, M. Dahlbom, and E.J. Hoffman, A Monte-Carlo correction for the effect of Compton scattering in 3-D PET brain imaging, IEEE Trans. Nucl. Sci. 42(4) (1995) 1181-1185.

    [26] S.R. Cherry, S.R. Meikle and E.J. Hoffman, Correction and characterisation of scattered events in three-dimensional PET using scanners with retractable septa, J. Nucl. Med. 34 (1993) 671-678.

    [27] L. Shao, R. Freifelder and J.S. Karp, Triple energy window scatter correction technique in PET, IEEE Trans. Med. Imag. 13(4) (1994) 641-648.

    [28] C.W. Stearns, Scatter correction method for 3D PET using 2D fitted gaussian functions, J. Nucl. Med. 36 (1995) 105P

    [29] J.M. Ollinger, Model-based scatter correction for fully 3D PET, Phys. Med. Biol. 41 (1996) 153-176.

    [30] C.C. Watson, D. Newport, and M.E. Casey, A single-scatter simulation technique for scatter correction in 3D PET, in Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, P. Grangeat and J.L. Amans (Eds.), (Kluwer Academic Publishers, 1996).

    [31] C.J. Thompson, A. Dagher, D.N. Lunney, et al., A technique to reject scattered radiation in PET transmission scans, Proc. SPIE 671 (1986) 244-253.

    [32] S. Siegel, and M. Dahlbom, Implementation and evaluation of a calculated attenuation correction for PET, IEEE Trans. Nucl. Sci. 39(4) (1992) 1117-1121.

    [33] M. Xu, P.D. Cutler and W.K. Luk, Adaptive, segmented attenuation correction of whole-body PET imaging, IEEE Trans. Nucl. Sci. 43 (1996) 331-336.

    [34] P.E. Kinahan, D.W. Townsend, T. Beyer, and D. Sashin, Attenuation correction for a combined 3D PET/CT scanner, Medical Physics 25 (1998) 2046-2053.

    [35] J.C. Yanch, A.T. Irvine, S. Webb and M.A. Flower, Deconvolution of emission tomographic data: a clinical evaluation, British J of Radiology 61 (1988) 221-225.


    [36] C. Labbé, J.C. Froment, A. Kennedy, et al, Alzheimer Disease and Associated Disorders 10 (1996) 141-170.

    [37] T. Beyer, D.W. Townsend, T. Brun, et al., A combined PET/CT scanner for clinical oncology, Journal of Nuclear Medicine 41 (2000) 1369-1379.

    [38] P.A. Van den Elsen, E.J.D. Pol and M.A. Viergever, Medical image matching: a review with classification, IEEE Engineering in Medicine and Biology 12(1) (1993) 26-39.

    [39] J. West, J.M. Fitzpatrick, M.Y. Wang, et al, Comparison and evaluation of retrospective intermodality brain registration techniques, Journal of Computer Assisted Tomography 21(4) (1997) 554-566.

    [40] M. Holden, D.L.G. Hill, E.R.E. Denton, et al, Voxel similarity measures for 3-D serial MR brain image registration, IEEE Transactions on Medical Imaging 19(2) (2000) 94-102.

    [41] R.A. Robb, Three dimensional biomedical imaging (Wiley-VCH Pub., 1995).

    [42] L.M. Bidaut, R. Pascual-Marqui, J. Delavelle, et al, Three- to five-dimensional biomedical multisensor imaging for the assessment of neurological (dys)function, J Digit Imaging 9(4) (1996) 185-198.

    [43] L.M. Bidaut, Model-based integration of invasive electrophysiology with other modalities, SPIE-Medical Imaging 4319 (2001).

    BIBLIOGRAPHY

    R.A. Brooks and G. Di Chiro, Principles of computer assisted tomography (CAT) in radiographic and radioisotopic imaging, Phys. Med. Biol. 21(5) (1976) 689-732.

    G.T. Herman, Computer Tomography, reconstruction from projections (Academic Press, 1980)

    M.E. Phelps, J.C. Mazziotta and H.R. Schelbert (eds.), Positron emission tomography and autoradiography - principles and applications for the brain and heart (Raven Press, 1986)

    F.J. Beekman, M. Defrise and M.A. Viergever (Eds.), Special issue on volumetric reconstruction of medical images, IEEE Transactions on Medical Imaging 19(5) (2000)



    POSITRON EMISSION TOMOGRAPHY

    A.M.J. Paans
    PET-Center, Groningen University Hospital, Groningen, the Netherlands

    Abstract
    Positron Emission Tomography (PET) is a method for determining biochemical and physiological processes in vivo in a quantitative way, by using radiopharmaceuticals labeled with positron emitting radionuclides such as 11C, 13N, 15O and 18F and by measuring the annihilation radiation using a coincidence technique. This also includes the measurement of the pharmacokinetics of labeled drugs and of the effects of drugs on metabolism. Deviations from normal metabolism can be measured as well, and insight into the biological processes responsible for diseases can be obtained.

    1. General introduction

    The idea of in vivo measurement of biological and/or biochemical processes was already envisaged in the 1930's, when the first artificially produced radionuclides, which decay under emission of externally detectable radiation, of the biologically important elements carbon, nitrogen and oxygen were discovered with the help of the then recently developed cyclotron. These radionuclides decay by pure positron emission, and the annihilation of positron and electron results in two 511 keV γ-quanta at a relative angle of 180° which are then measured in coincidence. This idea of PET could only be realized when the inorganic scintillation detectors for the detection of γ-radiation, the electronics for coincidence measurements and the computer capacity for data acquisition and image reconstruction became available. For this reason, Positron Emission Tomography is a rather recent development in functional in vivo imaging.

    PET employs mainly short-lived positron emitting radiopharmaceuticals. The radionuclides employed most widely are 11C (t½ = 20 min), 13N (t½ = 10 min), 15O (t½ = 2 min) and 18F (t½ = 110 min). Carbon, oxygen, nitrogen and hydrogen are the elements of life and the building blocks of nearly every molecule of biological importance. However, hydrogen has no radioactive isotope decaying with emission of radiation which can be detected outside the human body; for this reason a fluorine isotope is often used as a replacement for a hydrogen atom in a molecule. Due to these short half-lives, the radionuclides have to be produced in house, preferably with a small, dedicated cyclotron. Since the chemical form of the produced radionuclides can only be simple, input from organic chemistry and radiochemistry is essential for synthesis of the desired complex molecule. Input from pharmacy is required for the final formulation and pharmacokinetic studies, and medical input is evidently required for application.
    Longer-lived positron emitting radionuclides are sometimes commercially available or obtainable from research facilities with larger accelerators. Some examples of longer-lived positron emitting radionuclides are 52Fe (t½ = 8.3 h), 55Co (t½ = 17.5 h) and 124I (t½ = 4.2 d). Sometimes positron emitting radionuclides can also be obtained from a generator system; examples are 82Rb (t½ = 76 s) from 82Sr (t½ = 25.5 d) and 68Ga (t½ = 68 min) from 68Ge (t½ = 270 d). Although all these radionuclides are used, the isotopes of the biologically most important elements receive most attention.

    At the moment, small dedicated cyclotrons are a commercially available product. These accelerators are one- or two-particle machines with fixed energies. Mostly negative-ion machines are now being installed, because of their relatively simple extraction system and high extraction efficiency. They are installed complete with the targetry for making the four above-mentioned short-lived radionuclides in batches up to 100 GBq or more. The chemistry for some simple chemical products is also incorporated, e.g. 11CO2, 11CO, C15O, C15O2, H215O, etc. Sometimes more complex syntheses, e.g. 18FDG, 18F-DOPA, H11CN, 11CH4 or 13NH3, are also available from the cyclotron manufacturer or a separate specialized company. These products become available via dedicated, automated systems or via a programmable robotic system. Other radiopharmaceuticals have to be set up individually in each PET center.

    The state-of-the-art positron camera is a complex radiation detection technology product combined with relatively large computing power for data acquisition and image reconstruction. The basic detector in a modern PET camera is a BGO detector block divided into e.g. 8x8 sub-detectors and read out by 4 photomultiplier tubes (PMTs). By adding and subtracting the individual signals of the PMTs, the scintillating sub-detector in the BGO block can be identified. Around 70 blocks form a ring, and 4 of these rings can be added to get an axial field of view of approximately 15-16 cm. In this way 31-63 planes are imaged simultaneously with a spatial resolution of 4-7 mm FWHM, depending on the specific design of the tomograph. The septa between the adjacent sub-detector rings can also be retracted, creating a much higher sensitivity in this 3D mode at the cost of a larger scatter fraction. With the present generation of positron cameras, the singles count rates that can be managed are in the order of over 50,000,000 counts per second, resulting in coincidence count rates of over 500,000 per second.
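The add-and-subtract identification of the scintillating sub-detector can be illustrated with an Anger-logic-style sketch; the 2x2 PMT layout and the exact sum/difference formulas below are illustrative assumptions, not the specifics of any commercial block.

```python
def crystal_position(a, b, c, d):
    """Anger-logic style position estimate from the four PMT signals of a
    detector block: normalized sums and differences localize the
    scintillating sub-detector. PMTs are assumed arranged as
        A | B
        --+--
        C | D
    """
    total = a + b + c + d
    x = ((b + d) - (a + c)) / total   # left-right light-sharing asymmetry
    y = ((a + b) - (c + d)) / total   # top-bottom asymmetry
    return x, y

# Light shared mostly by the right-hand PMTs -> event on the right side.
print(crystal_position(10.0, 40.0, 10.0, 40.0))   # (0.6, 0.0)
```

Binning the (x, y) estimates then assigns each event to one of the 8x8 sub-detectors.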
    Hardware and software for data acquisition, image reconstruction and image manipulation are available. Positron cameras are able to measure radioactivity in absolute terms, Bq/pixel, which is a unique feature. This is possible because the coincidence technique allows for the correction of the attenuation of the radiation inside the body of the individual patient. This correction is accomplished by making an individual "transmission image" with an external positron emitting source; this individual transmission image can also be used to correct for the scattered radiation present in the image after a 3D acquisition. The external source is built into the camera and can be extended from its well-shielded storage box during the operation of the positron camera. To translate the measured radioactivity distribution into functional or physiological parameters, compartmental models have been developed for radiopharmaceuticals with known metabolite profiles. Although only a few measurable quantities, i.e. tissue and plasma concentrations (the latter obtained by taking blood samples), are available, it is still possible to calculate e.g. the glucose consumption by employing a dynamic data acquisition protocol in combination with a compartmental model. It is also possible to make a whole-body scan by translating the patient through the PET camera; by projecting the transverse section images, a whole-body overview can be made. A PET center combines the relevant knowledge of chemistry, medicine, pharmacy and physics, and is staffed by all these disciplines in a well-cooperating team.

    2. Accelerators for PET radionuclide production

    In the energy range of 10-20 MeV all four basic radionuclides, 11C, 13N, 15O and 18F, can be produced. In general the choice is a cyclotron, not a linear accelerator, because in this energy range the cyclotron is a versatile and economical solution.
    Although the higher the energy, the more of the excitation function can be exploited and the higher the yield, some companies have purposely designed cyclotrons at the low energy range of 10-11 MeV protons for economical reasons.


    Possible commercial PET cyclotron manufacturers are CTI (USA), Ebco Industries (Canada), General Electric (USA), IBA (Belgium) and Oxford Instruments (UK). In the following, a more general approach to radionuclide production is taken, but at the end the focus returns to the details of the production of the four basic PET radionuclides.

    2.1 General production formulae

    The production of radionuclides can be achieved by using neutrons or charged particles as the irradiation source. Irradiation by neutrons leads to neutron capture and so to neutron-rich nuclides. The use of charged particles like protons, deuterons, helium-3 or helium-4 leads to nuclear reactions of the type (p,xn) or (p,α), which result in the production of neutron-deficient nuclides. Both types of nuclear reactions lead to radionuclides, and the farther away from the line of stability, the shorter the half-life will be in general. The energy required for a good yield of a charged-particle induced reaction will be roughly equal to the mass difference, with 5-10 MeV added to be well above the threshold energy and to reach the maximum in the reaction cross section. Nuclear reactions induced by electrons have a very low cross section because of the weak interaction of electrons with matter; in the case of electrons, mostly the "bremsstrahlung" generated on a heavy target is used. The resulting γ-ray induced reactions have a very poor selectivity for a given nuclear reaction channel, so the isolation of the desired radionuclide often requires quite some chemistry to be performed. Production by charged particles is governed by the following:

    dNf = σ Ni Nt dt − λ Nf dt

    with
    Nf = number of nuclides produced
    Ni = number of incoming particles
    σ = partial reaction cross section
    λ = ln2/t½ = decay constant
    Nt = number of target nuclei

    The first term gives the production rate while the latter gives the loss by decay during the irradiation. Integration leads to:

    Nf(t) = σ Ni Nt (1 − e^(−λt))/λ

    with
    Ni = i/(Zi e)
    i = beam current
    Zi = charge of the incoming particle
    e = elementary charge
    Nt = m NA / M
    m = target weight in g/cm²
    M = molecular weight
    NA = Avogadro's number


    Since σ is a function of energy:

    dNf(t,E) = (NA i)/(Zi e M) (dm/dE) (1 − e^(−λt))/λ σ(E) dE

    with dE/dm = dE/d(ρx) = stopping power (Bethe). Integration over the energy range of interest leads to:

    Nf(t,E) = (NA i)/(Zi e M) (1 − e^(−λt))/λ ∫ (dE/d(ρx))^(−1) σ(E) dE

    So the yield is determined by the beam current and not by the irradiation time; irradiation times of more than two half-lives are not productive. For radionuclide production, accelerators with a high beam current and a larger beam size at the target position (to avoid heating problems) are required. This is just the opposite of what experimental nuclear physics generally requires. After irradiation the normal laws of radioactive decay apply:

    Nf(t) = Nf(EOB) e^(−λ(t − tEOB))

    with EOB = End Of Bombardment. The relation between the activity in Bq and the total number of radionuclei involved can be calculated according to:

    A(t) = N(t) − N(t+1) = N(t) (1 − e^(−λ)) ≈ λ N(t) if λ ≪ 1 s⁻¹
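The saturation and post-bombardment decay behaviour can be checked with a small numeric sketch, using the 18F half-life quoted in the text (function names are illustrative):

```python
import math

def saturation_fraction(t_irr_min, half_life_min):
    """Fraction of the saturation yield reached after irradiating for
    t_irr: (1 - exp(-lambda * t)), with lambda = ln2 / t_half."""
    lam = math.log(2) / half_life_min
    return 1.0 - math.exp(-lam * t_irr_min)

def decay(n_eob, dt_min, half_life_min):
    """Radioactive decay after End Of Bombardment:
    Nf(t) = Nf(EOB) * exp(-lambda * (t - tEOB))."""
    return n_eob * math.exp(-math.log(2) / half_life_min * dt_min)

T_HALF_F18 = 110.0   # minutes, as quoted in the text

# Irradiating for two half-lives reaches 75% of saturation, which is why
# longer runs are not productive:
print(round(saturation_fraction(220.0, T_HALF_F18), 2))   # 0.75

# One half-life after EOB, half the activity remains:
print(round(decay(100.0, 110.0, T_HALF_F18), 1))          # 50.0
```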

    Table 1: Stopping power dE/d(ρx) (MeV cm²/g) in aluminium for different particles at different energies.

             20 MeV    40 MeV    60 MeV    80 MeV
    p         19.62     11.38      8.34      6.72
    d         33.83     19.68     14.34     11.48
    3He      184.9     108.3      79.02     63.17
    4He      229.1     135.2      98.87     79.09

    So the stopping power increases with increasing atomic number of the incoming particle and decreases with increasing energy. Using heavy ions (A > 4) for radionuclide production is hampered twofold: i) low cross sections and ii) relatively high stopping powers. Conclusion: if possible, use protons or deuterons.

    Fig. 1: Example of excitation functions: 76Se(p,xn)77-xBr nuclear reactions. Solid lines: experimental excitation curves for (p,xn) on 76Se. Dashed lines: theoretical excitation functions according to the ALICE code.



    The excitation function of a (p/d/3He/α,xn) reaction gives the cross section as a function of the incoming particle energy. There will always be an overlap in reaction channels: the (p,(x+1)n) channel opens while (p,xn) still continues, often as an evaporation reaction; at higher energies, direct mechanisms play a more important role. To evaluate the cross sections involved, nuclear evaporation codes can be used, e.g. the ALICE code available from the Nuclear Energy Agency in France.

    2.2 Accelerators and specific activity

    With charged particles, neutron-deficient radionuclides are produced. In a nuclear reactor, neutron capture is the most important nuclear reaction, leading to neutron-rich radionuclides. The two production routes are therefore complementary, with very few overlaps. An interesting overlap example is the production of 18F by starting with neutrons to generate tritons, which then induce a charged-particle reaction:

    In a nuclear reactor (using a 6Li carbonate target): 6Li + n → t + 4He, followed by t + 16O → 18F + n

    By the irradiation with neutrons, tritons are generated by the breaking up of the 6Li nucleus into a tritium and a helium-4 nucleus. The triton is then able to produce 18F from the 16O nucleus. Because the oxygen is incorporated inside the molecule, the range of the tritons is not essential. Since this is a two-stage process, the yield is lower than the yields achievable with a direct charged-particle induced reaction such as:

    p + 18O → 18F + n

    The beam energy required for a (p,xn) reaction follows, as a rule of thumb: 7 + 10·xn MeV, with xn the number of neutrons to be knocked out or evaporated. For an exact calculation one should compute the mass difference and add roughly 7 MeV to reach the maximum cross section. For radionuclide production the beam quality (dE/E) is not that important. The beam size should not be too small, in order not to have too high a power density on the target, which could initiate problems like melting or evaporation; a beam size of cm² instead of mm² is to be preferred.

    The targets can be installed inside or outside the cyclotron. With internal targets the extraction of the beam is avoided, so beam losses are also avoided, and a higher yield with respect to external targets can be expected. At negative-ion cyclotrons the extraction efficiency can be very close to 100%, because stripping the electrons provides a natural extraction. In case of target problems it is easier to service a target at an external position than at an internal one; also, radioactive contamination of the cyclotron due to an internal target can interfere with the normal maintenance program. The transport of the irradiated material can be very easy in the case of a gas target: a normal flow can carry the radioactivity over rather long distances. In the case of a liquid target, a helium flow through thin tubing can push the irradiated material to the desired position. With solid targets, an exchange or train system can transport the target or target material. The local situation will dictate the solution to the particular problems.

    Classes of accelerators for radionuclide production


    Ep < 20 MeV: (p,n), (p,α), (d,n); particles: p, d. The typical cyclotron for PET centres.

    Ep < 35 MeV: up to (p,3n). As a multi-particle (p, d, 3He, α) cyclotron this is a versatile radionuclide production machine. Users are the commercial radionuclide producers.

    Ep < 70 MeV: up to (p,5n). Often a multi-particle, variable-energy cyclotron. A complex production machine, often built as a research machine for nuclear physics, where radionuclide production is only of minor interest. Examples are the former cyclotron at the KVI (Groningen, Netherlands) and cyclotrons at PSI (Zürich, Switzerland). Radionuclides like 123I, 81Rb, 82Sr/82Rb and 52Fe, as well as combined productions such as 52Fe + 55Co, can be produced with these cyclotrons.

    Ep > 100 MeV: examples are the linear accelerators at BNL (Brookhaven, USA) and LANL (Los Alamos, USA). These are research accelerators used primarily for radionuclide production for scientific goals. Due to the high energy, spallation reactions are the main reaction mechanism.
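    The accelerator classes above roughly track the 7 + 10·xn MeV rule of thumb quoted earlier for (p,xn) reactions. A minimal numeric sketch (the rule is approximate; the function name is ours):

```python
# Rule-of-thumb proton energy for a (p,xn) reaction, from the text:
# E_p ~ 7 + 10*xn MeV, with xn the number of neutrons knocked out/evaporated.
def min_proton_energy_mev(xn: int) -> int:
    """Approximate proton energy (MeV) needed for a (p,xn) reaction."""
    return 7 + 10 * xn

for xn in (1, 2, 3, 5):
    print(f"(p,{xn}n): ~{min_proton_energy_mev(xn)} MeV")
# -> ~17 MeV for (p,n), ~37 MeV for (p,3n), ~57 MeV for (p,5n)
```

    These estimates line up with the class boundaries above (20, 35 and 70 MeV machines).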

    The technical staff required for these different classes of accelerators of course shows tremendous differences. A recent development in cyclotrons for radionuclide production is the negative-ion machine: H− or D− is accelerated. To reach a high beam current an external source with axial injection is required. The big advantage of negative ions is that, after passing through a carbon stripper foil, all electrons are removed and the beam is automatically bent out of the machine. An extraction efficiency of 100% is possible, and by positioning a stripper foil only halfway into the beam, multiple (at least two) extracted beams are possible. Commercial negative-ion machines are available over a wide energy range, 10-230 MeV. The demands on the vacuum during acceleration are higher than for positive-ion machines: if one electron is stripped off by the residual gas, not only is the acceleration process stopped, but the resulting energetic neutral beam can damage the cyclotron; if both electrons are stripped, the beam bends in the opposite direction and hits the vacuum chamber or internal parts of the cyclotron. At the moment there are also developments in low-energy (8-12 MeV) RFQ linacs for the acceleration of 3He beams, in combination with special targets, to produce the required PET radionuclides. Due to the nature of the induced nuclear reaction (p,xn), there is a change in element:

    Z → Z + 1

    The term "carrier free" is used when no cold material of the same chemical identity as the radioactive product is present. This is very difficult to achieve, because some natural dilution is often the case. Therefore the term "non-carrier-added" or "nca" production is often used: it means that no cold material of the same chemical identity is added on purpose during the preparation of the radiopharmaceutical.


    Specific activity is the amount of activity per gram or per mole. The theoretical maximum of the specific activity for a few radionuclides:

    11C     9.2 × 10^9 Ci/mol = 340 TBq/μmol
    14C     6.2 × 10^1 Ci/mol = 2.3 MBq/μmol   (half-life 5730 yr vs 20.4 min)
    123I    2.4 × 10^8 Ci/mol = 8.9 TBq/μmol
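    These maxima follow directly from the decay constant: SA = λ·NA = NA·ln2/T½ per mole. A minimal numeric check (assuming only the half-lives; units converted to Ci/mol and TBq/μmol):

```python
import math

N_A = 6.022e23        # Avogadro's number, nuclei per mol
BQ_PER_CI = 3.7e10    # Bq per curie

def max_specific_activity(half_life_s: float) -> float:
    """Theoretical maximum specific activity in Bq/mol: SA = lambda * N_A."""
    return math.log(2) / half_life_s * N_A

sa_c11 = max_specific_activity(20.4 * 60)                  # 11C, T1/2 = 20.4 min
sa_c14 = max_specific_activity(5730 * 365.25 * 24 * 3600)  # 14C, T1/2 = 5730 yr

print(f"11C: {sa_c11 / BQ_PER_CI:.1e} Ci/mol = {sa_c11 / 1e18:.0f} TBq/umol")
print(f"14C: {sa_c14 / BQ_PER_CI:.0f} Ci/mol")
# 11C comes out near 9.2e9 Ci/mol (340 TBq/umol), 14C near 62 Ci/mol
```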

    The maximum theoretical specific activity is determined by the half-life. In practice these theoretical maxima are never reached; very special precautions have to be taken to keep the dilution factor within bounds. With a carrier-free or non-carrier-added synthesis one expects no toxic effects (e.g. H11CN is no longer toxic) and no physiological effects (the tracer principle).

    2.3 Targetry and specific PET productions

    The produced radionuclide can only be obtained in a simple chemical form. Irradiating complex chemical structures causes problems, because the large energy deposition inside the target material can damage the chemical structure of the molecule: nuclear reactions involve energies of MeVs, whereas chemical bonds in molecules involve energies of the order of eV.

    2.3.1 Targetry in general

    Gas targets, liquid targets or solid-state targets can be used. A general problem is the cooling of the dissipated power E·i; e.g. 11C production with a 15 MeV proton beam at 30 μA generates 450 W. One of the most critical parameters for the lifetime of a target is the power density (W/cm2), which in fact is governed by the beam size; increasing the beam size in order to decrease the power density is often advisable. Accidents due to malfunctioning of the target happen, in most cases, rather soon after the start of the irradiation. Water cooling on the back side and helium cooling on the entrance foil (double-foil technique, in a closed circuit) should be used. The energy loss in the entrance foil should be as small as possible; the mechanical properties of the foil should be able to hold the pressure from inside the target, and the foil should have a good heat conductivity in order to get rid of the energy deposited by the beam. The energy loss in the target material should be optimized based on the excitation function. The low-energy part of the beam, after transmission through the active part of the target, can be dumped in the back side of the target.
    In this way activation of the cooling water is avoided. Sometimes one can combine two targets behind each other and so use the whole energy range of the beam; an example is the simultaneous production of 52Fe and 55Co by the solid-target combination 55Mn(p,4n)52Fe and 56Fe(p,2n)55Co. These are also examples of target materials with a high melting point. Melting and/or evaporation of target material can be a serious problem: e.g. pure selenium as target material for the production of bromine will cause problems because of the evaporation of the selenium. Selenium-copper or selenium-silver alloys have high melting points (about 1000 °C or more), instead of an evaporation temperature of just over 200 °C as is the case for pure selenium. All materials which can be hit by the beam should be selected for a minimal production of longer-lived radionuclides: thus aluminium or copper as target holder and, e.g., graphite to stop the beam.
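    Picking up the cooling numbers quoted above: the dissipated power is simply beam energy times current (MeV × μA = W), and the power density depends strongly on the spot size. A quick check (illustrative spot areas, not from the text):

```python
def beam_power_w(energy_mev: float, current_ua: float) -> float:
    """Dissipated beam power in watts. Since 1 MeV * 1 uA = 1 W,
    the product of the two numbers is already the power in W."""
    return energy_mev * current_ua

p = beam_power_w(15.0, 30.0)      # the 11C example: 15 MeV protons at 30 uA
print(p, "W")                     # -> 450.0 W

for area_cm2 in (1.0, 0.01):      # cm^2-sized vs mm^2-sized beam spot
    print(f"{area_cm2} cm^2 spot -> {p / area_cm2:.0f} W/cm^2")
```

    The factor of 100 in power density between a mm2 and a cm2 spot is why a large beam spot is preferred.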


    Avoid iron or stainless steel, because many long-lived radionuclides can be produced in them. However, since the entrance foil has to be thin and strong, havar, stainless-steel or titanium foils of 8-25 μm thickness have to be used because of their strength. Helium cooling of the foils, using a double entrance-foil technique, can be very effective. Better cooling of the target is possible by rotating the target; but since this in fact introduces a duty cycle of less than 100%, it is not effective for the production. By positioning the target in an inclined position, instead of perpendicular to the beam, a lower power density can also be arranged. There are always different nuclear reactions possible to produce a given radionuclide; an example for 11C is given in Table 2.

    Table 2. Nuclear reactions for the production of carbon-11

    Particle   Reaction             E(thresh) (MeV)   σ (mb)
    γ          12C(γ,n)11C          18.7              4
    p          11B(p,n)11C          3.0               250
               12C(p,pn)11C         20.3              100
               14N(p,α)11C          3.1               250
    d          10B(d,n)11C          0.                250
               12C(d,p2n)11C        24.4              60
    3He        9Be(3He,n)11C        0.                50
               10B(3He,pn)11C       0.                280
               11B(3He,p2n)11C      2.3               30
               12C(3He,4He)11C      0.                300
               16O(3He,2 4He)11C    6.3               50
    4He        9Be(4He,2n)11C       18.8              17
               10B(4He,p2n)11C      27.4
               11B(4He,p3n)11C      42.4
               12C(4He,4He n)11C    24.9              50

    2.3.2 11C production

    The most commonly used reaction is 14N(p,α)11C. The target material is nitrogen with a little oxygen mixed in: N2 (99.9999%) + O2 (2%), at a pressure of 7-10 bar depending on target and beam energy. The primary products in the target are 11CN radicals (recoil reaction of 11C with N2) and 11CO (recoil reaction of 11C with O2). The 11CN radicals and the 11CO are then, during irradiation, oxidized to 11CO2. The 11CO2 is collected in a liquid-nitrogen trap and then distilled over into the chemistry set-up for further chemical synthesis. Be aware of possible chemical reactions inside the target during irradiation: for instance, when CH4 (methane) is irradiated with protons, polymerization of the target gas results in a yellow coating of the target wall and all radioactivity is adsorbed in this material. The molecular structure of the target material should be inert with respect to possible processes induced by the beam.

    Fig. 2 11C - target system for the proton induced reaction on nitrogen.

    Fig. 3 11CO2 collection system. Gas samples can be taken for analysis. The yield can be measured by trapping the 11CO2 in NaOH. Using the trap in liquid nitrogen, the whole yield can be recovered and then vacuum-distilled into the chemistry set-up.



    2.3.3 13N production

    The most commonly used reaction for 13N production is 16O(p,α)13N with H2O as target material. After irradiation the 13N is available as nitrate or nitrite in the water; distillation under steam with Devarda's alloy yields 13NH3. Nowadays, by addition of ethanol to the target water, there is an in-target production of 13N-ammonia. The ammonia is most often used for cardiac flow studies.

    2.3.4 15O production

    The most commonly used reaction for the production of oxygen-15 is 14N(d,n)15O. With a positive Q-value of 3.1 MeV, a deuteron-only 3 MeV cyclotron is sufficient for the production. In fact the beam energy should not exceed 6.5 MeV, in order to avoid radioactive impurities. The target material is high-purity nitrogen with oxygen mixed in, N2 (99.9999%) with an addition of O2 (4%), and yields 15O2. The most common use of oxygen-15 is for rCBF studies. This is possible in two ways: i) conversion according to 15O2 + C (400 °C) → C15O2; upon inhalation, the C15O2 is converted into H215O instantaneously and enzymatically in the lung. ii) It is also possible to convert the oxygen into water: 15O2 + H2 in an oven with a Pt catalyst yields a continuous stream of H215O, which can be administered intravenously.

    2.3.5 18F production

    The most common production of 18F is the 18O(p,n)18F reaction, with oxygen-18 enriched water, H218O (>90% enrichment; cost in April 2001: US$ 160.00/ml), as target material. The 18F is available as fluoride ion in the water. After separation of the fluorine and the water, the fluorine is available for chemistry. The water can be used again after distillation to eliminate impurities; the enrichment grade will of course diminish through the distillation procedure. The specific activity of the fluorinated end product can easily be better than that of a carbon-11 product, because fluorine is less abundant than, e.g., CO2. A second method for the production of 18F is the 20Ne(d,α)18F reaction. To the Ne gas, F2 is added to passivate the target-chamber wall; if this passivation is not done, all the produced 18F is adsorbed onto the wall and cannot be extracted for further chemistry. The 18F then becomes available as F2, and the relatively low specific activity depends on the amount of fluorine added before irradiation. The two different chemical forms of 18F allow for different chemical labelling strategies.

    2.4 Commonly used radiopharmaceuticals

    The most commonly used radiopharmaceutical in PET centres is 2-[18F]fluoro-2-deoxy-D-glucose (FDG), mostly used for studies in oncology. Due to its half-life (110 min) it can be transported over a 2 h transport distance, and many PET cameras without an in-house cyclotron are operated on FDG only in this way. Oxygen-15 labelled water, half-life 2 min, is also frequently used in research centres for brain-activation studies. The use of all other radiopharmaceuticals is often locally determined by clinical or research interest.
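    The 2 h transport window for FDG quoted above is a simple consequence of exponential decay; a short numeric sketch (the function name is ours):

```python
import math

T_HALF_FDG_MIN = 110.0   # 18F half-life in minutes, as quoted above

def remaining_fraction(t_min: float, t_half_min: float = T_HALF_FDG_MIN) -> float:
    """Fraction of the shipped activity left after t_min minutes of transport."""
    return 0.5 ** (t_min / t_half_min)

print(f"after 2 h of transport: {remaining_fraction(120):.0%} of the FDG remains")
# roughly half the activity survives a 2 h trip, which is why such
# distribution from a central cyclotron is practical for 18F but not for 15O
```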


    18FDG           Glucose analogue for studies in brain, heart and oncology. The most used radiopharmaceutical.
    H215O           Functional brain studies (rCBF)
    C15O2           Functional brain studies (rCBF); is converted into water in the lung instantly
    CO              Cerebral blood volume studies
    11C-tyrosine    Amino acid for brain studies and oncology
    13NH3           Ammonia for blood-flow studies in the heart
    11C-raclopride  Dopamine receptor system, Parkinson's disease
    18F-DOPA        Dopamine receptor system, Parkinson's disease
    11C-acetate     Cardiological studies

    3. PET scanner

    3.1 Decay of neutron-deficient radionuclides

    There are two decay possibilities for neutron-deficient radionuclides: positron emission or electron capture (EC):

    Positron decay:     p → n + β+ + ν
    Electron capture:   p + e− → n + ν

    The energy condition for decay by positron emission is:

    Q(β+) = M(A,Z+1)c2 − M(A,Z)c2 − 2m0c2

    So positron decay is only possible if an energy of 2m0c2 (= 1022 keV) or more is available; otherwise electron capture will happen. In practice a rather large surplus of energy is required before a large percentage of the decay goes through the positron channel instead of the EC channel. With positron decay there are two conservation laws to be obeyed: i) conservation of energy tells us that 1022 keV is released at annihilation, and ii) conservation of momentum tells us that at the moment of annihilation no net momentum is available, so p = 0 kg·m/s before and after annihilation. The positron is slowed down in tissue, and at the end of its track a positronium, a hydrogen-like atom, is formed by the positron and an electron. Positron and electron are anti-particles, so they will annihilate. In the singlet state of positronium a 2-quanta annihilation (mean lifetime 0.125 ns) occurs; in the triplet state a 3-quanta annihilation (mean lifetime about 140 ns) takes place. The triplet state is formed in only 0.3% of the cases. The 2-quanta annihilation shows a finite width of 0.5° around 180° in angular correlation measurements, signalling that the momentum at the moment of annihilation is not always exactly zero.

    3.2 Imaging

    3.2.1 In conventional nuclear medicine

    In conventional nuclear medicine a gamma camera, consisting of a NaI crystal (3/8" or 1/2" thick) with photomultiplier tubes (PMTs) and a collimator in front, is used for image formation. The position of a scintillation is calculated from:


    X = Σi PX(i) L(i) / Σi L(i)

    Y = Σi PY(i) L(i) / Σi L(i)

    with PX(i) and PY(i) the position coordinates of PMT(i) and L(i) the amount of light received by PMT(i), and:

    Σi L(i) = total light output = a measure of the energy of the detected quantum
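    This centroid calculation (Anger logic) can be sketched directly; the PMT coordinates and light values below are hypothetical:

```python
def anger_position(px, py, light):
    """Centroid (Anger-logic) position estimate from PMT signals.

    px, py : centre coordinates of each PMT
    light  : light L(i) seen by each PMT
    Returns (X, Y, total_light); the total light is a measure of the
    energy of the detected quantum."""
    total = sum(light)
    x = sum(p * l for p, l in zip(px, light)) / total
    y = sum(p * l for p, l in zip(py, light)) / total
    return x, y, total

# Four PMTs on a square; a scintillation nearer the right-hand pair:
px, py = [-1.0, 1.0, -1.0, 1.0], [-1.0, -1.0, 1.0, 1.0]
light = [10.0, 30.0, 10.0, 30.0]
x, y, e = anger_position(px, py, light)
print(x, y, e)   # -> 0.5 0.0 80.0
```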

    A collimator is a compromise between spatial resolution and efficiency; it works optimally for gamma radiation between 100 and 200 keV. Two modes of images can be obtained: planar images, and transverse-section images obtained by rotating the camera around the patient (SPECT = Single Photon Emission Computed Tomography). Neither mode yields quantitative information, because a correction for the attenuation is not possible. The thickness required for the collimator and the thickness of the septa (the lead between the holes) are a function of the energy of the gamma rays. At a gamma energy of 511 keV the septum thickness and the collimator thickness have to be increased so much that the hole size starts to compete with the spatial resolution, and the weight of the collimator is roughly 200 kg. The efficiency of such a collimator has decreased, with respect to a general-purpose collimator at 140 keV, by a factor of two, to approximately 5 × 10−5.

    PA - PB = (d+x) - (d-x) = 2x

    and the time difference involved is Δt = 2x/c; with 2x = 1 mm, Δt = 3.3 ps. Scintillation detectors with these timing properties and high sensitivity for 511 keV radiation do not exist at the moment. Until ~1983, mostly NaI detectors and some BaF2 detector systems were used. After ~1983, BGO (bismuth germanate, Bi4Ge3O12) in a block-detector structure has mainly been used. Originally the detector with its single PMT determined the spatial resolution. To improve the efficiency, BGO with its high density and high Z-value is used. To also improve the spatial resolution, a gamma-camera read-out with four PMTs was designed on a single BGO crystal. Due to the thickness of the BGO detector the scintillation light spreads out quite a bit; by cutting the BGO detector into 8×8 sub-detectors a light guide was built in which, together with the four PMTs, allowed a spatial resolution basically equal to the size of the sub-detectors. The drawback of BGO is its relatively low light output (15% of the output of NaI). In the past, gadolinium oxyorthosilicate (GSO) has also been applied, in combination with BGO as a dual detector on one PMT; an advantage of GSO is its higher light output, see Table 3. Due to the difference in decay times one can determine, using pulse-shape discrimination, which of the


    two detectors is responding. Recently GSO is again being used, but now as an area detector with a gamma-camera read-out logic. Recently LSO, lutetium oxyorthosilicate, has been tested for application in PET scanners; its relatively high light output (75% of the output of NaI) is an advantage, and the disadvantage of a natural radioactive component in natural Lu has no consequences as long as coincidence measurements are performed. The first PET scanners with LSO detectors, both for small-animal studies and for whole-body human studies, are now commercially available. Table 3. Detector materials used in PET scanners

                           NaI     BGO     GSO      LSO
    Density (g/cc)         3.67    7.13    6.7      7.4
    Eff. atomic number     51      75      59       66
    Mean free path (cm)    2.88    1.05    1.43     1.1
    Hygroscopic            yes     no      no       no
    Decay time (ns)        230     300     56/600   40
    Relative light yield   100%    15%     25%      75%
    Energy resolution*     7.8%    10.1%   9.5%     10%

    * NB: this energy resolution is valid for a single crystal; it is not necessarily true for a block-detector.

    In PET scanners a roughly circular or hexagonal structure has to be realized in order to perform coincidence measurements. This can also be realized with a rotating dual-headed uncollimated gamma-camera system. Such systems, limited in count rate because only two detectors are used, and all kinds of different configurations, NaI or BaF2 based, were used until roughly 1983, when the BGO block detector was introduced. The block detector is commonly divided into 8×8 sub-detectors which are read out by 4 PMTs. The size of the block detector depends on the resolution to be achieved: the size of the sub-detectors varies from 4×4 to 6.5×6.5 mm2 and the thickness varies between 20 and 30 mm, see Figs. 4-6. In order to increase the axial length, up to 4 adjacent detector rings are assembled into a PET scanner.

  • Fig. 4. BGO Block detector with PMT's (Courtesy Siemens/CTI)

    Fig. 5. Light guiding effect of the cutting in the BGO block detector (Courtesy Siemens/CTI)


    Fig. 6. Spatial resolution of a BGO block detector with 4 PMTs (Courtesy Siemens/CTI)

    3.3 Data acquisition and image reconstruction from projections

    3.3.1 Data acquisition

    The acquired data can be organized in two forms:

    1) Event-by-event or list mode. The position of each individual annihilation pair, together with some type of timing information, is stored individually. Reconstruction into sinograms, for a dynamic study defined afterwards, is performed later; the time frames of the study can be chosen or redefined with this form of storage.

    2) Sinogram mode. The data acquired by a PET scanner are projection data by nature, since only a coincidence and no TOF measurement is possible. The total number of annihilation events on a Line of Response (LOR) is stored in one matrix element according to (r,φ) coordinates; the result is called a sinogram because of its behaviour for a rotating point source. A LOR is the line between two detectors operated in coincidence.

    Which storage mode is chosen depends on the number of LORs versus the number of events. Examples: i) Dual-head coincidence system: always list mode, because it is a 3D system with > 100 MLORs while the number of coincident events measured is considerably smaller. ii) 2D ring system: always sinogram mode.


    iii) 3D high-resolution ring system: to be decided based on the number of LORs and the expected number of coincident events.
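    The storage-mode choice in these examples is essentially a comparison of the LOR count with the expected event count. A toy decision helper (the comparison rule is our paraphrase of the examples, not a formula from the text):

```python
def storage_mode(n_lors: int, expected_events: int) -> str:
    """Pick list mode when the sinogram matrix would be mostly empty
    (far more LORs than events), sinogram mode otherwise."""
    return "list" if expected_events < n_lors else "sinogram"

# Dual-head coincidence system: > 100 MLORs, far fewer events -> list mode
print(storage_mode(100_000_000, 5_000_000))   # -> list
# 2D ring system: few LORs, many events -> sinogram mode
print(storage_mode(200_000, 50_000_000))      # -> sinogram
```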

    Measuring along a LOR is measuring a line-integral projection, so the Radon transform is measured. The Radon transform maps the data from the (x,y) coordinate system into the projection-data domain (r,φ); all points on a line (LOR) are mapped onto a single point. A point in object space follows a sine curve in projection space (r,φ).

    3.3.2 Reconstruction by Filtered Back Projection

    The most commonly used method in image reconstruction is filtered back projection. The back-projected image is given by:

    g(r) = ∫ f(r′) h(r,r′) dr′

    with f(r) the real radioactivity distribution and h(r,r′) the system response function or Point Spread Function (PSF). The deconvolution is easiest to perform in Fourier space:

    G(k) = F(k)·H(k),   with G(k) = ∫ g(r) exp(2πi k·r) dr,   so that F(k) = G(k)·H(k)−1

    Due to noise in the original data and the limited bandwidth of the Fourier transform, over-emphasis of the noise easily occurs. To prevent this, a window with a smooth cut-off should be applied, e.g. the:

    Hanning window: W(k) = 0.5 + 0.5 cos(πk/kmax) for |k| ≤ kmax, and W(k) = 0 for |k| > kmax
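    A minimal numpy sketch of such a windowed ramp filter applied to one projection (an illustration, not scanner software; the bin size d sets kmax = 1/(2d), the Nyquist limit):

```python
import numpy as np

def hanning_ramp_filter(n_bins: int, d: float = 1.0) -> np.ndarray:
    """Ramp filter |k| apodized by the Hanning window above; d is the
    detector bin size, so k_max = 1/(2d)."""
    k = np.fft.fftfreq(n_bins, d=d)          # spatial frequencies
    k_max = 1.0 / (2.0 * d)
    window = 0.5 + 0.5 * np.cos(np.pi * k / k_max)
    window[np.abs(k) > k_max] = 0.0          # cut off above Nyquist
    return np.abs(k) * window

def filter_projection(proj: np.ndarray) -> np.ndarray:
    """Apply the windowed ramp filter to one projection (one sinogram row)."""
    return np.real(np.fft.ifft(np.fft.fft(proj) * hanning_ramp_filter(proj.size)))

# A box-car projection; the window damps the high-frequency (noise) content:
proj = np.zeros(64)
proj[24:40] = 1.0
filtered = filter_projection(proj)
```

    Back-projecting such filtered projections over all angles φ yields the reconstructed slice.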

    with kmax = 1/(2d) according to the Nyquist sampling theorem.

    3.3.3 Image reconstruction by Maximum Likelihood Expectation Maximization

    Maximum Likelihood Expectation Maximization (ML-EM) is an iterative method that maximizes the probability of the reconstructed image for a given set of measured projection data. Each photon emitted from a pixel b (b = 1, 2, ..., B) in the object is detected by a detector unit d (d = 1, 2, ..., D) with a probability p(b,d). The unknown emission density f(b) can be estimated using the measured projection data n*(d) in detector d.

    λ*(d) = Σb f(b) p(b,d) = expected number of counts in detector d

    L = Πd exp[−λ*(d)] λ*(d)^n*(d) / n*(d)! = Πd P_n*(d)(λ*(d))

    with P_n*(d) the Poisson distribution and λ*(d) the expectation of n*(d). In the iterative scheme the difference between steps k and k+1 is minimized and can be used as a stop criterion. In reality it is easier to examine the logarithm:

    log(L(k+1)/L(k)) = log(L(k+1)) − log(L(k))

    This quantity can be calculated because the p(b,d)'s are known and n*(d) is the measured projection data. For the initial values of the f(b)'s a uniform distribution can be assumed. In PET this ML-EM scheme has proved successful because positron emission follows Poisson statistics. In practice the stop criterion has to be set by evaluating the images at different iteration numbers: after a certain number of iterations artefacts can be generated, and this stage should be avoided.

    3.3.4 Different types of PET scan

    The scans which can be made with a PET scanner are:

    i) Static scan: a set of transverse-section images. Interpretation by visual inspection and/or

    by left/right differences. Often sufficient for a clinical study.

    ii) Dynamic scan: a set of consecutive scans in time. The distribution as a function of time can be studied in the imaged area. Information as a function of time is essential input for the derivation of functional parameters; arterial blood sampling and analysis are also often required for the quantification of a functional parameter.

    iii) Whole-body scan: a set of consecutive scans over the body. By combining these scans into a 3-dimensional volume, an overview of the radioactivity in the body is visualized. A whole-body scan is often used in oncological studies.
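    Returning briefly to the ML-EM scheme of Sect. 3.3.3: the text describes the likelihood and the stop criterion but does not write out the update step. The sketch below uses the standard EM update, f_{k+1}(b) = f_k(b)/Σd p(b,d) · Σd p(b,d) n*(d)/λ*(d) (an assumption on our part, consistent with the likelihood above), on a hypothetical 2-pixel, 2-detector toy system:

```python
import numpy as np

def ml_em(p, n_meas, n_iter=200):
    """Standard ML-EM iteration for Poisson projection data.

    p      : (B, D) array of detection probabilities p(b,d)
    n_meas : (D,) measured counts n*(d)
    Starts from a uniform image, as suggested in the text."""
    f = np.ones(p.shape[0])           # uniform initial image f(b)
    sens = p.sum(axis=1)              # sum_d p(b,d), the pixel sensitivity
    for _ in range(n_iter):
        lam = f @ p                   # expected counts lambda*(d)
        f = f / sens * (p @ (n_meas / lam))
    return f

# Hypothetical toy system: 2 pixels seen by 2 detectors
p = np.array([[0.8, 0.2],
              [0.3, 0.7]])
f_true = np.array([100.0, 50.0])
n_meas = f_true @ p                   # noiseless "measurement"
f_est = ml_em(p, n_meas)              # converges towards f_true
```

    With noiseless, consistent data the iteration recovers the true distribution; with real (Poisson-noisy) data one stops early, as noted above, before noise artefacts build up.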

    3.4 Resolution, parallax, scatter, accidental coincidences

