Monitoring Cavitation in HIFU to Assist Treatment


Chang-yu Hsieh1, Penny Probert Smith2, Tom Leslie3 and James Kennedy4

1 Wolfson Medical Vision Laboratory, Department of Engineering Science, University of Oxford
2 The HIFU Unit, Churchill Hospital, Oxford Radcliffe Health Trust, Oxford

Introduction

In some HIFU therapy protocols, evidence of cavitation is taken as a strong indicator of tissue lesions. Cavitation is associated with hyperechoic ('bright-up') regions in the image, and ultrasound (US) guidance and monitoring of HIFU therapy most often relies on the appearance of such a bright hyperechoic region in the US B-mode image. Currently these regions are drawn onto the image by hand.

In this poster we present a method to automatically extract the regions in which cavitation occurs by processing B-mode images taken during treatment. This will allow a history of cavitation events to be provided on demand during treatment. Two approaches are compared: one based on an intensity threshold and the other on an intensity model. Results are shown on simulated images and on real images taken during HIFU treatment.

Figure 1: B-mode image taken from HIFU kidney treatment with two regions of cavitation delineated by hand.

Aim

To provide a robust method to automatically delineate the cavitation region (or regions) in a B-mode image.

To compare a model-based approach with standard intensity thresholding, and to determine the most appropriate noise models.

To determine and develop displays which can be used during treatment to assist the clinical delivery of HIFU: for example, measures of the cavitation region such as intensity and persistence of bubbles, and other factors of significance in heating and lesion formation.

Method

The approximate region of interest in the B-mode image is determined from past cavitation events or from knowledge of the current focal position (currently extracted by hand).

The pixels in the region are labelled as belonging either to the cavitation region or to the background, and a boundary is drawn between them.

Data on the cavitation event (e.g. total intensity, peak intensity) is extracted through either:

Threshold method: assumes an abrupt intensity change between cavitation and background, with the threshold determined from histogram analysis using Otsu's method (as implemented in Matlab).

Statistical method: includes noise by modelling intensity levels as p.d.f.s; provides for spatial variations within the region and allows local spatial smoothing.

Conventional Threshold Measures

Based on intensity measures from histogram analysis. Otsu's method determines the threshold which maximises the between-class variance.
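As an illustration of this criterion, a minimal sketch of Otsu's threshold selection is given below in Python/NumPy (not the Matlab implementation referred to above; the function and variable names are illustrative):

    # Sketch of Otsu's threshold for an 8-bit image region (NumPy only).
    import numpy as np

    def otsu_threshold(region):
        """Return the grey level maximising the between-class variance (region: uint8)."""
        hist = np.bincount(region.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()                  # normalised histogram
        omega = np.cumsum(p)                   # class probability up to level t
        mu = np.cumsum(p * np.arange(256))     # cumulative first moment
        mu_t = mu[-1]                          # global mean intensity
        valid = (omega > 0) & (omega < 1)      # thresholds giving two non-empty classes
        sigma_b2 = np.zeros(256)               # between-class variance
        sigma_b2[valid] = ((mu_t * omega[valid] - mu[valid]) ** 2
                           / (omega[valid] * (1.0 - omega[valid])))
        return int(np.argmax(sigma_b2))

    # Usage: label pixels above the threshold as cavitation
    # mask = region > otsu_threshold(region)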

Figure 2(a): Blue region of cavitation with its intensity histogram. Intensity threshold 92 (Otsu, left) and 102 (right).

Figure 2(b): Red region of cavitation with its intensity histogram. Intensity threshold 58 (Otsu, left) and 62 (right).

Problems with the threshold measure

• Very sensitive to the threshold, as in the results above.
• Choosing a suitable threshold is difficult and time consuming.
• Does not account for systematic changes in intensity (e.g. from attenuation) or local changes (e.g. from the cavitation activity itself).

Automatic Statistical Model

• Based on a Bayesian framework.
• Uses a Hidden Markov Random Field with Expectation Maximisation (HMRF-EM) [2].
• Maximises the posterior probability of the class labels x given the intensities y:

\hat{x} = \arg\max_{x} P(x \mid y) = \arg\max_{x} P(y \mid x)\, P(x), \qquad x = \{ x_i : i \in S \},\ x_i \in L

where S is the set of pixel sites and L the set of class labels.

Can handle local changes and systematic errors. Details in [1].

Details of Model

1. Noise model. For each pixel i, p(y_i | x_i) is the noise probability density function, approximated by a Rayleigh or a Gaussian p.d.f. (standard forms are sketched below).

2. Spatial model. Assigns a prior probability p(x_i) using the Gibbs distribution of the Markov Random Field, which encourages neighbouring pixels to take the same class label. The summation in the Gibbs prior below is over the eight neighbouring pixels and \delta is the delta function.
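For concreteness, the two candidate noise p.d.f.s can be written as in the sketch below (Python/NumPy, using the standard parameterisations rather than any fitted form specific to this work):

    # Per-pixel likelihoods p(y_i | x_i) for the two noise models (standard forms).
    import numpy as np

    def gaussian_pdf(y, mu, sigma):
        # Gaussian intensity model: class mean mu, standard deviation sigma
        return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

    def rayleigh_pdf(y, sigma):
        # Rayleigh model, appropriate for envelope-detected speckle (y >= 0)
        return (y / sigma ** 2) * np.exp(-(y ** 2) / (2.0 * sigma ** 2))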

Iterative solution using Expectation Maximisation, with initial estimates of the noise model parameters obtained from Otsu's method [3].

Gibbs prior:

p(x_i) \propto \exp\big(-U(x_i)\big), \qquad U(x_i) = \sum_{j \in N_i} \big(1 - \delta(x_i, x_j)\big)

where N_i denotes the eight neighbours of pixel i and \delta(x_i, x_j) = 1 if x_i = x_j, 0 otherwise.
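To make the iteration concrete, the sketch below shows one way such a loop could be organised (Python/NumPy; two classes, Gaussian likelihood, 8-neighbour Gibbs penalty, and an ICM-style label update standing in for the full HMRF-EM of [2]). It is an illustrative simplification under those assumptions, not the implementation used for the results:

    # Minimal sketch of an HMRF-style segmentation loop (two classes, Gaussian
    # likelihood, 8-neighbour Gibbs penalty, ICM-like label update).
    import numpy as np

    def segment_hmrf(y, n_iter=10, beta=1.0):
        """y: 2-D float image; returns a boolean mask of the bright (cavitation) class."""
        # Initialise labels with a simple global threshold (Otsu's method [3] in practice)
        labels = (y > y.mean()).astype(int)
        for _ in range(n_iter):
            # M-step: re-estimate class means and standard deviations from current labels
            mu = np.array([y[labels == k].mean() for k in (0, 1)])
            sd = np.array([y[labels == k].std() + 1e-6 for k in (0, 1)])
            # Negative log-likelihood of every pixel under each class (Gaussian model)
            nll = np.stack([0.5 * ((y - mu[k]) / sd[k]) ** 2 + np.log(sd[k])
                            for k in (0, 1)], axis=0)
            # Gibbs penalty U: number of 8-neighbours whose label disagrees with class k
            pad = np.pad(labels, 1, mode='edge')
            disagree = np.zeros_like(nll)
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    shifted = pad[1 + di:pad.shape[0] - 1 + di,
                                  1 + dj:pad.shape[1] - 1 + dj]
                    for k in (0, 1):
                        disagree[k] += (shifted != k)
            # ICM-style update: choose the label minimising NLL + beta * U
            labels = np.argmin(nll + beta * disagree, axis=0)
        return labels.astype(bool)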

References

1. Gonzalez RC, Woods RE and Eddins SL, "Digital Image Processing Using Matlab", ISBN 0-13-008519-7, 2004.

2. Zhang Y, Brady M and Smith S, "Segmentation of Brain MR Images Through a Hidden Markov Random Field Model and the Expectation-Maximization Algorithm", IEEE Trans. Medical Imaging, vol. 20, no. 1, Jan 2001.

3. Otsu N, "A Threshold Selection Method from Gray-Level Histograms", IEEE Trans. Syst., Man, Cybern., vol. 9, pp. 62-66, Jan 1979.

Results

Comparison with estimation by hand.

Figure 3: Cavitation occurring during HIFU liver treatment. Segmentation results of HMRF-EM with Gaussian and Rayleigh noise models, compared with the hand-drawn region of cavitation (MCR denotes the misclassified-pixel ratio).

Figure 4: Test images generated with different levels of Rayleigh noise, showing results from HMRF-EM for two different noise models and from thresholding.
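Test images of this kind can be produced with a sketch like the one below (an assumed recipe: a bright disc on a darker background with Rayleigh speckle scaled to a chosen SNR; not necessarily the exact generation used for Figure 4):

    # Sketch: synthetic test image with a bright 'cavitation' disc and Rayleigh noise.
    import numpy as np

    def make_test_image(size=128, radius=20, fg=1.0, bg=0.3, snr=2.0, seed=0):
        rng = np.random.default_rng(seed)
        yy, xx = np.mgrid[:size, :size]
        disc = (xx - size // 2) ** 2 + (yy - size // 2) ** 2 <= radius ** 2
        clean = np.where(disc, fg, bg)
        # Rayleigh noise, rescaled so that mean signal / noise std equals the requested SNR
        noise = rng.rayleigh(scale=1.0, size=clean.shape)
        noise_std = np.sqrt((4.0 - np.pi) / 2.0)   # std of unit-scale Rayleigh noise
        noise *= clean.mean() / (snr * noise_std)
        return clean + noise, disc                 # noisy image, ground-truth mask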

Figure 5: Tracking changes in the boundary of the cavitation region during HIFU treatment (Figure 1). The sequence contains 221 frames at approximately 10 frames/s; images are shown at 2 s intervals only.

Figure 6: Measures of cavitation: decay of the mean and variance of the two regions. Top: blue region. Bottom: red region.

(Figure 3 results: Gaussian noise model, MCR = 8.3%; Rayleigh noise model, MCR = 7.1%.)
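The comparison and cavitation measures reported above can be computed as in the following sketch (Python/NumPy; the mask and image names are illustrative):

    # Sketch: misclassification ratio (MCR) against a hand-drawn mask, and simple
    # per-region measures such as those tracked in Figure 6.
    import numpy as np

    def mcr(auto_mask, hand_mask):
        # Fraction of pixels whose label disagrees with the hand-drawn region
        return np.count_nonzero(auto_mask != hand_mask) / auto_mask.size

    def region_measures(image, mask):
        vals = image[mask]
        return {
            "total_intensity": float(vals.sum()),
            "peak_intensity": float(vals.max()),
            "mean": float(vals.mean()),
            "variance": float(vals.var()),
            "area_px": int(np.count_nonzero(mask)),
        }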

(Figure 4 panels: SNR = 1, 2, 3, 4. Figure 5 frames shown at 0 s, 2 s, 4 s, 6 s, 8 s and 10 s.)

Conclusion and Further Work

This work investigates the possibility of providing reliable image segmentation to keep track of cavitation events during HIFU treatment. The method presented has been shown to be successful in delineating cavitation events even at low brightness.

Further work is required to improve the algorithm to handle a greater number of cavitation events, possibly overlapping. Alongside this, fundamental and clinical work is needed on the heating effects of bubbles, to determine which parameters are most useful to display clinically and to find ways of deriving them (possibly also using RF images).
