
IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 27, NO. 1, JANUARY 2008 11

Optic Disc Detection From Normalized Digital Fundus Images by Means of a Vessels' Direction Matched Filter

Aliaa Abdel-Haleim Abdel-Razik Youssif, Atef Zaki Ghalwash, and Amr Ahmed Sabry Abdel-Rahman Ghoneim*

Abstract—Optic disc (OD) detection is a main step in developing automated screening systems for diabetic retinopathy. We present in this paper a method to automatically detect the position of the OD in digital retinal fundus images. The method starts by normalizing luminosity and contrast throughout the image using illumination equalization and adaptive histogram equalization methods, respectively. The OD detection algorithm is based on matching the expected directional pattern of the retinal blood vessels. Hence, a simple matched filter is proposed to roughly match the direction of the vessels in the OD vicinity. The retinal vessels are segmented using a simple and standard 2-D Gaussian matched filter, and a vessels direction map of the segmented vessels is obtained from the same segmentation algorithm. The segmented vessels are then thinned, and filtered using local intensity, to finally represent the OD-center candidates. The difference between the proposed matched filter, resized into four different sizes, and the vessels' directions in the surrounding area of each OD-center candidate is measured. The minimum difference provides an estimate of the OD-center coordinates. The proposed method was evaluated using a subset of the STARE project's dataset, containing 81 fundus images of both normal and diseased retinas, initially used by literature OD detection methods. The OD center was detected correctly in 80 of the 81 images (98.77%). In addition, the OD center was detected correctly in all 40 images (100%) of the publicly available DRIVE dataset.

Index Terms—Biomedical image processing, fundus image analysis, matched filter, optic disc (OD), retinal imaging, telemedicine.

I. INTRODUCTION

THE optic disc (OD) is considered one of the main features of a retinal fundus image (Fig. 1), and methods have been described for its automatic detection [1], [2]. OD detection is a key preprocessing component in many algorithms designed for the automatic extraction of retinal anatomical structures and lesions [3], and thus an associated module of most retinopathy screening systems. The OD often serves as a landmark for other fundus features; for example, the fairly constant distance between the OD and the macula center (fovea) can be used as a priori knowledge to help estimate the location of the macula [1],

Manuscript received February 19, 2007; revised May 2, 2007. Asterisk indicates corresponding author.

A. A.-H. A.-R. Youssif and A. Z. Ghalwash are with the Department of Computer Science, the Faculty of Computers and Information, Helwan University, Cairo, Egypt (e-mail: [email protected]; [email protected]).

*A. A. S. A.-R. Ghoneim is with the Department of Computer Science, the Faculty of Computers and Information, Helwan University, Cairo, Egypt (e-mail: [email protected]).

Digital Object Identifier 10.1109/TMI.2007.900326

Fig. 1. (a) Typical normal fundus image showing the properties of a normal optic disc (bright ovoid shape on the left-hand side). (b) Fundus image diagnosed with high-severity retinal/subretinal exudates [9].

[3]. The OD has also been used as an initial point for retinal vasculature tracking methods [3], [4]; large vessels found in the OD vicinity can serve as seeds for vessel tracking. Also, the OD rim (boundary) causes false responses for linear blood vessel filters [5].

A change in the shape, color, or depth of the OD is an indicator of various ophthalmic pathologies, especially glaucoma [6]; thus, the OD dimensions are used to measure abnormal features due to certain retinopathies, such as glaucoma and diabetic retinopathy [4], [7]. Furthermore, the OD can initially be recognized as "one or more" candidate exudate regions (one of the lesions occurring in diabetic retinopathy [6]) due to the similarity of its color to the yellowish exudates [Fig. 1(b)]. Identifying and removing the OD improves the classification of exudate regions [8].

The OD is the brightest feature of the normal fundus, and it has an approximately vertically oval (elliptical) shape. In color fundus images, the OD appears as a bright yellowish or white region (Fig. 1). The OD is the exit region of the blood vessels and the optic nerve from the retina, and it is characterized by a relatively pale appearance owing to the nerve tissue underlying it. Measured relative to the retinal fundus image, it occupies about one seventh of the entire image [1]. Alternatively, according to [4], the OD size varies from one person to another, occupying about one tenth to one fifth of the image.

The process of automatically detecting/localizing the OD aims only to correctly detect the centroid (center point) of the OD. On the other hand, disc boundary detection aims to correctly segment the OD by detecting the boundary between the retina and the nerve head (neuroretinal rim). Some methods estimated the contour (boundary) of the OD as a circle or an ellipse (e.g., [1], [3], [7], and [10]). Other methods have

0278-0062/$25.00 © 2007 IEEE


been proposed for the exact detection of the OD contour (e.g., "snakes," which have the ability to bridge discontinuities in the edges [6]).

After stating the motivations for localizing the OD, defining the OD, and highlighting the difference between disc detection and disc boundary detection, the remainder of the paper is organized as follows. Most of the available methods for automatic OD detection are reviewed in Section II. In Section III, a description of the material used is given. Section IV presents the proposed algorithm. The results are presented and discussed in Sections V and VI, respectively. Finally, conclusions and further work are found in Section VII.

II. OD DETECTION METHODS: A LITERATURE REVIEW

Although the OD has well-defined features and characteristics, localizing the OD automatically and in a robust manner is not a straightforward process, since the appearance of the OD may vary significantly due to retinal pathologies. Consequently, in order to detect the OD effectively, the various methods developed should consider the variation in appearance, size, and location among different images [11].

The appearance of the yellowish OD region is characterized by a relatively rapid variation in intensity, because the "dark" blood-filled vessels run beside the "bright" nerve fibers. Sinthanayothin et al. [1], [12] detected the OD by identifying the area with the highest average variation among adjacent pixels, using a window size equal to that of the OD. The images were preprocessed using an adaptive local contrast enhancement method applied to the intensity component.
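This variance-based localization can be sketched as follows; a minimal sketch only, in which the window size and image are illustrative and not the authors' parameters, and the local variance is computed with running-mean filters rather than the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def od_by_local_variance(gray, win):
    """Locate an OD candidate as the pixel whose surrounding window
    (roughly OD-sized) has the highest local intensity variance."""
    g = gray.astype(float)
    mean = uniform_filter(g, win)
    mean_sq = uniform_filter(g * g, win)
    variance = mean_sq - mean ** 2          # Var[x] = E[x^2] - (E[x])^2
    return np.unravel_index(np.argmax(variance), variance.shape)
```

On a synthetic image whose only high-variance region is a checkerboard patch, the returned coordinates fall inside that patch.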

Instead of using the average variation in intensity, and assuming that the bright-appearing retinopathies (e.g., exudates) are far from reaching the OD size, Walter and Klein [13] approximated the OD center as the center of the largest, brightest connected object in a fundus image. They obtained a binary image including all the bright regions by simply thresholding the intensity image.
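A sketch of this threshold-and-label idea is below; the threshold value is a stand-in for whatever value the authors chose, and connected components are extracted with scipy rather than the authors' own code.

```python
import numpy as np
from scipy.ndimage import label, center_of_mass

def od_by_brightest_object(gray, thresh):
    """Approximate the OD center as the centroid of the largest
    connected component among pixels brighter than `thresh`."""
    bright = gray > thresh
    labels, n = label(bright)
    if n == 0:
        return None
    sizes = np.bincount(labels.ravel())[1:]   # per-component sizes; label 0 is background
    largest = 1 + int(np.argmax(sizes))
    return center_of_mass(labels == largest)
```

A small bright speckle elsewhere in the image does not disturb the estimate, since only the largest component is kept.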

In [14], Chrástek et al. applied an averaging filter to the green-band image, and located the OD roughly at the point of highest average intensity. Again, brightness was used by Li and Chutatape [4], [6] in order to find the OD candidate regions for their model-based approach. Pixels with the highest 1% gray levels in the intensity image were selected; obviously, these pixels were mainly from areas in the OD or bright lesions. The selected pixels were then clustered, and small clusters were discarded. A disc space (OD model) was created by applying principal component analysis (PCA) to a training set of 10 intensity-normalized square subimages manually cropped around the OD. Then, for each pixel in the candidate regions, the PCA transform was applied through a window at different scales. The OD was detected as the region with the smallest Euclidean distance to its projection onto the disc space.

Another model-based (template matching) approach was employed by Osareh et al. [8], [15], [16] to approximately locate the OD. Initially, the images were normalized by applying histogram specification, and then the OD regions from 25 color-normalized images were averaged to produce a gray-level template. The normalized correlation coefficient was then used to find the best match between the template and all the candidate pixels in the given image.

One more template matching approach is the Hausdorff-based template matching used by Lalonde et al. [10], together with pyramidal decomposition and confidence assignment. In the beginning, multiresolution processing was employed through pyramidal decomposition, which allowed large-scale object tracking; small bright retinal lesions (e.g., exudates) vanish at lower resolutions, facilitating the search for the OD region with few false candidates. A simple confidence value was calculated for all the OD candidate regions, representing the ratio between the mean intensity inside the candidate region and in its neighborhood. The Canny edge detector and a Rayleigh-based threshold were then applied to the green-band image regions corresponding to the candidate regions, constructing a binary edge map. Finally, the edge map regions were matched to a circular template with different radii using the Hausdorff distance. Another confidence value (the number of overlapped template pixels divided by the total number of template pixels) was calculated for all the regions having a Hausdorff distance between the edge map and the template less than a certain threshold value. The region having the highest total confidence value was considered the OD.
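The core matching step, comparing an edge point set against circular templates of several radii with the Hausdorff distance, can be sketched as follows; this is an assumption-laden simplification (point sets instead of binary maps, no pyramid, no confidence values) using scipy's directed Hausdorff routine.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(edge_pts, template_pts):
    """Symmetric Hausdorff distance between two (y, x) point sets."""
    d_ab = directed_hausdorff(edge_pts, template_pts)[0]
    d_ba = directed_hausdorff(template_pts, edge_pts)[0]
    return max(d_ab, d_ba)

def circle_points(cy, cx, radius, n=90):
    """Sample a circular template as a point set."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack([cy + radius * np.sin(t), cx + radius * np.cos(t)])
```

Among several candidate radii, the radius whose template lies closest (in the Hausdorff sense) to the observed edge map is selected.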

Pyramidal decomposition aids significantly in detecting large areas of bright pixels that probably coincide with the OD, but it can easily be fooled by large areas of bright pixels that may occur near the image's borders due to uneven illumination. For that reason, Frank ter Haar [11] applied illumination equalization to the green band of the image, and then created a resolution pyramid using a simple Haar-based discrete wavelet transform. Finally, the brightest pixel at the fifth level of the resolution pyramid was chosen to correspond to the OD area. Frank ter Haar [11] also proposed an alternative to the latter method based on the pyramidal decomposition of both the vasculature and the green band, where the fifth levels of the resolution pyramids of the illumination-equalized green band and the binary vessel segmentation are summed, and the highest value corresponds to the OD center.

The Hough transform, a technique capable of finding geometric shapes within an image, has also been employed to detect the OD. In [7], Abdel-Ghafar et al. employed the circular Hough transform (CHT) to detect the OD, which has a roughly circular shape. The retinal vasculature in the green-band image was suppressed using the morphological closing operator. The Sobel operator and a simple threshold were then used to extract the edges in the image. The CHT was finally applied to the edge points, and the largest circle was found consistently to correspond to the OD. Barrett et al. [17] also proposed using the Hough transform to localize the OD; their method was implemented by [11] in two different ways.

Frank ter Haar [11] explored the Hough transform using two methods. In the first, the Hough transform was applied only to pixels on or close to the retinal vasculature in a binary image of the vasculature obtained by [18]. The binary vasculature was dilated in order to increase the possible OD candidates. Alternatively, in the second method, Frank ter Haar [11] applied the Hough transform only to the brightest 0.35% of the fuzzy convergence image obtained by [19], [20]. Once more, dilation was applied to the convergence image to overcome the gaps created by small vessels [11].


Fig. 2. Schematic drawing of the retinal vasculature orientations [11].

The shape, color, and size of the OD show large variance, especially in the presence of retinopathies, and therefore detection methods based on these properties have been shown to be weak and impractical [11]. An alternative property to be examined is the retinal vasculature. The OD is the entrance point for both the optic nerve and the few main blood vessels, which split into many smaller vessels that spread around the retina. As a result, most of the recently proposed techniques try to utilize the information provided by the retinal vasculature.

Fuzzy convergence [19], [20] is a novel voting-type algorithm developed by Hoover and Goldbaum in order to determine the origination point of the retinal vasculature (convergence point), and thus localize the OD; this convergence is considered the only consistently visible property of the OD. The inputs to the fuzzy convergence algorithm were six binary vessel segmentations (each at a different scale) obtained from the green-band image. Each vessel was modeled by a fuzzy segment, which contributes to a cumulative voting image (a convergence image) in which each pixel's value equals the number of fuzzy segments on which the pixel lies. Finally, the convergence image was smoothed and thresholded to determine the strongest point(s) of convergence. If the final result was inconclusive, the green-band image was illumination equalized, and Fisher's linear discriminant was applied to regions containing the brightest pixels to detect the OD.

Frank ter Haar [11] proposed two methods using a vessel-branch network constructed from a binary vessel image. In the first, he searched for the branch with the most vessels in order to locate the OD. Alternatively, in the second method, he searched the constructed vessel-branch network for all paths; since the end points of all paths represent a degree of convergence, the area where more path endings were located probably coincides with the OD location. The Hough transform was then applied to these areas (having the highest path-ending convergence) to detect the OD.

One last method employed by Frank ter Haar [11] was based on fitting the vasculature orientations to a directional model. Since, starting at the OD, the retinal vasculature follows approximately the same divergence pattern in all retinal images (Fig. 2), any segmented vasculature can be fitted to the directional model representing the common divergence pattern in order to detect the OD. The directional model (DM) was created using the vessel segmentations of 80 training images. For all the vessels' pixels in each image, the orientation was calculated to form a directional

TABLE I
OD DETECTION RESULTS FOR THE PROPOSED AND LITERATURE REVIEWED METHODS

vessel map. The 80 directional maps were then manually aligned, and the DM was created by averaging (at each pixel) all the corresponding orientation values. The right-eye DM thus created was reflected in order to create a left-eye DM. Finally, each pixel in an input vasculature was aligned to the OD center in both DMs, and then the distance between the fitted input vasculature and both DMs was measured. The pixel having the minimal distance to both DMs is selected as the OD location. This method recorded the best success rate among the 15 automatic OD-detection methods evaluated by Frank ter Haar [11] (Table I).


Closely related to fitting the vasculature to a directional model, Foracchia et al. [21] identified the position of the OD using a geometrical model of the vessel structure. Once again, this method was based on the fact that, in all images, the retinal vasculature originates from the OD following a similar directional pattern. In this method, the main vessels originating from the OD were geometrically modeled using two parabolas, and consequently the OD position can be located as the common vertex of the two parabolas (i.e., the convergence point of the retinal vasculature). After the complete model of vessel directions was created, the vasculature of the input image was extracted, and the difference between the model directions and the extracted vessel directions was minimized using a weighted residual sum of squares (RSS) and a simulated annealing (SA) optimization algorithm.
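The gist of such direction-model fitting can be sketched as below, under strong simplifying assumptions: a single unweighted parabola family x = c*y^2 stands in for the paper's two-parabola weighted model, and an exhaustive grid search replaces simulated annealing. The tangent of the member parabola through an offset (dy, dx) is proportional to (dy, 2*dx), which gives a closed-form model direction.

```python
import numpy as np

def model_direction(dy, dx):
    """Tangent direction (mod pi) of the parabola family x = c*y^2
    (vertex at the origin) at the member passing through (dy, dx)."""
    return np.arctan2(2.0 * dx, dy) % np.pi

def fit_vertex(vessel_yx, vessel_theta, candidates):
    """Grid-search the vertex minimizing the angular residual sum of
    squares between observed and model vessel directions."""
    best, best_cost = None, np.inf
    for vy, vx in candidates:
        dy = vessel_yx[:, 0] - vy
        dx = vessel_yx[:, 1] - vx
        diff = (model_direction(dy, dx) - vessel_theta) % np.pi
        diff = np.minimum(diff, np.pi - diff)   # angular distance mod pi
        cost = float(np.sum(diff ** 2))
        if cost < best_cost:
            best, best_cost = (vy, vx), cost
    return best
```

With vessel pixels sampled exactly from one parabola, the search recovers the true vertex, where the residual is zero.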

In [22], Goldbaum et al. combined three OD properties that jointly located the OD: the convergence of the blood vasculature towards the OD, the OD's appearance as a bright disc, and the large vessels entering the OD from above and below. Using most of the OD properties previously mentioned, Lowell et al. [5] carefully designed an OD template which was then correlated with the intensity component of the fundus image using the full Pearson-R correlation. This detection filter (template) consists of a Laplacian of Gaussian with a vertical channel carved out of the middle, corresponding to the main blood vessels departing the OD vertically.

Instead, Tobin et al. [23] proposed a method that relied mainly on vasculature-related OD properties. A Bayesian classifier (trained using 50 images) was used to classify each pixel in red-free images as OD or not-OD, using probability distributions describing the luminance across the retina and the density, average thickness, and average orientation of the vasculature. Abràmoff and Niemeijer [24] generally used the same features for OD detection, but measured them in a different fashion; a kNN regression (trained using 100 images) was then used to estimate the OD location. To the best of our knowledge, the latter two methods are the only supervised methods proposed in the literature for OD detection.

III. MATERIAL

Two publicly available datasets were used to test the proposed method. The main dataset is a subset of the STARE project's dataset [9]. The subset contains 81 fundus images that were used initially by Hoover and Goldbaum [19] for evaluating their automatic OD localization method. The images were captured using a TopCon TRV-50 fundus camera at a 35° field-of-view (FOV), and subsequently digitized at 605 × 700 pixels, 24 bits per pixel [19]. The dataset contains 31 images of normal retinas and 50 of diseased retinas. The dataset was also used by Foracchia et al. [21]. Moreover, Frank ter Haar [11] used it for comparing 15 different OD-detection methods. Reported results using STARE are shown in Table I.

The second dataset used is the DRIVE dataset [25], established to facilitate comparative studies on retinal vasculature segmentation. The dataset consists of a total of 40 color fundus photographs used for making actual clinical diagnoses, where 33 photographs do not show any sign of diabetic retinopathy and seven show signs of mild early diabetic retinopathy. The 24-bit, 768 × 584 pixel color images are in compressed JPEG format, and were acquired using a Canon CR5 nonmydriatic 3CCD camera with a 45° FOV.

IV. PROPOSED METHOD

This study, inspired by the work of Frank ter Haar [11], Hoover and Goldbaum [19], and Foracchia et al. [21], presents a method for the automatic detection of the OD. The proposed method comprises several steps. Initially, a binary mask is generated. Then the illumination and contrast throughout the image are equalized. Finally, the retinal vasculature is segmented, and the directions of the vessels are matched to the proposed filter, which represents the expected vessels' directions in the OD vicinity.

A. Mask Generation

Mask generation aims to label the pixels belonging to the (semi)circular retinal fundus region-of-interest (ROI) in the entire image, and to exclude the background of the image from further calculations and processing [3]. We used the method proposed by Frank ter Haar [11], who applied an empirically chosen threshold to the image's red band. The morphological operators (opening, closing, and erosion) were then applied, in that order, to the result of the preceding step using a 3 × 3 square kernel to give the final ROI mask [Fig. 4(a)].
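The mask-generation step above can be sketched as follows; a minimal sketch using scipy's binary morphology, in which the threshold argument is a placeholder for the empirically chosen value (not stated here).

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing, binary_erosion

def fundus_mask(red_band, thresh):
    """ROI mask: threshold the red band, then apply opening, closing,
    and erosion (in that order) with a 3x3 square kernel."""
    k = np.ones((3, 3), bool)
    mask = red_band > thresh
    mask = binary_opening(mask, structure=k)
    mask = binary_closing(mask, structure=k)
    mask = binary_erosion(mask, structure=k)
    return mask
```

The opening removes isolated bright speckles outside the fundus, while the closing and erosion tidy and slightly shrink the ROI boundary.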

B. Illumination Equalization

The illumination in retinal images is nonuniform due to the variation of the retinal response or the nonuniformity of the imaging system. Vignetting and other forms of uneven illumination make the typical analysis of retinal images impractical. In addition, Aliaa et al. [26] showed that uneven illumination negatively affects the process of localizing successful OD candidates. To overcome the nonuniform illumination, Hoover and Goldbaum [19] adjusted (equalized) each pixel using the following equation:

$$f_{\mathrm{eq}}(i,j) = f(i,j) + \mu - \bar{f}_W(i,j) \qquad (1)$$

where $\mu$ is the desired average intensity (128 in an 8-bit grayscale image) and $\bar{f}_W(i,j)$ is the mean intensity value of the pixels within a window $W$ centered at $(i,j)$. The mean intensities are smoothed using the same windowing. Instead of using a variable window size as applied by [19], we used a running window of only one size (40 × 40) as applied by [11]. Consequently, since the number of pixels used while calculating the local average intensity in the center is greater than the number used near the border, the ROI of the retinal images is shrunk by five pixels to discard the pixels near the border [11]. Illumination equalization (1) is applied to the green-band image [Fig. 4(b) and (c)].
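Illumination equalization with a single running window reduces to one local-mean filter and an additive offset; a minimal sketch (scipy's uniform filter stands in for the running-window mean):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def illumination_equalize(green, win=40, target=128.0):
    """Eq. (1): add the offset between the desired average intensity
    and the local mean over a win x win running window."""
    local_mean = uniform_filter(green.astype(float), win)
    return green + target - local_mean
```

On a perfectly uniform image the local mean equals the pixel value everywhere, so the output is the flat target intensity.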

C. Adaptive Histogram Equalization (AHE)

Wu et al. [27] applied AHE to normalize and enhance the contrast within fundus images. They found it more effective than classical histogram equalization, especially when detecting small blood vessels characterized by low contrast levels. AHE


Fig. 3. Proposed “vessels’ direction at the OD vicinity” matched filter.

is applied to the illumination-equalized, inverted green-band image as proposed in [27], where each pixel is adapted using the following equation:

$$f_{\mathrm{AHE}}(p) = 255 \cdot \left[\frac{1}{h^2}\sum_{p' \in R(p)} s\bigl(f(p) - f(p')\bigr)\right]^{r} \qquad (2)$$

where $R(p)$ denotes the pixel $p$'s neighborhood (a square window with side length $h$), $s(d) = 1$ if $d > 0$, and $s(d) = 0$ otherwise. The values of $h$ and $r$ were empirically chosen by [27] to be 81 and 8, respectively [Fig. 4(d)].
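Applied literally, (2) replaces each pixel by a power of the fraction of its neighborhood that it exceeds; a naive sketch (the O(N·h²) loops are for clarity, not speed, and the small h and r used in the usage note are illustrative, not the paper's 81 and 8):

```python
import numpy as np

def ahe(img, h=81, r=8):
    """Eq. (2): each output pixel is 255 times the fraction of its
    h x h neighborhood that it strictly exceeds, raised to power r."""
    pad = h // 2
    f = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            window = f[i:i + h, j:j + h]
            frac = np.mean(img[i, j] > window)   # s(.) averaged over R(p)
            out[i, j] = 255.0 * frac ** r
    return out
```

Dark pixels are driven towards 0 and locally bright pixels towards 255, which is what stretches low-contrast vessels.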

D. Retinal Blood Vessels Segmentation

To segment the retinal blood vessels, we used the simple and standard edge-fitting algorithm proposed by Chaudhuri et al. [28], where the similarity between a predefined 2-D Gaussian template and the fundus image is maximized. Twelve 15 × 15 filters (templates) were generated to model the retinal vasculature along all different orientations (0° to 165°, with an angular resolution of 15°), then applied to each pixel, where only the maximum of their responses is kept. In order to generate a binary vessel/nonvessel image [Fig. 4(e)], the maximum responses are thresholded using the global threshold selection algorithm proposed by Otsu [29].

Instead of applying the 12 templates to an averaged green-band image as suggested by [28], applying them to the adaptively histogram-equalized image significantly improves the segmentation algorithm and increases the sensitivity and specificity of the detected vessels [30]. A vessels direction map (VDM) can be obtained from the segmentation algorithm by recording the direction of the template that achieved the maximum response at each pixel. Then, for all the pixels labeled as nonvessel, the corresponding values in the VDM are assigned not-a-number (NaN) in order to exclude them from further processing.
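The matched-filter segmentation and the VDM extraction can be sketched together as below. This is an illustrative reimplementation, not the paper's code: the kernel parameters (sigma, segment length) are assumptions, the kernel's negative sign targets dark vessels on a bright background (the paper instead works on the inverted green band, where vessels are bright), and a small histogram-based Otsu routine is included for self-containment.

```python
import numpy as np
from scipy.ndimage import convolve

def mf_kernel(theta_deg, size=15, sigma=2.0, length=9):
    """One 15x15 Chaudhuri-style kernel: an inverted Gaussian vessel
    cross-section swept along direction theta, made zero-mean."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    t = np.deg2rad(theta_deg)
    v = x * np.cos(t) + y * np.sin(t)        # along-vessel coordinate
    u = -x * np.sin(t) + y * np.cos(t)       # across-vessel coordinate
    k = np.where(np.abs(v) <= length / 2.0,
                 -np.exp(-(u ** 2) / (2.0 * sigma ** 2)), 0.0)
    support = np.abs(v) <= length / 2.0
    k[support] -= k[support].mean()          # zero-mean over the support
    return k

def segment_vessels(img, angles=range(0, 180, 15)):
    """Max response over 12 orientations, plus a VDM holding the
    best-matching angle per pixel."""
    responses = np.stack([convolve(img.astype(float), mf_kernel(a)) for a in angles])
    vdm = np.asarray(list(angles))[responses.argmax(axis=0)]
    return responses.max(axis=0), vdm

def otsu_threshold(x, bins=256):
    """Otsu's global threshold via maximal between-class variance."""
    hist, edges = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)
    m = np.cumsum(p * centers)
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (m[-1] * w0 - m) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(between)]
```

On a synthetic image with a single dark vertical line, the thresholded max response recovers the line and the VDM reports its (vertical, 90°) direction.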

E. Vessels’ Direction Matched Filter

"A matched filter describes the expected appearance of a desired signal, for purposes of comparative modeling" [31]. Thus, in order to detect the OD, a simple vessels' direction matched filter is proposed to roughly match the direction of the vessels in the OD vicinity (Fig. 3). The 9 × 9 template is resized using bilinear interpolation to sizes 241 × 81, 361 × 121, 481 × 161, and 601 × 201 to match the structure of the vessels at different scales. These sizes are specifically tuned for STARE and DRIVE, but they can easily be adjusted to other datasets. The difference between all four templates (in the single given

Fig. 4. Proposed method applied to the fundus image in Fig. 4(h). (a) Generated ROI mask. (b) Green-band image. (c) Illumination-equalized image. (d) Adaptive histogram equalization. (e) Binary vessel/nonvessel image. (f) Thinned version of the preceding binary image. (g) Final OD-center candidates. (h) OD detected successfully using the proposed method (white cross, right-hand side).

direction) and a VDM is calculated, and the pixel having the least accumulated difference is selected as the OD center [Fig. 4(h)]. To reduce the computational burden, the matched filters are applied only to candidate pixels picked from the fundus image. The binary vessel/nonvessel image is thinned [Fig. 4(f)], reducing the pixels labeled as vessels to the vessels' centerlines. All remaining vessel-labeled pixels that are not within a 41 × 41 square centered on one of the highest 4% intensity pixels in the illumination-equalized image are relabeled as nonvessel pixels [Fig. 4(g)]. This final step aims only to reduce the number of OD candidates; thus, altering the size of the square or the proportion of highest intensity pixels has no significant effect. The remaining vessel-labeled pixels are potential OD centers, and are therefore selected as candidates for applying the four sizes of the matched filter.
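The candidate-scoring machinery can be sketched as follows, with heavy caveats: the template values of Fig. 3 are not reproduced here (the constant-direction template in the usage note is purely hypothetical), nearest-neighbor resizing stands in for the paper's bilinear interpolation, and differences are averaged uniformly over vessel pixels.

```python
import numpy as np

def resize_nn(t, rows, cols):
    """Nearest-neighbor resize of a direction template (degrees)."""
    ri = np.arange(rows) * t.shape[0] // rows
    ci = np.arange(cols) * t.shape[1] // cols
    return t[np.ix_(ri, ci)]

def angular_diff(a, b):
    """Angular distance between directions, modulo 180 degrees."""
    d = np.abs(a - b) % 180.0
    return np.minimum(d, 180.0 - d)

def od_center(vdm, candidates, template, sizes):
    """Pick the candidate whose surrounding vessel directions differ
    least from the resized direction template; NaNs in the VDM
    (nonvessel pixels) are ignored."""
    best, best_cost = None, np.inf
    H, W = vdm.shape
    for rows, cols in sizes:
        t = resize_nn(template, rows, cols)
        for cy, cx in candidates:
            r0, c0 = cy - rows // 2, cx - cols // 2
            if r0 < 0 or c0 < 0 or r0 + rows > H or c0 + cols > W:
                continue                     # template must fit inside the image
            win = vdm[r0:r0 + rows, c0:c0 + cols]
            valid = ~np.isnan(win)
            if not valid.any():
                continue
            cost = angular_diff(win[valid], t[valid]).mean()
            if cost < best_cost:
                best, best_cost = (cy, cx), cost
    return best
```

With an all-vertical toy template, the candidate surrounded by vertically oriented vessel pixels wins over candidates near horizontally or diagonally oriented vessels.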


Fig. 5. Results of the proposed method (the white cross represents the estimated OD center). (a) The only case where the OD detection method failed. (b)–(h) Results of the proposed method on the images shown in [21].

V. RESULTS

The proposed method achieved a success rate of 98.77% (i.e., the OD was detected correctly in 80 of the 81 images contained in the STARE dataset). The estimated OD center is considered correct if it is positioned within 60 pixels of the manually identified center, as proposed in [19] and [21]. The average distance (over the 80 successful images) between the estimated OD center and the manually identified center was 26 pixels. The only case in which the OD was not correctly detected [Fig. 5(a)] was due to uneven, crescent-shaped illumination near the border, which biased the OD candidates and affected the vessel segmentation algorithm. OD detection results for the proposed and literature-reviewed methods are summarized in Table I. In addition, and

Fig. 6. Results of the proposed method using the DRIVE dataset (white crossrepresents the estimated OD center).

for comparison, Fig. 5(b)–(h) shows the OD detection results ofthe proposed method on the images shown in [21]. Additionally,the OD was detected correctly in all of the 40 DRIVE dataset im-ages (a 100% success rate) using the proposed method (Fig. 6).The average distance between the estimated and the manuallyidentified OD centers was 17 pixels.
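The 60-pixel success criterion used in this evaluation (following [19] and [21]) reduces to a one-line distance check. The function name below is ours:

```python
import math

def od_estimate_correct(estimated, manual, tol=60.0):
    """True if the estimated OD center (x, y) lies within `tol` pixels of
    the manually identified center: the success criterion of [19], [21]."""
    return math.dist(estimated, manual) <= tol
```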

VI. DISCUSSION

A Matlab prototype was implemented for the procedure applying the matched filters; runs took on average 3.5 min per image on a laptop (Intel Centrino 1.7-GHz CPU, 512 MB RAM). Though using 2-D matched filters for retinal vessel segmentation and OD detection involves more computation than the other reviewed methods, such template-based methods ease implementation on specialized high-speed hardware and/or parallel processors.

Since the OD is characterized as the brightest anatomical structure in a retinal image, selecting the highest 1% intensity pixels should yield areas within the OD [6]. Unfortunately, due to uneven illumination (vignetting in particular), the OD may appear darker than other retinal regions, especially since retinal images are often captured with the fovea in the middle of the image and the


OD to one side [19]. Illumination equalization significantly normalized the luminosity across the fundus image, increasing the number of OD candidates within the OD vicinity. Selecting the highest 4% intensity pixels guaranteed the presence of OD candidates within the OD area.

Moreover, using high intensity pixels to filter the segmented vessels implicitly searches for areas of high variance, as proposed in [1]. However, applying intensity filtering to segmented vessels, as proposed here, is much more robust than OD localization methods based on intensity variation or on intensity values alone: the latter methods are no longer straightforward once the OD loses its distinct appearance due to nonuniform illumination or pathologies.

AHE transformed the image into a more appropriate appearance for the application of the segmentation algorithm. Though the 2-D matched filter achieved poor results compared to other retinal vessel segmentation methods [18], applying it to the AHE image significantly improved its performance. A clear advantage of using the 2-D matched filter for vessel segmentation is that the VDM is obtained implicitly during segmentation, without any additional algorithm as proposed in [23] and [24].
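The implicit VDM falls out of the matched filtering itself: at each pixel, the orientation of the kernel giving the maximal response is taken as the local vessel direction. The sketch below illustrates this with a small bank of rotated zero-mean Gaussian kernels in the spirit of Chaudhuri et al. [28]; the kernel sizes, sigma, number of angles, and function names are our illustrative choices, not the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_matched_kernels(sigma=2.0, length=9, n_angles=12):
    """Bank of rotated 2-D Gaussian matched-filter kernels.
    Assumes vessels are darker than the background (as in the green band),
    hence the negated Gaussian profile."""
    half = length // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    angles = np.arange(n_angles) * (180.0 / n_angles)
    kernels = []
    for a in angles:
        t = np.deg2rad(a)
        # Signed distance across a line at orientation `a` through the
        # origin: the profile is Gaussian across the vessel, constant
        # along it, so angle `a` matches a vessel running at orientation a.
        u = xs * np.sin(t) - ys * np.cos(t)
        k = -np.exp(-u ** 2 / (2 * sigma ** 2))
        k -= k.mean()                    # zero-mean kernel
        kernels.append(k)
    return kernels, angles

def segment_with_vdm(image, threshold):
    """Segment vessels and obtain the VDM implicitly: the angle of the
    kernel with maximal response at each pixel."""
    kernels, angles = gaussian_matched_kernels()
    responses = np.stack([convolve(image, k) for k in kernels])
    vessels = responses.max(axis=0) > threshold
    vdm = angles[responses.argmax(axis=0)]  # valid where `vessels` is True
    return vessels, vdm
```

A single pass over the kernel bank thus yields both the binary vessel map (after thresholding) and the direction map, with no separate direction-estimation algorithm.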

The proposed 2-D vessels' direction matched filter was successful and robust in representing the directional model of the retinal vessels surrounding the OD. Resizing the filter into four sizes (see Section IV-E) aimed to capture the vertical longitudinal structure of the retinal vessels, as proposed by Tobin et al. [23], where the convolution mask was roughly 1 OD diameter wide and 3 OD diameters tall. The proposed filter is considerably simpler than the directional model proposed in [11], the parabolic geometrical model proposed in [21], and the fuzzy convergence approach proposed in [19]. In addition, on the STARE dataset, which is full of tough pathological cases, the proposed method achieved better results than the reviewed methods (Table I).
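The matching rule itself can be sketched as follows: for each candidate and each of the resized direction templates, accumulate the angular difference between the template's expected orientations and the measured vessel directions in the surrounding window, and keep the candidate with the minimum accumulated difference. The scale factors, nearest-neighbour resizing, and function names below are our assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import zoom

def angular_diff(a, b):
    """Smallest difference between undirected orientations, in degrees."""
    d = np.abs(a - b) % 180.0
    return np.minimum(d, 180.0 - d)

def best_od_center(candidates, vdm, vessel_mask, template,
                   scales=(0.5, 0.75, 1.0, 1.25)):
    """Pick the candidate whose surrounding vessel directions best match
    `template`, a 2-D array of expected orientations (roughly 1 OD
    diameter wide and 3 tall, following [23])."""
    best, best_cost = None, np.inf
    for s in scales:
        t = zoom(template, s, order=0)       # resized direction template
        th, tw = t.shape
        hh, hw = th // 2, tw // 2
        for (y, x) in candidates:
            y0, x0 = y - hh, x - hw
            if (y0 < 0 or x0 < 0 or
                    y0 + th > vdm.shape[0] or x0 + tw > vdm.shape[1]):
                continue                     # window falls off the image
            win_dir = vdm[y0:y0 + th, x0:x0 + tw]
            win_ves = vessel_mask[y0:y0 + th, x0:x0 + tw]
            if not win_ves.any():
                continue
            # Mean angular difference over vessel pixels only.
            cost = angular_diff(win_dir, t)[win_ves].mean()
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best
```

Restricting the cost to vessel-labeled pixels keeps background pixels (whose directions are undefined) from diluting the match.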

VII. CONCLUSION AND FUTURE WORK

The paper presented a simple method for OD detection using a 2-D vessels' direction matched filter. The proposed approach achieved better results than those reported in the literature. This study could be extended as follows.

1) Enhancing the performance of the vessel segmentation algorithm, which significantly affects the performance and efficiency of the proposed method. This can be achieved by employing other preprocessing techniques, or by employing postprocessing steps as proposed in [1].

2) Using other OD properties or vascular-related OD properties besides intensity and variance (e.g., vessels' density and diameter) to reduce the OD-center candidates, which would further enhance performance.

3) Examining the performance of existing OD detection methods using other large, benchmark, publicly available datasets to achieve more comprehensive results.

4) Choosing other vessel segmentation algorithms in which the VDM can be obtained implicitly (e.g., the angles of the maximum 2-D Gabor wavelet response used in [32] and [33] can simply serve as a VDM).

ACKNOWLEDGMENT

The authors would like to thank their fellow authors of references [1], [3], [9], [18], and [28] for their support in acquiring the materials and resources needed to conduct the present study.

REFERENCES

[1] C. Sinthanayothin, J. F. Boyce, H. L. Cook, and T. H. Williamson, “Automated localisation of the optic disk, fovea, and retinal blood vessels from digital colour fundus images,” Br. J. Ophthalmol., vol. 83, no. 8, pp. 902–910, 1999.

[2] T. Teng, M. Lefley, and D. Claremont, “Progress towards automated diabetic ocular screening: A review of image analysis and intelligent systems for diabetic retinopathy,” Med. Biol. Eng. Comput., vol. 40, pp. 2–13, 2002.

[3] L. Gagnon, M. Lalonde, M. Beaulieu, and M.-C. Boucher, “Procedure to detect anatomical structures in optical fundus images,” in Proc. Conf. Med. Imag. 2001: Image Process., San Diego, CA, Feb. 19–22, 2001, pp. 1218–1225.

[4] H. Li and O. Chutatape, “Automatic location of optic disc in retinal images,” in IEEE Int. Conf. Image Process., Oct. 7–10, 2001, vol. 2, pp. 837–840.

[5] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, and L. Kennedy, “Optic nerve head segmentation,” IEEE Trans. Med. Imag., vol. 23, no. 2, pp. 256–264, Feb. 2004.

[6] H. Li and O. Chutatape, “A model-based approach for automated feature extraction in fundus images,” in 9th IEEE Int. Conf. Computer Vision (ICCV’03), 2003, vol. 1, pp. 394–399.

[7] R. A. Abdel-Ghafar, T. Morris, T. Ritchings, and I. Wood, “Detection and characterisation of the optic disk in glaucoma and diabetic retinopathy,” presented at the Med. Image Understand. Anal. Conf., London, U.K., Sep. 23–24, 2004.

[8] A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham, “Classification and localisation of diabetic-related eye disease,” in 7th Eur. Conf. Computer Vision (ECCV), May 2002, vol. 2353, LNCS, pp. 502–516.

[9] STARE Project Website, Clemson Univ., Clemson, SC [Online]. Available: http://www.ces.clemson.edu/~ahoover/stare

[10] M. Lalonde, M. Beaulieu, and L. Gagnon, “Fast and robust optic disk detection using pyramidal decomposition and Hausdorff-based template matching,” IEEE Trans. Med. Imag., vol. 20, no. 11, pp. 1193–1200, Nov. 2001.

[11] F. ter Haar, “Automatic localization of the optic disc in digital colour images of the human retina,” M.S. thesis, Utrecht University, Utrecht, The Netherlands, 2005.

[12] C. Sinthanayothin, “Image analysis for automatic diagnosis of diabetic retinopathy,” Ph.D. dissertation, University of London (King’s College London), London, U.K., 1999.

[13] T. Walter and J.-C. Klein, “Segmentation of color fundus images of the human retina: Detection of the optic disc and the vascular tree using morphological techniques,” in Proc. 2nd Int. Symp. Med. Data Anal., 2001, pp. 282–287.

[14] R. Chrástek, M. Wolf, K. Donath, G. Michelson, and H. Niemann, “Optic disc segmentation in retinal images,” Bildverarbeitung für die Medizin 2002, pp. 263–266, 2002.

[15] A. Osareh, “Automated identification of diabetic retinal exudates and the optic disc,” Ph.D. dissertation, Department of Computer Science, Faculty of Engineering, University of Bristol, Bristol, U.K., 2004.

[16] A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham, “Comparison of colour spaces for optic disc localisation in retinal images,” in Proc. 16th Int. Conf. Pattern Recognition, 2002, pp. 743–746.

[17] S. F. Barrett, E. Naess, and T. Molvik, “Employing the Hough transform to locate the optic disk,” in Biomed. Sci. Instrum., 2001, vol. 37, pp. 81–86.

[18] M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, and M. D. Abràmoff, “Comparative study of retinal vessel segmentation methods on a new publicly available database,” in SPIE Med. Imag., J. M. Fitzpatrick and M. Sonka, Eds., 2004, vol. 5370, pp. 648–656.

[19] A. Hoover and M. Goldbaum, “Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels,” IEEE Trans. Med. Imag., vol. 22, no. 8, pp. 951–958, Aug. 2003.

[20] A. Hoover and M. Goldbaum, “Fuzzy convergence,” in Proc. IEEE Computer Soc. Conf. Computer Vis. Pattern Recognit., Santa Barbara, CA, 1998, pp. 716–721.

[21] M. Foracchia, E. Grisan, and A. Ruggeri, “Detection of optic disc in retinal images by means of a geometrical model of vessel structure,” IEEE Trans. Med. Imag., vol. 23, no. 10, pp. 1189–1195, Oct. 2004.

[22] M. Goldbaum, S. Moezzi, A. Taylor, S. Chatterjee, J. Boyd, E. Hunter, and R. Jain, “Automated diagnosis and image understanding with object extraction, object classification, and inferencing in retinal images,” in Proc. IEEE Int. Congress Image Process., Los Alamitos, CA, 1996, vol. 3, pp. 695–698.


[23] K. W. Tobin, E. Chaum, V. P. Govindasamy, T. P. Karnowski, and O. Sezer, “Characterization of the optic disc in retinal imagery using a probabilistic approach,” in Med. Imag. 2006: Image Process., J. M. Reinhardt and J. P. W. Pluim, Eds., 2006, vol. 6144, pp. 1088–1097.

[24] M. D. Abràmoff and M. Niemeijer, “The automatic detection of the optic disc location in retinal images using optic disc location regression,” in Proc. IEEE EMBC 2006, Aug. 2006, pp. 4432–4435.

[25] Research section, Digital Retinal Images for Vessel Extraction (DRIVE) database, University Medical Center Utrecht, Image Sciences Institute, Utrecht, The Netherlands [Online]. Available: http://www.isi.uu.nl/Research/Databases/DRIVE

[26] A. A. A. Youssif, A. Z. Ghalwash, and A. S. Ghoneim, “A comparative evaluation of preprocessing methods for automatic detection of retinal anatomy,” in Proc. 5th Int. Conf. Informatics Syst. (INFOS2007), Mar. 24–26, 2007, pp. 24–30.

[27] D. Wu, M. Zhang, J.-C. Liu, and W. Bauman, “On the adaptive detection of blood vessels in retinal images,” IEEE Trans. Biomed. Eng., vol. 53, no. 2, pp. 341–343, Feb. 2006.

[28] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, “Detection of blood vessels in retinal images using two-dimensional matched filters,” IEEE Trans. Med. Imag., vol. 8, no. 3, pp. 263–269, Sep. 1989.

[29] N. Otsu, “A threshold selection method from gray level histograms,” IEEE Trans. Syst., Man, Cybern., vol. SMC-9, no. 1, pp. 62–66, Jan. 1979.

[30] A. A. A. Youssif, A. Z. Ghalwash, and A. S. Ghoneim, “Comparative study of contrast enhancement and illumination equalization methods for retinal vasculature segmentation,” presented at the 3rd Cairo Int. Biomed. Eng. Conf. (CIBEC’06), Cairo, Egypt, Dec. 21–24, 2006.

[31] A. Hoover, V. Kouznetsova, and M. Goldbaum, “Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response,” IEEE Trans. Med. Imag., vol. 19, no. 3, pp. 203–210, Mar. 2000.

[32] J. V. B. Soares, J. J. G. Leandro, R. M. Cesar Jr., H. F. Jelinek, and M. J. Cree, “Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification,” IEEE Trans. Med. Imag., vol. 25, no. 9, pp. 1214–1222, Sep. 2006.

[33] A. A. A. Youssif, A. Z. Ghalwash, and A. S. Ghoneim, “Automatic segmentation of the retinal vasculature using a large-scale support vector machine,” in 2007 IEEE Pacific Rim Conf. Commun., Computers Signal Process., Victoria, BC, Canada, Aug. 22–24, 2007.