Journal of Medical Systems, ISSN 0148-5598, Volume 36, Number 5. J Med Syst (2012) 36:2865-2879. DOI 10.1007/s10916-011-9764-4

Directional Binary Wavelet Patterns for Biomedical Image Indexing and Retrieval

Subrahmanyam Murala, R. P. Maheshwari & R. Balasubramanian


Your article is protected by copyright and all rights are held exclusively by Springer Science+Business Media, LLC. This e-offprint is for personal use only and shall not be self-archived in electronic repositories. If you wish to self-archive your work, please use the accepted author's version for posting to your own website or your institution's repository. You may further deposit the accepted author's version on a funder's repository at a funder's request, provided it is not made publicly available until 12 months after publication.


ORIGINAL PAPER

Directional Binary Wavelet Patterns for Biomedical Image Indexing and Retrieval

Subrahmanyam Murala & R. P. Maheshwari &R. Balasubramanian

Received: 18 April 2011 / Accepted: 25 July 2011 / Published online: 6 August 2011
© Springer Science+Business Media, LLC 2011

Abstract A new algorithm for medical image retrieval is presented in this paper. An 8-bit grayscale image is divided into eight binary bit planes, and the binary wavelet transform (BWT), which is similar to the lifting scheme of the real wavelet transform (RWT), is performed on each bit plane to extract multi-resolution binary images. Local binary pattern (LBP) features are then extracted from the resultant BWT sub-bands. Three experiments were carried out to prove the effectiveness of the proposed algorithm: two on medical image retrieval and one on face retrieval. The databases considered for the three experiments are the OASIS magnetic resonance imaging (MRI) database, the NEMA computed tomography (CT) database and the PolyU-NIRFD face database. The results show a significant improvement in the respective evaluation measures as compared to LBP and LBP with the Gabor transform.

Keywords Directional binary wavelet patterns (DBWP) · Local binary patterns (LBP) · Image retrieval

Introduction

Motivation

With the growth of medical technology, there has been an expansion of biomedical images in hospitals and medical institutions. This huge volume of data comes in different formats such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US), X-ray, etc. Handling this data by human annotation is a cumbersome task, creating a dire need for a familiar search technique, i.e., content-based image retrieval (CBIR). It is very difficult for new physicians to identify the exact disease location in patient reports (images) as compared with more experienced physicians. This problem can be solved with a CBIR system: by giving a patient's report as the query, the physician can retrieve related patient reports that were previously collected and stored in the database along with descriptions of the disease. With the help of these reference reports, the physician can identify the exact disease in the present patient's report. Previously available CBIR systems for medical image retrieval are described in [1–4].

Feature extraction is a prominent step in CBIR, and its effectiveness relies heavily on the method of extracting features from raw images. Comprehensive and extensive literature surveys on CBIR are presented in [5–10].

Texture analysis has attracted much attention due to its potential value for computer vision and pattern recognition

S. Murala (*)
Instrumentation and Signal Processing Laboratory, Department of Electrical Engineering, Indian Institute of Technology Roorkee, Roorkee 247667, Uttarakhand, India
e-mail: [email protected]

R. P. Maheshwari
Department of Electrical Engineering, Indian Institute of Technology Roorkee, Roorkee 247667, Uttarakhand, India
e-mail: [email protected]

R. Balasubramanian
Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee 247667, Uttarakhand, India
e-mail: [email protected]


applications. Texture-based medical image retrieval is a branch of texture analysis particularly well suited to identifying disease regions and then retrieving related documents from the database, which makes it a star attraction from the medical perspective.

The bit-plane histogram and hierarchical bit-plane histogram along with the cumulative distribution function (CDF) are presented in [11] for CT and MRI image retrieval. Blood cell image retrieval using color histograms and the wavelet transform can be seen in [12]. Classification of benign and malignant breast masses based on shape and texture features in sonography images is proposed in [13]. The mass regions were extracted from the region of interest (ROI) sub-image by a hybrid segmentation approach based on level set algorithms. Then the two areas to the left and right of the masses were elicited. After that, six features (Eccentricity_feature, Solidity_feature, DeferenceArea_Hull_Rectangular, DeferenceArea_Mass_Rectangular, Cross-correlation-left and Cross-correlation-right) based on the shape, texture and region characteristics of the masses were extracted for further classification. Finally, a support vector machine (SVM) classifier was utilized to classify the breast masses. In [14] a boosting framework for visuality-preserving distance metric learning is proposed for medical image retrieval; mammographic images and a dataset from ImageCLEF are used for performance evaluation. Quellec et al. [15] proposed an optimized wavelet transform for medical image retrieval by adapting the wavelet basis within the lifting-scheme framework for wavelet decomposition, with weights assigned between wavelet sub-bands; they used diabetic retinopathy and mammographic databases. Wavelet-transform-based brain image retrieval is presented in [16]. Co-occurrence-matrix-based retrieval of medical CT and MRI images of different tissues can be seen in [17]. Further, image retrieval of different body parts is proposed in [18], which employs color quantization and the wavelet transform.

A concise review of the literature targeted at the development of our algorithm is presented here. At first, the binary wavelet transform (BWT) was proposed for binary image compression [19–22]. It was later extended to grayscale images by separating the multilevel grayscale image into a series of bi-level bit-plane images using bit-plane decomposition, and then performing the BWT on each bit plane [23, 24]. The BWT has several distinct advantages over the real wavelet transform (RWT): no quantization is introduced during the transform, and it is computationally efficient since only simple Boolean operations are involved. The most important feature of this binary field transform is the conservation of the alphabet size of the wavelet coefficients, which means the transformed images have the same

number of grayscale levels as the original images. Law and Siu proposed an in-place implementation of the BWT [25], which is similar to the lifting scheme of the real wavelet transform.

Local binary pattern (LBP) features have emerged as a silver lining in the field of texture retrieval. Ojala et al. proposed LBP [26], which was made rotation invariant for texture classification in [27]. Rotation-invariant texture classification using feature distributions is proposed in [28]. The combination of Gabor filters and LBP for texture segmentation [29] and rotation-invariant texture classification using LBP variance with global matching [30] have also been reported. Liao et al. proposed dominant local binary patterns (DLBP) for texture classification [31]. Guo et al. developed the completed LBP (CLBP) scheme for texture classification [32]. Recently, LBP has been used in the field of biomedical image retrieval and classification and has proved a great success. Peng et al. proposed texture feature extraction based on a uniformity estimation method for brightness and structure in chest CT images [33]; they used the extended rotation-invariant LBP and the gradient orientation difference to represent brightness and structure in the image. Unay et al. proposed local-structure-based region-of-interest retrieval in brain MR images [34]. Quantitative analysis of pulmonary emphysema using LBP is presented in [35]; the quantitative measures of emphysema in CT images of the lungs are improved by using joint LBP and intensity histograms.

Main contribution

The main contributions of this work are as follows:

1) Multi-resolution binary images are computed using the BWT on each bit plane.

2) The combination of BWT and LBP, called binary wavelet patterns (BWP), is proposed.

3) The performance of the proposed method is evaluated for biomedical image retrieval.

The effectiveness is proved by conducting three image retrieval experiments (two on medical databases and one on a face database) on different image databases.

The organization of the paper is as follows: in “Introduction”, a brief review of medical image retrieval and related work is given. A concise review of the BWT is presented in “Binary wavelet transform”. Local binary patterns and the proposed method (DBWP) are presented in “Local patterns”. Experimental results and discussions supporting the algorithm can be seen in “Experimental results and discussions”. Conclusions are drawn in “Conclusion”.


Binary wavelet transform

1-D binary wavelet transform (1-D BWT)

The binary wavelet transform (BWT) is applied to binary images in much the same way as the lifting wavelet transform is applied to a grayscale image.

Let x be a 1×N signal. The transformed BWT coefficient matrix T can be constructed as follows:

T = [C D]^T   (1)

where

C = [c|_{s=0}, c|_{s=2}, ......, c|_{s=N−2}]^T
D = [d|_{s=0}, d|_{s=2}, ......, d|_{s=N−2}]^T   (2)

a|_{s=k} defines a vector with elements formed from a circular shift of the sequence a by k, A^T is the transpose of A, and

c = {c_0, c_1, ......, c_{S−1}}^T
d = {d_0, d_1, ......, d_{S−1}}^T   (3)

S is the number of scales; c_i and d_i are the scaling (lowpass) and wavelet (highpass) coefficients respectively. The BWT is then defined as:

y = T x   (4)

In [25], the 32 binary filters of length 8 are classified into four groups depending on the number of “1”s in the binary filters. Examples of the binary filters in each group are given in Table 1.

In-place implementation of BWT

Law and Siu [25] proposed an implementation of the BWT similar to the lifting scheme of the real wavelet transform, using an in-place structure. To obtain an in-place implementation, the odd-numbered and even-numbered samples of the original signal are split into two sequences. These two sequences are then updated according to the filter coefficients of the lowpass and bandpass filters. This structure is similar to the “split, update and predict” procedure in the lifting implementation of the real-field wavelet transform. The lowpass output and the bandpass output are interleaved in the transformed output. When the BWT is implemented with the group 1 filter, the even-numbered samples yield the lowpass output, while the odd-numbered positions hold the bandpass output, calculated by applying the exclusive-or (XOR) operation between the even and odd samples of the input signal. The scheme is depicted in Fig. 1. Similar in-place structures can easily be extended to the other groups.
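As a concrete illustration, the split-and-XOR step described above can be sketched in a few lines of NumPy. This is a minimal sketch following the group 1 description in the text, not the authors' code; the even-samples-as-lowpass convention is taken from the paragraph above:

```python
import numpy as np

def bwt_1d_group1(x):
    """One level of the 1-D BWT with the group 1 filter pair as an
    in-place lifting step: even samples are kept as the lowpass output,
    and each bandpass sample is the XOR of an even/odd sample pair
    (all arithmetic is modulo 2)."""
    x = np.asarray(x, dtype=np.uint8) & 1
    low = x[0::2]                 # "split": even samples -> lowpass
    high = x[0::2] ^ x[1::2]      # "predict": XOR of each even/odd pair
    return low, high

def ibwt_1d_group1(low, high):
    """Inverse: XOR recovers the odd samples exactly, so the transform
    is lossless -- no quantization, as noted in the text."""
    x = np.empty(low.size + high.size, dtype=np.uint8)
    x[0::2] = low
    x[1::2] = low ^ high          # undo the XOR prediction
    return x
```

Because only Boolean operations are involved, the coefficients stay binary, which is the alphabet-conservation property mentioned earlier.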

2-D BWT

A separable 2-D binary wavelet transform can be computed efficiently in binary space by applying the associated 1-D filter bank to each row of the image, and then applying the filter bank to each column of the resultant coefficients. Figure 2 shows a one-level pyramidal wavelet decomposition of an image I = f(x, y) of size a × b pixels.

In the first level of decomposition, one lowpass sub-image (LL) and three orientation-selective highpass sub-images (LH, HL and HH) are created. In the second level, the lowpass sub-image is further decomposed into one lowpass (LL) and three highpass sub-images (LH, HL and HH). The process is repeated on the lowpass sub-image to form higher levels of wavelet decomposition. In other words, the BWT decomposes an image into a pyramid of sub-images with various resolutions corresponding to different scales. A three-stage decomposition creates three lowpass sub-images and nine highpass directional sub-images (three each in the horizontal (0°), vertical (90°) and diagonal (45°) directions). The lowpass sub-images are low-resolution versions of the original image at different scales. The horizontal, vertical and diagonal sub-images provide information about the brightness changes in the corresponding directions.
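Under the same group 1 assumption, the row-then-column procedure can be sketched as follows (a hedged sketch; the LL/LH/HL/HH naming follows the convention above):

```python
import numpy as np

def bwt_2d_group1(plane):
    """One pyramid level of the separable 2-D BWT on a binary plane:
    the 1-D group 1 step (keep even samples, XOR even/odd pairs) is
    applied to every row, then to every column of the result, giving
    the LL, LH, HL and HH sub-images at half resolution."""
    p = np.asarray(plane, dtype=np.uint8) & 1
    row_lo = p[:, 0::2]                 # rows: lowpass output
    row_hi = p[:, 0::2] ^ p[:, 1::2]    # rows: XOR highpass output

    def cols(b):                        # same step down the columns
        return b[0::2, :], b[0::2, :] ^ b[1::2, :]

    ll, lh = cols(row_lo)
    hl, hh = cols(row_hi)
    return ll, lh, hl, hh
```

Repeating the call on the returned LL sub-image produces the pyramid described above.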

Initially, the BWT was designed for the compression of binary images [19–22]. This concept was later extended to grayscale images by separating the image into binary bit planes and then applying the BWT to each individual bit plane, as shown in Fig. 2.

Table 1 Binary wavelet filter groups of length eight

Group   Lowpass filter              Highpass filter
1       {0, 1, 0, 0, 0, 0, 0, 0}    {1, 1, 0, 0, 0, 0, 0, 0}
2       {1, 1, 1, 0, 0, 0, 0, 0}    {1, 1, 0, 0, 0, 0, 0, 0}
3       {1, 1, 1, 1, 0, 0, 0, 1}    {1, 1, 0, 0, 0, 0, 0, 0}
4       {1, 1, 1, 1, 1, 1, 1, 0}    {1, 1, 0, 0, 0, 0, 0, 0}

Fig. 1 In-place implementation of BWT for the group 1 filter


Local patterns

Local binary patterns (LBP)

The LBP operator was introduced by Ojala et al. [26] for texture classification. Success in terms of speed (there is no need to tune any parameters) and performance has been reported in many research areas such as texture classification [26–32], face recognition [36, 37], object tracking [38], biomedical image retrieval [33–35] and fingerprint recognition [39]. Given a center pixel in a 3×3 pattern, the LBP value is computed by comparing its grayscale value with those of its neighbors, based on Eqs. 5 and 6:

LBP_{P,R} = Σ_{i=1}^{P} 2^{(i−1)} × f(I(g_i) − I(g_c))   (5)

f(x) = 1 if x ≥ 0; 0 otherwise   (6)

where I(g_c) denotes the gray value of the center pixel, I(g_i) the gray value of its neighbors, P the number of neighbors and R the radius of the neighborhood.

After computing the LBP pattern for each pixel (j, k), the whole image is represented by building a histogram as shown in Eq. 7:

H_LBP(l) = Σ_{j=1}^{N1} Σ_{k=1}^{N2} f(LBP(j, k), l),   l ∈ [0, P(P − 1) + 3]   (7)

f(x, y) = 1 if x = y; 0 otherwise   (8)

where the size of the input image is N1×N2. Figure 3 shows an example of obtaining an LBP code from a given 3×3 pattern. The histograms of these patterns contain information on the distribution of edges in an image.
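Equations 5 and 6 can be sketched directly. This is a minimal sketch for P = 8, R = 1; the clockwise neighbor ordering is this sketch's assumption, since the paper does not fix one:

```python
import numpy as np

def lbp_3x3(patch):
    """LBP code of the centre pixel of a 3x3 patch (Eqs. 5 and 6):
    each neighbour whose gray value is >= the centre contributes a
    bit, weighted by increasing powers of two."""
    patch = np.asarray(patch)
    c = patch[1, 1]
    # clockwise walk around the centre, starting at the top-left
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, col] >= c else 0 for (r, col) in order]
    return sum(b << i for i, b in enumerate(bits))
```

A centre darker than all its neighbours yields code 255; a centre brighter than all of them yields 0.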

Directional binary wavelet patterns (DBWP)

Fig. 2 2D separable BWT implementation

The proposed DBWP encodes the directional edge information in a neighborhood with the help of the BWT. Given an 8-bit grayscale image I, we separate it into eight binary bit planes as follows:

I = Σ_{i=1}^{8} 2^{(i−1)} × I_i   (9)

where I_i is the i-th bit plane of image I. The BWT is performed on each bit plane to extract the multi-resolution edge information in the horizontal (0°), vertical (90°) and diagonal (±45°) directions.
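The bit-plane split of Eq. 9 amounts to a shift-and-mask per bit (a small sketch; `bit_planes` is this sketch's own name):

```python
import numpy as np

def bit_planes(image):
    """Split an 8-bit grayscale image into its eight binary bit planes
    (Eq. 9): plane i holds bit (i-1) of every pixel, so the image is
    recovered exactly as sum(2**(i-1) * I_i)."""
    img = np.asarray(image, dtype=np.uint8)
    return [(img >> i) & 1 for i in range(8)]   # planes I_1 .. I_8
```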

[W^i_{low,S+1}, W^i_{high(0°),S+1}, W^i_{high(90°),S+1}, W^i_{high(±45°),S+1}] = BWT(W^i_{low,S}),   i = 1, 2, ...., 8   (10)

W^i_{low,S} = I_i if S = 1; W^i_{low,S} otherwise   (11)

where the function [o] = BWT(x) denotes the output ‘o’ of the BWT operation on input ‘x’, S is the number of scales, and W denotes the lowpass and highpass outputs of the BWT operation.

Fig. 3 Example of obtaining LBP for the 3×3 pattern

Fig. 4 Example of obtaining bit planes, BWT sub-bands and DBWP coding for the given two images (a) and (b)


Given a center pixel in the 3×3 pattern, the DBWP value is computed by collecting its P neighbors based on Eq. 12:

DBWP^i_{P,R,S} = Σ_{p=1}^{P} 2^{(p−1)} × W^i_S(g_p)   (12)

where W^i_S(g_p) denotes the binary value of its neighbors, P stands for the number of neighbors and R for the radius of the neighborhood.

After computing the DBWP pattern for each pixel in W^i_S, the whole sub-band is represented by a histogram using Eq. 7. Finally, these histograms (8×3×4) are calculated from the three-scale BWT on the eight bit planes and concatenated to construct the final feature vector.

A local pattern with P neighbors results in 2^P combinations of local binary patterns, so the feature vector length is 2^P and its computational cost is very high. To overcome this, uniform patterns are considered. A uniform pattern has a limited number of discontinuities in its circular binary representation. In this paper, a pattern with at most two discontinuities in its circular binary representation is considered uniform, and the remaining patterns are treated as non-uniform.
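The uniformity test above amounts to counting 0/1 transitions around the circular code (a small sketch):

```python
def is_uniform(code, P=8):
    """True when the P-bit pattern's circular binary representation
    has at most two 0/1 transitions, per the definition above."""
    bits = [(code >> i) & 1 for i in range(P)]
    transitions = sum(bits[i] != bits[(i + 1) % P] for i in range(P))
    return transitions <= 2
```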

Algorithm
Input: 8-bit grayscale image; Output: feature vector

1. Load the 8-bit grayscale image.
2. Separate the eight bit planes from the grayscale image.
3. Perform the three-scale BWT on each bit plane.
4. Construct the DBWP on each sub-band.
5. Construct the histograms.
6. Concatenate all histograms to construct the final feature vector.
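The six steps above can be sketched end to end as follows. This is a compact, unoptimized sketch under stated assumptions: the group 1 BWT filter, P = 8 neighbours packed per Eq. 12 with zero padding at the borders, full 256-bin histograms rather than uniform-pattern bins, and image sides divisible by 2**scales; all names are this sketch's own.

```python
import numpy as np

def dbwp_features(image, scales=3):
    """DBWP-style feature vector: split into 8 bit planes, run a
    multi-scale group-1 BWT on each, code every sub-band by packing
    its 8 binary neighbours into a byte (Eq. 12), and concatenate
    the per-sub-band histograms (8 planes x scales x 4 sub-bands)."""
    img = np.asarray(image, dtype=np.uint8)
    feats = []
    for i in range(8):                       # eight bit planes (Eq. 9)
        low = (img >> i) & 1
        for _ in range(scales):
            # one separable BWT level: rows then columns, XOR lifting
            rl, rh = low[:, 0::2], low[:, 0::2] ^ low[:, 1::2]
            subs = [rl[0::2], rl[0::2] ^ rl[1::2],     # LL, LH
                    rh[0::2], rh[0::2] ^ rh[1::2]]     # HL, HH
            for sb in subs:
                padded = np.pad(sb, 1)       # zero border for neighbours
                codes = np.zeros(sb.shape, dtype=np.int32)
                shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
                          (2, 2), (2, 1), (2, 0), (1, 0)]
                h, w = sb.shape
                for b, (dr, dc) in enumerate(shifts):
                    codes |= padded[dr:dr + h, dc:dc + w].astype(np.int32) << b
                feats.append(np.bincount(codes.ravel(), minlength=256))
            low = subs[0]                    # recurse on the LL sub-band
    return np.concatenate(feats)
```

With three scales this yields 8 × 3 × 4 histograms, matching the histogram count stated above.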

The proposed DBWP differs from the well-known LBP [26]. DBWP captures the multi-resolution edges between pairs of neighboring pixels in a local region along three directions (horizontal, vertical and diagonal) using the BWT, while LBP considers only the relationship between a given pixel and its surrounding neighbors. Therefore, DBWP captures more edge information than LBP. Figure 4 illustrates an example of obtaining the bit planes, BWT sub-bands and DBWP coding for two different MR images selected from the MR image database. Due to space limitations, only four bit planes, a one-level BWT and their DBWP coding are displayed.

To compare the performance of the proposed DBWP with the well-known LBP, we calculated the DBWP and LBP for the two sample images of Fig. 4(a) and (b). These two images were selected from different groups of the MR image database [40]. The calculated features are shown graphically in Fig. 5. Figure 5(a)–(d) illustrate the features extracted by DBWP on the LL, LH, HL and HH sub-bands respectively, and show that the features extracted from sample image (a) differ to a good extent from those of sample image (b), so the two sample images can be differentiated very easily. However, the LBP features extracted from the same sample images are close to each other, making the two images very difficult to differentiate. The experimental results demonstrate that the proposed DBWP performs better than LBP, indicating that it can capture more edge information for texture extraction.

Proposed system framework

Figure 6 shows the flowchart of the proposed image retrieval system; the corresponding algorithm is given below:

Algorithm
Input: image; Output: retrieval result

1. Load the grayscale image.
2. Separate the eight bit planes from the grayscale image.
3. Perform the BWT on each bit plane.
4. Construct the DBWP histograms for all BWT sub-bands.

Fig. 6 Proposed retrieval system framework for image retrieval

Fig. 5 The features of the sample images are calculated and compared with each other: (a) DBWP of LL sub-band, (b) DBWP of LH sub-band, (c) DBWP of HL sub-band, (d) DBWP of HH sub-band and (e) LBP


5. Construct the feature vector by concatenating all histograms.
6. Compare the query image with the images in the database using Eq. 13.
7. Retrieve the images based on the best matches.

Query matching

The feature vector for the query image Q, obtained after feature extraction, is represented as f_Q = {f_{Q1}, f_{Q2}, ......, f_{QLg}}. Similarly, each image in the database is represented by a feature vector f_{DBi} = {f_{DBi1}, f_{DBi2}, ......, f_{DBiLg}}, i = 1, 2, ......, |DB|. The goal is to select the n best images that resemble the query image. This involves selecting the n top matched images by measuring the distance between the query image and the images in the database |DB|. To match the images we use the d1 similarity distance metric computed by Eq. 13:

D(Q, DB) = Σ_{i=1}^{Lg} | (f_{DBj,i} − f_{Q,i}) / (1 + f_{DBj,i} + f_{Q,i}) |   (13)

where f_{DBj,i} is the i-th feature of the j-th image in the database |DB|.
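Equation 13 and the ranking step can be sketched as follows (a sketch; `retrieve` is this sketch's own helper):

```python
import numpy as np

def d1_distance(f_query, f_db):
    """The d1 measure of Eq. 13: absolute differences normalised
    term by term by 1 + the sum of the two features, so that
    large-valued histogram bins do not dominate the sum."""
    fq = np.asarray(f_query, dtype=float)
    fd = np.asarray(f_db, dtype=float)
    return float(np.sum(np.abs((fd - fq) / (1.0 + fd + fq))))

def retrieve(f_query, db_feats, n):
    """Indices of the n database images closest to the query."""
    d = [d1_distance(f_query, f) for f in db_feats]
    return np.argsort(d)[:n]
```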

Experimental results and discussions

To analyze the performance of the proposed method for image retrieval, three experiments were conducted on three different image databases. The results obtained are discussed in the following subsections.

The abbreviations for extracted features are given below:

LBP      Well-known LBP features
GLBP     LBP with Gabor transform
DBWP     Directional binary wavelet patterns
LBP_P_R  LBP features collected from the pattern of size (P, R); the same representation applies to all methods

In all experiments, each image in the database is used as the query image. For each query, the system collects the n database images X = (x1, x2, ......, xn) with the shortest image-matching distance given by Eq. 13. If xi, i = 1, 2, ...., n belong to the same category as the query image, we say the system has correctly matched the desired images.

The average precision is used to judge the performance of the proposed method. For the query image I_q, the precision and recall are defined as follows:

Precision(I_q, n) = (1/n) Σ_{i=1}^{|DB|} δ(Φ(I_i), Φ(I_q)) · 1[Rank(I_i, I_q) ≤ n]   (14)

Recall(I_q) = Precision(I_q, N_A) |_{N_A = number of relevant images in the database}   (15)

where ‘n’ indicates the number of retrieved images, |DB| is the size of the image database, Φ(x) is the category of ‘x’, Rank(I_i, I_q) returns the rank of image I_i (for the query image I_q) among all images of |DB|, and δ(Φ(I_i), Φ(I_q)) = 1 if Φ(I_i) = Φ(I_q); 0 otherwise.
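For one query, Eqs. 14 and 15 reduce to counting same-category images at the top of the ranked list (a sketch with this sketch's own argument names):

```python
def precision_recall(ranked, labels, query_label, n):
    """Eqs. 14-15 for a single query: precision is the fraction of
    the top-n ranked images sharing the query's category; recall is
    precision evaluated at N_A, the number of relevant images in
    the whole database."""
    n_a = sum(1 for lab in labels if lab == query_label)

    def hits(k):
        return sum(1 for i in ranked[:k] if labels[i] == query_label)

    return hits(n) / n, hits(n_a) / n_a
```

Averaging the first value over all queries gives the ARP figures reported below, and the second the ARR figures.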

Experiment #1

The Open Access Series of Imaging Studies (OASIS) [40] is a magnetic resonance imaging (MRI) dataset

Fig. 7 Sample images from OASIS database (one image per category)

Table 2 MRI data acquisition details [40]

Sequence               MP-RAGE
TR (msec)              9.7
TE (msec)              4.0
Flip angle (°)         10
TI (msec)              20
TD (msec)              200
Orientation            Sagittal
Thickness, gap (mm)    1.25, 0
Resolution (pixels)    176×208


Table 3 Results of all techniques in terms of precision on the OASIS database (n: number of top matches considered)

Precision (%) (n=10)

Method      Group 1  Group 2  Group 3  Group 4  Total
LBP_8_1     51.77    32.54    33.82    49.06    42.63
LBP_16_2    52.58    38.43    31.68    51.13    44.37
LBP_24_3    45.88    42.64    33.70    49.53    43.44
GLBP_8_1    54.43    37.94    26.51    46.03    42.42
GLBP_16_2   61.12    41.17    29.43    48.11    46.31
GLBP_24_3   72.01    31.37    32.36    47.83    47.69
DBWP_8_1    52.74    37.74    34.38    60.00    47.05
DBWP_16_2   57.74    34.70    30.78    66.69    48.71
DBWP_24_3   52.98    37.15    37.42    71.79    50.59

Fig. 8 Comparison of the proposed method (DBWP) with other existing methods as a function of the number of top matches considered on: (a)–(c) OASIS database, (d)–(f) NEMA-CT database


that is publicly available for study and analysis. The dataset consists of a cross-sectional collection of 421 subjects aged 18 to 96 years. The MRI acquisition details are given in Table 2.

For image retrieval purposes we grouped these 421 images into four categories (124, 102, 89, and 106 images) based on the ventricular shape in the images. Figure 7 depicts sample images of the OASIS database (one image

Fig. 9 Retrieval results of proposed method: (a) DBWP_8_1, (b) DBWP_16_2 and (c) DBWP_24_3


from each category). The performance of the proposed method (DBWP_P_R) proves its worth over the other existing methods, viz. LBP_P_R and GLBP_P_R, on the OASIS database.

From experiment #1, the following inferences are drawn about the performance of the proposed method versus the other methods in terms of average retrieval precision (ARP) at n = 10:

1. The ARP of DBWP_8_1 (47.05%) is higher than that of LBP_8_1 (42.63%) and GLBP_8_1 (42.42%).

2. The ARP of DBWP_16_2 (48.71%) is higher than that of LBP_16_2 (44.37%) and GLBP_16_2 (46.31%).

3. The ARP of DBWP_24_3 (50.59%) is higher than that of LBP_24_3 (43.44%) and GLBP_24_3 (47.69%).

Figure 8(a)–(c) shows graphs depicting the retrieval performance of the proposed method and the other existing methods as a function of the number of top matches. From Table 3, Fig. 8 and the above observations, it is evident that the proposed method outperforms the other existing methods. Figure 9 illustrates three retrieval results of the proposed method considering the five top matches. In Fig. 9(a) the query image is selected from the fourth group, and the results show that the first four images (relevant) are

Table 4 Data acquisition details of the NEMA-CT image database

Class No.  Data    No. of slices  Resolution (pixels)  In-plane resolution  Slice thickness  Tube voltage (kV)  Tube current (mA)

1 CT0057 104 512×512 0.187500 1.00 130 30

2 CT0060 75 512×512 0.312500 0.50 130 30

3 CT0082 59 512×512 0.742188 5.00 130 30

4 CT0080 253 512×512 0.820312 1.25 130 30

5 CT0001 54 512×512 0.597656 3.00 130 30

6 CT0003 364 512×512 0.625000 0.625 130 30

7 CT0020 555 512×512 0.488281 0.625 130 30

8 CT0083 69 512×512 0.703125 15.80 130 30

Fig. 10 Sample images from NEMA database (one image per category)


retrieved properly from the same group as the query, but the fifth image (irrelevant) is retrieved from the second group.

Experiment #2

The Digital Imaging and Communications in Medicine (DICOM) standard was created by the National Electrical Manufacturers Association (NEMA) (ftp://medical.nema.org/medical/Dicom/Multiframe/) to aid the distribution and viewing of medical images, such as computed tomography (CT) scans, MRIs, and ultrasound. For this experiment, we collected 681 CT scans of different parts of the human body and grouped them into 13 categories (45, 59, 46, 29, 36, 18, 37, 14, 139, 46, 143, 33, and 36 images). The CT scan data acquisition details are given in Table 4.

Figure 10 depicts sample images of the NEMA database (one image from each category).

The retrieval performance of the proposed method (DBWP) and the other existing methods (LBP and GLBP) as a function of the number of top matches is given in Fig. 8(d)–(f). In this experiment GLBP shows performance similar to the proposed method, because the Gabor transform also extracts good directional information from this database. However, the computational complexity of the Gabor transform is very high as compared to the proposed method (see “Computational complexity”), which is an important consideration for online applications. From Fig. 8(d)–(f), it is concluded that the proposed DBWP outperforms the other existing methods.

Fig. 11 Sample images from PolyU-NIRFD database (one image per category)


Experiment #3

In experiment #3, we set up a subset of the PolyU-NIRFD database (http://www4.comp.polyu.edu.hk/~biometrics/polyudb_face.htm). This subset consists of 2000 face images: 100 photographs of each of 20 distinct subjects. For some subjects, the images were taken at different times, with different lighting, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). All images were taken against a dark homogeneous background with the subjects in an approximately frontal position. From these 2000 images we cropped the face portions for experimentation. Figure 11 shows 20 sample face images, one from each subject. The retrieval results of the nine methods are illustrated in Fig. 12 as a function of the number of top matches considered (n = 10, 20, ....., 100), and the following points compare the performance of the proposed method with the other methods in terms of average retrieval precision (ARP) at n = 10 and average retrieval recall (ARR) at n = 100.

1. DBWP_8_1 (84.99%) shows better performance (by 16% and 9%) than LBP_8_1 (68.53%) and GLBP_8_1 (75.86%) respectively in terms of ARP.

2. DBWP_8_1 (44.04%) shows better performance (by 13.25% and 12.3%) than LBP_8_1 (30.79%) and GLBP_8_1 (31.74%) respectively in terms of ARR.

3. The ARP of DBWP_16_2 (89.02%) is 9% and 4.5% higher than that of LBP_16_2 (79.86%) and GLBP_16_2 (84.48%) respectively.

4. The ARR of DBWP_16_2 (46.27%) is 8.9% and 6.5% higher than that of LBP_16_2 (37.28%) and GLBP_16_2 (39.76%) respectively.

5. DBWP_24_3 (91.64%) outperforms LBP_24_3 (84.15%) and GLBP_24_3 (89.25%) in terms of ARP.

6. DBWP_24_3 (47.07%) outperforms LBP_24_3 (40.54%) and GLBP_24_3 (44.93%) in terms of ARR.

From Fig. 12 and the above observations, it is evident that the proposed method outperforms the other existing methods. This is because DBWP can capture more directional edge information with the help of the BWT, while LBP only considers the relationship between a given pixel and its surrounding neighbors. DBWP_24_3 shows better performance than DBWP_16_2 and DBWP_8_1, as shown in Fig. 12; from this it is clear that DBWP_24_3 extracts more edges than DBWP_16_2 and DBWP_8_1.

Computational complexity

For a given query image I of size N1×N2, the output of the Gabor wavelet transform with M scales and N directions is M×N sub-bands, whereas the BWT with M scales gives eight sub-bands of size N1×N2. Hence the computational complexity of GLBP is M×N times that of LBP, while the DBWP calculation is eight times that of LBP. From this we can observe that the computational complexity of the proposed method stays the same regardless of the number of BWT decomposition scales, while that of GLBP depends on the number of scales (M) and directions (N). Therefore, the computational complexity of GLBP is (M×N)/8 times that of the proposed method.

Fig. 12 Comparison of the proposed method (DBWP) with the other existing methods as a function of the number of top matches considered on the PolyU-NIRFD database


The experiments were carried out on a Core 2 Duo computer at 2.66 GHz, and all methods were implemented in MATLAB 7.6. The CPU time for feature extraction from a 256×256 image is 0.19 s for the proposed method, while GLBP takes 0.97 s for the same image. From this we can observe that the proposed method is five times faster than GLBP, which is a very important requirement for online retrieval applications.

Conclusions

A novel method employing the DBWP operator is proposed for texture-based biomedical image retrieval. DBWP extracts information from images using edges calculated by applying the BWT to each bit plane of the grayscale image. The features are then extracted by performing the LBP operation on each BWT sub-band. The effectiveness of the proposed method was tested in three sets of experiments, two on medical image retrieval and one on face retrieval, on different image databases, and the method significantly improves performance in terms of the respective evaluation measures.

Acknowledgments This work was supported by the Ministry of Human Resource Development, India, under grant MHR-02-23-200(429). The authors would like to thank the anonymous reviewers for insightful comments and helpful suggestions, which have been incorporated to improve the quality of this manuscript.
