
Multispectral image enhancement for effective visualization



Noriaki Hashimoto,1,* Yuri Murakami,2 Pinky A. Bautista,3 Masahiro Yamaguchi,2 Takashi Obi,1 Nagaaki Ohyama,2 Kuniaki Uto,1 and Yukio Kosugi1

1 Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama 226-8502, Japan

2 Imaging Science and Engineering Laboratory, Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama 226-8503, Japan

3 Department of Pathology, Massachusetts General Hospital, Harvard Medical School, 101 Merrimac Street, Suite 820, Boston, Massachusetts 02114, USA

[email protected]

Abstract: Color enhancement of multispectral images is useful for visualizing an image's spectral features. Previously, a color enhancement method was proposed that enhances the feature of a specified spectral band without changing the average color distribution. However, the enhanced features are sometimes indiscernible or invisible, especially when the enhanced spectrum lies outside the visible range. In this paper, we extend the conventional method for more effective visualization of spectral features in both the visible and the non-visible range. In the proposed method, the user specifies the spectral band from which the spectral feature is extracted and the color used for visualization independently, so that the spectral feature is enhanced with an arbitrary color. The proposed color enhancement method was applied to different types of multispectral images, and its effectiveness in visualizing spectral features was verified.

© 2011 Optical Society of America

OCIS codes: (100.2000) Digital image processing; (100.2980) Image enhancement; (110.4234) Multispectral and hyperspectral imaging.

References and links

1. Z. Lee, K. L. Carder, C. D. Mobley, R. G. Steward, and J. S. Patch, “Hyperspectral remote sensing for shallow waters. I. A semianalytical model,” Appl. Opt. 37, 6329–6338 (1998).
2. J. A. Gualtieri and R. F. Cromp, “Support vector machines for hyperspectral remote sensing classification,” Proc. SPIE 3584, 221–232 (1999).
3. B.-C. Gao, M. J. Montes, Z. Ahmad, and C. O. Davis, “Atmospheric correction algorithm for hyperspectral remote sensing of ocean color from space,” Appl. Opt. 39, 887–896 (2000).
4. M. Yamaguchi, T. Teraji, K. Ohsawa, T. Uchiyama, H. Motomura, Y. Murakami, and N. Ohyama, “Color image reproduction based on the multispectral and multiprimary imaging: Experimental evaluation,” Proc. SPIE 4663, 15–26 (2002).
5. J. Y. Hardeberg, F. Schmitt, and H. Brettel, “Multispectral color image capture using a liquid crystal tunable filter,” Opt. Eng. 41, 2532–2548 (2002).
6. A. R. Gillespie, A. B. Kahle, and R. E. Walker, “Color enhancement of highly correlated images. I. Decorrelation and HSI contrast stretches,” Remote Sens. Environ. 20, 209–235 (1986).
7. J. Ward, V. Magnotta, N. C. Andreasen, W. Ooteman, P. Nopoulos, and R. Pierson, “Color enhancement of multispectral MR images: Improving the visualization of subcortical structures,” J. Comput. Assist. Tomogr. 25, 942–949 (2001).
8. M. Mitsui, Y. Murakami, T. Obi, M. Yamaguchi, and N. Ohyama, “Color enhancement in multispectral image using the Karhunen-Loeve transform,” Opt. Rev. 12, 69–75 (2005).
9. M. Yamaguchi, M. Mitsui, Y. Murakami, H. Fukuda, N. Ohyama, and Y. Kubota, “Multispectral color imaging for dermatology: application in inflammatory and immunologic diseases,” in Proceedings of the 13th Color Imaging Conference (Society for Imaging Science and Technology/Society for Information Display, 2005), pp. 52–58.
10. Y. Murakami, T. Obi, M. Yamaguchi, N. Ohyama, and Y. Komiya, “Spectral reflectance estimation from multi-band image using color chart,” Opt. Commun. 188, 47–54 (2001).
11. P. A. Bautista, T. Abe, M. Yamaguchi, and N. Ohyama, “Multispectral image enhancement for H&E stained pathological tissue specimens,” Proc. SPIE 6918, 691836 (2008).
12. N. Kosaka, K. Uto, and Y. Kosugi, “ICA-aided mixed-pixel analysis of hyperspectral data in agricultural land,” IEEE Trans. Geosci. Remote Sens. 2, 220–224 (2005).
13. D. Scribner, P. Warren, J. Schuler, M. Satyshur, and M. Kruer, “Infrared color vision: an approach to sensor fusion,” Opt. Photon. News 9, 27–32 (1998).
14. D. Scribner, P. Warren, and J. Schuler, “Extending color vision methods to bands beyond the visible,” Machine Vision Appl. 11, 306–312 (2000).
15. M. Vilaseca, J. Pujol, M. Arjona, and F. M. Martínez-Verdú, “Color visualization system for near-infrared multispectral images,” J. Imaging Sci. Technol. 49, 246–255 (2005).
16. N. P. Jacobson and M. R. Gupta, “Design goals and solutions for display of hyperspectral images,” IEEE Trans. Geosci. Remote Sens. 43, 2684–2692 (2005).
17. P. A. Bautista, T. Abe, M. Yamaguchi, Y. Yagi, and N. Ohyama, “Digital staining for multispectral images of pathological tissue specimens based on combined classification of spectral transmittance,” Comput. Med. Imaging Graph. 29, 649–657 (2005).
18. S. Itano, T. Akiyama, H. Ishida, T. Okubo, and N. Watanabe, “Spectral characteristics of aboveground biomass, plant coverage, and plant height in Italian ryegrass (Lolium multiflorum L.) meadows,” Grassland Sci. 46, 1–9 (2000).

1. Introduction

Multispectral imaging uses more than three spectral filters to capture images that include spectral information useful for remote sensing [1–3], color reproduction [4,5], image analysis [6–9], and so on. High-fidelity color reproduction [4,5], which is difficult to accomplish with conventional RGB systems due to the limited information contained in RGB images, is made possible by using multispectral images of the visible spectral range. Moreover, spectral color features that are invisible to the human eye can also be captured and employed for object detection, recognition, or quantification. Color enhancement is an effective tool for exploring the spectral features contained in multispectral images. For example, Gillespie et al. [6], Ward et al. [7], and others proposed color enhancement methods for multispectral images. In most cases, the enhancement results are pseudo-color images in which the natural colors of the objects are not preserved. However, the natural color of the objects is also important for interpreting the spectral features when the multispectral image includes the visible spectral range.

Mitsui et al. [8,9] proposed a multispectral color enhancement method in which the enhanced results are overlaid on the original natural-color images. In this method, the differences between the original multispectral image and its approximation by a few principal components are amplified at specified spectral bands. The indiscernible spectral feature in the multispectral image is thus visualized without changing the average color distribution. However, the enhanced feature sometimes cannot be observed, especially when the specified spectral band is not visually significant, for example, in the near ultraviolet or infrared. Also, when an image has a large number of spectral bands, the enhanced results are not clear.

In this paper, we extend the conventional method [8] by modifying the visualization algorithm to effectively visualize enhanced spectral features of a multispectral image that could not be visualized well by the conventional method. In the proposed method, the user can specify the spectral band from which to extract the spectral feature and the color for visualization independently, so that the desired spectral feature is enhanced with the specified color. This allows the enhanced spectral features to be visualized clearly even if the feature lies in the invisible range or the image has a large number of spectral bands, such as hyperspectral images. For this purpose, we present three methods to determine the color for visualization.


Fig. 1. The flow of color enhancement: the original N-band multispectral image is estimated with m KL vectors, the estimate is subtracted from the original image, the difference is multiplied by the weighting factor matrix, and the result is added to the original image to yield the enhanced output image.

In the experiment, we applied the proposed methods to various types of multispectral images, namely a skin image, a microscopic image, and a rice paddy image, and verified that the proposed method can effectively enhance indiscernible spectral features in multispectral images.

2. Method

2.1. Multispectral color enhancement

The color enhancement presented in this paper is mainly based on the method proposed by Mitsui et al. [8]. This method enhances the color difference from the dominant Karhunen-Loeve (KL) component without changing the color determined by that component. The color enhancement procedure is shown in Fig. 1. First, a set of spectral data is extracted from the image in order to derive the dominant component. The data can be extracted from the entire image or from part of the image (e.g., a region of non-interest), depending on the requirements of the application. Then, a covariance matrix is derived from the extracted spectral samples to calculate the KL basis vectors. The first few KL vectors are used to estimate the dominant component of the image.

In an N-band multispectral image, the enhanced signal value vector of the j-th pixel, g_{e_j} (an N-dimensional vector), is represented as

\[ g_{e_j} = W (g_j - s_j) + g_j, \tag{1} \]

where W is an N × N matrix for the enhancement, g_j is the original multispectral signal value of the j-th pixel, and s_j is the signal value estimated with the dominant KL vectors, written as

\[ s_j = \sum_{i=1}^{m} \alpha_{ij}\, u_i + \bar{g}, \tag{2} \]

where m is the number of basis vectors used in the estimation (m < N), u_i is the i-th KL basis vector (an N-dimensional vector), and ḡ is the average vector of the set of pixel data used to derive the KL basis vectors; g_j − s_j is considered to be a residual component. Furthermore, α_{ij} is the i-th KL coefficient of the j-th pixel, expressed as

\[ \alpha_{ij} = u_i^{T} (g_j - \bar{g}). \tag{3} \]

The matrix W determines the result of the enhancement. In Ref. [8], the element in the p-th row and q-th column, [W]_{pq}, is given by

\[ [W]_{pq} = \begin{cases} k & p = q = n \\ 0 & \text{otherwise,} \end{cases} \tag{4} \]


where n is the index of the enhanced band and k is a coefficient that amplifies the residual component. The amplified residual in the n-th band is added to the original signal value in the n-th band according to Eq. (1). In addition, from the relationship of Eqs. (1), (2), and (3) we have

\[ g_{e_j} = [W(E - UU^{T}) + E]\, g_j - W(E - UU^{T})\, \bar{g}, \tag{5} \]

where E is an N × N identity matrix and U is an N × N matrix whose columns are the KL basis vectors, with its q-th column expressed as

\[ [U]_q = \begin{cases} u_q & q \le m \\ 0 & \text{otherwise.} \end{cases} \tag{6} \]

In Eq. (5), the second term on the right-hand side is a constant vector. Thus, the spectral enhancement is easily computed with matrix multiplications and additions.
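As a concrete illustration, the following NumPy sketch derives the KL basis from a set of spectral samples and applies Eq. (5) to a whole image. The function and array names are our own, and the sub-sampling shown in the usage comments is only an example; this is a minimal sketch of the procedure summarized above, not the authors' implementation.

```python
import numpy as np

def kl_basis(samples, m):
    """Mean vector and first m KL (principal component) basis vectors
    from spectral samples of shape (num_samples, N)."""
    g_bar = samples.mean(axis=0)                     # average vector (N,)
    cov = np.cov(samples, rowvar=False)              # N x N covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]                # strongest components first
    N = samples.shape[1]
    U = np.zeros((N, N))
    U[:, :m] = eigvecs[:, order[:m]]                 # Eq. (6): columns beyond m stay zero
    return g_bar, U

def conventional_W(num_bands, n, k):
    """Eq. (4): amplify only the residual of band n (0-based here) by the factor k."""
    W = np.zeros((num_bands, num_bands))
    W[n, n] = k
    return W

def enhance(image, g_bar, U, W):
    """Eq. (5): g_e = [W(E - U U^T) + E] g - W(E - U U^T) g_bar,
    applied to every pixel of an (H, W_px, N) cube."""
    N = image.shape[-1]
    P = W @ (np.eye(N) - U @ U.T)                    # residual-amplifying operator
    flat = image.reshape(-1, N)
    enhanced = flat @ (P + np.eye(N)).T - P @ g_bar  # constant term broadcasts over pixels
    return enhanced.reshape(image.shape)

# Usage sketch with hypothetical data:
# image = np.load("multispectral_cube.npy")         # (H, W, 16) signal values
# samples = image.reshape(-1, 16)[::50]             # sub-sampled spectra for the KL basis
# g_bar, U = kl_basis(samples, m=3)
# ge = enhance(image, g_bar, U, conventional_W(16, n=10, k=30))
```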

The enhanced multispectral image g_{e_j} is transformed into spectral reflectance or transmittance by a spectral estimation technique [10], and the color image is generated using a color-matching function (CMF) such as the CIE 1931 XYZ CMF, an illumination spectrum, and a matrix for the XYZ-to-RGB transform.
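This color rendering step might look as follows in outline. The CMF, illuminant, and reflectance-estimation matrix are placeholders that would come from CIE tables and the camera characterization of Ref. [10]; the XYZ-to-sRGB matrix assumes a D65 display white point.

```python
import numpy as np

def render_srgb(reflectance, cmf, illuminant):
    """sRGB rendering of per-pixel spectral reflectance (H, W, L) under a given
    illuminant (L,) using color-matching functions cmf (L, 3)."""
    weights = cmf * illuminant[:, None]              # (L, 3) integrand samples
    weights = weights / weights[:, 1].sum()          # normalize so a perfect white has Y = 1
    xyz = reflectance @ weights                      # (H, W, 3) tristimulus values
    m = np.array([[ 3.2406, -1.5372, -0.4986],       # XYZ -> linear sRGB (D65 white)
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb = np.clip(xyz @ m.T, 0.0, 1.0)
    return np.where(rgb <= 0.0031308,                # sRGB gamma encoding
                    12.92 * rgb,
                    1.055 * rgb ** (1 / 2.4) - 0.055)

# Usage sketch: reflectance estimated from the enhanced signals g_e, e.g. with a
# calibration-derived estimation matrix R of shape (L, N) [10]:
# reflectance = ge @ R.T
# rgb = render_srgb(reflectance, cie_1931_cmf, illuminant_spectrum)
```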

2.2. Modification of weighting factor matrix

In order to overcome the limitation of the conventional method, we extend the definition of the matrix W in Eq. (4) [11] such that the band from which the spectral feature is extracted and the color for visualization can be specified independently. In this paper, the modified matrix W is called the weighting factor matrix, and its q-th column vector is designed as follows:

\[ [W]_q = \begin{cases} k\,(g_d - g_a) & q = n \\ 0 & \text{otherwise,} \end{cases} \tag{7} \]

where g_d is the spectral data of the target color to be visualized and g_a is the spectral data of the background of the image. According to Eqs. (1) and (7), the spectrum (g_d − g_a), amplified by the residual component at each pixel, is added to the original signal value g_j. Setting a proper coefficient k allows the color of the enhanced region to shift toward the target color determined by g_d [Eq. (1)]. The spectral data of the background color, g_a, can be the average spectral data of the entire image.
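A minimal sketch of this modified matrix, reusing the hypothetical enhance() routine from the sketch in Section 2.1; g_d and g_a are whatever target and background spectra the user supplies.

```python
import numpy as np

def weighting_factor_matrix(num_bands, n, k, g_d, g_a):
    """Eq. (7): the n-th column (0-based here) is k (g_d - g_a), all others are zero,
    so the residual at band n pushes each pixel toward the target color g_d."""
    W = np.zeros((num_bands, num_bands))
    W[:, n] = k * (g_d - g_a)
    return W

# Usage sketch (hypothetical arrays):
# g_a = image.reshape(-1, 16).mean(axis=0)          # background = image average
# g_d = ...                                         # target-color spectrum (Methods I-III below)
# W = weighting_factor_matrix(16, n=10, k=30, g_d=g_d, g_a=g_a)
# ge = enhance(image, g_bar, U, W)                  # same Eq. (5) machinery as before
```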

There are several approaches to determining g_d, the spectral data of the target color; we describe three possible methods in the following.

Method I. In the first method, a relationship between the wavelengths of the multispectral image and the colors for visualization is defined. The spectrum of the color assigned to the n-th band is then derived by a spectral estimation technique and is used as g_d when the n-th band is specified for enhancement. For example, hues from blue through red are assigned to the bands from the shortest to the longest wavelength of the multispectral image. In this method, the spectrum g_d is calculated by employing a spectral estimation technique as follows:

\[ g_d = H\, C^{+} \begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix}, \tag{8} \]

where C^{+} is the pseudo-inverse matrix of the CMF, and the tristimulus value (X_n, Y_n, Z_n) corresponding to the color assigned to the n-th band is used as (X_d, Y_d, Z_d). H is the system matrix that relates the pixel signal values g to the spectral data f,

\[ g = Hf. \tag{9} \]
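Under this linear model, the estimation of g_d in Eq. (8) reduces to a pseudo-inverse, as in the sketch below; H and the sampled CMF are placeholders that would come from the actual camera characterization.

```python
import numpy as np

def gd_from_xyz(H, cmf, xyz_d):
    """Eq. (8): g_d = H C^+ [X_d, Y_d, Z_d]^T.
    H     : (N, L) system matrix relating spectra to camera signals, Eq. (9)
    cmf   : (L, 3) color-matching functions sampled at the same L wavelengths
    xyz_d : (3,) tristimulus value of the desired display color"""
    C = cmf.T                                          # (3, L), rows are xbar, ybar, zbar
    spectrum = np.linalg.pinv(C) @ np.asarray(xyz_d)   # minimum-norm spectrum with that XYZ
    return H @ spectrum                                # camera signal of the target color

# Method I feeds the XYZ value assigned to band n into this function;
# Method II first converts a picked RGB color to XYZ.
```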


Fig. 2. The multispectral image of human skin.

Method II. In the second method, an arbitrary color or spectrum is specified based on the user's intent. The user chooses the color for visualization with a tool such as a color picker, and the spectrum corresponding to the chosen color is estimated using Eq. (8). In this case, (X_d, Y_d, Z_d) is the tristimulus value transformed from the RGB vector of the color selected by the user. If the user desires the color or spectrum of a physical object as the enhanced result, the spectrum of the target object can be selected from a spectral image with a spectrum-picker tool.

Method III. Hue is a parameter in color spaces such as HSV, HLS, and CIE L*C*h, and opposite hues in such color spaces are perceptual inverses. Using this property, the spectrum for visualization, g_d, can be determined from the hue distribution of the image. This method sets g_d automatically from the average hue of the image, which can be effective when the hues of the pixels in the image are similar. The spectrum is calculated from the average values of L*, a*, and b* over the entire image. The color with the opposite hue is represented as

\[ a^{*}_{d} = -\bar{a}^{*}, \qquad b^{*}_{d} = -\bar{b}^{*}, \tag{10} \]

and the spectrum g_d is estimated from the tristimulus value transformed from the average L*, a*_d, and b*_d. It may be more effective to also change the luminance L* in some cases.
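A sketch of Method III under stated assumptions: a D65 reference white and the standard CIE L*a*b* formulas. The returned tristimulus value would then be passed to the Eq. (8) sketch above (gd_from_xyz) to obtain g_d.

```python
import numpy as np

WHITE_D65 = np.array([0.95047, 1.0, 1.08883])      # reference white, Y normalized to 1

def xyz_to_lab(xyz, white=WHITE_D65):
    t = np.asarray(xyz) / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def lab_to_xyz(lab, white=WHITE_D65):
    L, a, b = lab
    fy = (L + 16) / 116
    f = np.array([fy + a / 500, fy, fy - b / 200])
    t = np.where(f > 6 / 29, f ** 3, 3 * (6 / 29) ** 2 * (f - 4 / 29))
    return t * white

def method3_target_xyz(mean_xyz, new_L=None):
    """Eq. (10): negate the average a* and b* to get the opposite hue
    (complementary color); optionally override L*, e.g. L* = 50."""
    L, a, b = xyz_to_lab(mean_xyz)
    if new_L is not None:
        L = new_L
    return lab_to_xyz(np.array([L, -a, -b]))

# Usage sketch: mean_xyz is the XYZ of the average spectrum of the image, then
# g_d = gd_from_xyz(H, cmf, method3_target_xyz(mean_xyz))
```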

3. Experiment

In the experiment, we applied the proposed color enhancement method to multispectral images of human skin captured by a filter-wheel multispectral camera [9], of a pathological slide captured by a multispectral microscope [11], and of a rice paddy captured by a hyperspectral imager mounted on a cargo crane [12].

3.1. Application to a skin image

In the application to the skin image, we used the image of a palm shown in Fig. 2 and enhanced the spectral features of several wavelengths, including the near infrared, with Method I. The palm image was captured by a 16-band multispectral camera whose center wavelengths and bandwidths are shown in Table 1; the image size was 1000 × 750 pixels, reduced and trimmed from the original 2048 × 2048 pixel image. It has been reported that melanin, capillary vessels, and veins have spectral features at short, middle, and long wavelengths, respectively [Figs. 3(a), 3(b), and 3(c)]. For example, light of longer wavelengths penetrates relatively deep and is affected by the absorption of deoxy-hemoglobin in the veins, so the shape of the vein is enhanced when the 11th band is enhanced. Figure 3(d) shows that a band of longer wavelength is not well enhanced by the conventional method because the sensitivity of the CMF at longer wavelengths is small.


Table 1. Center Wavelength and Bandwidth of Each Spectral Band of the Multispectral Camera

Band                     1    2    3    4    5    6    7    8
Center wavelength [nm]   425  445  465  480  490  510  530  545
Bandwidth [nm]           30   20   15   15   15   15   15   20

Band                     9    10   11   12   13   14   15   16
Center wavelength [nm]   565  580  600  620  635  655  685  710
Bandwidth [nm]           15   20   20   20   20   20   25   30

Fig. 3. The results of color enhancement for a skin image with the conventional method using three basis vectors (k = 30). (a) 445 nm, (b) 545 nm, (c) 600 nm, and (d) 710 nm are enhanced.

In this experiment, Method I, explained in the previous section, was applied. Figure 4 illustrates the procedure. Each wavelength in the visible range was assigned a hue between blue (h_start = 240°) and red (h_end = 0°) in the L*C*h color space, and the spectrum of the selected hue was used as g_d. The spectrum corresponding to each hue was derived by applying the spectral estimation technique as follows: lightness and chroma were held fixed and the hue was changed at an interval of Δh to obtain the L*, a*, and b* corresponding to each wavelength, where a* and b* for each hue were calculated as

\[ a^{*}_{h} = C^{*}\cos h, \qquad b^{*}_{h} = C^{*}\sin h. \tag{11} \]


Fig. 4. The flow of the color mapping method for each spectral band: starting from h = h_start with L* and C* fixed, each hue is transformed into L*a*b* and then XYZ, the hue spectral data corresponding to h are obtained by spectral estimation, and h is incremented by Δh until h = h_end, yielding the N hue spectral data.

The calculated L*, a*, and b* were transformed into the XYZ color space, and the spectral data for g_d were estimated from the XYZ tristimulus values with Eq. (8). In this step, the hue was sampled at 16-degree intervals, namely h = 240, 224, 208, ..., corresponding to bands 1, 2, 3, ..., 16 of the 16-band multispectral camera used in our experiment. In addition, the lightness and chroma were set to L* = 50 and C* = 80. In this experiment we enhanced the bands n = 2, 8, 11, 16, that is, we set h = 224°, 128°, 80°, and 0° as the respective hues. The spectrum for visualization, g_d, was calculated from one of these hues depending on which band was enhanced, and the average vector of the entire image was used as the background color g_a.
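This band-to-hue mapping can be sketched as below, assuming the lab_to_xyz and gd_from_xyz helpers from the earlier sketches; the defaults reproduce the 16-degree hue steps and the fixed L* = 50, C* = 80 used here.

```python
import numpy as np

def band_hues(num_bands, h_start=240.0, h_end=0.0):
    """Assign a hue to every band, from h_start (blue) at band 1 to h_end (red)
    at the last band; with 16 bands this gives 240, 224, ..., 0 degrees."""
    return np.linspace(h_start, h_end, num_bands)

def hue_to_xyz(h_deg, L_star=50.0, C_star=80.0):
    """Eq. (11): a*_h = C* cos h, b*_h = C* sin h with fixed L* and C*,
    converted to XYZ with the lab_to_xyz helper from the Method III sketch."""
    h = np.deg2rad(h_deg)
    lab = np.array([L_star, C_star * np.cos(h), C_star * np.sin(h)])
    return lab_to_xyz(lab)

# Usage sketch: enhancing band n = 11 (600 nm) of the 16-band palm image.
# hues = band_hues(16)                               # hues[10] = 80 degrees
# g_d = gd_from_xyz(H, cmf, hue_to_xyz(hues[10]))
```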

The results of enhancing the skin image with the proposed method are shown in Fig. 5. In these results, the spectral features at 445 nm, 545 nm, and 600 nm, which were also enhanced by the conventional method as shown in Fig. 3, are visualized. Additionally, the spectral feature at 710 nm, which was not visible with the conventional method, was successfully enhanced, and the structure of the vein is clearly observed. This result shows that the proposed method can visualize spectral features even in the invisible range. The artifacts on the edges of the fingers resulted from the motion of the object during image capture with the filter-wheel multispectral camera.

Moreover, we evaluated these methods numerically by comparing the color differences between the normal skin regions and the vein regions in the original image and in the images enhanced with the conventional and the proposed methods when the 16th band is enhanced. The average CIE L*a*b* color differences between the normal skin regions and the vein regions are shown in Table 2. With the conventional method, the color difference between the two regions is almost the same as in the original image, and the differences in both results arise mainly from the luminance difference. With the proposed method, however, the color difference increases compared to the original image; Δa* in particular changes greatly. This indicates that the proposed method can enhance the image more effectively.
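One simple way to compute such differences, as a sketch: average L*a*b* over manually selected regions of the rendered XYZ image and difference the means. The region masks, the per-pixel XYZ image, and the xyz_to_lab helper from the Method III sketch are assumptions here.

```python
import numpy as np

def mean_lab(xyz_image, mask):
    """Average L*a*b* over the pixels selected by a boolean mask (H, W)."""
    labs = np.array([xyz_to_lab(p) for p in xyz_image[mask]])
    return labs.mean(axis=0)

def region_color_difference(xyz_image, mask_a, mask_b):
    """(dE, dL*, da*, db*) between the mean colors of two regions."""
    d = mean_lab(xyz_image, mask_a) - mean_lab(xyz_image, mask_b)
    return float(np.linalg.norm(d)), d[0], d[1], d[2]

# Usage sketch: compare skin vs. vein regions before and after enhancement.
# print(region_color_difference(xyz_original, skin_mask, vein_mask))
# print(region_color_difference(xyz_enhanced, skin_mask, vein_mask))
```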

Scribner et al. [13,14], Vilaseca et al. [15], and Jacobson et al. [16] have discussed the visualization of spectral features in the invisible range, but their results were mostly pseudo-colored images. Our enhancement method keeps the natural color of the background in the image, which can make the results easier to interpret.


Fig. 5. The results of color enhancement for a skin image with the proposed method using three basis vectors (k = 30). (a) 445 nm, (b) 545 nm, (c) 600 nm, and (d) 710 nm are enhanced.

Table 2. The Color Differences Between the Normal Skin Region and the Vein Region when Enhancing the 16th Band

Image                                ΔE     ΔL*    Δa*    Δb*
The original image                   10.6   9.91   2.80   2.56
The enhanced image (conventional)    10.4   9.64   2.87   2.73
The enhanced image (proposed)        12.9   10.4   7.51   1.88

3.2. Application to a pathological image

In the application to the pathological image, we considered enhancing the fiber region in a 16-band image of an H&E (hematoxylin-eosin) stained liver-tissue specimen captured with a multispectral microscope [11] whose spectral specifications are shown in Table 3. The fiber region is hardly differentiated in the H&E stained image shown in Fig. 6(a); hence the MT (Masson trichrome) staining technique is normally used to see the fiber region, as shown in Fig. 6(b). It has been reported that spectral imaging provides information for discriminating the fiber region in an H&E stained image [17]. In this experiment, we applied color enhancement to the H&E stained image to clearly visualize the fiber region, where the spectrum g_d for the visualization was determined from the color of the MT stained fiber region according to Method II. The size of the images used in the experiment is 2048 × 2048 pixels. Four hundred spectral transmittance samples, each the pooled average of the pixel transmittances within a 5 × 5 pixel ROI, were obtained for the different tissue components, such as the nucleus, cytoplasm, and red blood cells (excluding the fiber), to generate the KL vectors for the enhancement.


Table 3. Center Wavelength and Bandwidth of Each Spectral Band of the Multispectral Camera for the Microscope

Band                     1    2    3    4    5    6    7    8
Center wavelength [nm]   420  450  470  480  500  515  535  550
Bandwidth [nm]           35   20   20   15   15   15   20   20

Band                     9    10   11   12   13   14   15   16
Center wavelength [nm]   565  585  600  620  645  665  690  720
Bandwidth [nm]           15   20   20   20   20   20   25   35

Fig. 6. The multispectral images of the liver tissue specimens. (a) The H&E stained tissue specimen. (b) The MT stained specimen of a serial section.

The average of the spectral data was used as the background spectrum g_a. The spectrum of the fiber region in the MT stained specimen, shown in Fig. 7(a), was employed as the spectrum g_d for the visualization.

Here, the color enhancement method was implemented in spectral transmittance space to remove non-uniformity of the illumination. The spectral transmittance is calculated as

\[ t(\lambda) = \frac{i(\lambda)}{i_g(\lambda)}, \tag{12} \]

where i(λ) is the signal value of the tissue and i_g(λ) is that of the glass. Figure 7(b) shows the average residual component (g_j − s_j) of the different tissue components when six KL basis vectors are used. From Fig. 7(b), it is seen that the fiber region has a large residual at the 8th band, so we set the band to be enhanced to n = 8. The resulting images of the color enhancement are shown in Fig. 8. Because of the shape of the H&E stained transmittance of the fiber, Fig. 7(a), the color hardly changes with the conventional method even when the spectral transmittance in the 8th band is enhanced. While the enhanced features are not readily visible with the conventional method, with the proposed method the fiber region in the H&E stained image is enhanced to blue, similar to its color in the MT stained image. Since the color is rendered similar to that of the MT stained tissue specimen, it should be easier for pathologists to evaluate the result by comparing it with the conventional physical staining technique.
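A sketch of this band-selection step under stated assumptions (labeled class masks, and the mean ḡ and matrix U computed from the transmittance samples as in Section 2.1): compute the transmittance of Eq. (12), project out the KL estimate, and inspect the average residual of each class.

```python
import numpy as np

def transmittance(tissue, glass):
    """Eq. (12): per-band ratio of the tissue signal to the blank-glass signal."""
    return tissue / glass

def class_residuals(trans_image, masks, g_bar, U):
    """Average residual g_j - s_j per band for each labeled class, where s_j is the
    estimate from the m KL basis vectors packed into U [Eqs. (2) and (6)]."""
    N = trans_image.shape[-1]
    R = np.eye(N) - U @ U.T                          # projector onto the residual space
    out = {}
    for name, mask in masks.items():
        g = trans_image[mask]                        # (num_pixels, N) spectra of this class
        out[name] = ((g - g_bar) @ R.T).mean(axis=0)
    return out

# Usage sketch: choose the band where the class of interest stands out.
# t_img = transmittance(tissue_cube, glass_cube)
# res = class_residuals(t_img, {"fiber": fiber_mask, "cytoplasm": cyto_mask}, g_bar, U)
# n = int(np.argmax(np.abs(res["fiber"])))           # band 8 in this experiment (0-based: 7)
```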


Fig. 7. Spectral data. (a) Average spectral transmittances of the fiber regions in an H&E and an MT stained liver-tissue image, plotted against spectral band. (b) Average residuals of the different tissue components (cytoplasm, fiber, nucleus, red blood cell, white) found in the H&E liver-tissue image, plotted against spectral band. Each plot represents the average of 100 samples.

Fig. 8. The enhanced results of the H&E stained tissue (n = 8, k = 30). (a) The conventional method. (b) The proposed method.

Method III was also applied to the same H&E stained pathological image. In Method III, the spectrum g_d for the visualization is determined automatically based on the hue in the CIE L*C*h color space. First, the average L*, a*, and b* are calculated from the average spectrum of the image. Then, as written in Eq. (10), a* and b* are transformed into −a* and −b*, which represent the opposite hue, i.e., the complementary color. Finally, they are transformed into XYZ tristimulus values and the spectrum g_d is derived by Eq. (8). In this method, the spectrum g_d has the hue opposite to the average hue of the entire image, regardless of the enhanced band n.

We again set the band to be enhanced to n = 8. The color g_d for visualization and the background color g_a were calculated automatically. Changing the luminance of the spectrum g_d is expected to improve the enhanced result in cases where the luminance of the entire image is high, so an additional enhancement was also performed in which L* for the spectrum g_d was set to L* = 50. In the enhanced results shown in Fig. 9, the fiber regions are enhanced with a green color, which is the perceptual opposite of the average color of the H&E stained image.


Fig. 9. The enhanced results of the H&E stained tissue by automatic definition (n = 8, k = 30). (a) L* for the spectrum g_d is not changed. (b) L* = 50.

Table 4. The Color Differences Between the Cytoplasm Region and the Fiber Region

Image                                            ΔE
The original image                               8.69
The enhanced image (conventional)                19.3
The enhanced image (proposed, Method II)         47.8
The enhanced image [proposed, Method III (a)]    67.9
The enhanced image [proposed, Method III (b)]    55.7

When the hues of all the pixels in the image are similar, as in the present case, automatic color determination enables effective enhancement without the intricacy of selecting a spectrum for visualization.

Table 4 shows the average color differences between the cytoplasm and fiber regions, which are both stained with eosin in the H&E stained image. In this table, Method III (a) and (b) correspond to the results shown in Figs. 9(a) and 9(b), respectively. Both proposed methods resulted in larger color differences than the conventional method, which indicates their effectiveness.

3.3. Application to a hyperspectral image

With the conventional method, when an image has a large number of bands, as hyperspectral images do, the amplified value in the enhanced band is not clearly visualized because the impact of amplifying a single band is small. The proposed methods, however, are effective for such images. In our experiment we applied Method II to the hyperspectral image of a rice paddy shown in Fig. 10 and explored spectral features by observing the enhanced results. The image was obtained using a cargo crane carrying the hyperspectral sensor ImSpector V10, made by Specim Co., which has 121 bands from 400 to 1000 nm, a spectral resolution of 3 nm, and a sampling interval of 5 nm [12]. The components at wavelengths longer than 900 nm were not used, as they include much noise. Each pixel value in the image was transformed into spectral reflectance with reference to the pixel value of the standard white board in the same image.


Fig. 10. Natural color presentation of a rice paddy image under a D65 light source, with the weed and crop regions labeled. The image size is 2000 × 400 pixels, trimmed from the original image.

The hyperspectral image in Fig. 10 consists mainly of crop, weed, and soil, and we investigated their spectral features using the color enhancement of Method II. The KL basis vectors were generated from a region of the image containing weed and soil. Because the region extracted for the spectral samples consisted mainly of weeds, we assumed that one KL vector was sufficient to estimate the spectra of the weeds, and we used only the first KL vector. The spectrum g_d is the spectrum of magenta obtained from a Macbeth Color Checker image captured by the same hyperspectral camera. Under these conditions, we enhanced the rice paddy image in the bands from 500 to 900 nm at a 50 nm sampling interval.
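Exploring the cube this way is a loop over candidate bands; the sketch below assumes the hypothetical weighting_factor_matrix() and enhance() helpers from the Section 2 sketches and the 5 nm wavelength grid of this sensor.

```python
import numpy as np

# Wavelength grid of the sensor: 400-1000 nm at 5 nm sampling (121 bands); only the
# bands up to 900 nm are kept, so the usable cube is the 101-band prefix of the grid.
wavelengths = np.arange(400, 1005, 5)
num_usable = int((wavelengths <= 900).sum())

def explore_bands(cube, g_bar, U, g_d, g_a, k=20):
    """Enhance the cube at 500-900 nm in 50 nm steps and yield the results."""
    for wl in range(500, 901, 50):
        n = int(np.where(wavelengths == wl)[0][0])   # band index of this wavelength
        W = weighting_factor_matrix(num_usable, n, k, g_d, g_a)
        yield wl, enhance(cube, g_bar, U, W)
```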

The enhanced hyperspectral images are shown in Figs. 11 and 12. In Fig. 11(b), the weed region and part of the crop region are enhanced with a magenta color. Because only one KL basis vector was used and the spectra of the weed region vary widely, the weed region, which was used for generating the basis vectors, was also enhanced. The average residual components of the different regions in the rice paddy image are shown in Fig. 13. In Fig. 13, “Crop 1” represents the residual of the crop region that is not enhanced in Fig. 11(b), and “Crop 2” represents the enhanced crop region. The spectral variations in the crop regions are mainly due to differences in their illumination conditions, such as shading. As shown in Fig. 13, the soil region has a large residual component around 700 nm, and the crop region has a large residual error around 800 nm, which could correspond to the biomass content [18]. The spectral features enhanced at these wavelengths appear as enhanced regions in Figs. 11(e) and 12(b). Figure 14 shows a magnified part of the rice paddy image with the spectral features at 700 nm and 725 nm enhanced. We can see better contrast between the leaves and the crops in Fig. 14(b), owing to the negative residual component of the crop region at 725 nm (Fig. 13). The original spectral data of each region are shown in Fig. 15. Here we see that the spectra of the crop and weed regions differ greatly in the 680–750 nm range, the so-called “red edge”, which originates from the spectral feature of chlorophyll and is not observed in the soil region. Furthermore, the residual components in the crop regions are due to differences in their spectral shapes in the near-infrared wavelengths. As the above results show, the salient spectral features in the hyperspectral image of the rice paddy were successfully visualized by the proposed color enhancement, and such features can be applied to discriminate each region. Further investigation of spectral features in hyperspectral images using the proposed enhancement method could lead to new indices for advanced vegetation analysis.

4. Conclusion

This paper proposes a method for the effective visualization of enhanced spectral features, in which the design of the weighting factor matrix is modified so that the enhanced feature appears with an arbitrary color. Several example methods for determining the color for visualization are also presented. Even if an image has a salient spectral feature in the invisible wavelength range or has a large number of spectral bands, the spectral feature can still be enhanced and effectively visualized with the proposed method. The method will be useful in exploring the spectral features masked in multispectral or hyperspectral images.


Fig. 11. The enhanced results of the rice paddy image (k = 20). (a) 500 nm, (b) 550 nm, (c) 600 nm, (d) 650 nm, and (e) 700 nm are enhanced, respectively.


Fig. 12. The enhanced results of the rice paddy image (k = 20). (a) 750 nm, (b) 800 nm, (c) 850 nm, and (d) 900 nm are enhanced, respectively. The weed and crop regions, which were not clearly differentiated in the original image (see Fig. 10), are now differentiated.

Fig. 13. The average residual components (residual error versus wavelength [nm]) of the different regions (Crop 1, Crop 2, Weed, Soil) in the rice paddy image. Each plot represents the average of 100 samples.


Fig. 14. A 600 × 400 pixel region cropped from the magnified rice paddy image, enhanced at (a) 700 nm and (b) 725 nm.

Fig. 15. The average spectral reflectances (versus wavelength [nm]) of the different regions (Crop 1, Crop 2, Weed, Soil) in the rice paddy image. Each plot represents the average of 100 samples.


Acknowledgments

The authors gratefully acknowledge Dr. Yukako Yagi of Harvard Medical School, Boston, MA, USA, for helpful advice and discussion.
