
    Profilometry with compact single-shot low-coherence time-domain interferometry

    Molly Subhash Hrebesh *, Yuuki Watanabe, Manabu Sato

    Graduate School of Science and Engineering, Yamagata University, 4-3-16 Jonan, Yonezawa, Yamagata 992-8510, Japan

Article info

    Article history:

    Received 2 February 2008

    Received in revised form 19 May 2008

    Accepted 4 June 2008

    Keywords:

    Interferometry

    Profilometry

    Height measurement

    Optical instrument

    Metrology

    Polarization

    Low-coherence

    Shape measurement

Abstract

We describe the performance of a compact single-shot low-coherence interferometric scheme capable of measuring three-dimensional surface profiles and shape. This technique utilizes a polarizing Michelson interferometer and four-channel polarization phase-stepper optics based on a paired wedge prism, a combined wave plate and a Wollaston prism. The coherence-gated surface image can be calculated from the simultaneous acquisition of two interferograms and a DC image on a single CCD camera. The image calculation is based on a novel algorithm that calibrates the imbalanced intensity as well as the deviated arbitrary relative phase of each of the imaging channels. The system can display the transverse cross-sectional images in real time. To demonstrate the feasibility of this system, a Japanese coin is presented as a 3-D shape measurement example with an image size of 4 mm (horizontal) × 4 mm (vertical) × 160 μm (depth).

© 2008 Elsevier B.V. All rights reserved.

    1. Introduction

Optical interferometric schemes allow a variety of surface measurements, from optical elements such as mirrors to highly scattering biological specimens, with very high resolution. Numerous types of interferometric techniques have been reported, differing in the kind of light source coupled to the interferometer and in the way the interference patterns are analyzed [1,2]. Laser-based phase-shifting interferometry (PSI) [3] can perform three-dimensional (3-D) surface measurements with subnanometer resolution by analyzing a few sequences of phase-shifted monochromatic interferograms. PSI can be accomplished either by sequentially introducing phase shifts (the temporal phase-shifting method) or by spatially splitting the beam into parallel channels (the simultaneous phase-shifting method). In conventional temporal phase-shifting interferometry, the interferometer has an element that introduces three or more known phase shifts in the path of the reference light. By measuring the interference image at each of these phase shifts and analyzing the result, the phase distribution of the signal light can be quantitatively calculated. However, an interferometer using the temporal phase-shifting method is very sensitive to vibration, because the phase-shifted frames of interferometric data are taken at different times, and vibration causes the phase shift between data frames to differ from the desired value. In simultaneous phase-shifting interferometry, by contrast, the three or four spatially phase-shifted interferograms are acquired simultaneously, in a time several orders of magnitude shorter than with temporal phase shifting, which eliminates the measurement errors caused by sample vibration or movement.
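As a concrete, generic illustration (the textbook four-step algorithm, not the scheme used in this paper), suppose reference phase shifts of 0°, 90°, 180° and 270° are applied and the four intensities I_k = a + b\cos(\phi + \delta_k) are recorded; then I_1 - I_3 = 2b\cos\phi and I_4 - I_2 = 2b\sin\phi, so the object phase follows from

\phi = \arctan\!\left(\frac{I_4 - I_2}{I_1 - I_3}\right).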

Several types of simultaneous phase-shifting methods have been developed over the years; these techniques utilize conventional beam splitters and polarization optics to produce three or four phase-shifted images on single or multiple charge-coupled devices (CCDs) for simultaneous acquisition [4,5]. Most of these methods require relatively complex optical and electronic arrangements and have had limited practical application. Recently, novel simultaneous PSIs that use diffractive elements to image three or more interferograms simultaneously onto a single CCD sensor have been reported by several authors [6–8]. These techniques are considerably more compact and less expensive compared to multi-camera arrangements. However, the diffractive elements work only over a small spectral band, owing to the dispersion and chromatic distortion inherent in their design; thus they are not capable of working with white-light or short-coherence-length interferometers [9]. More recently, single-shot PSI based on a pixellated phase mask was introduced, in which micropolarizers are used to spatially multiplex the phase-shifted interferograms [10,11]; this approach works well over a large wavelength band. However, such a phase mask is difficult to manufacture accurately and requires highly sophisticated technology [9].

The monochromatic PSI technique using a coherent light source degrades in accuracy for rough and discontinuous surfaces because of the occurrence of speckle and the phase-ambiguity problem, respectively.

Scanning white light interferometry (WLI) [12,13] overcomes this limitation and is one of the most commonly used methods for 3-D shape measurement. In WLI the interference patterns can be treated in two ways in order to obtain the surface and subsurface information of a sample. The first way processes the interferograms with a phase-calculation algorithm to provide a resolution better than 1 nm [14,15]. The second way consists of analyzing the fringe visibility; this technique, named optical coherence tomography (OCT), is widely used for obtaining depth-resolved information from highly scattering biological specimens [16] with spatial resolutions of the order of 10 μm. In OCT the depth or height information is given by the position of the reference mirror. Each measurement technique, however, has its own unique advantages.

Recently, there has been considerable interest in wide-field (full-field) coherence-gated imaging using light sources with short coherence length, which can acquire transverse cross-sectional (en-face) images of biological tissues and materials with high resolution for many applications. Wide-field (full-field) OCT systems, consisting of a 2D interferometer with a CCD camera, have been used to obtain en-face sectional images. A single-shot phase-stepped wide-field OCT utilizing simultaneous phase-shifting optics and a single CCD camera has been reported [17]; it allows the reconstruction of 3-D depth-resolved object profiles and shape with high time resolution, and the measurement of the optical phase distribution has also been demonstrated with this technique. The temporal resolution of the system depends only on the frame rate of the CCD camera. However, the four-channel phase-stepper optics in that system is quite complicated for practical implementation. Moreover, the image reconstruction algorithm requires geometric (image position and magnification) correction and non-uniformity compensation parameters for accurate reconstruction of the en-face OCT image, and these correction parameters were determined manually.

In this paper, we describe the performance of a compact low-coherence single-shot spatial phase-stepping interferometric technique for 3-D shape measurement. The technique is based on a polarizing Michelson interferometer and four-channel polarization phase-stepper optics, which use a paired wedge prism (PWP) and a combined wave plate (CWP) to capture four images simultaneously on a single CCD camera. The fundamental characteristics of this system, such as the shifted relative phase, the intensity ratio and the spatial resolutions, have already been reported [18]. The main objective of this paper is to demonstrate the feasibility of this single-shot low-coherence interferometer for practical applications such as profilometry and 3-D shape measurement. The technique needs only two quadrature phase-stepped interferograms and a DC image for extracting the interferometric component. Moreover, it is simpler, more compact and more robust than the previously reported schemes.

    2. Basic principle of single-shot interferometry

    2.1. Optical scheme

Fig. 1 shows the schematic of the experimental setup, which consists of a 2D polarizing Michelson interferometer followed by four-channel phase-stepper optics. The light from an LED (Hitachi, HE8404SG) with a central wavelength (λ0) of 846 nm and a spectral bandwidth (Δλ) of 46 nm (FWHM) is partially collimated using lens L1 (focal length 12 mm). The calculated coherence length [lc = 0.44 λ0²/Δλ] of the source was around 6.9 μm. Using a polarizer (P-1), the light is linearly polarized at +45° and coupled into the polarization interferometer. The interferometer is based on a free-space Michelson configuration with a nonpolarizing beam splitter (NPBS), which splits the incoming light into two. After the round-trip transmission through the polarizer (P-2) in the sample arm, the signal light is linearly polarized at an angle of +45° at the output of the interferometer. Similarly, in the reference arm the reference light becomes right circularly polarized after the round-trip transmission through the polarizer (P-3) and the quarter wave plate (QWP) with its fast axis oriented at 0° with respect to the x-axis; at the output of the interferometer its polarization state becomes left circular because of the reflection at the nonpolarizing beam splitter.

Fig. 1. Experimental setup. P1–P3: polarizers; L1–L3: lenses; QWP: quarter wave plate; HWP: half-wave plate; RM: reference mirror; PWP: paired wedge prism; WP: Wollaston prism; CWP: combined wave plate. The inset shows the CCD image acquired with a USAF test chart as the object.
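As a quick numerical check of the coherence length quoted above (all values taken from the text):

l_c = 0.44\,\frac{\lambda_0^2}{\Delta\lambda} = 0.44 \times \frac{(846\ \mathrm{nm})^2}{46\ \mathrm{nm}} \approx 6.85\ \mu\mathrm{m},

consistent with the value of about 6.9 μm stated above.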

At the output of the interferometer, four-channel phase-stepper optics are constructed, consisting of relay optics with lenses L2 (focal length 125 mm) and L3 (focal length 75 mm), a PWP, a CWP, and a Wollaston prism (aperture 15 × 15 mm², separation angle 4.8°). The PWP is fabricated from two wedge prisms with a deviation angle of 1°, as shown in Fig. 1, which perform the vertical splitting of the incoming beam in the xz plane. The distance between the sample and the objective lens L2 is the focal length of lens L2. The PWP is aligned with L2 at a distance of 162 mm to obtain uniform images.

The CWP consists of two quarter wave plates, QWP1 (size 10 × 20 mm²) and QWP2 (10 × 10 mm²), with their fast axes oriented at 0° and +45°, respectively, with respect to the x-axis, and a half-wave plate (HWP, 10 × 10 mm²) with its fast axis oriented at 22.5° with respect to the x-axis, as shown in Fig. 1. The distance between the CWP and L3 is 118 mm, and the distance between L2 and L3 is kept at 300 mm.

The light exiting the interferometer is collimated onto the input plane of the four-channel phase-stepper optics. The PWP splits the incoming light in the up-down direction in the xz plane; the up-directed beam from the PWP passes through QWP 1, and the down-directed beam passes through QWP 2 and HWP 1. The Wollaston prism then splits the incoming beam into two orthogonally polarized components. Finally, the four phase-stepped channels are projected onto the CCD camera (Hamamatsu Photonics, C4880-80, 656 × 494 pixels, pixel size 9.9 × 9.9 μm², resolution 12 bit, frame rate 28 frames/s) by the imaging lens L3.

    2.2. Principle of phase stepper optics

We analyzed the polarization states of the light to calculate the intensities of the four spatially split images using Jones matrices and vectors. The electric fields of the signal and reference light entering the phase-stepper can be expressed as:

Signal light (linear, +45°):

E_S = \frac{E_{S0}\,e^{j\phi_S}}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = E_{S0}\,e^{j\phi_S} J_S,   (1)

Reference light (left circular):

E_R = \frac{E_{R0}\,e^{j\phi_R}}{\sqrt{2}} \begin{bmatrix} 1 \\ j \end{bmatrix} = E_{R0}\,e^{j\phi_R} J_R,   (2)

where E_{S0}, E_{R0}, \phi_S and \phi_R are the amplitudes and phases of the signal and reference light, respectively, and J_S and J_R are the corresponding Jones vectors of the sample- and reference-arm light. After L3, the electric fields of the signal and reference light for Beam 1 and Beam 2 are given by:

For Beam 1, through QWP 1 at 0°:

E_{B1S} = \tfrac{1}{2} E_{S0}\,e^{j\phi_S} J_{B1S} = \tfrac{1}{2} E_{S0}\,e^{j\phi_S}\,\frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ j \end{bmatrix},   (3)

E_{B1R} = \tfrac{1}{2} E_{R0}\,e^{j\phi_R} J_{B1R} = \tfrac{1}{2} E_{R0}\,e^{j\phi_R}\,\frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \end{bmatrix}.   (4)

For Beam 2, through QWP 2 at 45°:

E_{B2S} = \tfrac{1}{2} E_{S0}\,e^{j\phi_S} J_{B2S} = \tfrac{1}{2} E_{S0}\,e^{j\phi_S}\,\frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix},   (5)

E_{B2R} = \tfrac{1}{2} E_{R0}\,e^{j\phi_R} J_{B2R} = \tfrac{1}{2} E_{R0}\,e^{j\phi_R}\,\frac{1}{\sqrt{2}} \begin{bmatrix} 1+j \\ 0 \end{bmatrix}.   (6)

For Beam 2, through HWP 1 at 22.5°:

E'_{B2S} = \tfrac{1}{2} E_{S0}\,e^{j\phi_S} J'_{B2S} = \tfrac{1}{2} E_{S0}\,e^{j\phi_S} \begin{bmatrix} 1 \\ 0 \end{bmatrix},   (7)

E'_{B2R} = \tfrac{1}{2} E_{R0}\,e^{j\phi_R} J'_{B2R} = \tfrac{1}{2} E_{R0}\,e^{j\phi_R}\,\frac{1}{2} \begin{bmatrix} 1+j \\ 1-j \end{bmatrix}.   (8)

Here J_{B1S}, J_{B1R}, J_{B2S}, J_{B2R}, J'_{B2S} and J'_{B2R} are the Jones vectors of the signal and reference light for Beam 1 and Beam 2, which can be written as

J_{B1S} = T_{QWP1} J_S, \quad J_{B1R} = T_{QWP1} J_R,
J_{B2S} = T_{QWP2} J_S, \quad J_{B2R} = T_{QWP2} J_R,
J'_{B2S} = T_{HWP1} J_S, \quad J'_{B2R} = T_{HWP1} J_R,   (9)

where T_{QWP1}, T_{QWP2} and T_{HWP1} are the Jones matrices of QWP 1 (fast axis at 0° to the x-direction), QWP 2 (fast axis at 45° to the x-direction) and HWP 1 (fast axis at 22.5° to the x-direction), respectively, which can be expressed as follows:

T_{QWP1} = \begin{bmatrix} 1 & 0 \\ 0 & j \end{bmatrix}, \quad T_{QWP2} = \frac{1}{2}\begin{bmatrix} 1+j & 1-j \\ 1-j & 1+j \end{bmatrix}, \quad T_{HWP1} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.   (10)

The Wollaston prism laterally separates the incoming beam into two orthogonal polarization components with a separation angle of 4.8°. The intensities of the interferograms of images A to D, calculated using the Jones vectors and matrices above, are then

I_A = \frac{E_S^2}{4} + \frac{E_R^2}{4} + \frac{E_S E_R}{2}\cos(\phi_S - \phi_R),   (11)

I_B = \frac{E_S^2}{4} + \frac{E_R^2}{4} + \frac{E_S E_R}{2}\cos(\phi_S - \phi_R - 90^\circ),   (12)

I_C = \frac{E_S^2}{8} + \frac{E_R^2}{8},   (13)

I_D = \frac{3E_S^2}{8} + \frac{3E_R^2}{8} + \frac{1}{\sqrt{2}} E_S E_R \cos(\phi_S - \phi_R - 45^\circ).   (14)

The relative phase between images A and B is 90°, that between images A and D is 45°, and image C corresponds to the DC intensity of both the reference and signal lights.
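The Jones bookkeeping behind Eqs. (3)–(10) is mechanical and easy to verify numerically. The short Python/NumPy sketch below (not the authors' LabVIEW implementation; the variable names are ours) applies the matrices of Eq. (10) to the input states of Eqs. (1)–(2) and prints the vectors that appear in Eqs. (3)–(8); projecting these states onto the Wollaston prism's x and y axes then yields the 0°, 90° and 45° fringe phases and the DC channel summarized after Eq. (14).

```python
import numpy as np

# Input Jones vectors, Eqs. (1)-(2): signal is +45 deg linear, reference circular.
J_S = np.array([1, 1]) / np.sqrt(2)
J_R = np.array([1, 1j]) / np.sqrt(2)

# Jones matrices of the CWP elements, Eq. (10).
T_QWP1 = np.array([[1, 0], [0, 1j]])                           # QWP, fast axis 0 deg
T_QWP2 = 0.5 * np.array([[1 + 1j, 1 - 1j], [1 - 1j, 1 + 1j]])  # QWP, fast axis 45 deg
T_HWP1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)              # HWP, fast axis 22.5 deg

# Eq. (9): Jones vectors behind each wave-plate element of the CWP.
vectors = {
    "J_B1S": T_QWP1 @ J_S, "J_B1R": T_QWP1 @ J_R,    # Beam 1 (QWP 1)
    "J_B2S": T_QWP2 @ J_S, "J_B2R": T_QWP2 @ J_R,    # Beam 2 (QWP 2)
    "J'_B2S": T_HWP1 @ J_S, "J'_B2R": T_HWP1 @ J_R,  # Beam 2 (HWP 1)
}
for name, v in vectors.items():
    print(name, np.round(v, 3))
# For Beam 1, the x-projections of signal and reference are in phase while the
# y-projections differ by 90 deg, so the two Wollaston outputs of Beam 1
# (images A and B) carry fringes in quadrature, as in Eqs. (11)-(12).
```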

    2.3. Image reconstruction

For extracting the interferometric component, we use the two quadrature phase-stepped images (image A and image B) and the DC image C. The basic equation to extract the interference component S is given by

S = \sqrt{\left(I_A - 2I_C\right)^2 + \left(I_B - 2I_C\right)^2}.   (15)

The intensity ratio of the phase-shifted images at the output of the phase-stepper optics is not exactly the same as the theoretical ratio. This unbalanced intensity of the phase-stepped images introduces a residual dc-offset intensity in the reconstructed image. To compensate for this residual intensity, we introduce three intensity compensation parameters. Moreover, a phase calibration parameter is introduced to compensate for the slight deviation of the relative phase between image A and image B from 90°. The modified algorithm with compensation parameters is then

S_M = \sqrt{\left(A_{CP} I_A - 2C_{CP} I_C\right)^2 + \left[\left(A_{CP} I_A - 2C_{CP} I_C\right)\frac{\cos g}{\sin g} - \frac{B_{CP} I_B - 2C_{CP} I_C}{\sin g}\right]^2},   (16)

where A_{CP}, B_{CP} and C_{CP} are the parameters used to calibrate the intensities of images A, B and C, respectively. To calculate these calibration parameters, after capturing the three phase-stepped


images, the mean intensity of each image is measured, and the lesser mean value ⟨I⟩ of images A and B is determined first. Using this least mean value, the calibration parameters for image A and image B are obtained as follows:

A_{CP} = \langle I\rangle/\langle A\rangle, \qquad B_{CP} = \langle I\rangle/\langle B\rangle.   (17)

After calibrating the intensities of image A and image B, the calibration parameter for image C is obtained as

C_{CP} = \langle I\rangle/(2\langle C\rangle),   (18)

where ⟨ ⟩ denotes the mean value. The g in Eq. (16) is the practical relative phase between images A and B. Our custom software, developed in the LabVIEW-IMAQ environment, calculates these intensity coefficients in real time. Real-time calculation and compensation are essential for practical application, because the mean intensity of each phase-stepped image can vary during the depth scan. The relative phase parameter g can be fed in manually during the measurement to reduce the residual fringes in the reconstructed image. The image acquisition, the calculation of the mean image intensities, the image processing of Eq. (16) and the real-time display of the obtained en-face images can all be done at a frame rate of 28 frames/s.
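To make the reconstruction recipe of Eqs. (15)–(18) concrete, a minimal Python/NumPy sketch is given below. It is not the authors' LabVIEW-IMAQ code; the function names and the synthetic test fringe are illustrative assumptions. With g = 90° and unit calibration parameters, the expression in reconstruct() reduces to Eq. (15).

```python
import numpy as np

def calibration_parameters(img_a, img_b, img_c):
    """Intensity-calibration parameters A_CP, B_CP, C_CP of Eqs. (17)-(18).

    <I> is the lesser of the mean intensities of images A and B."""
    mean_a, mean_b, mean_c = img_a.mean(), img_b.mean(), img_c.mean()
    mean_i = min(mean_a, mean_b)
    return mean_i / mean_a, mean_i / mean_b, mean_i / (2.0 * mean_c)

def reconstruct(img_a, img_b, img_c, g_deg=90.0):
    """Interference component S_M of Eq. (16).

    img_a, img_b : quadrature phase-stepped interferograms (images A and B)
    img_c        : DC image (image C)
    g_deg        : practical relative phase between images A and B, in degrees
    """
    a_cp, b_cp, c_cp = calibration_parameters(img_a, img_b, img_c)
    g = np.deg2rad(g_deg)
    x = a_cp * img_a - 2.0 * c_cp * img_c      # ~ cos(phi_S - phi_R) term
    y = b_cp * img_b - 2.0 * c_cp * img_c      # ~ cos(phi_S - phi_R - g) term
    return np.sqrt(x ** 2 + (x * np.cos(g) / np.sin(g) - y / np.sin(g)) ** 2)

# Synthetic check with the ideal channel ratios of Eqs. (11)-(13):
phase = np.linspace(0, 6 * np.pi, 200)[None, :] * np.ones((200, 1))
dc, amp = 1.0, 0.5
img_a = dc + amp * np.cos(phase)
img_b = dc + amp * np.cos(phase - np.deg2rad(88))   # slightly off quadrature
img_c = np.full_like(phase, dc / 2)
s_m = reconstruct(img_a, img_b, img_c, g_deg=88.0)
print(s_m.min(), s_m.max())   # both ~ amp: the fringe is fully demodulated
```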

3. Performance

3.1. Basic parameters

To optimize the system performance, we first aligned the image splitting using a USAF 1951 test target with a field of view of 3 × 3 mm². The magnification was measured to be around 0.67, which corresponds well to the calculated value of 0.68 (85 mm/125 mm); the distance between lens L3 and the CCD camera is 85 mm. The horizontal image separation distance on the CCD plane depends on the deviation angle of the Wollaston prism and its distance from the CCD plane. Similarly, the vertical separation distance depends on the wedge deviation angle and the lens positions. Using a USAF test target, the relative phases between images A and B and between images A and D were measured to be 88° and 37°, respectively, by cross-correlation of the intensity profiles of the interferograms. For a particular alignment condition, the mean pixel intensities of images A to D were measured at 1540, 1571, 994 and 2242, respectively; these intensity ratios are not exactly the same as the theoretical ratio of 1:1:0.5:1.5. To compensate for this intensity imbalance, the corresponding intensity calibration parameters A_CP, B_CP and C_CP were calculated to be 1, 0.98 and 0.77, respectively. The main reasons for the intensity imbalance and for the slight discrepancies in the phase are the imperfect design of the PWP and the tilted optical beam path through the CWP, respectively.
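As an illustration of how such a cross-correlation estimate of the relative phase can be set up (a sketch under our own assumptions, not necessarily the authors' exact procedure; in practice the fringe period would be taken from the interferograms themselves):

```python
import numpy as np

def relative_phase_deg(profile_1, profile_2, fringe_period_px):
    """Relative fringe phase (degrees) between two interferogram line
    profiles, from the lag that maximizes their cross-correlation."""
    a = profile_1 - profile_1.mean()
    b = profile_2 - profile_2.mean()
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)        # best-matching shift in pixels
    return 360.0 * lag / fringe_period_px

# Illustrative profiles: 20-pixel fringes shifted by a quarter period.
x = np.arange(400)
period = 20.0
p1 = 1 + 0.5 * np.cos(2 * np.pi * x / period)
p2 = 1 + 0.5 * np.cos(2 * np.pi * x / period - np.pi / 2)
print(relative_phase_deg(p1, p2, period))       # ~ -90 (sign = direction of shift)
```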

The axial resolution calculated from the central wavelength and spectral bandwidth was around 6.9 μm, and the measured value was 10 μm [18]. This discrepancy is due to the dispersion imbalance between the reference and signal light. For the measurement of the lateral resolution, an en-face image of a standard USAF 1951 test target was taken; the 6th element of the 5th group could be distinguished, which corresponds to a lateral resolution of 35 μm. The lateral resolution is limited by the CCD pixel size and the low magnification of the optical system. Fig. 2 shows the four spatially split phase-stepped images A to D of a Japanese 5-yen coin as the test sample.

3.2. Imaging of a rough surface

To demonstrate the imaging capability of this system, we measured depth-resolved en-face images of the surface imprint of a 5-yen Japanese coin. The coin is made of 70% Cu and 30% Zn. The measured kanji imprint is marked (4 × 4 mm²) in Fig. 3. The irradiation power was nearly 120 μW at the sample surface and the exposure time was set to 7 ms. The input relative phase g was adjusted to 88° to obtain a residual-fringe-free image. To verify the influence of the implemented intensity compensation algorithm, we obtained the coherence-gated image of the bottom surface of the coin with and without the intensity parameters. Fig. 4a and b shows the coherence-gated bottom en-face surfaces and their profiles with and without the intensity parameters, respectively. By using the intensity compensation algorithm, the dc-offset intensity could be reduced by a factor of 10.

Fig. 5a and b shows the coherence-gated en-face surface-profile images of the top and bottom surfaces of the coin, respectively. The size of the field of view was 4 × 4 mm² (200 × 200 pixels), as shown by the white square mark in Fig. 2, with a magnification of 0.5. For measuring the 3-D volumetric image, the en-face images were sequentially acquired by scanning the reference mirror along the optical axis with a high-precision motorized translation stage. Fig. 6 shows the 3-D volumetric image of the kanji imprint reconstructed from a stack of 160 en-face images with a depth-scan interval of 1 μm. When the coherence gate is above the top surface, the measured image should ideally be null. However, due to the depolarization and specular reflection at the rough surface, there was a background image that existed constantly throughout the scanning range. To alleviate this problem, that background image was subtracted from all of the calculated images.

Fig. 2. Four spatially split images of a 5-yen Japanese coin with a field of view of 4 × 4 mm²; each channel is 200 × 200 pixels.

Fig. 3. (a) Image of a Japanese 5-yen coin; the black rectangular mark shows the measured area. (b) Enlarged image of the measured area.

The measured height between the top and bottom surfaces was 80 μm. The accuracy of the measurement depends mainly on the orientation and alignment of the CWP and PWP, on the pixel matching of the phase-shifted interferograms, and on g, the practical relative phase between images A and B. Once the alignment conditions were optimized, 3-D shape measurements could be performed with the same accuracy and repeatability without any vibration control.
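Per transverse pixel, the volumetric processing described above amounts to subtracting the constant background frame and locating the depth at which the coherence-gated signal peaks. The sketch below (Python/NumPy, not the authors' LabVIEW software; camera and stage control are omitted, and the peak-of-envelope surface extraction is the standard approach rather than a step spelled out in the text) shows the idea with a synthetic 160-frame stack matching the reported scan:

```python
import numpy as np

def height_map(stack, background, z_step_um=1.0):
    """Surface height from a stack of coherence-gated en-face images.

    stack      : (n_z, ny, nx) reconstructed images S_M, one per
                 reference-mirror position (depth-scan interval z_step_um)
    background : (ny, nx) image taken with the coherence gate above the
                 sample, subtracted to suppress the depolarization and
                 specular-reflection background present at every depth
    """
    corrected = np.clip(stack - background[None, :, :], 0.0, None)
    # The surface at each pixel lies at the depth where the low-coherence
    # interference envelope is strongest.
    surface_index = np.argmax(corrected, axis=0)
    return surface_index * z_step_um

# Synthetic example: 160 frames of 200 x 200 pixels (4 x 4 mm^2 field of view).
rng = np.random.default_rng(0)
stack = 0.05 * rng.random((160, 200, 200))                 # residual background
true_idx = rng.integers(40, 120, size=(200, 200))          # "surface" position
rows, cols = np.indices((200, 200))
stack[true_idx, rows, cols] += 1.0                         # coherence-gate peak
background = np.full((200, 200), 0.02)
heights = height_map(stack, background, z_step_um=1.0)
print(heights.min(), heights.max())                        # heights in um
```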

    4. Summary

We have presented a compact surface measurement system using the simultaneous capture of two phase-shifted interferograms and a DC image. We measured the axial and lateral resolutions to be 10 μm and 35 μm, respectively, using an LED with a central wavelength of 846 nm. Compared to previously reported similar techniques, this technique needs only three images for extracting the interferometric components and uses a relatively simpler image reconstruction algorithm. Moreover, it offers the advantages of greater compactness and robustness. The system is also compatible with cheap thermal sources. The feasibility of imaging rough surfaces has been demonstrated with an exposure time of 7 ms, a frame rate of 28 frames/s and a field of view of 4 × 4 mm² (200 × 200 pixels). Furthermore, a better optical design of the CWP and PWP and the use of a high-power broadband optical source may improve the sensitivity of this system for applications such as optical ranging or full-field OCT (FF-OCT) with high time resolution.

Fig. 4. Reconstructed surface images and profiles of the bottom surface of the coin: (a) without the intensity compensation parameters and (b) with the intensity compensation parameters. The intensity profiles correspond to the black solid lines in the figures.

Fig. 5. (a) and (b): reconstructed depth-resolved images of the top and bottom surfaces, respectively.

Fig. 6. Reconstructed 3-D volume-rendered image of the set of 160 measured en-face planes.

    References

[1] D. Malacara, M. Servin, Z. Malacara, Interferogram Analysis for Optical Testing, Marcel Dekker, New York, 1998.
[2] J.E. Greivenkamp, J.H. Bruning, Phase shifting interferometers, in: D. Malacara (Ed.), Optical Shop Testing, Wiley, New York, 1992.
[3] K. Creath, Phase measurement interferometry techniques, in: E. Wolf (Ed.), Progress in Optics, vol. 24, Elsevier Science Publishers, Amsterdam, 1988, p. 349.
[4] R. Smythe, R. Moore, Opt. Eng. 23 (1984) 361.
[5] C. Koliopoulos, Proc. SPIE 1531 (1992) 119.
[6] B. Barrientos, A.J. Moore, C. Perez-Lopez, L.L. Wang, T. Tschudi, Fringe, Akademie Verlag, 1997, p. 317.
[7] A. Hettwer, J. Kranz, J. Schwider, Opt. Eng. 39 (2000) 960.
[8] B.B. Garcia, A.J. Moore, C. Perez-Lopez, L. Wang, T. Tschudi, L. Tong, Opt. Eng. 38 (1999) 2069.
[9] J. Millerd, N. Brock, J. Hayes, M. North-Morris, M. Novak, J. Wyant, Proc. SPIE 5531 (2004) 304.
[10] B. Kimbrough, J. Millerd, J. Wyant, J. Hayes, M. North-Morris, M. Novak, Proc. SPIE 6292 (2006) 62920F.
[11] K.L. Baker, E.A. Stappaerts, Opt. Lett. 31 (2006) 733.
[12] B.S. Lee, T.C. Strand, Appl. Opt. 29 (1990) 3784.
[13] T. Dresel, G. Hausler, H. Venzke, Appl. Opt. 31 (1992) 919.
[14] P. Sandoz, G. Tribillon, J. Mod. Opt. 40 (1993) 1691.
[15] P. de Groot, L. Deck, J. Mod. Opt. 42 (1995) 389.
[16] D. Huang, E.A. Swanson, C.P. Lin, J.S. Schuman, W.G. Stinson, W. Chang, M.R. Hee, T. Flotte, K. Gregory, C.A. Puliafito, J.G. Fujimoto, Science 254 (1991) 1178.
[17] C. Dunsby, Y. Gu, P. French, Opt. Express 11 (2003) 105.
[18] M.S. Hrebesh, Y. Watanabe, M. Sato, Jpn. J. Appl. Phys. 46 (2007) 369.
