
Mathematical refocusing of images in electronic holography

Karl A. Stetson
Karl Stetson Associates, LLC, 2060 South Street, Coventry, Connecticut 06238, USA ([email protected])

Received 10 March 2009; revised 7 May 2009; accepted 9 June 2009; posted 10 June 2009 (Doc. ID 108580); published 22 June 2009

This paper presents an illustration of mathematical refocusing of images obtained by the HoloFringe300K electronic holography program. The purpose is to demonstrate that this form of electronic holography is equivalent to image-plane, phase-stepped digital holography. The mathematical refocusing method used here differs from those in common use and may have some advantages. © 2009 Optical Society of America

OCIS codes: 090.0090, 350.3250.

1. Introduction

In the last 15 years, digital holography has become an active field of research, and it may be described, roughly, as the mathematical reconstruction of images from digitally recorded holograms. The first example of this was by Goodman and Lawrence [1] and dates back to 1967; however, the work of Schnars and Jüptner [2] is generally cited as the origin of the modern approach. Nonetheless, it is instructive to consider early systems aimed at electronic holography that were developed in the early 1970s, such as the electronic speckle pattern interferometer [3], or ESPI, system. ESPI systems generated displays of real-time vibration patterns and live static displacement fringes; however, they did not generate images that could be mathematically propagated as hologram reconstructions. Closer to the point was the system demonstrated by Macovski et al. [4] that used an image dissector, a device that did not integrate images but provided an array of real-time data signals that could be scanned. Because image dissectors did not integrate their exposures, they were very insensitive and restricted to images of small objects.

Modern electronic holography originated in the late 1980s with a pipeline image processor and a program called ELHolo [5]. This was followed by a sequence of image processor systems called MBHolo, PCHolo, PCHolo32, and finally a completely software-based program called HoloFringe300K. All of these systems performed what can be described as image-plane digital holography. Their key feature was the use of a phase-stepped reference beam and real-time digital computation to generate the equivalent of the reconstruction from holographic recording. The utility of phase stepping in digital holography was recognized by Yamaguchi and Zhang [6] in 1997.

0003-6935/09/193565-05$15.00/0 © 2009 Optical Society of America
1 July 2009 / Vol. 48, No. 19 / APPLIED OPTICS 3565

The essential distinction between digital holography, as descended from Schnars and Jüptner, and electronic holography, as descended from the ELHolo systems, is that the former separates the image plane from the detector plane and then recovers the image by numerical processing. With the ELHolo-derived systems, it was never considered necessary to vary the focal plane numerically because of the availability of a wide variety of excellent lenses. From the outset, however, they provided recordings that could be manipulated to accomplish that operation. These recordings are obtained from a routine called Time Lapse Save, which saves the two components of a hologram recording that would be used either as a reference for a static displacement interferogram, or to study a time-evolving displacement, such as creep. The components are constructed from sets of four 90° phase-stepped recordings by subtracting the first and third and the second and fourth in the sequence. When combined as the real and imaginary parts of a complex function, the result corresponds directly to an optical wavefront that can be propagated by numerical methods. The purpose of this paper is to present examples of numerical refocusing using data from the current program for this system, HoloFringe300K.
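The construction just described can be sketched in a few lines. The following is a minimal NumPy illustration (not the HoloFringe300K code itself); it assumes the four 90° phase-stepped intensity frames are already available as arrays:

```python
import numpy as np

def complex_field_from_steps(i1, i2, i3, i4):
    """Combine four 90-degree phase-stepped intensity frames into a
    complex field: (i1 - i3) becomes the real part and (i2 - i4) the
    imaginary part, following the Time Lapse Save construction."""
    i1, i2, i3, i4 = (np.asarray(x, dtype=float) for x in (i1, i2, i3, i4))
    return (i1 - i3) + 1j * (i2 - i4)
```

For frames of the form I_n = A + B cos(φ + nπ/2), n = 0…3, this yields 2B exp(−iφ), i.e., the object phase is recovered up to a sign convention and a constant scale.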

2. Experimental Setup

The setup for these recordings is shown in Fig. 1, which illustrates the components of the K100/HOL optical head for electronic holography. For this work, the usual video zoom lens was replaced with a Computar TEC-M55 lens that provided highly magnified images with a working distance of about 135 mm. The high magnification was necessary in order to obtain sufficient depth variation in the focal plane so that two objects could be viewed with one in focus and the other out of focus. Relative to the diagram of Fig. 1, the camera module was angled so that it could view an object within the illumination beam. The object consisted of two 0.635 mm diameter vertical metal pins, coated with a thin layer of white powder (Spot Check Developer) and spaced 4.0 mm apart in depth. They were angled so that both were in the view of the camera and placed on a translation stage so that they could be focused manually.

Figure 2 shows an image recorded of the object with the camera focused halfway between the two pins so that each image is out of focus. Figure 3 shows images with each pin mechanically focused. Note the shift of the images to the left as the focus is moved from the left pin to the right pin. This is because the translation stage was not exactly aligned along the lens axis. Figure 4 shows the results of numerical refocusing of data obtained with the camera focused in the plane corresponding to Fig. 2. Note that here there is no lateral shift of the images with refocusing, since each used data with the camera focused at the plane between the pins shown in Fig. 2.

3. Mathematical Analysis

The mathematical process used here differs slightly from those presented in the literature [7,8]. This method is based upon the representation of the optical field by a spectrum of plane waves, the theoretical basis of which was derived by Lalor [9]. Given an optical field, $S(x,y)$, in an $x$-$y$ plane as shown in Fig. 5, we may obtain its plane wave spectrum by taking its Fourier transform with respect to the propagation variables, $k_x$ and $k_y$. Let $S(k_x,k_y)$ be the transform defined by

Fig. 1. Component layout for the holographic optical head.

Fig. 2. Image of two pins where the camera is focused between them.

Fig. 3. Mechanical focusing of two pins: (a) shows the rear pin in focus while (b) shows the front pin in focus.


$$S(k_x,k_y) = \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} S(x,y)\,\exp(-i\,\mathbf{k}\cdot\mathbf{r})\,dx\,dy. \qquad (1)$$

The vector $\mathbf{k}$ is defined as $\mathbf{k} = \mathbf{i}k_x + \mathbf{j}k_y + \mathbf{k}k_z = (2\pi/\lambda)(\mathbf{i}\cos\theta_x + \mathbf{j}\cos\theta_y + \mathbf{k}\cos\theta_z)$, with $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ being unit vectors in the $x$, $y$, and $z$ directions, where the space vector $\mathbf{r} = \mathbf{i}x + \mathbf{j}y + \mathbf{k}z$. The magnitude of $\mathbf{k}$ is $2\pi/\lambda$, where $\lambda$ is the wavelength of light. From this spectrum, the field at a displaced distance, $z$, may be computed as

$$S(x,y,z) = \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} S(k_x,k_y)\,\exp(i\,\mathbf{k}\cdot\mathbf{r})\,dk_x\,dk_y. \qquad (2)$$

Defining $\cos\theta_x$, $\cos\theta_y$, and $\cos\theta_z$ as the direction cosines for propagation of a plane wave in the coordinate system shown in Fig. 5, we may write the argument of the exponential function as

$$\mathbf{k}\cdot\mathbf{r} = k_x x + k_y y + k_z z = (2\pi/\lambda)(x\cos\theta_x + y\cos\theta_y + z\cos\theta_z). \qquad (3)$$

Digitizing the optical field is equivalent to passing it through a set of apertures, as shown in Fig. 5, located at the centers of the camera detectors. Representing the complex optical field $S(x,y)$ as $f(m,n)$, its digitized values at the locations of the camera pixels, we may compute its digital spectrum by a fast Fourier transform (FFT) as

$$F(j,k) = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} f(m,n)\,\exp\!\left[-i2\pi\!\left(\frac{jm}{M} + \frac{kn}{N}\right)\right]. \qquad (4)$$

Given this discrete spectrum, we may calculate, approximately, the field propagation in either direction along the $z$ axis by taking the inverse fast Fourier transform (IFFT) of $F(j,k)$ times the proper exponential phase factor. Although this will not give the true optical field, it will yield a reasonable approximation.
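The double sum of Eq. (4) is exactly what a standard FFT routine computes. As a quick check (a sketch added here, not part of the original system), the brute-force sum can be compared against NumPy's library transform on a small random array:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 5
f = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

# Brute-force evaluation of the double sum in Eq. (4)
F = np.zeros((M, N), dtype=complex)
for j in range(M):
    for k in range(N):
        for m in range(M):
            for n in range(N):
                F[j, k] += f[m, n] * np.exp(-2j * np.pi * (j * m / M + k * n / N))

# NumPy's fft2 uses the same exponential kernel, so the two spectra agree
assert np.allclose(F, np.fft.fft2(f))
```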

The problem is now to make the proper identification among the components of Eqs. (1)–(4). Substituting Eq. (3) into Eq. (2) and comparing terms, we may write

$$\frac{2\pi}{\lambda}\cos\theta_x\,x = \frac{2\pi\,jmC}{M}, \qquad (5a)$$

Fig. 4. Mathematical focusing of the two pins: (a) shows mathematical focusing of the left pin and (b) of the right pin.

Fig. 5. Coordinate system for a digitized optical field. The circles mark the locations of the camera pixels in the $x$-$y$ plane.


$$\frac{2\pi}{\lambda}\cos\theta_y\,y = \frac{2\pi\,knC}{N}. \qquad (5b)$$

The constant $C$ is introduced to allow the indices $m$ and $n$ to correspond to actual displacements of $x$ and $y$. Figure 6 illustrates one horizontal line in Fig. 5, with the black circles representing the camera pixels and the two diagonal lines representing propagating wavefronts. At the maximum angle of propagation, $\theta_{\max}$, there will be one wavelength separation between the wavefronts passing through adjacent pixels, which are separated by the spacing $p$. From this we may write

$$\cos\theta_{\max} = \lambda/p. \qquad (6)$$

From Eqs. (5), however, we have

$$\cos\theta_{\max} = jmC/M = knC/N. \qquad (7)$$

The summations in Eq. (4) run from $0$ to $M-1$ and $N-1$ for mathematical convenience. Physically, we will consider the plane waves as propagating equally in positive and negative directions relative to the $x$ and $y$ axes. For this reason, we will take the maximum propagation directions as corresponding to the indices $m = M/2$ and $n = N/2$. From this, we may solve for $C$ as

$$C = 2\lambda/p. \qquad (8)$$

From this we may write

$$\cos\theta_x = 2j\lambda/pM, \qquad \cos\theta_y = 2k\lambda/pN. \qquad (9)$$
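A quick numeric check of Eq. (9) against the sampling limit of Eq. (6) can be sketched as follows; the wavelength and pixel pitch values are assumptions for illustration only (they are not given in the paper):

```python
import numpy as np

lam = 0.633e-6  # wavelength in meters (assumed HeNe value, illustrative)
p = 10e-6       # pixel pitch in meters (assumed value)
M = 512         # number of samples across the detector

j = np.arange(M // 2 + 1)            # spectral indices 0 .. M/2
cos_theta_x = 2 * j * lam / (p * M)  # Eq. (9)

# At j = M/2 the direction cosine reaches the maximum of Eq. (6)
assert np.isclose(cos_theta_x[-1], lam / p)
```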

Since the sum of the squares of the three direction cosines is unity, Eq. (9) allows solving for the parameter $k_z$ as

$$k_z = \frac{4\pi}{p}\left[\left(\frac{p}{2\lambda}\right)^2 - \left(\frac{j}{M}\right)^2 - \left(\frac{k}{N}\right)^2\right]^{1/2}. \qquad (10)$$

Equation (10) makes it possible to compute the matrix of values:

$$\exp(i k_z(j,k)\,z) = \exp\!\left\{ i z \frac{4\pi}{p}\left[\left(\frac{p}{2\lambda}\right)^2 - \left(\frac{j}{M}\right)^2 - \left(\frac{k}{N}\right)^2\right]^{1/2}\right\}. \qquad (11)$$

The propagated field may now be written as

$$S(x,y,z) = \mathrm{IFFT}\{\mathrm{FFT}[S(x,y)]\,\exp(i k_z(j,k)\,z)\}. \qquad (12)$$
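Equations (10)–(12) translate directly into a few lines of array code. The following is a minimal NumPy sketch (the paper itself used DADiSP; the function name and signature here are illustrative):

```python
import numpy as np

def refocus(field, z, lam, p):
    """Propagate a sampled complex field a distance z using the
    plane-wave spectrum of Eqs. (10)-(12).

    field : M x N complex array sampled at pixel pitch p
    z     : propagation distance (same length units as lam and p)
    lam   : wavelength
    p     : pixel pitch
    """
    M, N = field.shape
    # np.fft.fftfreq(M) returns the signed ratios j/M in FFT order,
    # matching the positive/negative propagation directions assigned
    # to the spectral indices of Eq. (10).
    jM = np.fft.fftfreq(M)[:, None]
    kN = np.fft.fftfreq(N)[None, :]
    # Eq. (10); a negative argument would correspond to evanescent
    # waves, which are clipped to zero here as a simplifying assumption.
    arg = (p / (2.0 * lam)) ** 2 - jM ** 2 - kN ** 2
    kz = (4.0 * np.pi / p) * np.sqrt(np.clip(arg, 0.0, None))
    # Eq. (12): forward FFT, phase factor of Eq. (11), inverse FFT.
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))
```

Because the phase factor of Eq. (11) has unit magnitude for propagating waves, propagating by z and then by −z returns the original field, which provides a simple self-check of an implementation.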

4. Procedure

The Time Lapse Save routine of the HoloFringe300K program saves holographic images as files with the designation *0_###.HOL and *1_###.HOL. The star indicates any identifying name, and the # signs indicate numbers in the sequence of holograms captured by the program. The numbers 0 and 1 indicate the real and imaginary parts of the complex field amplitude. The DADiSP program reads these files as a series of numbers into two of its windows. The serial data in these windows is raveled into a matrix in the neighboring windows. The two are combined as real and imaginary parts in the next window, with the data from 1_###.HOL being multiplied by i, the square root of minus 1. A two-dimensional transform, FFT2, is performed on this complex function in the next window. The matrix of kz values is computed in a separate worksheet and imported into a separate window. The next window reads the kz values, constructs the exponential function of Eq. (11), multiplies it by the results of the FFT2, and performs the inverse transform IFFT2. The magnitude of this result is computed in the next window, and this is unraveled into a series for writing to an output file. The parameter z in Eq. (11) is adjusted by trial and error for the best image in the magnitude window.

One caution must be noted. The *.HOL files stored by the HoloFringe300K program are inverted relative to a normal image display because they were originally not intended for external use. As a result, the DADiSP output images, when read by an external image processing program, appear upside down. This can be easily corrected by most image processing programs.
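The procedure above can be condensed into a single routine. Because the byte-level layout of the *.HOL files is not described here, the sketch below assumes the two components have already been loaded as 2-D float arrays; it combines them, propagates by the plane-wave spectrum of Eqs. (10)–(12), flips the result right side up, and returns a normalized-variance sharpness score as an automated stand-in for the paper's visual trial-and-error choice of z. All names and the scoring metric are illustrative assumptions:

```python
import numpy as np

def refocus_magnitude(re_part, im_part, z, lam, p):
    """Form the complex field from the two Time Lapse Save components,
    propagate it a distance z (Eqs. (10)-(12)), flip the inverted data
    right side up, and return the magnitude image together with a
    sharpness score (normalized variance; an assumed focus metric)."""
    field = np.asarray(re_part, dtype=float) + 1j * np.asarray(im_part, dtype=float)
    M, N = field.shape
    jM = np.fft.fftfreq(M)[:, None]
    kN = np.fft.fftfreq(N)[None, :]
    kz = (4 * np.pi / p) * np.sqrt(
        np.clip((p / (2 * lam)) ** 2 - jM ** 2 - kN ** 2, 0.0, None))
    mag = np.abs(np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z)))
    mag = np.flipud(mag)  # *.HOL data are stored inverted for normal display
    return mag, mag.var() / mag.mean() ** 2
```

A scan over candidate distances, e.g. `max(zs, key=lambda z: refocus_magnitude(re, im, z, lam, p)[1])`, then replaces the manual adjustment of z in the magnitude window.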

5. Conclusions

It has been demonstrated that data captured by the HoloFringe300K electronic holography program can be mathematically processed to refocus recorded images. A mathematical method for doing this is presented that uses an FFT to decompose the initial field data into a spectrum of plane waves, models propagation of these plane waves, and reconstructs the image after propagation via an IFFT.

References

1. J. W. Goodman and R. W. Lawrence, "Digital image formation from electronically detected holograms," Appl. Phys. Lett. 11, 77–79 (1967).

Fig. 6. Wavefronts propagating relative to the camera pixels.


2. U. Schnars and W. Jüptner, "Direct recording of holograms by a CCD target and numerical reconstruction," Appl. Opt. 33, 179–181 (1994).

3. J. N. Butters and J. A. Leendertz, "Holographic and video techniques applied to engineering measurements," Meas. Control 4, 349–354 (1971).

4. A. Macovski, S. D. Ramsey, and L. F. Schaefer, "Time-lapse interferometry and contouring using television systems," Appl. Opt. 10, 2722–2727 (1971).

5. K. A. Stetson, W. R. Brohinsky, J. Wahid, and T. Bushman, "An electro-optic holography system with real-time arithmetic processing," J. Nondestruct. Eval. 8, 69–76 (1989).

6. I. Yamaguchi and T. Zhang, "Phase shifting digital holography," Opt. Lett. 22, 1268–1270 (1997).

7. G. Pedrini and H. J. Tiziani, "Digital holographic interferometry," in Digital Speckle Pattern Interferometry and Related Techniques, P. K. Rastogi, ed. (Wiley, 2001), pp. 337–362.

8. W. Osten and P. Ferraro, "Digital holography and its application in MEMS/MOEMS inspection," in Optical Inspection of Microsystems, Vol. 109 of Optical Science and Engineering Series, W. Osten, ed. (CRC Press, 2006), pp. 351–425.

9. E. Lalor, "Conditions for the validity of the angular spectrum of plane waves," J. Opt. Soc. Am. 58, 1235–1237 (1968).
