Depth from Dynamic (de)focused Projection

Intuon Lertrusdachakul, Yohan D. Fougerolle, and Olivier Laligant

Le2i Laboratory, 12 rue de la fonderie, Le Creusot 71200, France. Email: [email protected]

Abstract

This paper presents a novel 3D reconstruction approach using dynamic (de)focused light. The method combines depth from focus (DFF) and depth from defocus (DFD) techniques. To overcome the drawbacks related to surface reflectivity found in traditional methods, optimized illumination patterns are projected onto the object in order to enforce a strong dominant texture and to cover the object surface as much as possible. The image acquisition system is specifically constructed to keep the whole object sharp in all captured images; therefore, only the projected patterns undergo different defocused deformations according to the object depths. The light pattern is projected onto the object over certain focused ranges, similarly to the DFF approach, while the blur levels used to generate the depth map are calculated based on a Point Spread Function (PSF) model, as in DFD methods. The final depth is then assigned to specific pixel coordinates using pre-defined overlapping pixel weights. With this approach, the final reconstruction is expected to be superior to the one obtained from DFD alone, because at least one focused or near-focused image within the depth of field exists in the computation. Moreover, it is also less computationally expensive than DFF, which requires numerous input images. Experimental results on real images demonstrate the effective performance of our method: it provides reliable depth estimation with reasonable computation time.

Keywords: focus, depth from defocus, active illumination pattern, range sensor, blur estimation, 3D reconstruction

1 Introduction

Within the expansive field of computer vision, 3D reconstruction is one of the most important topics and continuously attracts researchers' interest. It is the problem of recovering depth information from 2D images. Applications can be found in real-world examples ranging from large scale, such as feature extraction in video surveillance, to small scale, such as microbiological analysis. Several approaches have been broadly developed; however, there is still no unique satisfactory solution for all kinds of scenes.

In the literature, there is a clear distinction between passive and active range sensors, depending on whether an active illumination pattern is employed. Passive techniques such as stereo and shape from motion use at least two images to perform multiple-view correspondence matching [1]. The depth is extracted from either disparity or motion vectors after matching. The main drawbacks of these techniques are that correspondence matching and feature tracking are computationally expensive, and that occlusion problems occur in scene areas that are visible to only one camera. Other passive techniques include shape from shading and shape from texture, which resolve depth ambiguities from a single image; however, these techniques are only complementary to other strategies. Recently, another passive technique, shape from focus/defocus, has gained remarkable attention. Depth from focus (DFF) requires several images to be taken with small increments of the focus setting [2], [3]. Depth is estimated by searching for the best focused point through the image stack. Meanwhile, depth from defocus (DFD) can use only two images with different optical geometric settings to evaluate the difference of blur level between each point in the defocused images [4], [5], [6], [7]. Therefore, DFD has an advantage over DFF during the image acquisition process when scene objects may change their position dynamically. However, it is also computationally expensive to return a reliable depth map. Overall, the common bottleneck shared among all passive techniques is that the depth cannot be computed accurately in the case of weakly textured or textureless scenes [8]. Active techniques use active illumination and are generally based on the principles of structured light and time of flight. Among the structured light methods, light striping, Moiré interferometry, and Fourier-transform profilometry are the most popular [9]. Depth can be extracted from the deformation of the projected pattern, based on triangulation [10], [14].


Nevertheless, this yields only a relative depth, not an absolute depth. Other techniques use time of flight: the depth is estimated by measuring the travel time of a modulated laser beam from emission to its return from the object surface [9]. This method provides a useful rough depth map for scenes at typical distances, but it can take a very long scanning time.

Our approach, a novel active range sensor, combines depth from focus and depth from defocus. Projected light pattern images are acquired within certain focused ranges, similarly to the DFF approach, while the focus measures across these images are calculated for depth estimation in a DFD manner [11]. The key idea is that we assume the Point Spread Function (PSF) to be a Gaussian function. Therefore, once the projected patterns are out of focus, we can retrieve the depth information through the relationship between the extracted Gaussian spread parameters and the size of the blurs.

The structure of the paper is as follows. We begin with the theory of shape from focus and defocus. Based on this fundamental analysis, in section 3 we explain our novel dynamic defocus method. We describe the implementation and the obtained experimental results in section 4. In the last section, we conclude the paper by summarizing the performance of this new technique and discuss further research perspectives. The contributions of this paper are the demonstration of a novel 3D retrieval approach based on active structured light and the investigation of its benefits over other 3D recovery methods.

2 A Theory of Shape from Focus and Defocus

Figure 1 represents the image formation in a convex lens. The object appears sharp if each point on the object plane is projected onto the image plane. However, if the sensor plane and the image plane are misaligned, the image is distributed over a circular patch, called the circle of confusion (CoC), on the sensing element, resulting in a blurred image [8]. The blur level is a function of the distance between the sensor plane and the image plane: the larger the distance, the stronger the blur. Therefore, the blur level can be determined from the radius of the CoC. Since we assume the PSF model to be a Gaussian function, the spread parameter can be obtained by fitting the PSF of the defocused image to the Gaussian model. The depth can then be estimated through the relationship between the extracted spread parameter and the blur radius.

Consider the camera geometry in figure 1. To define the radius of the circle of confusion, we apply similar triangles:

\tan\alpha = \frac{D/2}{v} = \frac{d/2}{s - v}    (1)

where D is the lens aperture diameter, u the object distance, v the image distance, and s the lens-to-sensor distance. By employing the lens law,

\frac{1}{f} = \frac{1}{u} + \frac{1}{v}    (2)

the diameter and radius of the CoC become

d = Ds\left(\frac{1}{f} - \frac{1}{u} - \frac{1}{s}\right)    (3)

R = \frac{Ds}{2}\left(\frac{1}{f} - \frac{1}{u} - \frac{1}{s}\right).    (4)
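As a quick numerical illustration of Equations (3) and (4) (a sketch only; the function names and parameter values below are hypothetical, not taken from the paper), the blur radius vanishes at the focused distance and grows as u moves away from it:

    # Illustrative sketch: blur-circle diameter and radius from Eqs. (3)-(4).
    # All lengths are in millimetres.
    def coc_diameter(D, f, u, s):
        """Diameter of the circle of confusion, d = D*s*(1/f - 1/u - 1/s) (Eq. 3)."""
        return D * s * (1.0 / f - 1.0 / u - 1.0 / s)

    def coc_radius(D, f, u, s):
        """Radius of the circle of confusion, R = d/2 (Eq. 4)."""
        return 0.5 * coc_diameter(D, f, u, s)

    # Hypothetical values: 135 mm lens at f/2.8, sensor positioned for focus at 1 m.
    f, D = 135.0, 135.0 / 2.8
    s = 1.0 / (1.0 / f - 1.0 / 1000.0)         # lens law: sensor distance for u = 1000 mm
    for u in (900.0, 1000.0, 1100.0):          # distances in mm
        print(u, abs(coc_radius(D, f, u, s)))  # R = 0 at the focused distance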

The depth can also be derived in terms of the optical parameters as follows:

\frac{FOV}{u} = \frac{\varsigma}{v}, \qquad \frac{1}{v} = \frac{1}{f} - \frac{1}{u} \;\Rightarrow\; v = \frac{fu}{u - f} \quad \text{(lens law)}    (5)

where FOV is the field of view and \varsigma is the sensor size. The object distance u then becomes

u = \frac{FOV \times f}{\varsigma} + f    (6)

In addition, a precise measurement requires at least two pixels to represent the smallest feature (or pattern) to be detected. This can be expressed as

\text{sensor resolution } (S) = \frac{FOV}{\text{resolution}} \times 2 = \frac{FOV}{\text{size of smallest feature}} \times 2    (7)

In this case, the size of the smallest feature is the diameter d of the light pattern projected onto the object (Equation 3). Therefore, Equation 7 can be rearranged into

FOV = \frac{S \times d}{2}    (8)

We can now derive the working distance, which directly relates to the real depth of the object, as

u = \frac{S d f}{2\varsigma} + f    (9)

Taking the magnification of the optical system into account,

\text{magnification ratio } (\gamma) = \frac{d'}{d} = \frac{w'}{w} \;\Rightarrow\; d = \frac{d'}{\gamma}    (10)


Figure 1: Camera geometry

where
  d  is the size of the pattern seen on the object,
  d′ is the size of the pattern seen on the sensor,
  w  is the distance between the object and the lens,
  w′ is the distance between the lens and the sensor,
  γ  is the magnification ratio.

Therefore, the object distance becomes

u = \frac{S d' f}{2\varsigma\gamma} + f    (11)

In our derived formulas, the pattern sizes d and d′ (seen on the object and on the sensor, respectively) and the working distance u are expressed in mm. In practice, however, the output image of the sensor is given in pixels; the conversion between units can be obtained from the camera specifications. The accuracy is the difference between two working distances (i.e., two depths of the object) that corresponds to a single-pixel change in the pattern image:

\text{Accuracy} = \frac{K S f}{2\varsigma\gamma}    (12)

where K is the (constant) ratio of sensor resolution to sensor size.
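The sketch below shows how Equations (11) and (12) could be evaluated in code (a minimal sketch; the helper names are hypothetical, and the symbols follow the text above with the sensor size ς denoted sigma_s):

    # Illustrative sketch of Eqs. (11)-(12); not the authors' implementation.
    # S: sensor resolution, sigma_s: sensor size (mm), gamma: magnification ratio,
    # f: focal length (mm), d_prime: pattern size seen on the sensor (mm),
    # K: ratio of sensor resolution and sensor size (constant, see text).
    def object_distance(S, d_prime, f, sigma_s, gamma):
        """Working distance u = S * d' * f / (2 * sigma_s * gamma) + f (Eq. 11)."""
        return S * d_prime * f / (2.0 * sigma_s * gamma) + f

    def depth_accuracy(K, S, f, sigma_s, gamma):
        """Depth difference corresponding to a one-pixel change of the pattern (Eq. 12)."""
        return K * S * f / (2.0 * sigma_s * gamma)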

3 The Proposed Shape Recovery Method

3.1 Dynamic Light Pattern

An effective solution to the surface reflectivity problem is to introduce active structured light. Weakly textured and textureless surfaces provide insufficient detail for depth estimation, because focused and defocused regions give the same representation; this inherent weakness is shared by all passive techniques. A highly textured light pattern is therefore forced onto the object, making the overall depth recovery system more reliable and precise. Moreover, to avoid rotation variance, the pattern should be designed with a symmetrical or semi-symmetrical arrangement. In our experiment, we employ both horizontal and vertical stripe patterns with different shifts in order to cover the reconstructed area of the object as much as possible.

3.2 Proposed Approach

A prototype of the range sensor has been developed. The key idea is the combination of depth from focus and depth from defocus with the use of a dynamic light pattern. In our experiment, we use a video projector as the light source, because it produces strong projected light patterns and eases further pattern modifications to suit different kinds of surface textures.

As shown in figure 2, which presents the overall design of the model, a semi-transparent screen is placed after the light source to reduce the intensity saturation caused by the powerful projector light on the object. It also mitigates the magnification problem caused by the fact that the projected light patterns are originally very small at the video projector compared to the ones projected on the object (without passing through the screen). A beam splitter is used to transmit the light patterns onto the object; it also relays the image of the object to the camera. The whole setup can be separated into two systems.

First, the defocus system starts with the light pattern projected through the specifically constructed optical system onto the object. The optical path of this system is illustrated by line (a) in figure 2. In the ideal case, where the object is placed at or very close to the surface of best focus, the output image formed on the sensor is sharp, i.e., identical to the input.


Figure 2: Proposed approach model

The relationship between them can be simplified to

I = I_0 * \delta \;\xrightarrow{\;\mathcal{F}\;}\; I = I_0    (13)

However, our concern is the deformation that occurs when the screen displacement varies over different depth-of-field ranges. The blurring function then has an influence on the system and needs to be taken into account. The defocused output image can be rewritten as the convolution of the input image with a blurring function h as follows:

I = I_0 * h(e, u),    (14)

where
  e are the camera parameters,
  u is the distance from the light pattern on the screen to the additional lens.

The second system is the imaging system, which goes from the object with the projected light pattern to the sensor via the beam splitter. The optical path of this system is illustrated by line (b) in figure 2. This system is not concerned with image deformation, since the sensor directly outputs the images as they appear on the object: whether the patterns are sharp or blurred, the camera records them exactly as seen.
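As a rough simulation of this two-path model (a sketch under the assumption, made explicit in section 4.4, that the blurring function h is a Gaussian PSF; the spread values below are hypothetical), the defocus path blurs a synthetic stripe pattern while the imaging path returns it unchanged:

    # Illustrative simulation of Eq. (14): I = I0 * h(e, u), with h modelled as a
    # Gaussian PSF whose spread depends on the screen displacement (assumption).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def stripe_pattern(height=256, width=256, period=21):
        """Synthetic horizontal stripes: 1-pixel-wide lines with 20-pixel spacing."""
        img = np.zeros((height, width))
        img[::period, :] = 1.0
        return img

    I0 = stripe_pattern()
    for sigma in (0.5, 2.0, 4.0):               # hypothetical spread parameters
        I = gaussian_filter(I0, sigma=sigma)    # defocused pattern reaching the object
        # the imaging path (line (b)) then delivers I to the sensor unchanged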

4 Implementations and Results

In our experiment, a Dell DLP 2300 MP video projector with a resolution of 1024x768 pixels is used as the light source. Horizontal stripe illumination patterns with a width of 1 pixel and a spacing of 20 pixels are used. The beam of the projected light pattern then reaches the semi-transparent screen and an additional lens (Canon 135 mm). The light rays passing through the lens are split into two directions by the beam splitter: one part is projected onto the object, and the other relays the illuminated object onto the sensor. The scene is captured using a Canon EOS-1Ds camera with an attached 50 mm lens. The data flowchart in figure 3 describes the main operations.

Figure 3: Flowchart

4.1 Image Acquisition

The camera settings (e.g., F-number, ISO, shutter speed) are carefully tuned so that the system keeps the whole object sharp in all images, and only the defocused patterns experience varied deformation according to the object's depth. All optical components in this setup are fixed; only the semi-transparent screen is moved, resulting in several scene images with different blur levels. With this specific setting, we can analyze the defocus of the light pattern projected on the object, unlike traditional approaches that require moving several components (e.g., object, lens, sensor). Moreover, some image preprocessing is performed to prepare the input images for the system. First, we convert the images from RGB to grayscale. Then, we crop the images, keeping only the effective area, i.e., the region within the beam splitter. Adaptive Wiener filtering is also applied for smoothing and noise reduction.
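A minimal preprocessing sketch is given below (the paper's implementation is in Matlab; this Python version is illustrative, and the file names, crop bounds, and filter window are hypothetical):

    # Illustrative preprocessing: grayscale conversion, cropping to the effective
    # area inside the beam splitter, and adaptive Wiener filtering.
    import numpy as np
    from scipy.signal import wiener
    from imageio.v3 import imread                      # assumed I/O library

    def preprocess(path, crop=(slice(300, 1700), slice(600, 2400))):
        rgb = imread(path).astype(float)
        gray = rgb @ np.array([0.299, 0.587, 0.114])   # RGB -> grayscale
        roi = gray[crop]                               # keep only the beam-splitter region
        return wiener(roi, mysize=(5, 5))              # smoothing / noise reduction

    # Hypothetical file names, one image per screen displacement.
    stack = [preprocess(f"screen_{k:02d}.png") for k in range(10)]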

4.2 Image Profile Analysis

Profile analysis is performed to extract the intensity values along a multiline path of the images. The algorithm computes equally spaced points along the specified path and uses interpolation to determine the image intensity at each point. It is done in the direction orthogonal to the axis of the pattern projection: when the projected pattern consists of horizontal stripes, the vertical profile is analyzed column by column, while for vertical stripe patterns the horizontal profile is analyzed row by row. The output is stored in profile stacks together with the corresponding intensities and pixel coordinates.
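For horizontal stripes, the column-by-column extraction can be sketched as follows (illustrative only; the authors interpolate along arbitrary multiline paths, whereas this sketch simply samples whole image columns):

    # Illustrative profile stack for a horizontal-stripe pattern: one intensity
    # profile per image column, taken orthogonally to the stripes.
    import numpy as np

    def column_profiles(image):
        """Return a list of (intensities, pixel_rows) pairs, one per column."""
        rows = np.arange(image.shape[0])
        return [(image[:, c], rows) for c in range(image.shape[1])]

    # profiles = column_profiles(stack[0])   # e.g., on the preprocessed stack above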

4.3 Pattern Localization by Absolute Peak Determination

This technique is used to determine the approximate coordinates within each profile obtained previously. For each profile, we extract local maxima and minima with an absolute peak detection algorithm. We prefer a non-derivative method, since finding zero-crossings of the first derivative yields false results in the presence of noise. The method simply searches for peaks, i.e., the highest points between valleys, and valleys, i.e., the lowest points between peaks. To accept peaks and valleys as local maxima and minima, we require the difference between them and their surroundings to be at least a preset threshold. The algorithm is robust and works well even in extreme cases. In most cases, the image profile near these local minima and maxima contains large intensity fluctuations due to noise at the boundaries between stripe patterns. To solve this problem, we simply shift the minima to new consideration points on both the left and the right by a constant number of pixels (delta), as shown in figure 4. In addition, when the object shape varies over a wide range of depths, the blur levels in the far-from-focus regions are considered as noise. Therefore, in order to eliminate these highly defocused profiles, a contrast criterion is also applied: by setting a significant contrast, we can discard defocused profiles in which the ratio between maxima and minima is lower than this contrast. After determining the proper cutoffs in each profile, we obtain isolated light patterns from which the PSF is computed individually.
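The following sketch illustrates the non-derivative peak/valley search, the delta shift of the minima, and the contrast criterion described above (threshold, delta, and contrast values are hypothetical, not the paper's settings):

    # Illustrative absolute peak/valley detection on a single intensity profile.
    import numpy as np

    def find_peaks_valleys(profile, threshold=10.0):
        """Alternately look for the highest point between valleys and the lowest
        point between peaks; accept a candidate only when the profile has moved
        away from it by at least `threshold`."""
        peaks, valleys = [], []
        candidate, look_for_peak = 0, True
        for i in range(1, len(profile)):
            if look_for_peak:
                if profile[i] > profile[candidate]:
                    candidate = i
                elif profile[candidate] - profile[i] >= threshold:
                    peaks.append(candidate); candidate = i; look_for_peak = False
            else:
                if profile[i] < profile[candidate]:
                    candidate = i
                elif profile[i] - profile[candidate] >= threshold:
                    valleys.append(candidate); candidate = i; look_for_peak = True
        return peaks, valleys

    def isolate_patches(profile, peaks, valleys, delta=3, contrast=1.5):
        """Shift the cut points away from the noisy stripe boundaries by `delta`
        pixels and keep only patches whose max/min ratio exceeds `contrast`."""
        patches = []
        for p in peaks:
            left = max((v for v in valleys if v < p), default=0) + delta
            right = min((v for v in valleys if v > p), default=len(profile) - 1) - delta
            if right > left and profile[p] / max(profile[left:right + 1].min(), 1e-6) >= contrast:
                patches.append((left, right))
        return patches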

4.4 Gaussian Fitting

The blurring function is also called the pillbox, since the light energy radiated by a surface point (or, in this case, by the illumination pattern) is collected and seen uniformly distributed over a circular patch on the object. This PSF is generally assumed to be Gaussian when paraxial geometric optics is used and diffraction effects are negligible. The optical transfer function (OTF) of the PSF is its Fourier transform. Therefore, the defocused output image can be written as

I = I_0 * \text{PSF} \;\xrightarrow{\;\mathcal{F}\;}\; I = I_0 \times \text{OTF}.    (15)

We assume the PSF to be a Gaussian function,

\text{PSF} = h(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left( -\frac{x^2 + y^2}{2\sigma^2} \right).    (16)

By performing the Fourier transform, we obtain the OTF:

\text{OTF} = H(\omega_1, \omega_2; e, u) = \exp\!\left( -\tfrac{1}{2}\,\rho^2(\omega_1, \omega_2)\,\sigma^2(e, u) \right),    (17)

H(\omega_1, \omega_2) = \exp\!\left( -\frac{\omega_1^2 + \omega_2^2}{2}\,\sigma^2 \right),    (18)

where σ is the spread parameter, σ = CR = R/\sqrt{2} with C > 0, and R is the radius of the circle of confusion.

Therefore, from Equation 4, the defocused output image becomes

I = I_0 \exp\!\left( -\frac{1}{\sqrt{2}}\, \frac{\omega_1^2 + \omega_2^2}{2}\, \frac{Ds}{2}\left( \frac{1}{f} - \frac{1}{u} - \frac{1}{s} \right) \right)    (19)

From the different pattern patches isolated by the local minima, we determine the PSF of each patch individually. The spread parameters σ are then extracted by fitting the PSF to the Gaussian model, as exemplified in figure 5.

Figure 5: Example of Gaussian fitting


Figure 4: Local maxima and minima in pattern localization

The spread parameter, which is associated with the radius of the CoC, indicates the blur level in the defocused images. Therefore, the depth can be deduced and assigned back to the pixel coordinates of the peak (local maximum) defined earlier. We iterate this algorithm for all light patterns covering the whole object.
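A minimal fitting sketch is shown below (illustrative; it fits a 1D Gaussian profile with scipy's curve_fit to one isolated patch, and the initial guesses are assumptions):

    # Illustrative extraction of the spread parameter sigma from one isolated
    # pattern patch by least-squares fitting of a Gaussian profile.
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(x, a, mu, sigma, offset):
        return a * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) + offset

    def fit_sigma(patch):
        """Return the fitted spread parameter of one intensity patch."""
        x = np.arange(len(patch))
        p0 = (patch.max() - patch.min(), float(patch.argmax()), 2.0, patch.min())
        popt, _ = curve_fit(gauss, x, patch, p0=p0)
        return abs(popt[2])

    # sigma relates to the blur radius (sigma = R / sqrt(2)) and hence to depth.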

4.5 Weighted Overlapping Pixel Algorithm

When the semi-transparent screen is moved too close to or too far from the object, the degree of blur increases inadmissibly; only particular screen displacements provide blurred images that are useful for depth estimation. Therefore, in order to avoid this problem, we define a confidence-weighted mask of size 1x3 that gives priority to certain defocused images. This mask contains confidence weights with the highest value at the center (highest priority) and decreasing values on both sides. The mask passes row-wise through the stack of depth maps before the weighted average of the overlapping pixels is computed as the final depth. Figure 6 demonstrates how the weighted overlapping pixel algorithm works. We can then draw the final reconstruction in 3D.
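One possible realization of the weighted overlap is sketched below (the 1x3 weights and the handling of pixels without a depth estimate are assumptions, not the authors' exact choices):

    # Illustrative weighted-overlapping-pixel fusion of a stack of depth maps.
    import numpy as np

    def fuse_depths(depth_stack, weights=(0.25, 0.5, 0.25)):
        """depth_stack: array (n_images, rows, cols) with NaN where no depth was
        estimated; a sliding 1x3 confidence mask (highest weight at the centre)
        is applied along the stack axis and the weighted average is returned."""
        d = np.asarray(depth_stack, dtype=float)
        w = np.asarray(weights, dtype=float)[:, None, None]
        num = np.zeros(d.shape[1:])
        den = np.zeros(d.shape[1:])
        for k in range(1, d.shape[0] - 1):
            window = d[k - 1:k + 2]                # three consecutive depth maps
            valid = ~np.isnan(window)
            num += np.nansum(window * w, axis=0)   # NaNs contribute nothing
            den += np.sum(w * valid, axis=0)
        fused = np.full(d.shape[1:], np.nan)
        mask = den > 0
        fused[mask] = num[mask] / den[mask]
        return fused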

4.6 Experimental Results

We conducted experiments using a simple metallic staircase as the object. A sequence of light pattern images is acquired at different physical screen displacements (0.5 cm between consecutive screen positions). Due to some constraints of our optical setup (e.g., distortion of the additional lens and the beam splitter size), the effective reconstruction area is limited to the center of the beam splitter.

Figure 6: Weighted overlapping pixel algorithm

The input images are then put into a stack for profile analysis. For every single intensity profile, we locate the patterns using absolute peak determination. After isolating each light pattern, we define the PSF of each blur patch and extract the spread parameters by fitting the PSF to the Gaussian model, as shown in figure 5. The depth is then calculated with our derived Equation 11. In the final step, we improve the result and evaluate the final depth by taking the priority weights into consideration.

In Figure 7(a), the rough 3D model presents a preliminary result obtained from our implementation. The depth map illustrated in Figure 7(b) demonstrates the effective performance of the method in the case of a planar structure: the real object depth and our estimated depth lie within close proximity of each other. For our object, a 4 cm metallic staircase (1 cm per step), the quantitative evaluation is as follows:
Average error: 0.0741 cm
RMSE: 0.0937
Standard deviation: 0.0509


Figure 7: Rough 3D reconstruction (a) and depth map (b)

The point cloud of our object reconstruction contains 1500 points. A denser point cloud and a higher quality 3D reconstruction can be obtained once the variety and the number of projected light patterns increase. Moreover, with non-optimized Matlab code, the program takes less than one minute of computation time (including profile analysis) per image.

5 Conclusion

Our proposed method can be employed as a stand-alone strategy that suffers neither from the correspondence problem nor from the occlusion problem. Applications can be found in biological specimen analysis, detection of defective metallic components, etc. Furthermore, by projecting a dynamic light pattern onto the object, the method can deal with difficult surfaces, including weakly textured (or textureless) and partially reflective surfaces. However, there are some limitations in our experiment. Since this work is a novel approach, we have concentrated on the mathematical model and the experimental setup. Our current work is mainly dedicated to improving the method and evaluating its performance, but these are works in progress and raise some issues. The whole setup, including the image acquisition system, optical components, and light patterns, has been designed specifically; it is therefore hard to reproduce the same system under the same environment in order to test competing approaches. Nevertheless, as a preliminary work, we are now developing the approach to perform comparisons in terms of accuracy. Several components of the setup also limit the size of the object and the maximum change of object depths. However, this is only a matter of the scale of the system: it can be addressed by using smaller or larger optical components for smaller or larger objects, respectively, while the algorithm remains unchanged. Therefore, these restrictions do not concern the methodology of the approach. A further improvement to consider is the choice of illumination patterns: in order to suit all kinds of scenes, light patterns should be applied progressively on different scene points depending on the nature of the surface texture.

References

[1] Y. Schechner and N. Kiryati, “Depth from defocus vs. stereo: How different really are they?” International Journal of Computer Vision, vol. 39, no. 2, pp. 141–162, 2000.

[2] J. Ens and P. Lawrence, “An investigation of methods for determining depth from focus,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 2, pp. 97–108, 1993.

[3] S. Nayar and Y. Nakagawa, “Shape from focus,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 8, pp. 824–831, 1994.

[4] A. Rajagopalan and S. Chaudhuri, “A variational approach to recovering depth from defocused images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1158–1164, 1997.

[5] M. Subbarao and G. Surya, “Depth from defocus: a spatial domain approach,” International Journal of Computer Vision, vol. 13, no. 3, pp. 271–294, 1994.

[6] Y. Xiong and S. Shafer, “Depth from focusing and defocusing,” in IEEE Conference on Computer Vision and Pattern Recognition, 1993, pp. 68–68.

[7] P. Favaro and A. Duci, “A theory of defocus via Fourier analysis,” in IEEE Conference on Computer Vision and Pattern Recognition, 2008, pp. 1–8.

[8] O. Ghita, P. Whelan, and J. Mallon, “Computational approach for depth from defocus,” Journal of Electronic Imaging, vol. 14, p. 023021, 2005.

[9] M. Watanabe, S. Nayar, and M. Noguchi, “Real-time computation of depth from defocus,” in Proceedings of SPIE: The International Society for Optical Engineering, 1996, pp. 14–25.

[10] M. Noguchi and S. Nayar, “Microscopic shape from focus using a projected illumination pattern,” Mathematical and Computer Modelling, vol. 24, no. 5–6, pp. 31–48, 1996.

[11] I. Lertrusdachakul, Y. Fougerolle, and O. Laligant, “A novel 3D reconstruction approach by dynamic (de)focused light,” in Proceedings of SPIE, vol. 7538, 2010, p. 75380L.

[12] P. Besl, “Active, optical range imaging sensors,” Machine Vision and Applications, vol. 1, no. 2, pp. 127–152, 1988.

[13] A. Pentland, T. Darrell, M. Turk, and W. Huang, “A simple, real-time range camera,” in IEEE Conference on Computer Vision and Pattern Recognition, 1989, pp. 256–261.

[14] P. Graebling, C. Boucher, C. Daul, and E. Hirsch, “3D sculptured surface analysis using a structured-light approach,” in Proceedings of SPIE, vol. 2598, 1995, p. 128.