
Marker Encoded Fringe Projection Profilometry for

    Efficient 3D Model Acquisition

    Budianto,1 Daniel P.K. Lun,1,* Tai-Chiu Hsung2

1 Centre for Signal Processing, Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
2 The Discipline of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong

    *Corresponding author: [email protected]

    Received Month X, XXXX; revised Month X, XXXX; accepted Month X,

    XXXX; posted Month X, XXXX (Doc. ID XXXXX); published Month X, XXXX

This paper presents a novel marker encoded fringe projection profilometry (FPP) scheme for efficient 3-dimensional (3D) model acquisition. Traditional FPP schemes can introduce large errors into the reconstructed 3D model when the target object has an abruptly changing height profile. In the proposed scheme, markers are encoded in the projected fringe pattern to resolve the ambiguities that such profiles introduce into the fringe images. Using the analytic complex wavelet transform, the marker cue information can be extracted from the fringe image and used to restore the order of the fringes. A series of simulations and experiments has been carried out to verify the proposed scheme. They show that the proposed method greatly improves the accuracy over traditional FPP schemes when reconstructing the 3D model of objects with abruptly changing height profiles. Since the scheme works directly in our recently proposed complex wavelet FPP framework, it inherits the same properties and can be used in real-time applications for color objects. © 2014 Optical Society of America

OCIS codes: (150.6910) Three-dimensional sensing; (100.2650) Fringe analysis; (100.5070) Phase retrieval; (100.5088) Phase unwrapping; (110.6880) Three-dimensional image acquisition; (100.7410) Wavelet.

http://dx.doi.org/10.1364/AO.99.099999

    1. Introduction

    Due to the relatively low cost and high efficiency, fringe

    projection profilometry (FPP) has been employed for 3-

    dimensional (3D) model acquisition in various applications

    such as quality control system [1], real world 3D scene

    reconstruction [2,3], and medical tomography [4]. For FPP,

    fringe patterns are first projected onto the target object. Due

    to the height profile of the object, the fringe patterns as

    shown on the object surface are deformed as compared to the

    projected ones. Thus by measuring the amount of

deformation, the object's 3D model can be readily

    reconstructed.

    Traditional FPP methods can be further divided into two

categories: temporal multiplexing [1,5–13] and frequency multiplexing [4,14–18]. Temporal multiplexing methods make use of two or more fringe patterns projected onto the target object sequentially. They allow accurate reconstruction of the 3D model of an object but can suffer severe distortion if the object is moving during the image capturing process. Hence they are not suitable for dynamic 3D model reconstruction applications.

The frequency multiplexing methods, on the other hand, only need to project one fringe pattern onto the target object in order to reconstruct its 3D model. One of the important

    frequency multiplexing FPP methods is the Fourier transform

    profilometry (FTP) [4,14]. For FTP, a high frequency black-

    and-white stripe pattern is projected onto the object and the

    deformed fringe pattern is captured by a camera [14]. The

    deformation due to the object height profile can be evaluated

    by measuring the phase shift of the stripes in the captured

    fringe image. Similar to many phase detection problems

[1,5–11], the phase analysis process of the FTP method

    gives only the wrapped phase data that has to be unwrapped

    to obtain the true phase shift information usable for 3D

    model reconstruction [14]. Traditional phase unwrapping

    algorithms estimate the true phase shift by integrating the

wrapped phase differences [19–22]. However, due to the

    various artifacts in the fringe images, some of the wrapped

    phase data can be missing. The same can also happen when

    the target object has a sharp change in height. Directly

    carrying out the unwrapping process with such erroneous

data will lead to severe distortion in the final reconstructed

    3D model. Recent quality-guided phase unwrapping

    methods, such as [21], will also fail particularly when the

fringes have phase discontinuity along the object boundary with respect to the reference background. This is the case when

    the object does not have a direct contact with the reference

    background or the object itself has a curvature (such as a

    bowl) such that some parts of it cannot be seen from the

    angle of the camera.

In fact, the above problem stems from Itoh's analysis [19]

    that assumes the phase can be estimated by integrating the

    wrapped phase difference. The estimated true phase shift will

    thus have much error if some of the phase data are missing.

    To solve this problem, recent approaches try to embed period

    order information in the projected fringe patterns. By period

order, we refer to the number of 2π jumps in the phase angle

    that is hidden in the wrapped phase data. If the period order

    is known, phase unwrapping can be achieved even when

    some of the wrapped phase data are missing. Traditional

approaches embed the period order information into the fringe patterns using, for instance, multiple cameras [2,12], multi-wavelength fringe patterns [23], fringe patterns with additional information such as colors [24], speckles [25,26], and markers [5,27,28], multiple frequencies [29], multiple patterns [30,31], and gray coded fringe patterns [1,9],

    etc. However, the approaches in [2], [12], and [23] require

    additional hardware systems. The performance of the

approaches in [24,28,29] can be seriously affected by the color

    pattern of the object, whereas the approaches in [5] and [27]

    can only be applied to a simple scene (e.g., a single object).

    The approaches in [1,9], [30], and [31] require additional

fringe projections. They are not suitable for dynamic applications, similar to the temporal multiplexing methods

    mentioned above. In addition to the above approaches, it was

    recently reported in [32] that a dual frequency scheme can

    be used to embed the period order information to the fringe

    patterns in a phase measurement profilometry process (PMP-

DF). In this method, high frequency fringe patterns are generated to encode the object's height information as in the traditional phase shifting profilometry (PSP) approaches [6].

    In addition to that, low frequency signals are added to the

    fringe patterns to encode the period order information.

    Similar to other PSP methods, it requires at least 5 fringe

    patterns to obtain both the wrapped phase data and the period

order information. Hence it is also not suitable for dynamic

    applications. Besides, our experiment shows that the method

    is highly sensitive to the quality of the fringe images. It will

    be exemplified in Section V.

    In this paper, we propose a new marker encoding and

    detection algorithm using the Dual Tree Complex Wavelet

    Transform (DTCWT) [33]. The objective of the proposed

    algorithm is to provide an efficient method for estimating the

    phase order information of the fringes in order to assist the

    phase unwrapping process when working in the FTP based

    FPP environment. We have shown in [18,34] that the

    DTCWT is an effective tool for denoising and removing the

    bias of fringe images [35,36]. For the proposed algorithm,

    we add to the original DTCWT FPP framework the markers

    encoding and detection algorithm that allows the period order

    information to be embedded into the projected fringe pattern

    and extracted from the captured fringe image to assist in the

phase unwrapping process [37]. Different from the traditional marker encoding schemes, the proposed approach is applicable to objects with color texture. Furthermore, this approach is more localized than the approaches using a single strip marker or a single point marker. It can

provide the period order information not only at the center but also along the entire fringe pattern. Thus it is applicable to scenes that contain several objects with sudden jumps in height.

    Unlike the approaches that use multiple frame patterns, the

    proposed scheme is applicable to dynamic applications

    because it requires only a single fringe pattern for the entire

    operation. Finally, the algorithm is not sensitive to the quality

of the fringe image. Unlike the PMP-DF, the period order information can be detected accurately even from noisy fringe images or images with abnormal brightness.

    With the period order information, we can reconstruct the 3D

    model of the object even when some of the fringe data are

    missing due to the artifacts of the fringe image or the

    irregularity of the object shape.

    This paper is organized as follows. In Section II, the basic

    principle of the FTP using the DTCWT is introduced. We

then describe in Sections III and IV, respectively, the proposed marker encoding and detection algorithms. The

    simulation and experiment results are discussed in Section V.

    Finally, the conclusions of this work are presented in Section

    VI.

    2. Background

    A. Principle of the FTP Method

In this section, we first outline the basic concept of the conventional FTP method [14]. The typical setup of the FTP method is shown in Fig. 1. In the figure, a projector at Ep projects a fringe pattern and a camera at Ec captures the deformed fringe pattern as shown on the object surface. The points Ep and Ec are placed at a distance l0 from a reference plane R, and they are separated by a distance d0 in the direction parallel to the reference plane R. The deformed fringe pattern captured by the camera at Ec can be modeled mathematically as a sinusoidal function as follows [38]:

g(x, y) = a(x, y) + b(x, y) cos Φ(x, y) + n(x, y)     (1)

Fig. 1. FPP setup using the FTP scheme in crossed optical axes geometry [14].

In (1), g(x, y) represents the pixel, in the x and y directions, of the fringe image captured by the camera; a(x, y) is the bias caused by the surface illumination of the object; b(x, y) is the local amplitude of the fringe; and n(x, y) is the noise generated in the image capture process. Φ(x, y) = 2πf0x + ψ(x, y) is the phase angle, in which f0 is the carrier frequency and ψ(x, y) is the phase shift of the fringe. ψ(x, y) has a close relationship with the object height profile h(x, y) as follows:

h(x, y) = l0 Δψ(x, y) / (2π f0 d0)     (2)

In (2), Δψ(x, y) = ψ(x, y) − ψo(x, y), where ψo(x, y) is the ψ(x, y) when there is no object. It is assumed to be known from the initial calibration process. Hence if Φ(x, y) is known, ψ(x, y) and also Δψ(x, y) can be determined. Then the object height profile, and in turn the 3D model of the object, can be reconstructed.
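As a rough illustration of how the phase is obtained in practice (this sketch is ours and not part of the original method; it assumes Python with numpy, a row-wise 1D Fourier filter, and a carrier f0 given in cycles per pixel), the wrapped phase of one image row can be estimated as follows:

import numpy as np

def wrapped_phase_row(g_row, f0, bandwidth):
    # Fourier-filter one row of g(x, y) around the carrier to get the wrapped phase.
    G = np.fft.fft(g_row)
    freqs = np.fft.fftfreq(g_row.size)            # cycles per pixel
    band = (freqs > f0 - bandwidth) & (freqs < f0 + bandwidth)
    analytic = np.fft.ifft(G * band)              # keeps ~ (b/2) exp(j*Phi)
    return np.angle(analytic)                     # wrapped phase in (-pi, pi]

# The height then follows (2): h = l0 * dpsi / (2*pi*f0*d0), with
# dpsi = psi - psi_o obtained after unwrapping and reference subtraction.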

    B. The Need of Phase Unwrapping

However, any trigonometric method that directly retrieves Φ(x, y) from g(x, y) will end up with only the wrapped version of Φ(x, y), which is always in the range of −π to π. This is caused by the fact that, as shown in (1), all Φ(x, y) separated by 2π give the same g(x, y). Let us denote the wrapped version of Φ(x, y) by φ(x, y). A phase unwrapping process is needed for retrieving Φ(x, y) from φ(x, y). Many of the existing unwrapping algorithms [20] are based on Itoh's analysis [19], which suggests integrating the wrapped phase differences in order to calculate the desired phase Φ(x, y). Denote Φy(x) ≡ Φ(x, y) and φy(x) ≡ φ(x, y). The phase unwrapping process based on Itoh's analysis can be described as follows:

Φy(x) = Φy(0) + Σ from x′ = 1 to x of Δφy(x′)     (3)

where Δφy(x′) = φy(x′) − φy(x′ − 1) and |Δφy(x′)| ≤ π. However, (3) is valid only if φy(x) is known for all x. If there is a particular point x′ at which φy(x′) is estimated with error or is even missing, the unwrapped Φy(x) will be in error for all x ≥ x′.

Such a situation is unfortunately common in typical FPP setups due to artifacts in the captured fringe images or the irregularity of the object shape.
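To make the failure mode concrete, a small numpy sketch (ours, not from the paper) of the row-wise unwrapping in (3) is given below; it shows that samples before a corrupted segment are recovered correctly, while every sample after it inherits a 2π-level error:

import numpy as np

def itoh_unwrap_row(phi_wrapped):
    # Eq. (3): integrate the wrapped phase differences along one row.
    d = np.diff(phi_wrapped)
    d = (d + np.pi) % (2 * np.pi) - np.pi          # wrap each difference to (-pi, pi]
    return phi_wrapped[0] + np.concatenate(([0.0], np.cumsum(d)))

x = np.arange(200)
true_phase = 0.15 * x                              # a smooth phase ramp
wrapped = (true_phase + np.pi) % (2 * np.pi) - np.pi
print(np.allclose(itoh_unwrap_row(wrapped), true_phase))   # True

wrapped[95:105] = 0.0                              # a block of lost fringe data
err = itoh_unwrap_row(wrapped) - true_phase
print(np.abs(err[:95]).max())                      # ~0: unaffected before the gap
print(np.abs(err[110:]).max())                     # ~2*pi: error propagates past the gap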

    C. Fringe Image Enhancement Based on the DTCWT

As mentioned above, the estimation of φy(x) is erroneous in practice due to artifacts such as noise and bias in the fringe images. To reduce the artifacts, we have presented in [18,34,39] an efficient image enhancement algorithm

    based on the DTCWT [33]. As shown in Fig. 2, the 2D-

    DTCWT is realized by four 2D discrete wavelet transform

    (DWT) trees. The outputs of the transforms are combined to

    generate wavelet coefficients of 6 orientations at different

    resolutions [40]. Besides, due to the specially designed

    wavelet functions, the transform is approximately analytic.

    That is, it gives nearly zero negative frequency response.

    Based on this special property, it is shown in [18] that

    normal fringe images have a piecewise smooth magnitude

    response in the DTCWT domain, which is much different

    from that of noises. Besides, we show in [35] that we can

    also detect and remove the bias in the fringe image in the

    DTCWT domain. They lead to an improved FPP algorithm as

    shown in the bottom 4 blocks of Fig. 3. In the figure, the

    noisy fringe image is first transformed to the DTCWT

    domain where the bias removal and denoising operations are

    carried out [35]. The processed DTCWT coefficients are

then transformed back to the spatial domain. A traditional phase unwrapping algorithm is then used to recover the

    original phase information and the 3D model of the object

    can be reconstructed. The above fringe image enhancement

method can effectively reduce the estimation error of φy(x).

However, in case some of the φy(x) are missing due to occlusion or other reasons, such an enhancement method still cannot avoid the severe distortion that appears in the final reconstructed 3D model. In this paper, we propose an

    improved FPP algorithm which also adopts the DTCWT

    based fringe image enhancement method in [18]. In addition,

a period order estimation process is added (as shown in Fig. 3) to facilitate the correct implementation of phase unwrapping even when some of the phase information is missing. Details are given in Sections III and IV.

Fig. 2. Realization of the DTCWT using 4 DWT trees.

Fig. 3. The DTCWT FPP framework with the proposed period order estimation algorithm.

    3. Proposed Marker Coding for Consistent Phase Unwrapping

In fact, the relationship between Φy(x) and φy(x) can be written as follows [19]:

Φy(x) = φy(x) + 2π ky(x)     (4)

where ky(x) is an integer. It is the so-called period order that determines the number of 2π jumps required to unwrap the wrapped phase. If ky(x) is known, Φy(x) can always be computed even if some of the wrapped phase information is missing. In this paper, we propose to embed a set of structured markers into the projected fringe pattern to facilitate the estimation of the period order ky(x) from the fringe image. The marker encoded fringe projection pattern is defined as follows:

pᵐy(x) = py(x) + my(x)     (5)

where py(x) is the original sinusoidal fringe projection pattern and my(x) is the marker signal added to the fringe projection pattern. As illustrated in Fig. 4, the markers are realized as a sequence of impulses (dashed line) added to the original sinusoidal fringe projection pattern (solid line) at different phase angles. These phase angles are carefully selected such that they encode the order number of the sinusoidal period. Ideally, for every sinusoidal period in py with period order number ky, a marker should be added to it at phase angle θy(ky) = M(ky), where M(·) is a unique mapping function. This is however difficult to achieve in practice, since each sinusoidal period is represented by a limited number of pixels of the fringe projection pattern. It is not possible to have many different θy(k); that is, there can only be a limited number of unique markers. Suppose that every sinusoidal period is represented by To pixels of the fringe projection pattern and every marker has a size of Tm pixels. Then at most Nm = To/Tm unique markers can be made. Here we assume To is an integer multiple of Tm. In this case, a marker will be added to a sinusoidal period at phase angle θy(ky) = M(⟨ky⟩Nm), where ⟨a⟩b refers to a modulo b.

In our experiment, we choose To = 36 and Tm = 4. Hence, 9 unique markers can be inserted into 9 different sinusoidal periods respectively. The set of markers is then repeated for the next 9 sinusoidal periods. Such an arrangement allows phase unwrapping using (4) even when up to 8 consecutive sinusoidal fringe periods are missing for whatever reason. This resolvability is sufficient in normal applications of the FPP.

    To facilitate the detection of the markers, the mapping function M(.) should be designed to maximize the difference

of θy(ky) between two neighboring markers. A natural choice is as follows:

θy(ky) = M(⟨ky⟩Nm) = (2π/Nm) ⟨ ((Nm + 1)/2) ⟨ky⟩Nm ⟩Nm = (2π/Nm) ⟨ ((Nm + 1)/2) ky ⟩Nm     (6)

    Here we assume Nm is an odd number. It is shown in

    Appendix A that the mapping function in (6) ensures

neighboring markers will have a difference in θy(ky) of at least π(Nm − 1)/Nm, which is close to the maximum possible value (i.e., π). An example of the marker encoded fringe projection pattern is shown in Fig. 5. In the figure, the thick black and white columns are the sinusoidal fringe projection pattern. The markers are characterized by the sharp black and white lines. As mentioned above, we select Nm = 9 in our experiment. Following (6), the 9 markers are inserted into 9 consecutive sinusoidal periods at the 9 different phase angles 0, 5Δ, 1Δ, 6Δ, 2Δ, 7Δ, 3Δ, 8Δ, 4Δ, respectively, where Δ = 2π/9. With this arrangement, any 2 neighboring markers are separated by at least 4Δ. Such an arrangement maximizes the difference in θy(ky) of neighboring markers. It will improve the performance of the later marker detection process. Based on (6), we can also define my(x) of (5) as

    follows:

my(x) = Σky  fky(x) ∗ γ(x)     (7)

where

fky(x) = 1   if x = To·ky + Tm·⟨ ((Nm + 1)/2)·ky ⟩Nm
fky(x) = 0   otherwise     (8)

Fig. 4. The original sinusoidal fringe projection pattern (for a particular row) and the markers (top). Resulting fringe projection pattern with the markers embedded (bottom).

Fig. 5. A fringe pattern with markers located at different phase angles of the sinusoidal fringes.

In (7), the symbol ∗ denotes linear convolution and γ can be any short-support impulsive function. In our experiment, we set γ to be the first derivative of an impulse function. Each marker is represented by 4 pixels in the projected fringe pattern as mentioned above.
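To make the encoding concrete, a short Python/numpy sketch (ours; it uses To = 36, Tm = 4 and Nm = 9 as in the experiment, assumes a fringe normalized to [0, 1], and approximates the derivative-of-impulse marker by a plus/minus pixel pair) that generates one row of the marker encoded pattern is given below:

import numpy as np

T_o, T_m = 36, 4                 # pixels per sinusoidal period / per marker
N_m = T_o // T_m                 # number of unique markers (9)

def marker_phase(k):
    # Mapping function M of Eq. (6): marker phase angle for period order k.
    return (2 * np.pi / N_m) * (((N_m + 1) // 2 * k) % N_m)

def encoded_row(num_periods=35):
    x = np.arange(num_periods * T_o)
    p = 0.5 + 0.5 * np.cos(2 * np.pi * x / T_o)          # plain fringe p_y(x)
    m = np.zeros_like(p)
    for k in range(num_periods):
        # Eq. (8): pixel position of the marker for period k.
        pos = k * T_o + int(round(marker_phase(k) / (2 * np.pi) * T_o))
        m[pos:pos + T_m // 2] += 0.5                     # crude first-derivative-
        m[pos + T_m // 2:pos + T_m] -= 0.5               # of-impulse marker, T_m wide
    return np.clip(p + m, 0.0, 1.0)                      # Eq. (5)

pattern = np.tile(encoded_row(), (1024, 1))              # same markers in every row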

    4. Proposed Period Order Detection Algorithm

    In this section, we discuss how the embedded markers and

    the period order information are detected from the fringe

    image captured by the camera. As mentioned in Section II,

    the captured fringe image with markers embedded will be

    processed based on a DTCWT FPP framework as shown in

    Fig. 3. Then the wavelet coefficients of 6 orientations at

    different levels will be generated. Recall that the markers are

    signals of sharp changes in magnitude. They induce strong

    wavelet coefficients particularly in the first few levels. On

the other hand, normal fringe patterns usually do not have

    high frequency contents. Hence their wavelet coefficients can

    often be found in higher levels of the wavelet transform. For

    the proposed algorithm, we examine the first 2 levels of the

wavelet transform to detect the positions of the markers. Note also that since the markers are added row-wise to the fringe pattern, they will not introduce wavelet coefficients in all 6 orientations. To be more specific, only the subbands of 45°, 75°, 105°, and 135° will contain significant wavelet coefficients of the markers.

    Those wavelet coefficients will be sent to the Period Order

    Estimation function block as shown in Fig. 3. Denote the

wavelet coefficients at level j and orientation subband m as d(j, m). The marker cue information Q is first computed in the Period Order Estimation function block using the following formulation:

Q = Σj=1,2  β Γj( Σm |d(j, m)|^α ),   m ∈ {45°, 75°, 105°, 135°}     (9)

where |d(j, m)| is the magnitude of the complex wavelet coefficients. Parameters α and β are used to control the contribution of the wavelet coefficients to the marker cue function. We empirically select α = 1 and β = 1, although our experiments show that the final result is not sensitive to their selection. In (9), Γj(·) is an interpolation function (e.g., bilinear interpolation) applied to the accumulation result of each level such that it has the same size as the original fringe image. Due to the shift invariance property of the DTCWT [40], the singularities that characterize the markers in the fringe image will generate strong coefficients at similar positions at all levels in the DTCWT domain. The Gaussian-like magnitude response of the wavelet functions [40] will also ensure that each marker has only one maximum in Q. Thus the positions of the maxima in Q are strongly related to the positions of the markers.
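A compact sketch of (9) follows (ours; it assumes the first two levels of complex DTCWT highpass coefficients are already available as arrays of shape (rows, cols, 6), for example from an open-source DTCWT implementation, that the four 45°–135° subbands sit at indices 1–4, and it replaces the bilinear interpolation Γj by simple nearest-neighbour upsampling):

import numpy as np

def marker_cue(highpasses, subbands=(1, 2, 3, 4), alpha=1.0, beta=1.0):
    # highpasses: list of complex arrays, one per DTCWT level (level 1 first).
    rows, cols = highpasses[0].shape[0] * 2, highpasses[0].shape[1] * 2
    Q = np.zeros((rows, cols))
    for d in highpasses[:2]:                              # levels j = 1, 2
        acc = sum(np.abs(d[:, :, m]) ** alpha for m in subbands)
        sr, sc = rows // acc.shape[0], cols // acc.shape[1]
        Q += beta * np.kron(acc, np.ones((sr, sc)))       # Gamma_j: back to image size
    return Q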

    However, the first two levels of the wavelet transform are

also swamped by the coefficients of noise. This means that the maxima in Q can also be contributed by noise. Hence, in a practical setting, detecting markers by using only the maxima of Q will give false detection results, as illustrated in Fig. 6. To identify the maxima of the markers, we threshold both the magnitude and phase of the complex wavelet coefficients at the positions where the maxima of Q are found. The magnitude of the markers' wavelet coefficients in general should be much higher than that of noise. So a thresholding operation on the magnitude of the wavelet coefficients can be performed first to remove the maxima contributed by noise of small magnitude. Given the wavelet coefficients d(j, m)

at j = 1, 2 and m ∈ {45°, 75°, 105°, 135°}, the following operation is carried out:

d̃x,y(j, m) = dx,y(j, m)  if |dx,y(j, m)| ≥ λj,m,   and   d̃x,y(j, m) = 0  otherwise     (10)

where dx,y(j, m) is the d(j, m) at position {x, y}, and

λj,m = σn √(2 log Nj,m)   and   σn = median(|d(j, m)|) / 0.6745     (11)

In (11), λj,m is the so-called universal threshold [41] used in many wavelet denoising applications [41–43]; Nj,m is the number of coefficients at level j and orientation subband m; and σn is the standard deviation of the noise, which is estimated based on robust statistics [44].
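A minimal numpy version of the magnitude test in (10) and (11) (ours; d is one complex subband d(j, m)):

import numpy as np

def magnitude_threshold(d):
    # Eqs. (10)-(11): hard thresholding with the universal threshold.
    mag = np.abs(d)
    sigma_n = np.median(mag) / 0.6745              # robust noise estimate
    lam = sigma_n * np.sqrt(2.0 * np.log(d.size))  # universal threshold lambda_{j,m}
    out = d.copy()
    out[mag < lam] = 0.0                           # drop weak (noise-like) maxima
    return out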

    To further improve the detection of the maxima of the

    markers, we consider the local relative phase of the complex

wavelet coefficients [45–47]. By definition, the local relative phase θx,y(j, m) of the complex wavelet coefficients d(j, m) is given by [45,47]

θx,y(j, m) = ∠dx,y(j, m) − ∠dx+1,y(j, m)     (12)

where ∠dx,y(j, m) is the local phase angle at position {x, y}.

Fig. 6. Marker detection results: (a) using only the maxima of Q; (b) the zoomed-in version, which shows many falsely detected markers (circled).

Fig. 7. A comparison of local phase and local relative phase: (a) local phase; (b) local relative phase.

    While the local phase of the complex wavelet is known to be

arbitrary irrespective of the structure of the image, there is a

    strong relationship between the local relative phase and the

    orientation of the edges in natural images [47]. Our

    experiment shows that it also provides a good description of

    the markers. Since all markers have the same structure, they

    incur complex wavelet coefficients of similar local relative

phase. This however is not the case for those of noise. An example

    is shown in Fig. 7. As it is seen in Fig. 7(a), the local phases

(arrows) around the markers' maxima have unpredictable

    directions. However, the local relative phases (arrows) as

shown in Fig. 7(b) around the markers' maxima have a similar horizontal direction, while it is not the case for those of noise. For the proposed algorithm, we first compute the

    mean relative phase of the complex wavelet coefficients.

More specifically, we apply a 2D rectangular mask of 3×3 centered at every complex wavelet coefficient d(j, m). Then the mean θ̄ of the relative phases from the set {θi} for index i = 1, ..., n is computed by

θ̄ = arctan( Σi sin θi / Σi cos θi )     (13)

where n is the number of maxima in a particular mask. Finally, the following thresholding procedure with respect to θ̄ is carried out:

d̃x,y(j, m) = dx,y(j, m)  if |θ̄x,y(j, m)| ≤ ε,   and   d̃x,y(j, m) = 0  otherwise     (14)

where θ̄x,y(j, m) is the mean relative phase at level j and orientation subband m at position {x, y}; and ε is a very small real number.
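The relative-phase screening of (12)–(14) can be sketched as follows (ours; it applies the 3×3 circular mean densely over the subband and keeps coefficients whose mean relative phase is close to the near-horizontal direction produced by the markers, which is our reading of (14); scipy is assumed for the neighbourhood sums):

import numpy as np
from scipy.ndimage import convolve

def relative_phase_threshold(d, eps=0.2):
    # Eq. (12): relative phase between coefficients adjacent along x.
    theta = np.angle(d[:, :-1]) - np.angle(d[:, 1:])
    theta = np.pad(theta, ((0, 0), (0, 1)), mode='edge')
    # Eq. (13): circular mean of the relative phase over a 3x3 mask.
    kernel = np.ones((3, 3))
    mean_theta = np.arctan2(convolve(np.sin(theta), kernel, mode='nearest'),
                            convolve(np.cos(theta), kernel, mode='nearest'))
    # Eq. (14): zero out coefficients whose mean relative phase deviates too much.
    out = d.copy()
    out[np.abs(mean_theta) > eps] = 0.0
    return out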

    The thresholded wavelet coefficients are then used in (9)

for computing the marker cue information Q and in turn

    detecting the position of the markers. The detection accuracy

    is greatly improved by using the abovementioned

    thresholding techniques based on the magnitude and relative

    phase of the complex wavelet coefficients. An example of the

    end result is shown in Fig. 8. It can be seen that almost all of

the maxima of noise are removed. The maxima retained

    clearly show the positions of the markers in the fringe image.

    When the maxima of the markers are identified, the next

    step is to determine the period order ky of each marker by

identifying θy of the marker from the fringe image (see (6) for the relationship between ky and θy). To do so, we first use the flood fill algorithm [21] to find the regions in the wrapped phase map where the phase difference is bounded by 2π. Hence, within each such region, the period order should be the same. Let us define Yj to be the set of all y in such a region, with j as the region index. An exhaustive search is then performed to estimate the period order ky based on the maxima detected in region j as follows:

ky = M⁻¹( argmin over i′ of (1/Nj) Σ over y ∈ Yj of | θy − i′Δ | ),   i′ = 0, 1, ..., Nm − 1     (15)

where Δ is defined in Section 3; Nj is the total number of maxima that can be detected in region j; and θy is the phase angle of the maximum in row y, which is obtained directly by inspecting the fringe image based on the maxima positions indicated in Q. However, due to the various artifacts in the fringe image, the θy obtained is always slightly different from the true θy. Equation (15) thus helps to identify the correct i′ based on θy. Another problem when implementing (15) is that the flood fill algorithm may accidentally include maxima from neighboring regions in the computation. It is particularly the case when the fringe image is of low quality. Recall that when embedding the markers, an arrangement has been made to maximize the difference in θy of neighboring markers. So in practice, before implementing (15), we first carry out a screening process on all θy such that those having a large difference from the rest are ignored. Once we get

    the period order ky from (15), the phase unwrapping problem

    can be solved using (4) and the 3D model of the object can

    be readily reconstructed.
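A simplified end-to-end sketch of this last step (ours; marker_phase is the mapping M from the earlier sketch, theta_obs collects the marker phase angles measured in one flood-filled region, and the search in (15) is done directly over the period order modulo Nm):

import numpy as np

N_m = 9
DELTA = 2 * np.pi / N_m

def marker_phase(k):
    # Mapping M of Eq. (6).
    return DELTA * (((N_m + 1) // 2 * k) % N_m)

def estimate_period_order(theta_obs):
    # Eq. (15): pick the order whose marker phase best matches the observations.
    theta_obs = np.asarray(theta_obs, dtype=float)
    costs = [np.mean(np.abs(np.angle(np.exp(1j * (theta_obs - marker_phase(i))))))
             for i in range(N_m)]
    return int(np.argmin(costs))          # period order, known only modulo N_m

def unwrap_region(phi_wrapped, k):
    # Eq. (4): restore the phase of the region from its period order.
    return phi_wrapped + 2 * np.pi * k

readings = marker_phase(3) + 0.05 * np.random.randn(6)   # noisy markers, true order 3
print(estimate_period_order(readings))                    # -> 3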

    5. Simulation and Experiment

    To evaluate the computational efficiency and accuracy of

    the proposed algorithm, a simulation using a computer

    generated fringe pattern was first carried out. Fig. 9(a) and

    (b) show a computer generated cone object and the deformed

    marker encoded fringe pattern, respectively, which were used

    in the simulation. They serve as the ground truth for the

    evaluation. To simulate the real working environment, white

    Gaussian noise (variance 1.0) is added to the fringe pattern.

    The simulation code is written in MATLAB running on a

personal computer at 3.4 GHz. The resolution of the testing fringe image is 2048×2048 pixels.

Fig. 8. (a) Marker detection results after using the proposed thresholding method; (b) the zoomed-in version.

    We compare the proposed algorithm with the traditional

Window Fourier Filtering (WFF) method [48,49] and the DTCWT method without markers embedded [18,35]. For the DTCWT, we use the filter coefficients proposed in [50]. The length of the wavelet filters used is 12, which is typical for wavelet filters. For both approaches, the phase unwrapping is

    done by using the Goldstein algorithm [22]. All algorithms

    are implemented in MATLAB. Table 1 shows the

    comparison results in terms of the execution time and SNR at

    different noise levels. As shown in the table, the proposed

algorithm is approximately 10 times faster than the

    WFF+Goldstein method with similar, if not better, SNR.

    Compared to the DTCWT+Goldstein method (without

markers), the proposed algorithm gives similar performance in both execution time and SNR. These results show that the use of markers does not introduce much burden to the

    computation of the algorithm.

The real advantage of using markers is that they allow

    correct phase unwrapping even when some of the phase

    information is seriously corrupted or even missing. To

    demonstrate it, we conducted a series of experiments using

    real objects. More specifically, we implemented our

    proposed algorithm with an FPP hardware setup that contains

    a DLP projector and a digital SLR camera. The projector has

    a 2000:1 contrast ratio with light output of 3300 ANSI

    lumens and the camera has a 22.2 x 14.8mm CMOS sensor

    and a 17-50mm lens. Both devices are connected to the

    computer with a 3.4GHz CPU and 16GB RAM for image

    processing. They are placed at a distance of 700mm-1200mm

    from the object.

Table 1. Comparison between the proposed method, the conventional DTCWT+Goldstein method, and the WFF+Goldstein method in terms of execution time and SNR

Noise level | Proposed Time (s) | Proposed SNR | DTCWT Time (s) | DTCWT SNR | WFF Time (s) | WFF SNR
0.2 | 2.27 | 39.55 | 2.13 | 39.37 | 33.39 | 43.12
0.4 | 2.26 | 38.40 | 2.12 | 38.22 | 33.14 | 38.13
0.6 | 2.27 | 36.95 | 2.12 | 36.79 | 33.11 | 34.84
0.8 | 2.28 | 35.49 | 2.13 | 35.35 | 33.53 | 32.32
1.0 | 2.28 | 33.86 | 2.12 | 33.99 | 33.42 | 30.29

Fig. 9. The object used in the simulation: (a) a computer generated 3D cone (ground truth); (b) the deformed fringe pattern.

Fig. 10. Comparison of the proposed algorithm and the traditional phase unwrapping method: (a) texture image; (b) fringe pattern illumination; (c) reconstructed 3D shape with texture using the proposed method; (d) reconstructed 3D shape with height profile using the proposed method; (e) reconstructed 3D shape with texture using the traditional DTCWT+Goldstein; and (f) reconstructed 3D shape with height profile using the traditional DTCWT+Goldstein.

Fig. 11. (a) Texture image; (b) marker encoded fringe image; (c) one of the PMP-DF fringe images.

In our experiment, a marker encoded fringe pattern at the resolution of 1280×1024 is generated and projected onto the

    target object. The fringe pattern consists of about 35

    sinusoids in x-direction; each has a length of 36 pixels. A

marker of 4 pixels width is embedded in each sinusoid. There are 9 unique markers, and they are repeated every 9 sinusoids.

    In the first experiment, we compare the performance of the

    proposed algorithm (using markers) with the conventional

    DTCWT+Goldstein method (without markers) in the

    situation that there are phase jumps in the fringe image. To

create such a testing environment, a paper plane and two small boxes of different heights are used, as illustrated in Fig. 10(a).

Fig. 12. The first column is the comparison of the proposed algorithm and the PMP-DF for images captured with ISO 100 and shutter speed 1/15s: (a) a texture image, (b) height profile by the proposed method, (c) 3D shape with texture by the proposed method, (d) height profile by the PMP-DF, (e) 3D shape with texture by the PMP-DF. The second column is the comparison of the proposed method and the PMP-DF for images captured with ISO 100 and shutter speed 1/30s: (f) a texture image, (g) height profile by the proposed method, (h) 3D shape with texture by the proposed method, (i) height profile by the PMP-DF, (j) 3D shape with texture by the PMP-DF.

Fig. 13. The first column is the comparison of the proposed algorithm and the PMP-DF for images captured with ISO 1600 and shutter speed 1/80s: (a) a texture image, (b) height profile by the proposed method, (c) 3D shape with texture by the proposed method, (d) height profile by the PMP-DF, (e) 3D shape with texture by the PMP-DF. The second column is the comparison of the proposed method and the PMP-DF for images captured with ISO 1600 and shutter speed 1/125s: (f) a texture image, (g) height profile by the proposed method, (h) 3D shape with texture by the proposed method, (i) height profile by the PMP-DF, (j) 3D shape with texture by the PMP-DF.

Basically no fringe patterns can be found on the edges of the boxes, as shown in Fig. 10(b). Thus phase jumps are introduced into the fringe images. As shown in Figs. 10(c)–10(f), both approaches can correctly reconstruct the paper plane since they both use the DTCWT FPP framework shown in Fig. 3. However, only the proposed method can make a correct estimation of the height of the boxes. This is expected, since the

    period order information obtained from the markers allows us

    to restore the unwrapped phase even when there are phase

    jumps in the fringe image.

    In the second experiment, we used the same objects as in

the first experiment, but compared the proposed algorithm with the PMP-DF [32]. As mentioned in Section I, the PMP-DF method uses a low frequency signal added to the high frequency fringes for embedding the period order information. Thus it serves a purpose similar to the markers of the

proposed algorithm. As required by the PMP-DF, 6 frames of phase shifted sinusoidal fringe images are used for the reconstruction of one 3D model. This is in contrast to the proposed algorithm, which requires only 1 fringe image due

    to the use of the DTCWT FPP framework. Fig. 11 shows a

    scene captured by the camera, the marker encoded fringe

    image used in the proposed algorithm and one of the fringe

    images used in the PMP-DF. Fig. 12 and Fig. 13 show the

    results at different ISOs and shutter speeds when capturing

    the fringe images. When the ISO value is high (e.g. ISO 1600

    for our camera), noise will be introduced to the fringe

    images. By changing the shutter speed, the brightness of the

fringe images will be changed. These experiments test the robustness of both methods. It can be seen in the figures that

    both approaches can perform satisfactorily in the cases of

    ISO 100 and shutter speed 1/15s. When the image becomes

    darker (shutter speed is changed to 1/30s), the height profile

    reconstructed by the PMP-DF is seriously distorted. It is also

the case when the ISO value is changed to 1600. The PMP-DF has problems in both cases (shutter speeds 1/80s and

    1/125s), while the proposed algorithm performs normally. It

    can be observed that the PMP-DF is rather sensitive to the

    quality of the fringe images. It is because if the image is too

    bright or too dark, the detected low frequency signal will

    have its magnitude decreased or may even be distorted.

    Particularly when the image is noisy, the detected period

    order information based on the low frequency signal becomes

very unreliable. On the contrary, the detection of the markers is

    not affected by the brightness of the fringe image.

    In the final experiment, we used a free form object, a

    human hand. Reconstructing the 3D model of free form

    objects like human hands is very challenging because of the

    abruptly changing surface and the discontinuity around the

    edges. Nevertheless the proposed algorithm is able to

    reconstruct the 3D model satisfactorily (see Fig. 14).

    6. Conclusion

    In this paper, a new marker encoding and detection

    algorithm for the fringe projection profilometry (FPP) is

proposed.

Fig. 14. 3D model reconstruction of a human hand: (a) texture image; (b) the fringe image; (c) wrapped phase of the hand; (d) the detected markers; (e) the reconstructed 3D model with texture image; (f) the reconstructed 3D model.

Based on our previously developed dual tree complex wavelet (DTCWT) FPP framework, the proposed

    system can reconstruct the 3D model of an object using only

    one projection fringe image. The system can also handle the

    bias and noise problem in the image. In this paper, a marker

    encoding scheme is developed to embed the period order

    information to facilitate phase unwrapping even when there

    are phase jumps in the image. Based on our proposed

algorithm, the marker cue information can be extracted and the period order information can be estimated accurately with parsimonious computation. The system can be built with merely a

    conventional projector and a camera with no additional

    hardware requirement. Experimental results show that the

    proposed algorithm is robust to the quality of the fringe

image and does not introduce much computational burden

    to the original DTCWT FPP framework. Future work will be

    focused on achieving high quality 3D model reconstruction

    of more complex objects.

    Appendix A

Prove: Given that Nm is an odd integer, show that the mapping function in (6) ensures neighboring markers will have a difference in θy(ky) of at least π(Nm − 1)/Nm.

Proof: It is given in (6) that

θy(ky) = M(⟨ky⟩Nm) = (2π/Nm) ⟨ ((Nm + 1)/2) ky ⟩Nm

Hence the neighboring marker for the period ky + 1 will have

θy(ky + 1) = (2π/Nm) ⟨ ((Nm + 1)/2)(ky + 1) ⟩Nm

Let

a = ⟨ ((Nm + 1)/2)(ky + 1) ⟩Nm   and   b = ⟨ ((Nm + 1)/2) ky ⟩Nm

Since Nm is odd, a must be either greater than or smaller than b.

If a > b, then a − b = ⟨ (Nm + 1)/2 ⟩Nm = (Nm + 1)/2, and

|θy(ky + 1) − θy(ky)| = (2π/Nm)(a − b) = (2π/Nm)·(Nm + 1)/2 = π(Nm + 1)/Nm.

If b > a, then b − a = Nm − (Nm + 1)/2 = (Nm − 1)/2, and

|θy(ky + 1) − θy(ky)| = (2π/Nm)(b − a) = (2π/Nm)·(Nm − 1)/2 = π(Nm − 1)/Nm.

Since π(Nm + 1)/Nm ≥ π(Nm − 1)/Nm, in both cases the difference is at least π(Nm − 1)/Nm, and the statement is proved.

(Q.E.D.)
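The bound can also be checked numerically; the short sketch below (ours) evaluates the smallest angular separation of neighbouring markers produced by (6) for several odd Nm and confirms it never falls below π(Nm − 1)/Nm:

import numpy as np

for N_m in (3, 5, 7, 9, 11):
    theta = [2 * np.pi / N_m * (((N_m + 1) // 2 * k) % N_m) for k in range(N_m)]
    gaps = [abs(np.angle(np.exp(1j * (theta[k + 1] - theta[k]))))
            for k in range(N_m - 1)]
    print(N_m, min(gaps) >= np.pi * (N_m - 1) / N_m - 1e-12)   # True for every N_m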

This work is fully supported by the Hong Kong Research Grants Council under research grant B-Q38N.

    References

1. Z. Song, R. Chung, and X.-T. Zhang, "An Accurate and Robust Strip-Edge-Based Structured Light Means for Shiny Surface Micromeasurement in 3-D," IEEE Trans. Ind. Electron. 60, 1023–1032 (2013).
2. R. R. Garcia and A. Zakhor, "Consistent Stereo-Assisted Absolute Phase Unwrapping Methods for Structured Light Systems," IEEE J. Sel. Top. Signal Process. 6, 411–424 (2012).
3. F. Sadlo, T. Weyrich, R. Peikert, and M. Gross, "A practical structured light acquisition system for point-based geometry and texture," in Eurographics/IEEE VGTC Symposium Proceedings on Point-Based Graphics (2005), pp. 89–145.
4. B. Shi, B. Zhang, F. Liu, J. Luo, and J. Bai, "360° Fourier Transform Profilometry in Surface Reconstruction for Fluorescence Molecular Tomography," IEEE J. Biomed. Health Inform. 17, 681–689 (2013).
5. S. Zhang and S.-T. Yau, "High-resolution, real-time 3D absolute coordinate measurement based on a phase-shifting method," Opt. Express 14, 2644 (2006).

6. P. S. Huang and S. Zhang, "Fast three-step phase-shifting algorithm," Appl. Opt. 45, 5086–5091 (2006).
7. F. Yang and X. He, "Two-step phase-shifting fringe projection profilometry: intensity derivative approach," Appl. Opt. 46, 7172–7178 (2007).
8. S. Zhang and S.-T. Yau, "High-speed three-dimensional shape measurement system using a modified two-plus-one phase-shifting algorithm," Opt. Eng. 46, 136031–136036 (2007).
9. G. Sansoni, M. Carocci, and R. Rodella, "Three-Dimensional Vision Based on a Combination of Gray-Code and Phase-Shift Light Projection: Analysis and Compensation of the Systematic Errors," Appl. Opt. 38, 6565 (1999).
10. Y. Wang, K. Liu, Q. Hao, D. L. Lau, and L. G. Hassebrook, "Period Coded Phase Shifting Strategy for Real-time 3-D Structured Light Illumination," IEEE Trans. Image Process. 20, 3001–3013 (2011).
11. T.-W. Hui and G. K.-H. Pang, "3-D Measurement of Solder Paste Using Two-Step Phase Shift Profilometry," IEEE Trans. Electron. Packag. Manuf. 31, 306–315 (2008).
12. Y. Wang, K. Liu, Q. Hao, X. Wang, D. L. Lau, and L. G. Hassebrook, "Robust Active Stereo Vision Using Kullback-Leibler Divergence," IEEE Trans. Pattern Anal. Mach. Intell. 34, 548–563 (2012).
13. M. Gupta, Q. Yin, and S. K. Nayar, "Structured Light in Sunlight," in Proc. IEEE Int. Conf. Comput. Vis. (2013).
14. M. Takeda and K. Mutoh, "Fourier transform profilometry for the automatic measurement of 3-D object shapes," Appl. Opt. 22, 3977–3982 (1983).
15. A. Dursun, S. Özder, and F. N. Ecevit, "Continuous wavelet transform analysis of projected fringe patterns," Meas. Sci. Technol. 15, 1768 (2004).
16. A. Z. Abid, M. A. Gdeisat, D. R. Burton, M. J. Lalor, and F. Lilley, "Spatial fringe pattern analysis using the two-dimensional continuous wavelet transform employing a cost function," Appl. Opt. 46, 6120 (2007).
17. C. Quan, C. J. Tay, and L. Chen, "Fringe-density estimation by continuous wavelet transform," Appl. Opt. 44, 2359–2365 (2005).
18. T.-C. Hsung, D. Pak-Kong Lun, and W. W. L. Ng, "Efficient fringe image enhancement based on dual-tree complex wavelet transform," Appl. Opt. 50, 3973–3986 (2011).
19. K. Itoh, "Analysis of the phase unwrapping algorithm," Appl. Opt. 21, 2470 (1982).
20. D. C. Ghiglia and M. D. Pritt, Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software (John Wiley & Sons, 1998).
21. S. Zhang, X. Li, and S.-T. Yau, "Multilevel quality-guided phase unwrapping algorithm for real-time three-dimensional shape reconstruction," Appl. Opt. 46, 507 (2007).
22. R. M. Goldstein, H. A. Zebker, and C. L. Werner, "Satellite radar interferometry: Two-dimensional phase unwrapping," Radio Sci. 23, 713–720 (1988).
23. J. Gass, A. Dakoff, and M. K. Kim, "Phase imaging without 2π ambiguity by multiwavelength digital holography," Opt. Lett. 28, 1141 (2003).

24. Y. Wang, S. Yang, and X. Gou, "Modified Fourier transform method for 3D profile measurement without phase unwrapping," Opt. Lett. 35, 790 (2010).
25. Y. Zhang, Z. Xiong, Z. Yang, and F. Wu, "Real-Time Scalable Depth Sensing With Hybrid Structured Light Illumination," IEEE Trans. Image Process. 23, 97–109 (2014).
26. Y. Zhang, Z. Xiong, and F. Wu, "Unambiguous 3D measurement from speckle-embedded fringe," Appl. Opt. 52, 7797–7805 (2013).
27. H. Cui, W. Liao, N. Dai, and X. Cheng, "A flexible phase-shifting method with absolute phase marker retrieval," Measurement 45, 101–108 (2012).
28. S. Gai and F. Da, "A novel phase-shifting method based on strip marker," Opt. Lasers Eng. 48, 205–211 (2010).
29. W.-H. Su and H. Liu, "Calibration-based two-frequency projected fringe profilometry: a robust, accurate, and single-shot measurement for objects with large depth discontinuities," Opt. Express 14, 9178 (2006).
30. J. M. Huntley and H. Saldner, "Temporal phase-unwrapping algorithm for automated interferogram analysis," Appl. Opt. 32, 3047–3052 (1993).
31. Y. Wang and S. Zhang, "Novel phase-coding method for absolute phase retrieval," Opt. Lett. 37, 2067 (2012).
32. K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, "Dual-frequency pattern scheme for high-speed 3-D shape measurement," Opt. Express 18, 5229–5244 (2010).
33. I. W. Selesnick, R. G. Baraniuk, and N. C. Kingsbury, "The dual-tree complex wavelet transform," IEEE Signal Process. Mag. 22, 123–151 (2005).
34. T.-C. Hsung, D. P. Lun, and W. W. L. Ng, "Zero spectrum removal using joint bilateral filter for Fourier transform profilometry," Vis. Commun. Image Process. (VCIP), 2011 IEEE, 1–4 (2011).
35. W. W.-L. Ng and D. P.-K. Lun, "Effective bias removal for fringe projection profilometry using the dual-tree complex wavelet transform," Appl. Opt. 51, 5909–5916 (2012).
36. W. W.-L. Ng and D. P.-K. Lun, "Image enhancement for fringe projection profilometry," Circuits Syst. (ISCAS), 2013 IEEE Int. Symp., 729–732 (2013).
37. B. Budianto and D. P. K. Lun, "Efficient 3-dimensional model reconstruction based on marker encoded fringe projection profilometry," Acoust. Speech Signal Process. (ICASSP), 2014 IEEE Int. Conf., 574–578 (2014).
38. X. Su and W. Chen, "Fourier transform profilometry: a review,"

Opt. Lasers Eng. 35, 263–284 (2001).
39. T.-C. Hsung and D. P.-K. Lun, "On optical phase shift profilometry based on dual tree complex wavelet transform," Image Process. (ICIP), 2010 17th IEEE Int. Conf., 337–340 (2010).
40. N. Kingsbury, "A dual-tree complex wavelet transform with improved orthogonality and symmetry properties," in Proc. Int. Conf. Image Process. (2000), Vol. 2, pp. 375–378.
41. D. L. Donoho, "De-noising by soft-thresholding," IEEE Trans. Inf. Theory 41, 613–627 (1995).
42. L. Sendur and I. W. Selesnick, "Bivariate shrinkage with local variance estimation," IEEE Signal Process. Lett. 9, 438–441 (2002).
43. L. Sendur and I. W. Selesnick, "Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency," IEEE Trans. Signal Process. 50, 2744–2756 (2002).
44. D. L. Donoho and I. M. Johnstone, "Adapting to Unknown Smoothness via Wavelet Shrinkage," J. Am. Stat. Assoc. 90, 1200–1224 (1995).
45. A. Vo and S. Oraintara, "On the distributions of the relative phase of complex wavelet coefficients," in Proc. Int. Symp. Circuits Systems (2009), pp. 529–532.
46. J. Fauqueur, N. Kingsbury, and R. Anderson, "Multiscale Keypoint Detection using the Dual-Tree Complex Wavelet Transform," in Proc. IEEE Int. Conf. Image Process. (IEEE, 2006), pp. 1625–1628.
47. A. Vo and S. Oraintara, "A study of relative phase in complex wavelet domain: Property, statistics and applications in texture image retrieval and segmentation," Signal Process. Image Commun. 25, 28–46 (2010).
48. Q. Kemao, "Two-dimensional windowed Fourier transform for fringe pattern analysis: Principles, applications and implementations," Opt. Lasers Eng. 45, 304–317 (2007).
49. Q. Kemao, H. Wang, and W. Gao, "Windowed Fourier transform for fringe pattern analysis: theoretical analyses," Appl. Opt. 47, 5408–5419 (2008).
50. H. Shi, B. Hu, and J. Zhang, "A Novel Scheme for the Design of Approximate Hilbert Transform Pairs of Orthonormal Wavelet Bases," IEEE Trans. Signal Process. 56, 2289–2297 (2008).