
106 J. Opt. Soc. Am. A/Vol. 20, No. 1 /January 2003 Li et al.

Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity

Jielin Li,* Laurence G. Hassebrook, and Chun Guan

Department of Electrical Engineering, University of Kentucky, 453 AH, Lexington, Kentucky 40506-0046

Received July 16, 2002; accepted August 1, 2002

Temporal frame-to-frame noise in multipattern structured light projection can significantly corrupt depth measurement repeatability. We present a rigorous stochastic analysis of phase-measuring-profilometry temporal noise as a function of the pattern parameters and the reconstruction coefficients. The analysis is used to optimize the two-frequency phase measurement technique. In phase-measuring profilometry, a sequence of phase-shifted sine-wave patterns is projected onto a surface. In two-frequency phase measurement, two sets of pattern sequences are used. The first, low-frequency set establishes a nonambiguous depth estimate, and the second, high-frequency set is unwrapped, based on the low-frequency estimate, to obtain an accurate depth estimate. If the second frequency is too low, then depth error is caused directly by temporal noise in the phase measurement. If the second frequency is too high, temporal noise triggers ambiguous unwrapping, resulting in depth measurement error. We present a solution for finding the second frequency where intensity noise variance is at its minimum. © 2003 Optical Society of America

OCIS codes: 110.6880, 120.2650, 120.5800, 150.5670, 120.4290.

1. INTRODUCTION

The structured light (SL) illumination approach to active vision1,2 is one of the most important techniques in current three-dimensional (3D) shape measurement. SL decoding techniques model the optical paths associated with emission and detection of reflected SL patterns to compute range data correspondence by triangulation. Compared with passive vision systems such as stereovision, SL techniques overcome the fundamental ambiguities associated with passive approaches, especially in a low-texture environment. SL also has the advantage of computational simplicity and high precision. It has been used in the fields of biomedical topology,3 quality control,4 and telecollaboration.5 There are a variety of different SL patterns, which include single-stripe,6,7 multistripe,8 gradient,9,10 binary,11 sine-wave,12 and various specialty patterns.1,2 These patterns are typically implemented in one of three ways: single frame,13 lateral-shift multiframe,14 and encoded multiframe.11,15–17 The multiframes are projected in sequence, color encoded,15 or combined into a single composite pattern.13 Hybrid methods, which include the two-frequency phase-measuring-profilometry (PMP) method,18 on which our study is focused, are used to further enhance specific performance aspects. We have chosen to use the two-frequency PMP for measuring the human face topology because (1) it is resistant to target surface albedo variations, (2) it is resistant to ambient light contribution, (3) it yields nonambiguous depth measurement, (4) with a second, high frequency, it has the potential for attaining a high degree of accuracy in the depth reconstruction, and (5) the phase reconstruction is a pixelwise operation independent of the target object. Unlike the binary encoding methods, the PMP technique is not limited by the number of bit patterns used. However, in contrast to the binary methods, the PMP method depends on the pattern projector and the image capture technology having an adequate range in intensity values. That is, if the dynamic range and the intensity resolution of the camera and the frame grabber are not large enough, then PMP is sensitive to saturation caused by highly specular surfaces. The same is true for ambient light if its intensity is significant with respect to the pattern intensity. That is, if the captured intensity saturates, then the phase cannot be accurately recovered. So PMP is best suited for mattelike surfaces with albedo variations and ambient light contributions within the digitization range of the camera and frame-grabber technology. An overview of the PMP method used in this study is given in Section 2.

A primary concern in SL techniques is the depth measurement error. For example, a rigorous study by Trobina19 mathematically modeled the measurement error due to calibration errors by using a coded-light approach. As another example, Daley and Hassebrook20 applied information theory to SL binary image encoding in order to use binary entropy to find the maximum spatial stripe frequency and thus estimate the maximum bit resolution in depth reconstruction. A key component of estimating the depth error is the calibration process and the resulting reconstruction coefficients. The calibration/reconstruction method used in this study is detailed in Section 3 and is similar to the procedure in Trobina,19 which has its origins in stereoscopic calibration methods presented by Faugeras and Toscani21 and Tsai.22 Our apparatus, as described in Section 3, had no significant radial distortion, so a classic pinhole model is used to predict the perspective distortion.

A common theme in calibration error analysis19,23,24 is emphasis on the effects of spatial and systematic errors from the calibration process. In contrast, we focus our attention on the effect of temporal noise corrupting each captured pattern projection in the PMP process.


Therefore our analysis determines the PMP method's repeatability and standard deviation (STD). Errors in calibration accuracy result in systematic errors that do not vary from measurement to measurement. So given that there are no gross errors in the calibration process, these systematic errors have no significant effect on the repeated measurement variance. However, the error analysis, in Section 4, does incorporate the reconstruction coefficients. This analysis is new to PMP methods but is similar to and inspired by a quantization analysis by Behrooz Kamgar-Parsi and Behzad Kamgar-Parsi.25 The resulting analysis yields a STD of phase for the PMP method, which is used to predict an optimum second frequency. A numerical model is presented for predicting the STD as a function of the temporal sensor noise, the spatial pattern frequency, the sine-wave projection pattern amplitude, and the number of pattern shifts used. The phase STD model, the optimum frequency estimate, and the numerical performance model provide the system designer with valuable insight into the two-frequency PMP method. Our design considerations are given at the end of Section 4. The conclusions are given in Section 5, and acknowledgments are given in Section 6.

2. PHASE-MEASURING-PROFILOMETRY RANGE-FINDING METHOD

All SL techniques are based on triangulation between the camera and the projected light. For example, in Fig. 1, a light projector projects a light stripe pattern onto the object. Shown in Fig. 1 is a light stripe, but other patterns, such as grids, binary bars, and sine-wave fringes, can also be projected. The light pattern hits the object and is deformed if observed from another angle. Capturing the deformed image with a CCD camera, we calculate the range profile of the illuminated object by analyzing the deformation. To extract the depth information from the deformation, we need to know the geometric relationship among the camera, the projector, and the world coordinate system. The procedure to obtain these parameters is regarded as camera calibration.

As shown in Fig. 2, there are three coordinate systems: (1) the 3D world coordinate P^w = (X^w, Y^w, Z^w), which is the physical coordinate in the object space and whose origin and orientation are determined by the observer's convenience, (2) the camera coordinate P^c = (x^c, y^c), which is a two-dimensional (2D) pixel coordinate on the image plane of the CCD camera, and (3) the projector coordinate P^p = (x^p, y^p), which is also a 2D pixel coordinate. We refer to the camera and projector combination as a 3D sensor. The objective of 3D sensor calibration is to find the mapping transformation from the 3D world coordinate to the 2D camera and projector coordinates. The procedure of range reconstruction is to find the 3D world coordinate from the 2D camera and projector coordinates.

Fig. 1. Geometry of single-stripe SL range finder.

As described in Section 1, PMP is an attractive SL method for several reasons. It requires as few as three projection frames and no point matching or image enhancement to obtain the fringe distortion, which makes it suitable for a pipelined or parallel processing implementation. In PMP, the light pattern projected is a sine-wave pattern, the light pattern is shifted several times, and the captured light pattern is expressed as

I_n(x^p, y^p) = A^p + B^p \cos(2\pi f y^p - 2\pi n/N),   (1)

where A^p and B^p are constants of the projector, f is the frequency of the sine wave, and (x^p, y^p) is the projector coordinate. The subscript n represents the phase-shift index. The total number of phase shifts is N. From the viewpoint of the camera, the received image is distorted by the topology and expressed as

Fig. 2. Coordinates in active range finder.

Fig. 3. Base frequency projections onto a Space Shuttle model for N = 4.


I_n(x^c, y^c) = A(x^c, y^c) + B(x^c, y^c) \cos[\phi(x^c, y^c) - 2\pi n/N],   (2)

where \phi(x^c, y^c) represents the phase of the sine wave. If the projected sine pattern is shifted by a factor of 2\pi/N for N times, the phase distortion \phi(x^c, y^c) is retrieved by14

\phi(x^c, y^c) = \arctan\left[ \frac{ \sum_{n=1}^{N} I_n(x^c, y^c) \sin(2\pi n/N) }{ \sum_{n=1}^{N} I_n(x^c, y^c) \cos(2\pi n/N) } \right].   (3)

It can be shown from Eqs. (1) and (2) that the projector coordinate y^p can be recovered as

y^p = \phi(x^c, y^c)/(2\pi f).   (4)

Once the phase is obtained, the depth of the objects can be easily obtained by a geometric computation.26 The base frequency is the f that gives 1 cycle across the field of view and therefore yields a nonambiguous depth value. An example of the base frequency projections for N = 4 is shown in Fig. 3. However, the unit frequency is noisy, so a second, higher frequency is used to improve the accuracy of the depth value.18
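As a concrete illustration of Eqs. (1)-(4), the following sketch synthesizes N phase-shifted patterns for a flat surface, recovers the wrapped phase with the arctangent of Eq. (3), and maps it back to the projector coordinate through Eq. (4). All function and variable names here are ours, not from the paper's software.

```python
# Hedged sketch of PMP phase retrieval, Eqs. (1)-(4); names are illustrative.
import numpy as np

def pmp_phase(images):
    """Wrapped phase from N phase-shifted pattern images, Eq. (3)."""
    N = len(images)
    S = sum(I * np.sin(2 * np.pi * n / N) for n, I in enumerate(images, start=1))
    C = sum(I * np.cos(2 * np.pi * n / N) for n, I in enumerate(images, start=1))
    return np.arctan2(S, C)   # numerically robust form of the arctangent ratio

# Synthetic flat scene: project Eq. (1) at the base frequency f = 1 and
# recover the normalized projector row y^p through Eq. (4).
f, N, A, B = 1.0, 4, 128.0, 100.0
yp = np.linspace(0.0, 1.0, 480, endpoint=False)   # normalized projector row
patterns = [A + B * np.cos(2 * np.pi * f * yp - 2 * np.pi * n / N)
            for n in range(1, N + 1)]
phi = pmp_phase(patterns)                          # wrapped to (-pi, pi]
yp_rec = np.unwrap(phi) / (2 * np.pi * f)          # Eq. (4)
```

With noiseless patterns the recovered coordinate matches the true projector row to floating-point precision; the temporal-noise sensitivity of this estimate is the subject of Section 4.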

3. SYSTEM CALIBRATION AND RECONSTRUCTION

Calibration allows versatility in the actual apparatus yet results in accurate reconstruction of the target surface. The apparatus, the calibration formulas, and the reconstruction matrices are now given in detail. The apparatus consisted of a Pulnix TMC-7 color camera and a Texas Instruments digital light processor (DLP) projector. The camera produces composite color video, which is captured by a Data Translation DT3153 device, yielding a 640 × 480 pixel array with 24-bit color and 8-bit intensity resolution. The Texas Instruments DLP development kit projector has an 800 × 600 micromechanical mirror array. The camera lens is a TOYO Optics TV zoom lens, and the projector lens is the zoom lens that came with the DLP kit. The camera is set up directly above the projector with an angle of 29.13° between their optical axes, which intersect at the target plane, 1.575 m from the projector/camera unit. The camera was adjusted to be approximately perpendicular to the target plane, which in turn is aligned parallel with the X^w–Y^w world coordinate plane.

The camera field of view at the target plane is 618 mm wide and 508 mm high. The projector field of view is 840 mm wide and 635 mm high. The ratios of projector pixel to camera pixel are 0.9195 horizontal and 1.0 vertical. The camera pixel size on the target plane is 0.97 mm/pixel along the horizontal and 1.06 mm/pixel along the vertical. The angular resolution of the camera is 0.0327 deg/pixel, and that of the projector is 0.0368 deg/pixel, both along the horizontal. The radial distortion was insignificant for both camera and projector lenses, with a maximum radial barrel distortion along the outer boundaries of their fields of view, with respect to their optical centers, of less than 0.67%. We chose this apparatus setup to scan human busts and faces, but the theory should apply to many other configurations and applications.

Because the radial distortion is low, we are able to use the "pinhole lens" to model the perspective distortion. The pinhole lens model is a common geometric model for camera imaging systems and is shown in Fig. 4. Had there been significant radial distortion,27,28 the optics of the camera and/or the projector would need to be calibrated off line, and a spatial correction lookup table could be used to make the spatial corrections to the received image in the camera and/or to the projected image of the projector.

A. Singular-Value-Decomposition-Based System Calibration

We use a standard singular-value-decomposition-based (SVD-based) approach to the camera and projector calibration.21–24 The transforms from world to camera coordinates are given by

x^c = \frac{ m_{11}^{wc} X^w + m_{12}^{wc} Y^w + m_{13}^{wc} Z^w + m_{14}^{wc} }{ m_{31}^{wc} X^w + m_{32}^{wc} Y^w + m_{33}^{wc} Z^w + m_{34}^{wc} },   (5a)

y^c = \frac{ m_{21}^{wc} X^w + m_{22}^{wc} Y^w + m_{23}^{wc} Z^w + m_{24}^{wc} }{ m_{31}^{wc} X^w + m_{32}^{wc} Y^w + m_{33}^{wc} Z^w + m_{34}^{wc} },   (5b)

where the perspective matrix is

M^{wc} = \begin{bmatrix} m_{11}^{wc} & m_{12}^{wc} & m_{13}^{wc} & m_{14}^{wc} \\ m_{21}^{wc} & m_{22}^{wc} & m_{23}^{wc} & m_{24}^{wc} \\ m_{31}^{wc} & m_{32}^{wc} & m_{33}^{wc} & m_{34}^{wc} \end{bmatrix}.   (6)

As stated by Trucco and Verri,29 the perspective matrix M^{wc} in Eq. (6) has only 11 independent entries, which can be determined through a homogeneous linear system by at least six world–camera point matches. If more than six points can be obtained, then the matrix can be estimated through least-squares techniques. If we assume that there are M matched points, the denominator of Eq. (5) can be moved to the left side and Eq. (5) can be rewritten in linear form as Am = 0, where A is a 2M × 12 matrix.

Fig. 4. Perspective projection model.

The matrix A is structured as

A = \begin{bmatrix}
X_1^w & Y_1^w & Z_1^w & 1 & 0 & 0 & 0 & 0 & -x_1^c X_1^w & -x_1^c Y_1^w & -x_1^c Z_1^w & -x_1^c \\
0 & 0 & 0 & 0 & X_1^w & Y_1^w & Z_1^w & 1 & -y_1^c X_1^w & -y_1^c Y_1^w & -y_1^c Z_1^w & -y_1^c \\
X_2^w & Y_2^w & Z_2^w & 1 & 0 & 0 & 0 & 0 & -x_2^c X_2^w & -x_2^c Y_2^w & -x_2^c Z_2^w & -x_2^c \\
0 & 0 & 0 & 0 & X_2^w & Y_2^w & Z_2^w & 1 & -y_2^c X_2^w & -y_2^c Y_2^w & -y_2^c Z_2^w & -y_2^c \\
\vdots & & & & & & & & & & & \vdots \\
X_M^w & Y_M^w & Z_M^w & 1 & 0 & 0 & 0 & 0 & -x_M^c X_M^w & -x_M^c Y_M^w & -x_M^c Z_M^w & -x_M^c \\
0 & 0 & 0 & 0 & X_M^w & Y_M^w & Z_M^w & 1 & -y_M^c X_M^w & -y_M^c Y_M^w & -y_M^c Z_M^w & -y_M^c
\end{bmatrix},   (7)

and

m = [m_{11}^{wc} \; m_{12}^{wc} \; m_{13}^{wc} \; \cdots \; m_{34}^{wc}]^T.   (8)

Since A has rank 11, the vector m can be recovered from SVD with

A = U D V^T,   (9)

where U is a 2M × 2M matrix whose columns are orthogonal vectors, D is a diagonal matrix of nonnegative singular values, and V is a 12 × 12 matrix whose columns are orthogonal. The only nontrivial solution corresponds to the last column of V, and that is the solution for the parametric matrix M^{wc}. To find these calibration points, we use a 16-point pattern marked off on an aluminum frame. The world (X^w, Y^w, Z^w) coordinates of the points, in millimeters, are (0, 254, 645), (0, 381, 645), (0, 508, 645), (0, 635, 645); (128, 127, 0), (128, 254, 0), (128, 381, 0), (128, 508, 0); (305, 127, 0), (305, 254, 0), (305, 381, 0), (305, 508, 0); and (430, 254, 645), (430, 381, 645), (430, 508, 645), (430, 635, 645). The coordinate accuracy is ±2 mm. The projector and camera coordinates are manually found by moving a projected spot onto a target point and then evaluating the camera coordinate.
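The SVD calibration of Eqs. (7)-(9) can be sketched in a few lines of Python. The camera matrix, point set, and all names below are synthetic stand-ins for illustration, not the paper's apparatus values.

```python
# Hedged sketch of the SVD (DLT) calibration, Eqs. (7)-(9); synthetic data.
import numpy as np

def calibrate_dlt(Pw, pc):
    """Estimate the 3x4 perspective matrix from M >= 6 world/camera matches."""
    rows = []
    for (X, Y, Z), (x, y) in zip(Pw, pc):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    A = np.asarray(rows)                 # the 2M x 12 matrix of Eq. (7)
    _, _, Vt = np.linalg.svd(A)          # Eq. (9)
    # Right singular vector of the smallest singular value = last column of V.
    return Vt[-1].reshape(3, 4)

# Synthetic ground truth: an assumed pinhole camera and 16 world points.
rng = np.random.default_rng(0)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
M_true = K @ np.c_[np.eye(3), np.array([50.0, -30.0, 1000.0])]
Pw = rng.uniform([-200, -200, 200], [200, 200, 800], size=(16, 3))
hom = np.c_[Pw, np.ones(16)] @ M_true.T
pc = hom[:, :2] / hom[:, 2:3]            # ideal pixel coordinates, Eq. (5)

M_est = calibrate_dlt(Pw, pc)
M_est *= np.linalg.norm(M_true) / np.linalg.norm(M_est)  # SVD leaves scale free
if M_est[0, 0] * M_true[0, 0] < 0:
    M_est = -M_est                                       # ...and overall sign
```

With noiseless matches the recovered matrix equals the ground truth up to the overall scale and sign that the homogeneous formulation leaves free.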

With these 16 points, the transformation matrix M^{wc} is calculated by solving Am = 0. For the world-to-projector transformation matrix M^{wp}, we assume a similar perspective projection, where the camera coordinate (x^c, y^c) is replaced by the projector coordinate (x^p, y^p), and a similar perspective matrix,

M^{wp} = \begin{bmatrix} m_{11}^{wp} & m_{12}^{wp} & m_{13}^{wp} & m_{14}^{wp} \\ m_{21}^{wp} & m_{22}^{wp} & m_{23}^{wp} & m_{24}^{wp} \\ m_{31}^{wp} & m_{32}^{wp} & m_{33}^{wp} & m_{34}^{wp} \end{bmatrix},   (10)

is obtained for the corresponding points in world and projector coordinates. It should be noted that because a DLP is being used, the x^p coordinates are known and thus can be used to contribute to the calculation of the denominator coefficients of the projector y^p transform in Eq. (11) in Subsection 3.B. However, the x^p transformation is not used anywhere else. If the x^p values were not known, they would be replaced by additional known values of y^p in the SVD matrix A.

B. Range Reconstruction

Once the world-to-camera transformation matrix M^{wc} and the world-to-projector transformation matrix M^{wp} are obtained, the range of the object, i.e., the world coordinate of the object, can be computed by solving a 3 × 3 linear equation. For the point (X^w, Y^w, Z^w), it is projected onto the camera plane through M^{wc} and onto the projection plane through M^{wp}. During a scan, the image position (x^c, y^c) is known, and the y phase position y^p of the projection pattern is determined from the detected phase in Eq. (4). The unknown is the world coordinate (X^w, Y^w, Z^w), which satisfies Eqs. (5) and

y^p = \frac{ m_{21}^{wp} X^w + m_{22}^{wp} Y^w + m_{23}^{wp} Z^w + m_{24}^{wp} }{ m_{31}^{wp} X^w + m_{32}^{wp} Y^w + m_{33}^{wp} Z^w + m_{34}^{wp} }.   (11)

Rearranging Eqs. (5) and (11) and letting

C = \begin{bmatrix} m_{11}^{wc} - m_{31}^{wc} x^c & m_{12}^{wc} - m_{32}^{wc} x^c & m_{13}^{wc} - m_{33}^{wc} x^c \\ m_{21}^{wc} - m_{31}^{wc} y^c & m_{22}^{wc} - m_{32}^{wc} y^c & m_{23}^{wc} - m_{33}^{wc} y^c \\ m_{21}^{wp} - m_{31}^{wp} y^p & m_{22}^{wp} - m_{32}^{wp} y^p & m_{23}^{wp} - m_{33}^{wp} y^p \end{bmatrix},   (12)

D = \begin{bmatrix} m_{34}^{wc} x^c - m_{14}^{wc} \\ m_{34}^{wc} y^c - m_{24}^{wc} \\ m_{34}^{wp} y^p - m_{24}^{wp} \end{bmatrix},   (13)

we can compute the world coordinate as

P^w = [X^w \; Y^w \; Z^w]^T = C^{-1} D.   (14)

Thus the world coordinate of the image points can be obtained by solving a 3 × 3 linear equation. In the range reconstruction, the world point P^w in Eq. (14) is regarded as the intersection point of observing line AB and projecting line CB in Fig. 2. So it is unique, and if for any reason there is no solution for Eq. (14), it means that the point is invalid. An invalid point may occur when there is shadowing, saturation, or low pattern signal energy.
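A minimal sketch of this triangulation step (Eqs. (12)-(14)), assuming calibrated matrices are already in hand; the matrices, the triangulation pose, and the test point below are synthetic, and the function name is ours.

```python
# Hedged sketch of range reconstruction, Eqs. (12)-(14); synthetic matrices.
import numpy as np

def reconstruct_point(Mwc, Mwp, xc, yc, yp):
    """Solve C Pw = D for the world point seen at camera (xc, yc), row yp."""
    C = np.array([Mwc[0, :3] - Mwc[2, :3] * xc,    # from Eq. (5a)
                  Mwc[1, :3] - Mwc[2, :3] * yc,    # from Eq. (5b)
                  Mwp[1, :3] - Mwp[2, :3] * yp])   # from Eq. (11)
    D = np.array([Mwc[2, 3] * xc - Mwc[0, 3],      # Eq. (13)
                  Mwc[2, 3] * yc - Mwc[1, 3],
                  Mwp[2, 3] * yp - Mwp[1, 3]])
    try:
        return np.linalg.solve(C, D)               # Eq. (14): Pw = C^-1 D
    except np.linalg.LinAlgError:
        return None   # no solution: invalid point (shadow, saturation, ...)

# Synthetic camera and projector with a 29.13 deg triangulation angle.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
th = np.deg2rad(29.13)
R = np.array([[1, 0, 0],
              [0, np.cos(th), -np.sin(th)],
              [0, np.sin(th), np.cos(th)]])
Mwc = K @ np.c_[np.eye(3), np.array([0.0, 0.0, 1000.0])]
Mwp = K @ np.c_[R, np.array([0.0, 300.0, 1000.0])]

Pw_true = np.array([10.0, -20.0, 500.0])
hc = Mwc @ np.append(Pw_true, 1.0)
hp = Mwp @ np.append(Pw_true, 1.0)
Pw_rec = reconstruct_point(Mwc, Mwp, hc[0] / hc[2], hc[1] / hc[2], hp[1] / hp[2])
```

Forward-projecting a known world point through both matrices and reconstructing it recovers the point exactly; in practice the decoded y^p carries the phase noise analyzed in Section 4.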

4. INFLUENCE OF INTENSITY NOISE ON RANGE RECONSTRUCTION

Compared with other SL algorithms such as light-stripe, binary-bar, and gray-code projection, the PMP algorithm uses fewer frames for a given precision.


However, projecting a sine-wave light pattern requires the projector to support multiple gray levels. And the reconstructed phase is sensitive to intensity noise in the captured image. The noise sources include ambient light, shadowing, projector illumination noise, camera/projector flicker, camera noise, and quantization error in the frame grabber and the projector.

To model the intensity noise, we add noise to the A(x^c, y^c) in Eq. (2) such that

A_n = A(x^c, y^c) = A_0(x^c, y^c) + \Delta A_n(x^c, y^c),   (15)

where A_0(x^c, y^c) is the ideal background intensity and does not change in the test, and \Delta A_n(x^c, y^c) is the random intensity noise. To simplify our analysis, we assume that the \Delta A_n(x^c, y^c) are independent over n and are second-order processes. By substituting Eq. (15) into the numerator of Eq. (3) and considering a single pixel (x^c, y^c), we can rewrite the numerator as

S(A_n) = \sum_{n=1}^{N} I_n \sin(2\pi n/N) = \sum_{n=1}^{N} A_n \sin(2\pi n/N) + \frac{N}{2} B \sin(\phi).   (16)

Similarly, the denominator of Eq. (3) can be rewritten as

C(A_n) = \sum_{n=1}^{N} I_n \cos(2\pi n/N) = \sum_{n=1}^{N} A_n \cos(2\pi n/N) + \frac{N}{2} B \cos(\phi).   (17)

Inserting Eqs. (16) and (17) into Eq. (3), we have

\phi = \arctan\left[ \frac{S(A_n)}{C(A_n)} \right].   (18)

To estimate the change of \phi with respect to \Delta A_n, we use the following gradient:

\Delta\phi = \sum_{n=1}^{N} \left. \frac{\partial \phi}{\partial A_n} \right|_{A_n = A_n^0} \Delta A_n,   (19)

\left. \frac{\partial \phi}{\partial A_n} \right|_{A_n^0} = \frac{1}{1 + [S(A_n^0)/C(A_n^0)]^2} \left. \left[ \frac{ S(A_n^0) \dfrac{\partial C(A_n)}{\partial A_n} - C(A_n^0) \dfrac{\partial S(A_n)}{\partial A_n} }{ C(A_n)^2 } \right] \right|_{A_n = A_n^0},   (20)

where

S(A_n^0) = \frac{N}{2} B \sin(\phi),   (21)

C(A_n^0) = \frac{N}{2} B \cos(\phi).   (22)

From Eqs. (16) and (17), we obtain

\left. \frac{\partial S(A_n)}{\partial A_n} \right|_{A_n^0} = \sin(2\pi n/N),   (23)

\left. \frac{\partial C(A_n)}{\partial A_n} \right|_{A_n^0} = \cos(2\pi n/N).   (24)

Substituting Eqs. (21)–(24) into Eq. (20) and applying standard trigonometric identities, we obtain

\left. \frac{\partial \phi}{\partial A_n} \right|_{A_n = A_n^0} = \frac{2}{NB} \sin(\phi - 2\pi n/N).   (25)

So Eq. (19) is rewritten as

\Delta\phi = \frac{2}{NB} \sum_{n=1}^{N} \sin(\phi - 2\pi n/N) \Delta A_n.   (26)

From Eq. (26), we can see that the reconstructed phase error \Delta\phi depends on the phase \phi and the intensity noise. However, as we show in the following, the phase error variance is independent of \phi.

Since the noise is assumed to have zero mean, similar to the analysis by Behrooz Kamgar-Parsi and Behzad Kamgar-Parsi,25 the variance of the phase error, \sigma_\phi^2, can be represented as

\sigma_\phi^2 = \left( \frac{2}{NB} \right)^2 \sigma^2 \sum_{n=1}^{N} \sin^2(\phi - 2\pi n/N) = \frac{2}{N} \frac{\sigma^2}{B^2}.   (27)

From Eq. (27), we can see that the phase error STD \sigma_\phi increases linearly with the intensity noise \sigma and decreases with N, the total number of images used, and the fringe modulation B. This relationship is demonstrated for \sigma_\phi versus N in Fig. 5 and for \sigma_\phi versus B^p in Fig. 6, where B^p is the fringe modulation of the projector and is related to B by B = \alpha B^p, where \alpha is a reflection constant less than or equal to 1. Each STD in Figs. 5 and 6 is calculated by using 50 sample values. Thus the phase error variance is demonstrated to be independent of the actual phase value.
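The sqrt(2/N)·sigma/B scaling of Eq. (27) is easy to check numerically. The Monte Carlo sketch below, with parameter values borrowed from Fig. 5 but otherwise our own construction, adds white Gaussian noise to one pixel's N samples and compares the measured phase STD with the prediction.

```python
# Hedged Monte Carlo check of Eq. (27): sigma_phi = sqrt(2/N) * sigma / B.
import numpy as np

rng = np.random.default_rng(1)
N, B, A, sigma = 4, 128.0, 128.0, 2.8309   # values as in Fig. 5
phi_true = 0.7                              # arbitrary phase at this pixel
trials = 20000

shifts = 2 * np.pi * np.arange(1, N + 1) / N
I0 = A + B * np.cos(phi_true - shifts)                 # ideal samples, Eq. (2)
I = I0 + sigma * rng.standard_normal((trials, N))      # temporal noise, Eq. (15)
phi = np.arctan2((I * np.sin(shifts)).sum(axis=1),
                 (I * np.cos(shifts)).sum(axis=1))     # Eq. (3), per trial

measured = phi.std()
predicted = np.sqrt(2.0 / N) * sigma / B               # Eq. (27)
```

Repeating the experiment at a different phi_true leaves the measured STD essentially unchanged, which is the phase independence demonstrated above.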

Fig. 5. Phase STD change with increase of N, with scanning frequency f = 1, B^p = 128, and \sigma = 2.8309.

As shown in Eq. (4), the phase is linear with respect to y^p. From Eq. (11), we can find the relationship between phase and reconstructed range. Taking a partial derivative of y^p in Eq. (11), we have

C P_{y^p}^w + C_1 P^w = D_1,   (28)

where

P_{y^p}^w = \left[ \frac{\partial X^w}{\partial y^p} \; \frac{\partial Y^w}{\partial y^p} \; \frac{\partial Z^w}{\partial y^p} \right]^T = [X_e \; Y_e \; Z_e]^T,   (29)

C_1 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -m_{31}^{wp} & -m_{32}^{wp} & -m_{33}^{wp} \end{bmatrix},   (30)

D_1 = [0 \; 0 \; m_{34}^{wp}]^T.   (31)

Combining Eq. (28) with Eq. (14), we have

P_{y^p}^w = C^{-1}(D_1 - C_1 C^{-1} D).   (32)

So the reconstructed range error can be represented by the phase error as

\Delta P^w = [\Delta X^w \; \Delta Y^w \; \Delta Z^w]^T = P_{y^p}^w \, \Delta\phi/(2\pi f).   (33)

From Eqs. (26) and (33), we find that the reconstructed error of the 3D coordinate can be approximated by a function of the intensity error such that

\Delta P^w = \frac{ P_{y^p}^w }{ \pi f N B } \sum_{n=1}^{N} \sin(\phi - 2\pi n/N) \Delta A_n.   (34)

The intensity error is assumed to be zero-mean Gaussian noise, so it can be shown that the STD of the reconstruction error is

[\sigma_{X^w} \; \sigma_{Y^w} \; \sigma_{Z^w}] = [\,|X_e| \; |Y_e| \; |Z_e|\,] \frac{\sigma}{\sqrt{2N}\,\pi f B}.   (35)

An experimental demonstration of position error is shown in Fig. 7. All three sets of experimental results follow the theoretical relationships given in Eq. (35). Note that \sigma_{X^w} is lowest because it is orthogonal to the depth distortion along Y^w. The deviation along Z^w is highest because the angle between the camera and the projector gives a lower resolution in the depth dimension. The reconstruction error depends on N, B, and f, so by increasing any of these parameters, one can reduce the range error. Since N increases scanning time and B is limited by the system setup and difficult to improve, the most practical approach is to increase f, i.e., increase the sine-wave frequency. This is the reason that a two-frequency sine-wave projection is used. The unit-frequency sine wave is used to avoid ambiguity in phase unwrapping, and the high-frequency sine wave is used to obtain a higher resolution.18 However, we need to determine the optimal higher frequency.

Fig. 6. Phase STD change with increase of B^p, with scanning frequency f = 1, N = 4, and \sigma = 2.8309.

Fig. 7. Reconstructed world coordinate STD change with increase of frequency when N = 4 and B^p = 128.

Fig. 8. Two data sets of simulated and experimental normalized unwrapped phase STD with increase of frequency for \sigma_\phi = 0.025346 and \sigma_\phi = 0.047689. Experimental sets are scanned with N = 4; B^p = 48 and B^p = 88 for circles and squares, respectively. f_1 and f_2 are the optimal frequencies predicted by the mathematical model.

Fig. 9. Range data of a face under (a) unit frequency and (b) two frequencies.

Knowing the noise level, we can numerically find the optimal frequency. This is demonstrated in Fig. 8, where the simulated phase deviation, indicated by the curves, is compared with experimental measurements, indicated by the circles and the squares. The simulation is based on the simulated intensity such that

I_n(x^c, y^c) = Q_{256}(A(x^c, y^c)) + Q_{256}(B(x^c, y^c)) \cos[2\pi f\, Q_{N_p}(y^p N_p)/N_p - 2\pi n/N].   (36)

Simulated phase values are obtained through Eq. (3) from the simulated intensity values of Eq. (36). Noise with variance equal to that of experimental measurements is added to the simulated phase values before phase unwrapping. It can be seen in Fig. 8 that the STD of an unwrapped phase normalized by the phase STD \sigma_\phi decreases for frequencies close to the unit frequency. This is because the higher frequency has a small unwrapped STD, which is in agreement with Eq. (35). However, as the frequency continues to increase, the wavelength becomes short compared with the unit-frequency noise floor. This causes an increase in ambiguous phase unwrapping. Eventually, the wavelength becomes relatively small in the higher frequency, so its phase ambiguity error contributes less to the total noise, and the phase noise in the unit frequency begins to dominate and the curves level off to the unit-frequency STD. Therefore we can use the frequency where the STD of an unwrapped phase reaches its valley as our optimal frequency f_{opt}. It is also noted from Fig. 8 that, for both experimental sets, the simulated data match well with the experimental data before the optimal frequency is reached. The differences between experimental data and simulated data after the optimal frequency are possibly caused by the actual noise being nonstationary. However, the optimal frequencies predicted by the simulation are good approximations of the experimental values. Furthermore, the unwrapped phase can be mathematically modeled, and the optimal frequency can be numerically obtained.
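The unwrapping mechanism discussed here can be sketched numerically: the low-frequency phase selects the integer fringe order of the high-frequency phase, whose noise is then divided by f. The rounding rule and all names below are our assumptions for illustration, with the fringe count playing the role of N_f in Eq. (37).

```python
# Hedged sketch of two-frequency unwrapping; the rounding step is our
# assumption, with the fringe count playing the role of N_f in Eq. (37).
import numpy as np

def unwrap_two_frequency(phi1, phi2, f):
    """Refine the unit-frequency phase phi1 with the wrapped phase phi2
    measured at frequency f."""
    Nf = np.round((f * phi1 - phi2) / (2 * np.pi))   # fringe count
    return (phi2 + 2 * np.pi * Nf) / f               # refined phase, noise / f

# Demonstration: with unit-frequency phase noise well below pi/f, the refined
# estimate is f times less noisy than phi1, the first branch of the error
# model that follows in Eq. (38).
rng = np.random.default_rng(2)
f = 16.0
true = rng.uniform(0.2, 2 * np.pi - 0.2, size=5000)
phi1 = true + 0.01 * rng.standard_normal(true.size)            # unit frequency
phi2 = np.mod(f * true + 0.01 * rng.standard_normal(true.size),
              2 * np.pi)                                       # high frequency
refined = unwrap_two_frequency(phi1, phi2, f)
```

Raising the noise on phi1 toward pi/f makes the rounded fringe count miscount, which is exactly the ambiguous-unwrapping regime visible on the right side of Fig. 8.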

During the process of two-frequency phase unwrapping, phase fringes are counted by

N_f = (fφ₁ − φ₂)/(2π),   (37)




where φ₁ and φ₂ are the phases measured by the unit frequency and the higher frequency, respectively. Therefore the unwrapped-phase error is Δφ₂/f, where Δφ₂ is the phase error from the higher frequency. However, N_f may be miscounted as the noise increases. Since the range of the phase is (−π, π], based on our unwrapping algorithm, a phase-unwrapping error occurs when |Δφ₁| > π/f, where Δφ₁ is the phase error from the unit frequency. As f increases, the unwrapped-phase error approaches Δφ₁. The phase error of the unwrapped phase can then be modeled as

Δφ_u = Δφ₂/f   when |Δφ₁| ≤ π/f,
Δφ_u = Δφ₁     when π/f < |Δφ₁| ≤ π,   (38)

where Δφ_u is the unwrapped-phase error. Therefore the variance of the unwrapped phase is

E{Δφ_u²} = E{E{Δφ_u² | Δφ₁}}
         = ∫_{−∞}^{∞} ∫_{−∞}^{∞} Δφ_u² f(Δφ_u, Δφ₁) d(Δφ₁) d(Δφ_u).   (39)

Since Δφ₁ and Δφ₂ are zero-mean Gaussian distributed, then, assuming that σ_φ₁ = σ_φ₂ = σ_φ, the variance of the unwrapped phase is

σ_φu² = P₁(π/f) σ_φ²/f² + σ_φ² − P₂(π/f),   (40)

where

P₁(x) = [1/(√(2π) σ_φ)] ∫_{−x}^{x} exp[−y²/(2σ_φ²)] dy,

P₂(x) = [1/(√(2π) σ_φ)] ∫_{−x}^{x} y² exp[−y²/(2σ_φ²)] dy.

As is indicated in Fig. 8, the optimal higher frequencies f₁ and f₂ are the frequencies at which ∂σ_φu²/∂f = 0, where the variance of the unwrapped phase, and hence of the reconstructed world coordinates, reaches its minimum in Eq. (40).
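Since P₁ and P₂ are moments of a truncated Gaussian, Eq. (40) can be evaluated in closed form with the error function, and the optimum can be located by a direct numerical search instead of solving ∂σ_φu²/∂f = 0 symbolically. A minimal sketch (the closed forms and function names are ours):

```python
import math

def unwrapped_phase_var(f, sigma_phi):
    """Evaluate Eq. (40). Closed forms of the truncated-Gaussian moments:
       P1(x) = erf(x / (sqrt(2) sigma)),
       P2(x) = sigma^2 erf(x / (sqrt(2) sigma))
               - sqrt(2/pi) sigma x exp(-x^2 / (2 sigma^2))."""
    x = math.pi / f
    s = sigma_phi
    p1 = math.erf(x / (math.sqrt(2.0) * s))
    p2 = s * s * p1 - math.sqrt(2.0 / math.pi) * s * x * math.exp(-x * x / (2.0 * s * s))
    return p1 * s * s / (f * f) + s * s - p2

def optimal_frequency(sigma_phi, f_max=200):
    """Integer frequency minimizing the unwrapped-phase variance of Eq. (40)."""
    return min(range(1, f_max + 1), key=lambda f: unwrapped_phase_var(f, sigma_phi))
```

For σ_φ near the experimental value 0.047689, the minimum falls in the tens of cycles and the variance climbs back toward σ_φ² at very high frequencies, reproducing the valley shape of Fig. 8.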

To demonstrate how this theory can be applied to an actual human face, we adopt the following procedure for estimating σ², σ_φ², and the optimum frequency:

1. Set the projector parameters in Eq. (1) to the maximum values A^P = B^P = 128 and adjust the aperture to prevent saturation.

2. Capture a series of images of a typical target object.

3. Use a small area of the target object (either a typical area or a difficult area) and find the temporal STD of each pixel by using the captured frames. Average the STDs within the small area to estimate σ².

4. The mean values of the small area represent the attenuated A^P + B^P + ambient light, so project and capture a black area to obtain the ambient light alone. Then subtract the ambient light from A^P + B^P + ambient light, and divide by 2 to get the received A and B values. Choose the number of shifts N based on the maximum allowable scan time.

5. Use N, B, σ², and Eq. (27) to get σ_φ².

6. Numerically determine the φ(x^c, y^c) values by substituting I_n(x^c, y^c) from Eq. (36) into Eq. (3), at both the base frequency and a second frequency f, with B(x^c, y^c) = B and A(x^c, y^c) = A.

7. Add noise, with variance σ_φ², to φ(x^c, y^c) for the base frequency and the second frequency. Unwrap these

Fig. 10. Reconstructed world coordinates of a face with intensity values.



phase values based on Eq. (37) and estimate the STD of φ_u(x^c, y^c). Numerically plot the STD for a range of frequencies.

8. Numerically solve Eq. (40) for the optimum frequency. Use the optimum frequency, together with the curve found in step 7, to decide on an optimized frequency given the application and performance objectives.
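Steps 6 and 7 amount to a small Monte Carlo experiment. The sketch below applies the fringe count of Eq. (37), rounded to the nearest integer (our assumption about the unwrapping rule), to noisy phase pairs and estimates the unwrapped-phase STD; the function names and the noiseless-phase model are illustrative.

```python
import numpy as np

def unwrap_two_frequency(phi1, phi2, f):
    """Two-frequency unwrapping: count fringes with Eq. (37),
    N_f = (f*phi1 - phi2) / (2*pi), round to the nearest integer,
    rebuild the high-frequency phase, and divide by f."""
    n_f = np.round((f * phi1 - phi2) / (2.0 * np.pi))
    return (phi2 + 2.0 * np.pi * n_f) / f

def unwrapped_phase_std(f, sigma_phi, trials=20000, seed=0):
    """Estimate the STD of the unwrapped phase at one frequency f."""
    rng = np.random.default_rng(seed)
    phi_true = rng.uniform(-0.9 * np.pi, 0.9 * np.pi, trials)  # kept away from the +/-pi boundary
    phi1 = phi_true + rng.normal(0.0, sigma_phi, trials)       # noisy unit-frequency phase
    phi2 = np.angle(np.exp(1j * (f * phi_true + rng.normal(0.0, sigma_phi, trials))))
    return float(np.std(unwrap_two_frequency(phi1, phi2, f) - phi_true))
```

Sweeping f and plotting `unwrapped_phase_std(f, sigma_phi)` reproduces the valley of Fig. 8, so the empirical minimum can be checked against the optimum predicted by Eq. (40).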

The procedure is applied to a human face, ''Timothy.'' With the use of a sample area on the forehead, the optimum frequency is determined to be 22; the face is scanned at a second frequency of 20 to allow for gradient modulation of the projected frequency. The two-frequency reconstruction is presented in Figs. 9 and 10. In Fig. 9 the depth data are visualized as a specular surface so that depth variation is highlighted. The depth noise is obvious for the unit frequency, as shown in Fig. 9(a), and a significant improvement is obtained by using two frequencies, as shown in Fig. 9(b). To demonstrate the practicality of the process, we map the intensity values onto the world coordinates and view them from an angle, as shown in Fig. 10. Some dark regions, such as the eye pupils and the nostrils, are interpolated.

In summary, we provide a procedure for obtaining an optimum frequency, as well as the deviation with respect to frequency, as shown in Fig. 8. A PMP designer can choose a suboptimal frequency that allows for an acceptable modulation range. Furthermore, a key parameter in system design is the triangulation angle. Typically, the larger the triangulation angle, the lower the measurement STD, so the PMP designer would like to make the triangulation angle as large as possible. However, the larger the triangulation angle, the more the shadowing, the less the reflected light, and the larger the scan-head geometry. Once these trade-offs are made and the maximum angle is determined, the high-frequency projection pattern is determined from Eqs. (3) and (36) to allow for the expected frequency modulations caused by the target object's depth-gradient extrema. If the camera, the projector, and a typical target surface can be set up to measure the anticipated imaging noise, then the absolute phase-error values can be calculated, and the calibration coefficients can be found and used to determine the error in repeated measurements.

5. CONCLUSIONS

In this paper we presented a detailed procedure for two-frequency PMP optimization. The PMP algorithm's performance depends on the temporal intensity noise. To our knowledge, we are the first to introduce a rigorous mathematical analysis of PMP temporal noise and its effect on the unwrapped phases as well as on the reconstructed world coordinates. Furthermore, based on the two-frequency phase-unwrapping algorithm, we developed both a practical simulation model of the unwrapped-phase noise and, for the first time, a mathematical model for determining the optimal higher frequency. Both approaches approximate the experimental data. Hence, given a basic measurement of the intensity-noise variance, PMP designers can determine an optimized higher-frequency value by using these numerical and mathematical models.

ACKNOWLEDGMENTS

Partial funding for this research was provided by NASA cooperative agreement NCC5-222 through Western Kentucky University and by National Science Foundation grant EPS-9874764. Our thanks go to Timothy Hassebrook for being the subject of the face scan.

Corresponding author Laurence G. Hassebrook can be reached by e-mail at [email protected] or by mail at the address on the title page.

*Present address: Cisco Systems, Inc., 170 West Tasman Drive, San Jose, California 95134-1706.


