
Dual-Beam Structured-Light Scanning for 3-D Object Modeling

Johnny Park, Guilherme N. DeSouza, and Avinash C. Kak

Robot Vision Laboratory, Purdue University, 1285 EE Building, West Lafayette, IN 47907-1285

{jpark, gnelson, kak}@purdue.edu

Abstract

In this paper, we present our Dual-Beam Structured-Light Scanner (DSLS), a scanning system that generates range maps much richer than those obtained from a conventional structured-light scanning system. Range maps produced by DSLS require fewer registrations for 3-D modeling. We show that the DSLS system more easily satisfies what are often difficult-to-satisfy conditions for determining the 3-D coordinates of an arbitrary object point. Two specific advantages of DSLS over conventional structured-light scanning are: 1) a single scan by the DSLS system is capable of generating range data on more surfaces than is possible with the conventional approach using the same number of camera images; and 2) since the data collected by DSLS is more free of self-occlusions, the object needs to be examined from a smaller number of viewpoints.

1 Introduction

The existing technology for 3-D modeling and bin-picking has improved significantly in the last few years. The electronics developed to date for structured-light scanners, range scanners, etc., has allowed for the acquisition of range data with resolution as fine as 0.05 mm [10]. All this new technology has made it possible to model objects with sizes varying from the large statues of Buddha [12] and David [10] to small industrial parts to be picked from a conveyor belt [9]. However, despite the growing number of applications found today and the apparently impressive results reported, there still exist a few challenging problems in 3-D modeling. One of these is multiview registration.

Multiview registration is a problem that has caught the attention of many researchers in recent years [2, 6, 16]. The need for multiview registration stems from the intrinsic inability of sensors to perceive the entire object from one single view angle. Frequently, an object contains details that are occluded by other parts of the object. Sometimes, occluded surfaces are extrapolated from those that are visible and labeled as "unimaged surfaces" [13], but eventually the information regarding such surfaces needs to be replaced by actual data and the problem of aligning the two sets of range data has to be faced again.

One alternative to multiview registration is to construct a scanning system in such a way that the transformation matrices corresponding to the different viewpoints are known in advance. However, this condition is difficult to satisfy in practice, especially when the viewpoints are chosen with special criteria such as the minimization of the occluded areas, as used in the notions of the Best Position (BP) of an object and its Next-best-view [5], and in the Next-best-pose [15] for range data collection.

In order to attack the problem of multiview registration, different methods have been proposed. The early methods devised for combining multiview range data came from Chen and Medioni [7], where views are incrementally merged into larger views (metaviews), and from Besl and McKay's Iterative Closest Points (ICP) algorithm [2], where features from different views are paired based on their distances and then used to compute a rigid 3-D transformation. Many other researchers improved these methods or proposed yet new ones, such as: Bergevin et al. [1], who improved [7] by bringing information from previously registered views into the merging of metaviews; Carmichael et al. [3], who proposed an algorithm for view registration based on local 3-D signatures; etc. The method in [3], for example, which was derived from the work by Johnson et al. [8], improved the computation of local surface signatures, called spin-images, by efficiently dealing with data sets with large variations in resolution and cluttered scenes.

From the description above, one can immediately point out the two major difficulties in multiview registration. The first difficulty is how to efficiently process the large amount of overlapping range data that is acquired by a scanning system for different poses of an object. The range data for the successive poses must overlap, since otherwise it would be impossible to carry out multiview registration. The second difficulty is the accumulation of error during view-to-view registration. Although techniques based on curvature patches [11], multi-resolution [14], etc., have been proposed to solve these problems, satisfactory solutions remain to be found.


This paper appears in: Third International Conference on 3D Digital Imaging and Modeling, 2001


Figure 1: A conventional structured-light scanning system. (Labels: plane of light, linear slide, camera view with image axes i and j, illuminated stripe, world axes x, y, z, light projector, camera.)

In this paper, we propose a scanning system that, by generating richer range maps, attacks these difficulties at their very root. The proposed system, DSLS, is composed of two light projectors and one camera. The devices are calibrated with respect to each other, and a much richer range map can be obtained with a single scan. We believe that this setting significantly reduces the possibility of occlusions, and therefore the number of necessary view angles and consequent data sets is minimized. Also, by minimizing the number of data sets, the number of views used in the registration process is reduced. This has the effect of reducing the accumulation of the view-to-view error.

2 Structured-Light Scanning

Structured-light scanners are widely used for various applications in robotics and computer vision. They are especially effective in 3-D object bin-picking and 3-D object modeling applications because of the accuracy and reliability of the range data yielded. A typical structured-light scanner system is shown in Figure 1. In this system, a plane of light parallel to the xz-plane is projected onto the object being scanned. The intersection of the plane of light and the object creates a stripe of illuminated points on the object surface. The plane of light sweeps the object as the linear slide carries the scanning system in the y direction while a sequence of images is taken by the camera at discrete steps. An index number k is assigned to each of the images in the order they are taken. Therefore, each k corresponds to a y position of the plane of light. For each image k, a set of image coordinates (i, j) of the pixels in the illuminated stripe is obtained. The triples (i, j, k) are converted to (x, y, z) world coordinates using a calibration matrix.
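This last conversion is a one-liner in matrix form. Below is a minimal sketch, assuming the calibration has been folded into a single 4x4 homogeneous matrix T mapping [i, j, k, 1] to scaled world coordinates; the matrix layout and function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def triples_to_world(triples, T):
    """Convert (i, j, k) stripe triples to (x, y, z) world coordinates.

    T is an assumed 4x4 homogeneous calibration matrix mapping
    [i, j, k, 1] to [w*x, w*y, w*z, w]; dividing by w gives (x, y, z).
    """
    pts = np.atleast_2d(np.asarray(triples, dtype=float))   # (N, 3)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])    # (N, 4)
    mapped = homog @ T.T                                    # (N, 4)
    return mapped[:, :3] / mapped[:, 3:4]                   # dehomogenize
```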

In order to obtain the position of any point on the object surface, the following two conditions must be satisfied:

Figure 2: Example of three basic cases of occlusion (surfaces S1, S2, S3). Upper right: occlusion with respect to the light projector; lower left: occlusion with respect to the camera; lower right: no occlusion.

1. The object point must be illuminated by the plane of light.

2. The camera must be able to see the illuminated point.

In other words, the object point cannot be occluded either with respect to the light projector or with respect to the camera. Consider, for example, the three basic cases shown in Figure 2. In the first case, the surface S1 can be seen by the camera, but there is no intersection with the plane of light, so condition 1 is not satisfied. Thus, no points on the surface S1 are detected. In the second case, the plane of light intersects the surface S2 and creates a stripe of illuminated points on the surface. However, the stripe cannot be seen by the camera, violating condition 2. Again, points on the surface S2 cannot be detected. Finally, in the third case, the plane of light intersects the surface S3, creating a stripe of illuminated points that can be seen by the camera. In this case, both conditions are satisfied and all points on the surface S3 are detected.
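In code, the two conditions reduce to a per-point detectability test. The sketch below merely encodes the three cases of Figure 2; the two boolean inputs stand in for actual occlusion tests (e.g., ray casting against the object surface), which the paper does not spell out.

```python
def classify_point(illuminated: bool, visible_to_camera: bool) -> str:
    """Apply conditions 1 and 2 to one object point (the cases of Figure 2)."""
    if not illuminated:
        # Condition 1 fails: occluded with respect to the light projector (S1).
        return "undetected: occluded from light projector"
    if not visible_to_camera:
        # Condition 2 fails: occluded with respect to the camera (S2).
        return "undetected: occluded from camera"
    return "detected"  # both conditions hold (S3)
```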

Some researchers try to reduce occlusions with respect to the camera by adding a second camera on the other side of the light projector. The motivation for the second camera is that some of the object surfaces that cannot be seen by the initial camera may be seen by the second camera. However, the second camera generates twice as many images to be processed, and it does not reduce occlusions with respect to the light projector (condition 1).

Our proposed system, the Dual-Beam Structured-Light Scanner (DSLS), substantially reduces occlusions with respect to the light projector while using the same number of images. In the next section, we present the DSLS system and its advantages in 3-D object modeling.


Figure 3: Dual-beam structured-light scanning system. (Labels: left and right light projectors on the linear slide, camera, camera view, left and right planes of light, and stripe regions L and R.)

3 Dual-Beam Structured-Light Scanner (DSLS)

3.1 System Integration

The DSLS system we have developed is shown in Figure 3. An additional light projector is mounted on the right end of the linear slide. The additional plane of light (the right plane) generated by this projector intersects the initial plane of light (the left plane) right below the ground plane, thus creating two stripes that are very close to each other on the ground plane. The camera is positioned in the middle of the projectors and observes the illuminated stripes created by the left and the right planes. Since the two planes do not intersect above the ground, the illuminated stripes generated by the left plane and those generated by the right plane never overlap in the camera view. In other words, the illuminated stripes created by the two planes are closest to each other when they both hit the ground. It must further be noted that the illuminated stripes created by the left plane appear only in region L (see Figure 3). On the other hand, the illuminated stripes created by the right plane appear only in region R. No stripes are observed in the region between L and R, and this region should be minimized in order to maximize the height above the ground plane for which dual data would be available. Although we have worked with the laser beam orientations shown in Figure 3, one could also design a DSLS-like system with other orientations as long as the two beams do not intersect above the ground plane.
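The geometric constraint on the two beams is easy to check numerically. A minimal sketch, assuming each light plane is given in Hesse normal form n·p = d with unit normal n (a representation we chose for illustration; the paper does not specify one):

```python
import numpy as np

def beams_intersect_below_ground(n1, d1, n2, d2, z_ground=0.0, eps=1e-9):
    """Check that two light planes (n . p = d) meet in a line that is
    parallel to the ground plane and lies at or below z = z_ground."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    line_dir = np.cross(n1, n2)          # direction of the intersection line
    if np.linalg.norm(line_dir) < eps:
        return False                     # parallel planes: no intersection line
    if abs(line_dir[2]) > eps:
        return False                     # line not horizontal: it would cross above ground
    # A point on the line, p0 = a*n1 + b*n2, from the 2x2 Gram system.
    G = np.array([[n1 @ n1, n1 @ n2],
                  [n1 @ n2, n2 @ n2]])
    a, b = np.linalg.solve(G, np.array([d1, d2]))
    p0 = a * n1 + b * n2
    return p0[2] <= z_ground             # constant height of the horizontal line
```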

3.2 Data Acquisition

The data acquisition process of the DSLS is easily modified from that of the conventional structured-light scanner. In fact, the only modification comes from realizing that the L region and the R region (see Figure 3) provide two different sets of data. For each image k, the L region is searched and a set of triples (i, j, k)_L is obtained. Similarly, the R region is searched to obtain the set of triples (i, j, k)_R. These two sets of triples form two range maps. This is an attractive feature, since the processing time for obtaining two different range maps with the DSLS is practically the same as the time for obtaining one range map with a conventional system.
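A hedged sketch of this per-image search follows: each image's stripe pixels are partitioned by a column boundary between the L and R regions. The stripe detector is passed in as a function and the boundary column is a parameter, since the paper describes neither.

```python
def acquire_dsls_range_maps(images, boundary_j, detect_stripe_pixels):
    """Build the two DSLS range maps from a single scan.

    detect_stripe_pixels(image) is an assumed helper returning the (i, j)
    pixels of the illuminated stripes in one camera image; columns with
    j < boundary_j are taken to lie in region L, the rest in region R.
    """
    left_map, right_map = [], []
    for k, image in enumerate(images):            # k indexes slide positions
        for (i, j) in detect_stripe_pixels(image):
            target = left_map if j < boundary_j else right_map
            target.append((i, j, k))              # triple for later conversion
    return left_map, right_map
```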

System                 Average processing time
DSLS                   14.03 sec
Conventional system    13.27 sec

The table shows the average processing time for the DSLS and the conventional system over 5 trials. The object scene was the same for all the trials, and 200 images were taken. The processing time was recorded from the start of the scan until the range map was generated.

3.3 System Calibration

The calibration of the dual-beam structured-light scanner is done by modifying the method described in [4]. In this method, $n$ data points are used to solve a $4 \times 3$ transformation matrix $T$. Let the $i$-th data point in world coordinates be denoted by $(x_i, y_i, z_i)$ and the corresponding image coordinates be denoted by $(r_i, c_i)$. Also, let the variables $t_1$ to $t_{12}$ be the elements of the matrix $T$. Then we have

$$
\begin{bmatrix} \tilde{x}_i \\ \tilde{y}_i \\ \tilde{z}_i \\ \tilde{w}_i \end{bmatrix}
=
\begin{bmatrix}
t_{1} & t_{2} & t_{3} \\
t_{4} & t_{5} & t_{6} \\
t_{7} & t_{8} & t_{9} \\
t_{10} & t_{11} & t_{12}
\end{bmatrix}
\begin{bmatrix} r_i \\ c_i \\ 1 \end{bmatrix}
\qquad (1)
$$

$$
x_i = \frac{\tilde{x}_i}{\tilde{w}_i}, \qquad
y_i = \frac{\tilde{y}_i}{\tilde{w}_i}, \qquad
z_i = \frac{\tilde{z}_i}{\tilde{w}_i}
\qquad (2)
$$

We use the free variable $\tilde{w}_i$ to account for the non-uniqueness of the homogeneous coordinate expressions. Expanding Eq. (1), rearranging it using Eq. (2), and fixing the free scale of $T$ by setting $t_{12} = 1$, we have

$$
\begin{bmatrix}
R & 0 & 0 & R_x \\
0 & R & 0 & R_y \\
0 & 0 & R & R_z
\end{bmatrix}
\begin{bmatrix} t_1 \\ t_2 \\ \vdots \\ t_{11} \end{bmatrix}
=
\begin{bmatrix}
x_1 \\ \vdots \\ x_n \\ y_1 \\ \vdots \\ y_n \\ z_1 \\ \vdots \\ z_n
\end{bmatrix}
\qquad (3)
$$


where

$$
R = \begin{bmatrix} r_1 & c_1 & 1 \\ r_2 & c_2 & 1 \\ \vdots & \vdots & \vdots \\ r_n & c_n & 1 \end{bmatrix}, \quad
R_x = \begin{bmatrix} -r_1 x_1 & -c_1 x_1 \\ -r_2 x_2 & -c_2 x_2 \\ \vdots & \vdots \\ -r_n x_n & -c_n x_n \end{bmatrix}, \quad
R_y = \begin{bmatrix} -r_1 y_1 & -c_1 y_1 \\ \vdots & \vdots \\ -r_n y_n & -c_n y_n \end{bmatrix}, \quad
R_z = \begin{bmatrix} -r_1 z_1 & -c_1 z_1 \\ \vdots & \vdots \\ -r_n z_n & -c_n z_n \end{bmatrix}.
$$

If we rewrite Eq. (3) as $A\mathbf{t} = \mathbf{b}$, then our problem is to solve for $\mathbf{t}$ in $A\mathbf{t} = \mathbf{b}$. We can form the normal equations and find the linear least-squares solution by solving $(A^{\top}A)\,\mathbf{t} = A^{\top}\mathbf{b}$, where $A^{\top}$ is the transpose of $A$. The resulting solution $\mathbf{t}$ forms the transformation matrix $T$. Note that Equation (3) contains $3n$ equations and 11 unknowns; therefore the minimum number of data points needed to solve this equation is 4.

Figure 4: System calibration. (Camera view of the calibration block, showing the illuminated points created by the left and the right light projectors.)

For the dual-beam structured-light scanner, we need to find two transformation matrices, one for the left and one for the right light projector. It would be possible to compute one transformation matrix and solve for the other if we knew the exact relative positions of the two light projectors. This approach, however, is not practical, since finding the exact relative positions is very difficult.

The calibration block we have devised is shown in Figure 4. Using this calibration block, we measure the illuminated points on the rods generated by the left light projector and their corresponding points in the camera view. Then a transformation matrix $T_{\mathrm{left}}$ can be computed using those measured data points. Similarly, we compute a transformation matrix $T_{\mathrm{right}}$ using the data points that were created by the right light projector.

We attached 9 rods to the calibration block such that the camera is able to view all 18 illuminated points. Also, all the rods are assumed to be parallel to the world coordinate $xy$-plane.
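The least-squares solve of Eq. (3) is mechanical with numpy. A minimal sketch, assuming the (r, c) to (x, y, z) correspondences from the calibration block have already been extracted; calling it once with the left projector's points yields T_left, and once with the right projector's points yields T_right.

```python
import numpy as np

def solve_calibration(image_pts, world_pts):
    """Estimate the 4x3 matrix T of Eq. (1) from n >= 4 correspondences.

    image_pts: (n, 2) array of (r_i, c_i); world_pts: (n, 3) array of
    (x_i, y_i, z_i). Builds the 3n x 11 system of Eq. (3) with t12 = 1
    and solves the normal equations in the least-squares sense.
    """
    image_pts = np.asarray(image_pts, float)
    world_pts = np.asarray(world_pts, float)
    r, c = image_pts[:, 0], image_pts[:, 1]
    n = len(r)
    R = np.column_stack([r, c, np.ones(n)])       # the n x 3 block R
    Z = np.zeros((n, 3))
    rows, rhs = [], []
    for axis in range(3):                         # x-, y-, z-equations in turn
        w = world_pts[:, axis]
        blocks = [Z, Z, Z]
        blocks[axis] = R                          # place R on the block diagonal
        rows.append(np.hstack(blocks + [np.column_stack([-r * w, -c * w])]))
        rhs.append(w)
    A = np.vstack(rows)                           # 3n x 11
    b = np.concatenate(rhs)                       # 3n
    t, *_ = np.linalg.lstsq(A, b, rcond=None)     # solves (A^T A) t = A^T b
    return np.append(t, 1.0).reshape(4, 3)        # rows of T map to x~, y~, z~, w~
```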

3.4 Advantages of DSLS

In general, with the DSLS system we are able to generate range data on more surfaces than is possible with the conventional approach, and to do so in a single scan using the same number of camera images.

To see the second and more important advantage of DSLS, we need to first describe briefly the shortcomings of the current best practice for combining range images for object modeling. 3-D modeling requires that all of the external surfaces of an object be range-mapped. Since the different range maps would in all likelihood be taken from different viewpoints, there is then the problem of registering the range maps into a common coordinate frame, a problem for which no fully automatic procedure has yet been devised. One may compute the registration by selecting the corresponding points manually; however, that can be tedious and difficult since, for complex objects, humans are not always good at visualizing 3-D points in 2-D projections of the data collected. To avoid this painful process, many researchers are using the calibration between the sensor and the object to compute the registration. A popular approach consists of placing the object on a turntable which rotates in front of a structured-light scanner. To enhance the accuracy of registration achieved in this manner, one can also use the ICP (Iterative Closest Point) algorithm [2]. If the object of the example of Figure 2 were placed on a turntable, the surface S2 would be detected by rotating the turntable by 180°, but the surface S1 would not be, since the turntable would be rotating perpendicular to the plane of light. The only way to detect the surface S1 using a single structured-light scanner would be to change the object's orientation with respect to the turntable in such a way that the surface S1 would intersect the plane of light and the illuminated stripe would be seen by the camera. But changing the pose of the object would alter the transformation from the object to the turntable and, if this new transformation matrix is not available, would require manual registration. The DSLS reduces the need to change the object's orientation with respect to the turntable.
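For reference, the calibrated-turntable registration mentioned above amounts to applying a known rotation to each range map. A minimal sketch, under our own simplifying assumption that the turntable axis coincides with the world z axis through the origin:

```python
import numpy as np

def turntable_to_common_frame(points, angle_deg):
    """Rotate a range map captured at a known turntable angle back into
    the reference (0-degree) frame. points: (N, 3) array of (x, y, z)."""
    a = np.radians(-angle_deg)                # inverse rotation undoes the table
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    return points @ Rz.T                      # apply Rz to every row
```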

4 Experimental Results

We first want to show that a single scan by the two beams of DSLS can produce a range map for more surfaces of an object than is possible with just one beam. Consider, for example, the object shown in Figure 5(a).


Figure 5: (a) photograph of the object; (b) cloud of points detected by the left light projector; (c) cloud of points detected by the right light projector; (d) superposition of (b) and (c).

Figure 6: Eight different objects to be tested. Top left to top right: Obj. 1 - Obj. 4; bottom left to bottom right: Obj. 5 - Obj. 8.

A single scan of the object with just the left light projector produces the range map shown in Figure 5(b), and a single scan with only the right light projector the map shown in Figure 5(c). However, a single scan with the two light beams together produces the range map of Figure 5(d). The fact that a single DSLS scan is able to capture more surfaces in some of the poses of the object means that one would need fewer scans for modeling the entire object and, consequently, one would have to register fewer range maps.

We now illustrate how DSLS improves upon traditional structured-light scanning with regard to the single-scan efficiency of data collection. This is done by using eight different objects of different shapes, colors, and surface textures (Figure 6). Each of these objects is scanned 5 times, with the pose of the object changed randomly for each scan. Two range maps are collected for each pose: one from the left light projector and one from the right light projector. The two range maps are analyzed to find the points that were detected only by

the left light projector, the points detected only by the right light projector, and the points detected by both light projectors. Figure 7 shows the result of this experiment. What the bar graphs depict is explained by the legend at the bottom of the figure. In each pair of bars, the left bar shows the number of object points detected by the left light projector, and the right bar the number of points detected by the right light projector. The gray bottom portion of each bar shows the number of points that are detected by both light projectors, and the black top portion the number of points unique to each light projector. The figure shows that the right light projector, which projects a beam at a slant angle with respect to the direction of the scan, consistently detects more object points than the left light projector. This is not surprising, since the vertically projected beam of the left light projector will fail to see any vertical surfaces on the objects. In Figure 8, the DSLS range maps of the eight objects are shown.
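The per-pose analysis behind Figure 7 is set arithmetic on the two point clouds. A hedged sketch, which quantizes both clouds to a voxel grid to decide when a point counts as detected by both projectors (the voxel size is our assumed matching tolerance, not a value from the paper):

```python
import numpy as np

def overlap_counts(left_pts, right_pts, voxel=0.5):
    """Compute the Figure 7 quantities Lt, Lo, Rt, Ro, LR for one pose."""
    def keys(pts):
        # Quantize each (x, y, z) point to an integer voxel index.
        return {tuple(k) for k in np.floor(np.asarray(pts) / voxel).astype(int)}
    L, R = keys(left_pts), keys(right_pts)
    return {"Lt": len(L), "Lo": len(L - R),
            "Rt": len(R), "Ro": len(R - L),
            "LR": len(L & R)}
```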

To show the complementary roles played by the left and the right light projectors, we show in the left column of Figure 9 the range maps for the free-form object (labeled Obj. 7 in Figure 6). The top entry in the column is the range map produced by the left light projector, the middle entry the range map produced by the right light projector, and the bottom entry the composite range map by DSLS. As the reader can see, the occluded parts of the left-projector range map are covered by data in the right-projector range map. The fact that the opposite of this statement is also true is made evident by examining the same range maps from a different perspective, as shown in the right column of Figure 9.

5 Conclusions

In this paper, we described the Dual-Beam Structured-Light Scanning System. Both quantitative and qualitative results were presented to illustrate the advantages of using a second light projector. The results showed that the number of registrations required for 3-D modeling can be significantly reduced. This reduction was possible because of the extra range data that is obtained by using both projectors as opposed to one single projector. The DSLS added on average over 40% more points to the range data than the conventional scanning system.


Figure 8: Single-scan DSLS range maps for the objects shown in Figure 6.


Figure 7: Number of points detected by DSLS for Obj. 1 through Obj. 8 (one bar graph per object; x-axis: trial, y-axis: number of points). Lt: total number of points detected by the left light projector; Lo: number of points detected only by the left light projector; Rt: total number of points detected by the right light projector; Ro: number of points detected only by the right light projector; LR: number of points detected by both light projectors.

Figure 9: Object points detected by DSLS in a single scan. The left column shows in (a) the cloud of points yielded by the left light projector, in (c) the cloud of points yielded by the right light projector, and in (e) the composite DSLS range map. The right column, (b), (d), and (f), shows exactly the same data from a different perspective.


References

[1] R. Bergevin, M. Soucy, H. Gagnon, and D. Laurendeau, "Towards a General Multi-View Registration Technique", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 5, May 1996, pp. 540-47.

[2] P.J. Besl and N.D. McKay, "A Method for Registration of 3-D Shapes", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, February 1992, pp. 239-56.

[3] O. Carmichael, D. Huber, and M. Hebert, "Large Data Sets and Confusing Scenes in 3-D Surface Matching and Recognition", Proceedings of the Second International Conference on 3-D Imaging and Modeling, Ottawa, Canada, October 1999, pp. 358-67.

[4] C.H. Chen and A.C. Kak, "Modeling and Calibration of a Structured Light Scanner for 3-D Robot Vision", Proceedings of the IEEE International Conference on Robotics and Automation, Raleigh, NC, March 1987, pp. 807-15.

[5] B.T. Chen, W.S. Lou, C.C. Chen, and H.C. Lin, "A 3D Scanning System Based on Low-Occlusion Approach", Proceedings of the Second International Conference on 3-D Imaging and Modeling, Ottawa, Canada, October 1999, pp. 506-15.

[6] Y. Chen and G. Medioni, "Object Modeling by Registration of Multiple Range Views", Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, California, April 1991, pp. 2724-29.

[7] Y. Chen and G. Medioni, "Object Modeling by Registration of Multiple Range Views", Image and Vision Computing, vol. 10, no. 3, April 1992, pp. 145-55.

[8] A. Johnson and M. Hebert, "Surface Matching for Object Recognition in Complex Three-Dimensional Scenes", Image and Vision Computing, vol. 16, no. 9-10, July 1998, pp. 635-51.

[9] A.C. Kak and J.L. Edwards, "Experimental State of the Art in 3D Object Recognition and Localization Using Range Data", Proceedings of the Workshop on Vision and Robots, Pittsburgh, PA, 1995.

[10] M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, M. Ginzton, S. Anderson, J. Davis, J. Ginsberg, J. Shade, and D. Fulk, "The Digital Michelangelo Project: 3D Scanning of Large Statues", Proceedings of SIGGRAPH, 2000, pp. 131-44.

[11] V. Nguyen, V. Nzomigni, and C. Stewart, "Fast and Robust Registration of 3-D Surfaces Using Low Curvature Patches", Proceedings of the Second International Conference on 3-D Imaging and Modeling, Ottawa, Canada, October 1999, pp.

[12] K. Nishino, Y. Sato, and K. Ikeuchi, "Appearance Compression and Synthesis Based on 3D Model for Mixed Reality", Proceedings of the International Conference on Computer Vision, Corfu, Greece, September 1999, pp. 38-45.

[13] M. Reed and P. Allen, "3-D Modeling from Range Imagery: An Incremental Method with a Planning Component", Image and Vision Computing, vol. 17, no. 1, February 1999, pp. 99-111.

[14] H. Zha, Y. Makimoto, and T. Hasegawa, "Dynamic Gaze-Controlled Levels of Detail of Polygonal Objects in 3-D Environment Modeling", Proceedings of the Second International Conference on 3-D Imaging and Modeling, Ottawa, Canada, October 1999, pp. 321-30.

[15] H. Zha, K. Morooka, T. Hasegawa, and T. Nagata, "Active Modeling of 3-D Objects: Planning on the Next Best Pose (NBP) for Acquiring Range Images", Conference on Recent Advances in 3-D Digital Imaging and Modeling, Ottawa, Canada, May 1997, pp. 68-75.

[16] Z. Zhang, "Iterative Point Matching for Registration of Free-Form Curves and Surfaces", International Journal of Computer Vision, vol. 13, no. 2, 1994, pp. 119-52.
