
Optical high-precision three-dimensional vision-based quality control of manufactured parts by use of synthetic images and knowledge for image-data evaluation and interpretation

Pierre Graebling, Alex Lallement, Da-Yi Zhou, and Ernest Hirsch

Vision-based evaluation of industrial workpieces can make efficient use of knowledge-based approaches, in particular for quality control, inspection, and accurate-measurement tasks. A possible approach is to compare real images with conceptual (synthetic) images generated by use of standard computer-aided design models, which include tolerances and take the application-specific conditions into account (e.g., the measured calibration data). Integrated in (industrial) real-life environments, our evaluation methods have been successfully applied to on-line inspection of manufactured parts including sculptured surfaces, using structured-light techniques for the reconstruction of three-dimensional shapes. Accuracies in the range 15–50 µm are routinely achieved by use of either isolated images or spatially registered image sequences. © 2002 Optical Society of America

OCIS codes: 100.0110, 100.2650, 120.0120, 150.0150, 150.3040, 150.6910.

1. Introduction

For industrial computer-vision-based applications, such as quality control (e.g., 100% quality control of a production), inspection, and accurate-measurement tasks leading to a quantitative evaluation of machined parts, it is necessary to make available tools for the computation of three-dimensional (3D) descriptions of the image contents. Further, automated recovery of quantitative 3D information about arbitrarily shaped objects or surfaces also greatly improves and simplifies a wide range of industrial applications. In particular, on-line inspection during manufacturing, computer-aided design, and shape reconstruction are typical applications for which any advance in automation is largely desirable. This led to the development of a wide range of vision-based measurement systems of 3D coordinates.

The authors are with the Université Louis Pasteur, Laboratoire des Sciences de l'Image, de l'Informatique et de la Télédétection (UMR CNRS 7005), Ecole Nationale Supérieure de Physique de Strasbourg, Boulevard Sébastian Brant, F-67400 Illkirch, France.

E. Hirsch's e-mail address is [email protected].

Received 2 July 2001; revised manuscript received 16 January 2002.

0003-6935/02/142627-17$15.00/0
© 2002 Optical Society of America

Though these techniques have been actively investigated during the last years,1–6 up to now only a limited number of industrial applications make intensive use of such systems. In this paper, we describe an original approach for the fully automated geometrical inspection of polyhedral parts, including free-form surfaces. This requires formalizing the concept of computer vision. The approach is based on the well-known perception–action paradigm, which defines the processing path from image acquisition to action: starting from the current state of an application, as determined by use of one or more images (perception), the system provides a result or determines how to act on the application's environment (action) to bring it into a desired state corresponding to the objective. By use of the specification of the objectives of an application (in our case: inspection tasks), the functionality of computer vision is restricted to a part of the real world that corresponds to the application environment of the scene to be imaged. For that purpose, computer-vision-based evaluation of industrial workpieces can make efficient use of knowledge-based approaches. One of the possible approaches is to compare real images with conceptual, i.e., synthetic, images. The set of possible descriptions for the current or desired state of the scene to be taken into account can then be formulated with conceptual representations, generated by use of a priori knowledge, e.g., computer-aided design (CAD) models, structure of the applications, data-acquisition conditions, etc.


These representations are both quantitative (number and structure of objects) and qualitative (complexity of models or concepts). More specifically, the interpretation of the acquired images establishes a link between each pixel of an image and the conceptual representations of the application or, equivalently, between an image primitive and an element of the application modeling used as reference. In order to establish these (in our case: quantitative) relations, it is assumed that the following points have been adequately addressed:

• Modeling of the camera (e.g., geometric model of the perspective) and modeling of the acquisition setup (monocular system, stereoscopic head, trinocular system, active acquisition [for example, structured light], etc.),

• Calibration of the acquisition sensor,

• Algorithmic models for image evaluation. In our case, we use an iterative interpretation process similar to the interpretation model proposed first by Kanade and Nagel.7–9

This implies a fine a priori modeling of the application and of the conditions under which the application is handled by the vision system.

Integrated in (industrial) real-life environments, our developed hardware and software have been applied, among others, to nonstandard surface-inspection tasks requiring extremely high processing rates and to robotics. Specifically, inspection relies on the comparison of real data, for example, those obtained by use of a structured-light-based vision system, with reference data. These are in our case either conceptual images generated with measured calibration data, CAD models including tolerances and measured acquisition parameters, or coordinate measurement machine (CMM)-based information used as ground truth. The resulting evaluation methods have been successfully applied to vision-based on-line inspection of manufactured parts including sculptured surfaces (e.g., turbine blades). For the reconstruction of 3D shapes of free-form surfaces, we apply the structured-light-based approach described in Section 2. The implemented system is built around a sequence of three processes. The first acquires a set of overlapping range images covering the whole area of interest on the manufactured part and controls the sensor displacements around the object. Then, the second module organizes the registration of this spatial-image sequence to express the various sets of measurements in a single reference frame. Finally, the last procedure exploits the whole single data set for evaluation by matching and comparing the real extracted measurements and the reference representations to assess the quality of the workpiece. For the determination of contour-based measurement data in accordance with conventional 3D reconstruction principles, the following steps are carried out: image acquisition, extraction of image-contour points, matching of corresponding contour points leading to pairs of image points, and determination of the 3D coordinates by use of triangulation techniques.

The basic hypothesis is adequate feature extraction. Operators used for image evaluation usually lead to errors. These errors can be partially avoided through the use of knowledge-based systems. With the objective being to find the optimal compromise between detection and localization for a given gray-level transition, the optimal operator is chosen and its parameters fixed according to the image contents. The developed approach, which includes tolerance information, is presented in Section 3. Last, a combination of contour- and surface-based approaches enables the full 3D reconstruction of, or measurements on, (industrial) workpieces. In this case, the two-dimensional (2D) surface data have to be related to one-dimensional contour data to derive a complete description of the object. Here also a priori knowledge facilitates the task, and a higher-level development system is desirable. With such a system the optimally extracted image features can then be further processed to build closed scene features. The closed scene features delimit surfaces in which the presented structured-light approach can be applied to derive the desired shape descriptions. In summary, the fully automated knowledge-based approaches can be meaningfully applied to model-based measuring and 3D reconstruction of workpieces. Innovation lies also in the automated determination of referencing data enabling us to link the measurements to, e.g., CAD-based reference descriptions or independent data sets acting as ground truth. Indeed, contrary to the usual approaches, the whole procedure has been fully automated and does not require an accurate referencing or positioning of the part within the measurement system, because the software includes procedures for automated (self-)referencing with respect to an a priori defined area on the workpiece. Further, the techniques are simple to apply and robust against encountered error sources. Representative results are presented in Section 4. They demonstrate both the statistically determined accuracy of single 3D measured points and the behavior of the system. As a result, accuracy in the range of 15–50 µm can be routinely achieved for the reconstruction with either isolated images or spatially registered image sequences. Also, to carry out a full 3D reconstruction of workpieces, the realization of a controlled-acquisition system, i.e., a measuring robot with 4 degrees of freedom for the model-based acquisition of images, is briefly presented in Section 5. This device should facilitate the controlled feature extraction out of the images in consideration of a fine modeling of the image contents and in application to dimensional measures and 3D reconstruction of rigid or nonrigid objects. A short conclusion and outlook is also provided in Section 5.


2. Free-Form Surface Evaluation

A. Principle

Three-dimensional descriptions of objects or scenes viewed by vision systems are more and more useful for a large spectrum of vision-based applications. Further, for CAD-based applications, such as reverse engineering or industrial inspection tasks, the knowledge of the 3D geometrical information of the scene is of prime interest. With the aim of achieving a reliable, flexible, and contact-free inspection of 100% of the production of quasi-polyhedral workpieces, which may be partially composed of free-form surfaces, we have developed an acquisition and inspection system, which uses in particular a structured-light-based approach.10,11 Inspection itself mainly relies on comparison between acquired images of the workpieces and corresponding synthetic images generated using CAD models, including tolerance information, of the pieces to be evaluated. Basically, 3D information can be obtained by imaging an object or a scene with a CCD sensor and, then, deriving the coordinates of selected 3D points with triangulation methods based, for example, on stereovision or projection of a structured-light pattern.10

Structured-light-based systems provide more or less generic solutions for the determination of surface range data out of images of the application scene. In our case, for free-form surfaces, after illumination of a part of the area of interest with a pattern projector (a grid with alternately black and transparent parallel stripes), an image of the projected light pattern is acquired. Then, the subpixel locations in the image of the dark fringes are extracted and combined with data resulting from an off-line calibration phase, describing both the measuring head and the measurement space, to compute the 3D coordinates of the extracted 2D positions. The 3D coordinates of the points, materialized in this way, are derived with an accuracy of the order of 20 µm for a field of view of 5 cm × 5 cm.10 Figure 1 illustrates this situation. The whole procedure, built around three processing tasks, can be summarized as follows:

Off-line calibration phase. The aim of this task is to provide a complete description of the light structure projected into the 3D space. This information is necessary for both the simulation and the reconstruction tasks. The result is a file of polyline equations describing this behavior. The calibration is realized in two steps:

• The calibration of the camera itself, which consists in the determination of the relation between the 2D image space and the 3D world space. This is done by computing, with subpixel accuracy, the image coordinates of probe points in different positions, the world coordinates of these points being known (calibration plate). The correspondence list between the 3D world coordinates and the 2D image coordinates then allows deriving the required parameters. For that purpose, a modified version of the method developed by Tsai12 is used, which results in a very accurate camera calibration. The observed accuracy is usually in the range from a few µm up to 10 µm for a field of view of 5 cm × 5 cm. This step can also be interpreted as consisting of the extraction of 3D rays of sight associated with the camera pixels and described as a set of straight lines.

• The complete calibration of the measuring head (sensor and light projector) to determine the whole 3D light structure in space, which includes the relation between the optical system and the scanner moving the sensor around the object (transformation matrix between the scanner coordinate system and the measuring-head coordinate system). Knowing the results of the first calibration step, the second step consists in the extraction of 3D polylines corresponding to the intersection of the previously extracted sensor rays of sight with the projected fringes. To get the equations of the 3D polylines, the 3D positions of their vertices have been obtained by translating a reference plane step by step. For each step, an image is acquired, and the image coordinates of fringe points are extracted. As a result, a set of correspondences {(x_im, y_im) <-> (Z_w, n)} is obtained, where (x_im, y_im) are the image coordinates, Z_w the 3D world coordinate corresponding to the translation, and n the fringe number. With the calibration data from the first step, one obtains the description of the projections of the sensor lines onto each shadow plane, which can be stored in a so-called calibration file as correspondences between pixel coordinates (x_im, y_im) and their corresponding projections (X_w, Y_w, Z_w) in 3D space. This calibration file is one of the inputs for the 3D reconstruction module (a minimal sketch of such a correspondence table follows).
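For illustration only, the following Python sketch shows one way such a calibration file could be organized in memory; the class and method names are our own assumptions, not the authors' software.

from collections import defaultdict

# Hypothetical in-memory view of the calibration correspondences described
# above: for every fringe number n we keep, per reference-plane step Z_w,
# the subpixel image positions of that fringe, i.e., the correspondence
# set {(x_im, y_im) <-> (Z_w, n)} of the text.
class CalibrationTable:
    def __init__(self):
        self._per_fringe = defaultdict(list)  # n -> [(Z_w, [(x_im, y_im), ...])]

    def add_step(self, n, z_w, fringe_points):
        """Record the fringe points extracted for one translation step."""
        self._per_fringe[n].append((z_w, list(fringe_points)))

    def polyline(self, n):
        """Vertices of the 3D polyline for fringe n, ordered by Z_w."""
        return sorted(self._per_fringe[n], key=lambda step: step[0])

# Usage sketch: one entry per acquired calibration image.
table = CalibrationTable()
table.add_step(n=0, z_w=0.0, fringe_points=[(102.3, 55.0), (102.4, 56.0)])
table.add_step(n=0, z_w=0.5, fringe_points=[(110.1, 55.0), (110.2, 56.0)])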

The method also implies the activation of the simulation system to emulate the acquisition and the image-evaluation processes. This simulation uses a theoretical model of the workpiece and enables obtaining the comparison data needed for the inspection task. To achieve this, a CAD-based description is worked out to simulate the illumination phenomenon and to extract the same features that are extracted during the processing of real images. This again makes use of the knowledge of the acquisition parameters determined during the calibration phase.

During the on-line phase, the operating software executes a program that manages the system, synchronizes the acquisition device and the mechanical scanner, and runs the image-processing and 3D reconstruction modules.

Image analysis phase. The evaluation of the acquired images of the projected light structure provides information about the 3D shape of the object, described as a set of (X_w, Y_w, Z_w) coordinates. The image analysis itself consists in the extraction of a set {(x_im, y_im, n)} of triplets, which represent the positions (x_im, y_im) of the fringes in the image; n is their identification number. First, because the projected fringes usually have a thickness of a few tens of pixels, it is necessary to extract their centers with subpixel accuracy to get accurate test points for each line that will be used in the following reconstruction task.


Second, the identification number n of the fringes has to be determined. The latter point is solved by use of a projection pattern in which one of the fringes can be easily identified. This is carried out with a grid where the central stripe is missing. After identification of this fringe, one can number the fringes in increasing order in one direction and in decreasing order in the other direction. The subpixel position of the fringes is computed by analyzing the local intensity distributions of the image to mark the minima of these distributions.

Fig. 1. Principle of the measuring system and of the acquisition of spatial-image sequences. The figure on top illustrates the specific correspondence problem to be solved due to the use of structured-light patterns. The bottom figure shows the principle of 3D reconstruction of measured points (the reconstructed coordinates are indicated in bold).


It was verified that, in the neighborhood of these minima, the corresponding gray-level profiles can be closely approximated by parabolae. Along the direction perpendicular to the fringe, the derivative of the (parabolic) gray-level profile can thus be expressed near a minimum as y'_im = a·x_im + b, and the sought positions have to satisfy x_im = -b/a, which takes the previous remark into account.
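To make this step concrete, here is a minimal Python sketch (not the authors' implementation) that fits a parabola to the gray-level profile around a discrete minimum and returns the subpixel center x_im = -b/a; the function name and the three-sample neighborhood are our assumptions.

import numpy as np

def subpixel_fringe_center(profile):
    """Subpixel position of the intensity minimum along a gray-level
    profile taken perpendicular to a fringe (illustrative sketch).

    Near the minimum the profile is modeled as a parabola
    I(x) = c2*x**2 + c1*x + c0, so its derivative is the straight line
    I'(x) = a*x + b with a = 2*c2 and b = c1, and the minimum satisfies
    x = -b/a, as in the text.
    """
    profile = np.asarray(profile, dtype=float)
    i = int(np.argmin(profile))             # discrete minimum
    i = min(max(i, 1), len(profile) - 2)    # keep a 3-sample neighborhood
    x = np.arange(i - 1, i + 2)
    c2, c1, _ = np.polyfit(x, profile[x], 2)
    if c2 <= 0:                             # degenerate: no valid minimum
        return float(i)
    return -c1 / (2.0 * c2)                 # x = -b/a with a = 2*c2, b = c1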

Further, the image-processing algorithm was specified for operation in real working conditions. In particular, independence with respect to the local mean intensities (processing in daylight is possible) and operation with low-contrast (10% contrast) and high-contrast (100% contrast) patterns are achieved.

As a result, one gets the location of each point belonging to a fringe in image coordinates with subpixel accuracy. This is the second input for the 3D reconstruction module.

3D reconstruction phase. The combination of the data from the two steps above, i.e., the calibration data and the identifiers of the projected fringes together with their positions (x_im, y_im) in the image plane, provides the necessary information for 3D reconstruction and/or comparison with the simulated CAD-based data. With this information the reconstruction or inspection tasks can be reduced either to an interpolation of 3D coordinates by use of the calibration data, or to a comparison with corresponding conceptual data: in our case, a range of values that takes into account the tolerances on nominal surface data in pixel coordinates. If accurate surface-shape information is required, the measured points can be expressed, with the help of the calibration data, in world coordinates. Coordinates (X_e, Y_e, Z_e) are calculated by interpolation between the 3D coordinates of nearby pixels, following the principle indicated in Fig. 1. Thus the shapes of the object surfaces themselves can be computed. As a conclusion to this subsection, the result is the set of 3D coordinates of the points corresponding to the projection of the light pattern onto the object, building a sampling of the imaged surface of the object.
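A heavily simplified sketch of this interpolation step follows, under the assumption that the calibration file yields, for one fringe, a list of (y_im, (X_w, Y_w, Z_w)) vertices sorted by y_im; the real calibration data of the paper are richer than this.

import numpy as np

def interpolate_world_point(y_im, calib_vertices):
    """Linear interpolation of (X_e, Y_e, Z_e) between calibrated 3D points,
    a simplified sketch of the reconstruction principle of Fig. 1.
    calib_vertices: [(y_im, (X_w, Y_w, Z_w)), ...], sorted by y_im.
    """
    ys = np.array([v[0] for v in calib_vertices], dtype=float)
    pts = np.array([v[1] for v in calib_vertices], dtype=float)
    k = int(np.searchsorted(ys, y_im))
    k = min(max(k, 1), len(ys) - 1)   # clamp: extrapolates at the ends
    t = (y_im - ys[k - 1]) / (ys[k] - ys[k - 1])
    return (1.0 - t) * pts[k - 1] + t * pts[k]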

B. Evaluation of Free-Form Surfaces

However, for many applications, and especially for those requiring high accuracy, a single image often does not cover the whole area of interest for evaluation. Indeed, for applications requiring fields of view of large extent, covering areas much larger than that of the used sensor, the actual resolution of standard CCD cameras does not allow one to obtain all of the information from a single image without spoiling the accuracy. This is further emphasized by the fact that imaged scenes often exhibit diversified and complex objects that require moving the sensor around the object in order to view all of it. Therefore, in these cases, it is essential to be able to combine several views in order to produce a complete and accurate description of the desired zone. In other words, one has to acquire a spatial sequence of images of the scene to evaluate and to register those images so that they can provide the desired 3D information in the form of a unique description. In these cases, different sets of 3D points, which correspond to the images of the spatial sequence acquired from various points of view, have to be registered to provide a single description for the whole surface of interest. Various 2D or 3D registration techniques have been developed for a large set of applications.13–24 Among the existing recent techniques, 2D algorithms seem to be easier to use and more temporally efficient. According to the acquisition system used, assuming that the relation between the pattern projector and the camera is firmly fixed and that the piece to be evaluated remains rigid during the whole acquisition phase, it is, however, clear that the simpler 2D methods for registration have to be discarded. Indeed, the data extracted from the various images of the sequence actually correspond to different locations on the object surface, even though the data subsets may belong to a common region of overlap between two or more images. Figure 1 schematically clarifies this observation.

As a result, from one image to the other, different clouds of 3D points are extracted. As a consequence, only 3D registration schemes can be considered. Suitable 3D registration methods for the applications in view can be based on the so-called iterative closest point (ICP) algorithms.14–16,20,22 They are usually split into three different steps: first, selection of so-called control points; second, matching of selected points from the data sets of two consecutive images; and finally, determination of the optimal transformation enabling to link the images. Once the transformation is found, it is applied to one of the two images in order to express the two 3D coordinate sets in a common coordinate system, which leads in this way to a single description of the part(s) in the actual scene captured by the two successive acquisitions. The process is then iteratively applied to the complete spatial sequence of images until the desired global unique description is obtained, which can then be used straightforwardly for 3D-shape reconstruction and measurement tasks.24 Our registration scheme, based on the ICP algorithm, relies on the use of surface interpolation of clouds of 3D points extracted from each image for matching the 3D control points, and refines an initial transformation by iteratively computing incremental transformations that minimize the distance between the images to be registered. Compared with similar approaches,13,17,19 our method adds mainly two new features. First, the clouds of 3D points are interpolated by surfaces that describe them locally. Thus, to get correspondences between the two images to be registered, given a point from the first image, we look on the surface modeling the second image for the corresponding point, which minimizes their distance. This results in more accurate matches than a point-to-point registration, which is only suitable for dense range images. The interpolation techniques used rely on the use of thin-plate splines and nonuniform rational B-spline functions, both of which are able to represent standard shapes and free-form surfaces.


The developed automated method efficiently registers spatial-image sequences, leading to 3D-shape recovery/reconstruction of the imaged object(s), including free-form surfaces. Second, registration requires the existence of overlap regions between the images. The size and shape of the overlap region between two images exercise great influence on the performance of the procedure with respect to accuracy and execution time. Therefore these overlap regions between images are, in our case, determined automatically without spoiling accuracy.

Accordingly, in order to reduce significantly the resulting search space for registration, we determine automatically the overlap region between two data sets, making use of a threshold on a distance histogram summarizing the shortest distances between points in the two images, which should be kept at a minimum. This fixes the number of candidate primitives (here, 3D points) for the following ICP-like matching process and allows an appreciable saving of execution time. Because accuracy is of prime importance, the matched control points need to be of the best quality. Therefore we have applied a statistical sorting scheme similar to the technique used by Zhang22 in order to eliminate bad matches.
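A minimal sketch of this overlap selection is given below; we use a quantile cut on the nearest-neighbor distances as a stand-in for the paper's histogram-based threshold, and the function name and keep_fraction parameter are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def overlap_candidates(cloud_a, cloud_b, keep_fraction=0.4):
    """Candidate overlap points of cloud_a with respect to cloud_b.
    Keeps the points of cloud_a whose shortest distance to cloud_b falls
    below a histogram-like cut-off, reducing the ICP search space."""
    cloud_a = np.asarray(cloud_a, dtype=float)
    d, _ = cKDTree(cloud_b).query(cloud_a)      # shortest distances
    threshold = np.quantile(d, keep_fraction)   # cut on the distance histogram
    return cloud_a[d <= threshold]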

The optimal transformation, i.e., the relation between the two points of view used for the acquisitions of the sets of points describing parts of the object surface, is considered to be rigid and is decomposed into a rotational component and a translational part. Taking robustness as a major criterion, we have implemented and compared three approaches based, respectively, on unit quaternions,25 dual quaternions,26 and singular value decomposition.27 A comparative study of the three implementations, with respect to the time needed for computing the transformations and to accuracy, led to the conclusion that the approach based on unit quaternions is the most efficient and the most robust (see also Ref. 28). As a final processing step for the loop of the iterative registration, the transformation is applied to all the points of a given set of points, including those points that were rejected for registration. This step enables the expression of the data extracted from the two registered images in one common reference frame.
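For concreteness, the closed-form unit-quaternion solution (Horn's method, Ref. 25) for the rigid transform between matched control-point sets can be sketched as follows; this is the textbook formulation, not the authors' code.

import numpy as np

def rigid_transform_unit_quaternion(P, Q):
    """Closed-form (R, t) minimizing sum ||R p_i + t - q_i||^2 for matched
    (N, 3) point sets P, Q, via Horn's unit-quaternion method."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - p0, Q - q0
    S = X.T @ Y                                # 3x3 cross-covariance
    A = S - S.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    N = np.empty((4, 4))                       # Horn's symmetric 4x4 matrix
    N[0, 0] = np.trace(S)
    N[0, 1:] = N[1:, 0] = delta
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    w, V = np.linalg.eigh(N)                   # symmetric eigenproblem
    w0, x, y, z = V[:, np.argmax(w)]           # optimal unit quaternion
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w0*z),     2*(x*z + w0*y)],
        [2*(x*y + w0*z),     1 - 2*(x*x + z*z), 2*(y*z - w0*x)],
        [2*(x*z - w0*y),     2*(y*z + w0*x),     1 - 2*(x*x + y*y)],
    ])
    t = q0 - R @ p0
    return R, t

The SVD-based alternative (Ref. 27) extracts R from a decomposition of the same cross-covariance matrix S.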

Furthermore, the acquisition can be roughly controlled because an estimate of the initial transformation is known. This fulfills one of the constraints of our application for an accurate on-line processing of an image sequence. Further, implementation of the registration scheme is made easier and leads to more accurate results. Once the sequence is processed, all the point sets are expressed in a common reference frame. The final registration accuracy for a typical sequence of images is in the range of 20 to 50 µm, depending on the complexity of the analyzed object. The technique leads, as a result of the whole process, to one set of 3D coordinates, all expressed in one common coordinate system for the whole image sequence, which can then be used straightforwardly for 3D-shape reconstruction and measurement tasks.

In order to minimize the error propagation through the whole image sequence in view of model generation/reconstruction or measurement tasks, one can register all the views simultaneously using, for example, different types of network topology.15 Conversely, other approaches19,20 propose to process the images sequentially, that is, adding images one by one to the registered model. Though this kind of approach may lead to an accumulated registration error, we have implemented such a strategy. The main reason is that the registration procedure has to be integrated into the inspection system (see Subsection 2.B), for which on-line processing is a strong constraint. In this latter case, a sequential approach is better suited, as it allows registering an image with the partially reconstructed model while going on with a further acquisition. Nevertheless, the first approach has not been definitely rejected. Indeed, a planning system will next be integrated into the actual system to determine the best strategy for optimizing the number of acquisitions and viewpoints to be used. Examples of results are given in Section 4.

3. Contour Evaluation

A. Introduction and Principle

Faced with competition, more and more industries have to produce their products faster and with high quality, to reduce the manufacturing costs. In consequence, quality control is all the more important and unavoidable to ensure correct manufacturing. If, traditionally, quality-control tasks and inspection have usually been carried out by human operators, either visually or through use of dedicated measuring devices, this is no longer possible when inspection must intervene at each step of the manufacturing process and for 100% of the production. Thus, a new comparison approach for visual inspection of quasi-polyhedral manufactured parts has been devised. Beginning with an explicit modeling of the tolerance specifications associated with this type of part, we have developed a new approach for the comparison between real and conceptual representations. Integrated into a complete vision-based inspection line, this robust and efficient method makes it possible to evaluate quantitatively manufactured parts with accuracy and to carry out tolerance verification. The system can be summarized by the following points:

• Generation of conceptual representations. To obtain the regions of interest (ROIs) that represent the allowable errors for the edges in the image of the part to be evaluated, tolerance representations for geometric features, such as line segments, ellipses, and elliptical arcs, have been modeled.

• Comparison between edge-point lists extractedfrom images and corresponding ROIs.

The approach enables the inspection of manufactured parts to better take into account the tolerance specifications associated with the workpiece and to realize significant dimensional measurements.


Further, all changes of the tolerance specifications can be translated automatically into modifications of the associated ROIs. Furthermore, the computing time for the comparison (often executed on line) has also been reduced.

B. Implementation Issues

Dimensions of a workpiece usually deviate from their nominal values due to random and/or systematic errors. The design thus includes tolerancing information describing allowable variations from the nominal shape and size, defined in international, national, or company-internal standards. However, tolerance is a concept that CAD systems often only partially include. Accordingly, we developed an approach for the explicit modeling of tolerance specifications associated with quasi-polyhedral manufactured parts.23 Also, products described by solid models require a tolerance representation that can capture the semantics of the tolerances and relate them to the solid models.29–31 However, this information currently has no robust definition of tolerance.32 This makes it very difficult to represent tolerances in a suitable form, especially for automated-inspection applications. Thus we employed for our particular inspection task a CAD constructive solid geometry model of a workpiece, enhanced with a suitable representation of the tolerance information, following the theory of geometric tolerancing originally introduced by Requicha.32

Based on this pioneering work, the tolerance information is specified as a set of geometric attributes of the surface features of an object boundary, as in other comparable approaches.23,30,33–36 Tolerance models have been developed for both elliptical primitives (size tolerance, form tolerance, and position tolerance described by a single representation; see Fig. 2) and line segments (rectangular parallelepiped, as shown in Fig. 2).

A complete 3D model is then established on the basis of three kinds of models (geometric, topologic, and technological; Ref. 23) by use of the CATIA CAD system from Dassault Systèmes. This model includes the tolerance specifications, simply considered as a set of complementary properties or attributes, as shown in Fig. 3.

Anticipating the use of machine vision for inspection, features not visible from the camera viewpoint(s) (determined by calibration) are eliminated with a ray-tracing algorithm. This leads to the so-called reduced model of the part, which contains solely the features visible in the real images. In this model, tolerances are represented only for exploitable elements, i.e., the primitives extracted from the images that are entirely visible and measurable, taking the viewpoints of the cameras into account (see Fig. 3). Last, still using CATIA, the 3D reduced model is transformed into a 2D representation by perspective projection (by use of the calibration parameters) modeling the image-taking process. This representation contains the ROIs, which define the allowable errors of the visible part features in the image plane, to be used for on-line evaluation tasks of manufactured parts. Specifically, a feature is considered acceptable if and only if all the pixels belonging to the edge are localized in the corresponding ROI. This comparison takes place after adequate processing of the images of the workpiece. Processing is carried out using a knowledge-based edge-detection expert system that we have specially developed.37,38

Fig. 2. Tolerance models for ellipses (left-hand side) and line segments (right-hand side).

Fig. 3. Full 3D tolerance model for a typical workpiece (left-hand side); reduced model taking the acquisition conditions into account (right-hand side).


Making use of a priori knowledge about the scene and image contents, this knowledge-based system achieves controlled edge extraction based on an iterative process, which adapts itself, through specific reasoning mechanisms, to the local context around the primitives being extracted. The a priori knowledge is conveniently encoded into synthetic images generated by use of ray tracing. These conceptual data enable further classification of the extracted image features as edges belonging to the part, to shadow areas, to highlights, etc. Practically, for each given gray-level transition, the system chooses an operator in accordance with the transition type and sets the parameter values of the operator optimally. In consequence, the localization errors are minimized, and only the edges associated with the geometry of the part are kept for further processing. Last, comparison is carried out between the edge-point lists describing the geometric primitives and the corresponding ROIs encoded in the 2D simulated representations (see Fig. 4). Conformity inspection of a part can then be performed directly through evaluation of the results of the comparison. This allows us to define in which condition the part is (the part is good, bad, or out of conformance but reworkable) and to describe quantitatively the deviations of the real object with respect to its model. Last, adequate metric descriptions of the features of the part are constructed and stored in a standard ASCII file. These data can then be evaluated either automatically or simply by the end user in an easily readable form.
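As a minimal illustration of this conformity test, the following self-contained sketch accepts a feature only if every extracted edge pixel falls inside its (here polygonal) tolerance ROI; the polygon representation and all names are our assumptions.

import numpy as np

def points_in_polygon(points, poly):
    """Even-odd-rule (ray-casting) test of 2D points against a closed polygon."""
    points = np.asarray(points, dtype=float)
    poly = np.asarray(poly, dtype=float)
    x, y = points[:, 0], points[:, 1]
    inside = np.zeros(len(points), dtype=bool)
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        crosses = (y1 > y) != (y2 > y)        # edge straddles the scan line
        with np.errstate(divide="ignore", invalid="ignore"):
            x_int = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
        inside ^= crosses & (x < x_int)       # toggle on each crossing
    return inside

def feature_is_acceptable(edge_points, roi_polygon):
    """A feature is accepted if and only if all its edge pixels lie in the ROI."""
    return bool(np.all(points_in_polygon(edge_points, roi_polygon)))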

4. Results

In this section, we give the results of representative applications (conventional manufactured parts, turbine blade, etc.), exemplifying the behavior of the system and demonstrating the quality of the measurements (accuracy range: 10–50 µm).

A. Free-Form Surfaces

In the case of real-life image sequences, the sensor head is moved around the scene with a computer-controlled micrometric table with four degrees of freedom. This arrangement adequately models a conventional CMM, except for the measurement space, which is smaller than those provided by industrial systems. Figure 5 gives an example of reconstruction/measurement of a real-life object.

Both interpolation schemes lead more or less to the same final accuracy. Registration accuracy is in the range of 15 µm to 50 µm, for fields of view of about 5 cm × 5 cm and images of 512 × 512 pixels. On average, a greater number of matched primitives are found and selected when the thin-plate-splines version is used (approximately two times more than when nonuniform rational B-splines are used). In order to understand this latter observation, complementary investigations have been carried out, which show that the computation of the normals associated with the 3D points of the surface of the first image is more accurate when the thin-plate-splines version is applied. This is especially observed in the case of objects that include free-form surfaces. As a consequence, higher precision for these normals leads to the selection of a larger number of valid matches.

Experimental work also shows that the size of the overlap region should be sufficiently large so as not to disturb the registration process.

Fig. 4. Acquired image with tolerance model overlaid (above, left-hand side); equivalent synthetic image (above, right-hand side); and contour data retained for measurement by the expert system, described by an automatically generated ASCII file (below, center).

Fig. 5. Analyzed turbine blade and registration result of a sequence of four images.


For all the analyzed workpieces, the average overlap size should be of the order of 30% to 40% of the image size.

Finally, in order to show the efficiency of the developed registration algorithm, registered image sequences have been visualized within the CAD tool used. Figure 6 shows an example of the reconstruction of a part of a turbine blade (Fig. 5) by use of the CAD system CATIA as a visualization tool and also illustrates the same area overlaid on the 3D data set obtained with a 3D coordinate measurement machine (from MITUTOYO, Japan). This exemplifies the potential use of the method for reverse-engineering approaches.

The aim of the described system is also the reconstruction and/or the quantitative evaluation of surfaces. Geometric inspection of an object usually involves testing the compliance of the observed data with a reference data set. Therefore a correspondence between these two sets of data has to be determined to express one of these representations in the coordinate system of the other. In this case, registered point sets are compared with similar simulated information including tolerances. In general, these data are gained from a CAD model that corresponds to the real piece. This could be done with a CAD tool to localize roughly the CAD model with regard to the registered sequences. Then, by use of the developed registration algorithm, the CAD model and the registered sequence are expressed in a common reference frame. But often such a model is not available, and it is necessary to build one from data gained with a CMM with touch probes. Usually, accuracy better than 5 µm is reached, so that the gained information can be used as a reference for the comparison. Several approaches for the associated registration problem have been suggested and investigated during the last years.21,39 Matching can be achieved by making use of particular features (linear edges, corners, etc.) available in both data sets, by use of tree-search approaches, for example. If such features are not available, as is the case for free-form surfaces, the rigid transformation between both representations can be determined by use of the previously described registration technique (minimization of a distance criterion within an area common to both data sets). Another possibility is to determine geometric intrinsic features that characterize the object and that are present in both sets, to register them, and then to derive the desired transformation. This latter approach, implemented in our system, is described next.

As depicted in Fig. 7 for the case of a turbine blade, objects of interest can be roughly described as being composed of polyhedral parts and a body that is a (usually smoothly varying) free-form surface. Comparing the registered 3D points expressed in the frame R_senA of the vision system with reference 3D points expressed in the frame R_senB of the CMM requires that these two data sets be expressed in a common reference frame. In order to achieve on-line inspection of, e.g., turbine blades, we have chosen to define a 3D orthogonal reference frame R_part belonging to the body of the parts (see Fig. 7). Because such a part of the piece is usually checked during a previous manufacturing step and thus is known, one can consider that the planes forming the polyhedral part are perfect within the associated tolerances.

Thus their intersections can be used to determine the desired reference frame R_part, assuming that, in the image, parts of at least three planes are visible. The determination of the position and orientation of this frame has been fully automated and only requires fulfillment of a few assumptions (e.g., localization of the part with respect to the sensing device).

Fig. 6. Surface reconstruction (using the CATIA CAD system as a visualization tool) of a part of the turbine blade of Fig. 5 from a sequence of three images (left-hand side) and corresponding point set obtained with a CMM (right-hand side).

Fig. 7. Structure of a typical turbine blade and principle of the localization of the reference frame R_part.


Among these assumptions are that three particular planes P1, P2, and P3 of the base can be simultaneously determined and that the part stays on its base. Then, given a registered data set of 3D points expressed in R_senA, the procedure makes use of the polyhedral structure of the base in order to decide which points belong to the planes P1, P2, or P3. An example of a typical data set of 3D points (81 × 42 points) is given in Fig. 8, together with the corresponding labeling of the data points. Rough initial equations of the planes are then computed, and a coarse-to-fine strategy allows determining recursively their best approximations. The reference frame R_part can then be defined by computing the intersections between planes P1, P2, and P3. Finally, the transformation between R_senA and R_part is computed to express the 3D coordinates of the registered points in the frame R_part.
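As a rough illustration of this referencing step, the sketch below fits a least-squares plane to each labeled point set and derives an orthogonal frame from the three planes; the coarse-to-fine refinement described in the text is omitted, and all names are ours.

import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point set: unit normal n and offset
    d such that n . x = d (smallest singular direction of the centered set)."""
    points = np.asarray(points, dtype=float)
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]
    return n, float(n @ c)

def reference_frame(pts_p1, pts_p2, pts_p3):
    """Origin of R_part at the intersection of the three fitted planes;
    axes obtained by orthonormalizing the plane normals (sign conventions
    and degenerate, near-parallel configurations are ignored here)."""
    normals, offsets = zip(*(fit_plane(p) for p in (pts_p1, pts_p2, pts_p3)))
    A, d = np.vstack(normals), np.array(offsets)
    origin = np.linalg.solve(A, d)   # common point of the three planes
    q, _ = np.linalg.qr(A.T)         # Gram-Schmidt on the normals
    return origin, q.T               # rows of q.T: axes of R_part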

The same procedure is also applied off line to the CMM data set. This allows the expression of the two data sets in the same frame and, as a consequence, enables the comparison of the registered and reference data sets. However, the 3D data sets are both not dense enough and have been acquired with different densities. This prohibits a point-to-point comparison. Thus a distance criterion is again used in order to determine, for each point of the registered data set, its counterpart on a surface modeling the reference data set. For this purpose, two surface approximations were implemented. The first models the reference data using a Delaunay triangulation. After the corresponding elementary triangular facet is determined for each registered point, the corresponding reference point is given by the intersection with the normal to this surface that passes through the point of the first data set and has the shortest length. The second approach is based on thin-plate splines. The distance criterion is here also applied to each registered point, by use of a Newton-like iterative search as a first step to define the optimal normal. A representative turbine blade was used to verify the (self-)referencing capability of our approach. Results are representative of the dimensional inspection of manufactured parts that include sculptured surfaces. Recall that Fig. 6 (right-hand side) shows the reference point set representing a part of the turbine blade of Fig. 5 (see also Figs. 7 and 8). Thus sequences of images acquired from various points of view have been registered to provide a single description for the body and the transition surface between the base and the body (see Fig. 8). The automated determination of the 3D orthogonal reference frame R_part that belongs to the turbine blade is shown in Fig. 8. Comparison of the Delaunay and thin-plate-splines-based surface-modeling approaches (with respect to computing efficiency and measurement accuracy) shows that the observed accuracy is usually of the same order of magnitude in both cases: approximately a few tens of µm. Figure 8 exemplifies the result of the registration of the four measured point sets (shaded in gray) expressed in the automatically determined common frame R_part with the reference CMM point set.

Last, to qualify the accuracy of our method, we have evaluated, for each point in a given measurement set, its distance to the corresponding reference data set. We also estimated the standard deviation for the whole data set. Accordingly, we have first estimated the quality of the CMM data set acting as ground truth. Figure 9 (top left-hand side) shows an experimental data set obtained with the CMM, which is similar to the one shown in Fig. 6. On the top right-hand side, Fig. 9 depicts the points extracted from the raw CMM matrix of measurement points that are labeled as planes. As explained above, these three planes are then used to construct the reference system. The bottom of the same illustration indicates, for each CMM point that belongs to a given plane, its distance from the corresponding reference plane, together with the standard deviation for all points associated with one of the three reference planes P1, P2, and P3. These planes correspond to the definition given above when describing the (self-)referencing procedure (see also Fig. 8). Taking the standard deviation as a measure of accuracy, one observes that the accuracy is of the order of a few µm when the data set is large enough. If the reference data describing a plane are too sparse, the accuracy usually decreases, as shown in Fig. 9 by the standard deviation associated with plane P2. Note that in this experiment this is due to the inability of the CMM to evaluate points in this nearly horizontal plane. Vision-based measurements can in such situations provide a sensible alternative, because they are able to overcome this inherent drawback of the CMM (see Fig. 6, left-hand side).
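The accuracy figure used here reduces to a point-to-plane distance and a standard deviation over the set; a minimal sketch, assuming the reference plane is given as n . x = d (names ours):

import numpy as np

def plane_accuracy(points, n, d):
    """Signed distance of each point to the reference plane n . x = d and
    the standard deviation over the set (the measure plotted in Figs. 9-12)."""
    dist = np.asarray(points, dtype=float) @ np.asarray(n, dtype=float) - d
    return dist, float(dist.std())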

Using the same approach, we estimated the quality of the measurement set obtained using the structured-light approach (compare with Fig. 9). Figure 10 shows (at the top left-hand side) the raw matrix of measurement points (one image), which is similar to the data set depicted in Fig. 8. The figure indicates (at the top right-hand side) the measured points labeled as planes and used for registration with the reference system determined by the three planes P1, P2, and P3 of Fig. 9. Finally, the bottom of Fig. 10 gives, for each measured point in a given plane, its distance from the corresponding plane, together with the standard deviations for the three sets of points. The observed accuracy measure, taken as the average distance from the reference plane augmented by one standard deviation, is of the order of 25 µm. This is typical of all the experiments carried out. As a result, the accuracy observed usually lies between 15 µm and 50 µm, the upper limit being observed in less favorable experimental conditions. Note that the accuracy of the CMM data is approximately 10 times better than the precision observed with the structured-light-based data. This observation justifies the use of CMM data as ground truth for qualifying the vision-based measurements. Finally, one can also note that the vision system delivers more measurements for plane P2, where the CMM encounters difficulties.


Fig. 8. The figure on the top left-hand side shows a matrix of measurement points expressed in R_senA and how a reference coordinate system can be automatically determined with one image of the sequence. The illustration (top right-hand side) shows the points labeled as P1 (asterisks), P2 (open circles), or P3 (filled circles) used to construct the reference system. The figure below (center) pictures the result of the registration of the four measured point sets (shaded in gray) expressed in the automatically determined common frame R_part with the reference CMM point set.


As a last experimental estimation of the behavior of the proposed method, we evaluated the accuracy of the measurements after registration, with the CMM data acting as ground truth. Figure 11 pictures (top left-hand side) the result of the registration of the measured point set from Fig. 10 (shaded in gray), expressed in the automatically determined common frame R_part, with the CMM reference point set.

Fig. 9. Estimation of the quality of the CMM data set acting as ground truth (see also Fig. 8). The figure on the top left-hand side shows the raw CMM matrix of measurement points. The illustration on the top right-hand side shows the points labeled as planes and used to construct the reference system. (Note that the density of points kept for defining planes P1 and P3 is so high that they appear in the figure as black surface patches.) The figure below indicates, for each CMM point in a given plane, its distance from this reference plane, together with the standard deviation. The order of the points along the x axis is arbitrary. The planes P1, P2, and P3 have the same definition as in Fig. 8.


The illustration (top right-hand side) in the same figure shows the points kept for evaluation of the accuracy (note that they do not belong to a plane, but to the free-form surface of the workpiece). The bottom of Fig. 11 shows, for each measured point, its distance from the CMM data set, together with the standard deviation for the whole data set. Note that only a few points exhibit very large errors. These points are readily eliminated with the standard deviation as a qualifying threshold.

Fig. 10. Estimation of the quality of the data set obtained with the structured-light approach (compare with Fig. 9). The figure on the top left-hand side shows the raw matrix of measurement points (one image). The illustration (top right-hand side) shows the points labeled as planes and used for registration with the reference system determined by the three planes P1, P2, and P3 of Fig. 9. The figure below indicates, for each measured point in a given plane, its distance from the corresponding plane, together with the standard deviation. The order of the points along the x axis is arbitrary. The planes P1, P2, and P3 have the same definition as in Fig. 8.


In a worst-case situation, with rather bad lighting conditions that lead to images with poor contrast (see Subsection 2.A), even though such conditions are usually avoided in a real-life application, accuracy remains acceptable. An example of such a case is shown in Fig. 12, which indicates, after registration with the CMM data acting as ground truth, for each measured point in plane P3, its distance from the reference plane, together with the standard deviation for the whole set of measurements.

Fig. 11. Estimation of the quality of the measurements after registration with the CMM data acting as ground truth (see also Figs. 9 and 10). The figure on the top left-hand side pictures the result of the registration of the measured point set from Fig. 10 (shaded in gray), expressed in the automatically determined common frame R_part, with the reference CMM point set. The illustration (top right-hand side) shows the points kept for evaluation of the accuracy. The figure below shows, for each measured point, its distance from the CMM data set, together with the standard deviation. The order of the points along the x axis is arbitrary.


Plane P3 has the same definition as in Fig. 8.

B. Contours

Typical results, after application of the complete processing chain to images of a representative part acquired under two different viewpoints, are shown in Figs. 4 (below, center) and 13 (right-hand side). In these figures, only the edges relative to borders that will be exploited for the dimensional measurements are represented. As a result, the proposed method is able to evaluate quantitatively manufactured parts with an accuracy of the order of 10 to 30 µm and to carry out tolerance verification for manufactured parts of simple geometry. One can observe that the accuracy of contour-based measurements is slightly better than in the case of structured-light-based evaluations. This can be understood if one takes into account the fact that contour points are usually not evaluated as single points, but rather as lists of points belonging to the same geometric feature. This is the case in our system, as described in Subsection 3.B. As a result, some averaging over the contour points associated with the same geometric primitive takes place, leading to reduced measurement errors and, thus, better accuracy.

5. Conclusion and Outlook

In this paper, we have introduced a vision-based system for 3D inspection of polyhedral workpieces including sculptured surfaces, which makes use of an efficient and accurate registration procedure for the spatial-image sequences required to scan the surfaces to their full extent. This sensing system overcomes the inherent drawbacks of CMM systems, because the data-acquisition time is strongly reduced, accurate positioning of the workpieces with respect to the sensing device is no longer required, and surface-curvature variations out of reach for CMMs can be measured.

The proposed procedures exhibit some interesting properties:

• Simplicity of use. The procedures have been fully automated.

Fig. 12. Estimation of the quality of the measurements under bad acquisition conditions after registration, with the CMM data acting as ground truth. The figure shows, as an example, for each measured point in plane P3 its distance from this reference plane, together with the standard deviation. The order of the points along the x-axis is arbitrary. The plane P3 has the same definition as in Fig. 8.

• Robustness with respect to noise, measurement errors, and aberrant matches. The algorithms take care of possible outliers, which can be readily eliminated, enabling us to make optimal use of the significant information for interpolation, matching, reconstruction, and/or measurement (a sketch of one such elimination rule follows this list).

• Final observed accuracy. Because all the available significant information has been effectively taken into account for registration, the accuracy is optimized (the registration sketch after this list illustrates the underlying least-squares step). As a result, the measurement accuracy (15 to 50 µm) is of the same order of magnitude as that observed when extracting the 3D data sets from the single images (approximately 20 µm).

• Ability to scan the contours and surfaces to be evaluated in their full extent to provide, in a single reference frame, the 3D description of the object(s) to be reconstructed and/or inspected. Two strategies are currently being tested, the appropriateness of each depending on the precision to be achieved. The first one, assuming a need for high accuracy, proceeds incrementally for the reconstruction. The size of the overlap region is adjusted so as to reach the specified accuracy; this usually implies rather large overlap regions, so that each new image only slightly extends the description provided by the previous one. However, taking the processing times and the necessary stabilization period after moving the measuring head into account, the whole process can be carried out automatically, continuously, and smoothly until the whole object has been sensed (thus the designation incremental reconstruction), without the need to store the acquired image sequence. For the second strategy, when the commitment to high accuracy can be relaxed, viewpoint variations can be large, which leads to small overlap regions and spatial sequences with few images. The sequence of images can then be stored and processed off line.

• Further extensions of the current version of the approaches are being studied to include an error-propagation mechanism, with one image of the sequence providing the reference frame, and to improve the matching phase, i.e., the computation of the intersection between straight lines and surface representations (a sketch of such an intersection computation is given below). Work is also in progress to qualify the procedure for entire workpieces, for which the different overlap regions may have various shapes and sizes. For that purpose, we are developing an inspection robot capable of evaluating parts in a measuring volume corresponding to a half-sphere of radius 2 m (see Fig. 14). On-line inspection is also based on the comparison of image data with automatically generated CAD-based data. This device should facilitate the controlled feature extraction in view of both a fine modeling of the image contents, with an application to dimensional measures, and a 3D reconstruction of rigid or nonrigid objects with a combination of contour- and surface-based approaches that enables the full 3D reconstruction or measurement of workpieces. For this purpose, we relate the surface data to the contour data to derive a complete description of the object. Using our knowledge-based edge-detection system,37 optimally extracted image features are further processed to build closed scene features delimiting surfaces in which a structured-light approach can be applied, which can lead to the desired shape descriptions.
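As forward-referenced in the robustness and accuracy items above, registration with rejection of aberrant matches can be summarized as alternating a least-squares rigid-transform estimate with an elimination of outlying correspondences. The sketch below combines the SVD method of Arun et al.27 with a generic sigma-clipping rule; the paper does not spell out its exact rejection criterion, so the threshold rule and parameter values here are assumptions.

    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares rigid motion (R, t) mapping src onto dst
        for matched (N, 3) point sets, after Arun et al. (SVD)."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        h = (src - cs).T @ (dst - cd)
        u, _, vt = np.linalg.svd(h)
        d = np.sign(np.linalg.det(vt.T @ u.T))  # reflection guard
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        return r, cd - r @ cs

    def register_with_rejection(src, dst, k=3.0, iters=5):
        """Alternate transform estimation and sigma-clipping of
        matches whose residual exceeds k standard deviations
        (a generic rule, assumed here for illustration)."""
        mask = np.ones(len(src), dtype=bool)
        for _ in range(iters):
            r, t = rigid_transform(src[mask], dst[mask])
            res = np.linalg.norm(src @ r.T + t - dst, axis=1)
            new_mask = res <= res[mask].mean() + k * res[mask].std()
            if np.array_equal(new_mask, mask):
                break
            mask = new_mask
        return r, t, mask

In a full system, this estimation would sit inside the matching loop, with correspondences recomputed after each transform update.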
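For the matching phase mentioned in the last item, intersecting a straight line with a parametric surface representation is typically posed as a small root-finding problem. The sketch below applies Newton's method to a bilinear patch for brevity; the sculptured surfaces treated in the paper would use higher-order Bézier or B-spline patches (see Farin39), and this formulation is illustrative rather than the authors' code.

    import numpy as np

    def make_bilinear_patch(p00, p10, p01, p11):
        """Bilinear patch S(u, v) and its partials Su, Sv."""
        def s(u, v):
            return ((1-u)*(1-v)*p00 + u*(1-v)*p10
                    + (1-u)*v*p01 + u*v*p11)
        def su(u, v):
            return (1-v)*(p10 - p00) + v*(p11 - p01)
        def sv(u, v):
            return (1-u)*(p01 - p00) + u*(p11 - p10)
        return s, su, sv

    def line_patch_intersection(origin, direction, s, su, sv,
                                x0=(0.0, 0.5, 0.5),
                                tol=1e-10, max_iter=50):
        """Newton iteration on F(t,u,v) = origin + t*dir - S(u,v)."""
        t, u, v = x0
        for _ in range(max_iter):
            f = origin + t * direction - s(u, v)
            if np.linalg.norm(f) < tol:
                break
            jac = np.column_stack([direction, -su(u, v), -sv(u, v)])
            t, u, v = np.array([t, u, v]) - np.linalg.solve(jac, f)
        return t, u, v

    # Example: a ray hitting a unit patch in the z = 0 plane.
    corners = [np.array(c, float) for c in
               [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]]
    s, su, sv = make_bilinear_patch(*corners)
    t, u, v = line_patch_intersection(np.array([0.3, 0.4, 1.0]),
                                      np.array([0.0, 0.0, -1.0]),
                                      s, su, sv)
    print(f"t = {t:.3f}, (u, v) = ({u:.3f}, {v:.3f})")
    # expect t = 1.000, (u, v) = (0.300, 0.400)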

Fig. 13. Acquired image (left-hand side) and contours kept for dimensional measurements by the inspection system.

Fig. 14. Sketch of the measuring robot (left-hand side) and current status of the actual device (right-hand side).


In summary, the fully automated knowledge-based approaches can be meaningfully applied to model-based measuring and 3D reconstruction. The techniques are easy to apply, robust against the encountered error sources, and take the current, yet limited, processing and evaluation possibilities into account.

References
1. E. J. Bayro-Corrochano, “Review of automated visual inspection 1983–1993, Part I: conventional approaches,” in Intelligent Robots and Computer Vision XII: Algorithms and Techniques, D. P. Casasent, ed., Proc. SPIE 2055, 128–158 (1993).

2. E. J. Bayro-Corrochano, “Review of automated visual inspection 1983–1993, Part II: approaches to intelligent systems,” in Intelligent Robots and Computer Vision XII: Algorithms and Techniques, D. P. Casasent, ed., Proc. SPIE 2055, 159–173 (1993).

3. T. Newman and A. Jain, “A system for 3D CAD-based inspection using range images,” Pattern Recogn. 28, 1555–1574 (1995).

4. C. Costa and M. Petrou, “Automatic registration of ceramic tiles for the purpose of fault detection,” Mach. Vision Appl. 11, 225–230 (2000).

5. F. Arman and J. Aggarwal, “Model-based object recognition in dense-range images: a review,” ACM Comput. Surv. 25, 5–43 (1993).

6. T. Newman and A. Jain, “A survey of automated visual inspection,” Comput. Vision Image Understand. 61, 231–262 (1995).

7. T. Kanade, “Region segmentation: signal versus semantics,” in Proceedings of the International Joint Conference on Pattern Recognition, Kyoto, Japan (1978), pp. 95–105.

8. T. Kanade, “Region segmentation: signal versus semantics,” Comput. Graph. Image Process. 13, 279–297 (1980).

9. H.-H. Nagel, “Über die Repräsentation von Wissen zur Auswertung von Bildern,” in Angewandte Szenenanalyse, J.-P. Foith, ed., Informatik-Fachberichte No. 20 (Springer-Verlag, Berlin, 1979), pp. 3–21.

10. P. Graebling, C. Boucher, Ch. Daul, and E. Hirsch, “3D sculptured surface analysis using a structured light approach,” in Videometrics IV, S. El-Hakim, ed., Proc. SPIE 2598, 15–139 (1995).

11. E. Hirsch and P. Graebling, “Vision based on-line inspection of manufactured parts: advanced concepts for quantitative quality control, standardization and integration aspects,” in Proceedings of the IMS International Conference on Rapid Product Development, FPF, Stuttgart, Germany (1994), pp. 147–158.

12. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE Trans. Rob. Autom. RA-3, 323–344 (1987).

13. R. Benjemaa and F. Schmitt, “Recalage global de plusieurs surfaces par une approche algébrique,” in 11ème Congrès Reconnaissance des Formes et Intelligence Artificielle (RFIA’98), LASMEA, Clermont-Ferrand, France (1998), pp. 227–396.

14. R. Bergevin, D. Laurendeau, and D. Poussart, “Registering range views of multipart objects,” Comput. Vision Image Understand. 61, 1–16 (1995).

15. R. Bergevin, M. Soucy, H. Gagnon, and D. Laurendeau, “Towards a general multi-view registration technique,” IEEE Trans. Pattern Anal. Mach. Intell. 18, 540–547 (1996).

16. P. Besl and N. McKay, “A method for registration of 3D shapes,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 239–256 (1992).

17. G. Blais and M. Levine, “Registering multiview range data to create 3D computer objects,” IEEE Trans. Pattern Anal. Mach. Intell. 17, 820–824 (1995).

18. L. Brown, “A survey of image registration techniques,” ACM Comput. Surv. 24, 325–376 (1992).

19. Y. Chen and G. Medioni, “Object modelling by registration of multiple range images,” Image Vision Comput. 10, 145–155 (1992).

20. C. Dorai, G. Wang, A. Jain, and C. Mercer, “Registration and integration of multiple object views for 3D model construction,” IEEE Trans. Pattern Anal. Mach. Intell. 20, 83–89 (1998).

21. A. Goshtasby, “Three-dimensional model construction from multiview range images: survey with new results,” Pattern Recogn. 31, 1405–1414 (1998).

22. Z. Zhang, “Iterative point matching for registration of free-form curves and surfaces,” Int. J. Comput. Vision 13(2), 119–152 (1994).

23. D. Zhou, “Application de la comparaison d’images réelles et conceptuelles à l’extraction contrôlée d’indices images et à la métrologie dimensionnelle,” Ph.D. dissertation (Université Louis Pasteur, Strasbourg, France, February 2000).

24. Ch. Schoenenberger, P. Graebling, and E. Hirsch, “Acquisition and 3D registration of image sequences for structured light based free-form surface reconstruction,” in Proceedings of the IX European Signal Processing Conference, S. Theodoridis, I. Pitas, A. Stouraitis, and N. Kalouptsidis, eds., Rhodes Island, Greece (Typorama Editions, Patras, Greece, 1998), Vol. III, pp. 1281–1284.

25. B. Horn, “Closed-form solution of absolute orientation using unit quaternions,” J. Opt. Soc. Am. A 4, 629–642 (1987).

26. M. Walker, L. Shao, and R. Volz, “Estimating 3D location parameters using dual number quaternions,” Comput. Vision Image Understand. 54, 358–367 (1991).

27. K. Arun, T. Huang, and S. Blostein, “Least squares fitting of two 3D point sets,” IEEE Trans. Pattern Anal. Mach. Intell. 9, 698–700 (1987).

28. A. Lorusso, D. W. Eggert, and R. B. Fisher, “A comparison of four algorithms for estimating 3D rigid transformations,” in Proceedings of the 6th British Machine Vision Conference (BMVC ’95), D. Pycock, ed. (BMVA Press, Edinburgh, UK, 1995), pp. 237–246.

29. A. Fleming, “Geometric relationships between toleranced features,” Artif. Intell. 37, 403–423 (1988).

30. J. Guilford and J. Turner, “Representational primitives for geometric tolerancing,” Comput. Aided Des. 25, 577–586 (1993).

31. N. P. Juster, “Modeling and representation of dimensions and tolerances: a survey,” Comput. Aided Des. 24, 25–237 (1992).

32. A. A. G. Requicha, “Toward a theory of geometric tolerancing,” Int. J. Rob. Res. 2, 45–60 (1983).

33. A. A. G. Requicha and S. C. Chan, “Representation of geometric features, tolerances, and attributes in solid modellers based on constructive geometry,” IEEE Trans. Rob. Autom. RA-2, 156–166 (1986).

34. L. Rivest, C. Fortin, and C. Morel, “Tolerancing a solid model with a kinematic formulation,” Comput. Aided Des. 26, 465–476 (1994).

35. U. Roy and C. R. Liu, “Integrated CAD frameworks: tolerance representation scheme in a solid model,” Comput. Indus. Eng. 24, 495–509 (1993).

36. G. H. Tarbox and S. N. Gottschlich, “IVIS: an integrated volumetric inspection system,” Comput. Vision Image Understand. 61, 430–444 (1995).

37. C. Boucher, C. Daul, P. Graebling, and E. Hirsch, “KBED: a knowledge-based edge detection system,” in Database and Expert Systems Applications, N. Revell and A. M. Tjoa, eds., Lecture Notes in Computer Science No. 978 (Springer-Verlag, Berlin, 1995), pp. 344–353.

38. C. Boucher, “Système à base de connaissances pour la détection contrôlée des contours dans des images à niveaux de gris,” Ph.D. dissertation (Université Louis Pasteur, Strasbourg, France, July 1996).

39. G. Farin, Curves and Surfaces for CAGD: A Practical Guide, 4th ed. (Academic, San Diego, Calif., 1996).


