
3500 IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 64, NO. 12, DECEMBER 2015

Electronic Interface With Vignetting Effect Reduction for a Nikon 6B/6D Autocollimator

Guillermo J. Bergues, Luis Canali, Senior Member, IEEE, Clemar Schurrer, and Ana Georgina Flesia

Abstract—In this paper, we present an electronic interface created for the Nikon 6B/6D visual autocollimator that allows for an increase in the final resolution of measurements and a reduction in the vignetting and distortion effects produced by this optical instrument's lenses. The electronic interface consists of a Basler ACE high-definition camera and its positioning devices and a computer with a subpixel digital image processing package. The latter includes two main procedures: one for scale calibration and the other for determining the position of crosshair lines. Both procedures work at subpixel level. The feasibility of the measurement method was verified. The resolution obtained for the measurement of angular displacements is about 0.019 s of arc, 25 times better than the one registered by the original visual system. Its overall performance was compared against an electronic level with internationally traceable certification.

Index Terms—Autocollimator, Hough transform, subpixel line detection, vignetting effect, visual interface.

I. INTRODUCTION

IN MANUFACTURING, automotive, and aerospace industries, there is a need for accurately measuring the geometric parameters of surfaces used in optomechanical assembly and in the adjustment of optical instruments. Autocollimators are used in such industrial environments for precision alignment of mechanical components, for the detection of angular movement and angular monitoring over time, and to ensure compliance with angle specifications and standards [1]–[3].

Autocollimators operate either by visual detection (by sight) or by digital detection using a photodetector [4]. Visual autocollimators are often used for lining up laser rod ends and

Manuscript received February 13, 2015; revised April 13, 2015; accepted May 19, 2015. Date of publication July 20, 2015; date of current version November 6, 2015. This work was supported in part by the Fund for Scientific and Technological Research, in part by the Secyt–Universidad Nacional de Córdoba and Secyt–Universidad Tecnológica Nacional (UTN) under Grant PICT 2008-00291, in part by PID–UTN under Grant 2012-25/E170 and Grant 1406, and in part by PID under Grant 2012 05/B504. The Associate Editor coordinating the review process was Dr. Zheng Liu. (Corresponding author: Ana Georgina Flesia.)

L. Canali is with the Centro de Investigación en Informática para la Ingeniería, Universidad Tecnológica Nacional–Facultad Regional Córdoba, Ciudad Universitaria, Córdoba 5000, Argentina (e-mail: [email protected]).

G. J. Bergues and C. Schurrer are with the Centro de Metrología Dimensional, Universidad Tecnológica Nacional–Facultad Regional Córdoba, Ciudad Universitaria, Córdoba 5000, Argentina (e-mail: [email protected]; [email protected]).

A. G. Flesia is with the Consejo Nacional de Investigaciones Científicas y Técnicas, Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Córdoba 5000, Argentina (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIM.2015.2444263

checking the face parallelism of optical windows and wedges. Digital autocollimators are used as angle measurement standards for monitoring angular movement over long periods of time and for checking angular position repeatability in mechanical systems [5]. A visual autocollimator can measure angles as small as 0.5 s of arc, while a digital autocollimator can be up to 100 times more accurate [6]. Digital autocollimators have a photodetector recording the position of the projected light; thus, higher resolution can be achieved by increasing the spatial resolution of the detector or by improving the data processing system [7], [8]. In addition, there are commercial external interfaces that can be acquired to transform visual models into digital ones, such as the Davidson Optronics Digital Autocollimator Upgrade Kit, which allows existing D-652 models to have all the digital functionality of the new model D-720, a digital two-axis autocollimator. The upgrade kit replaces the autocollimator's eyepiece assembly with a video imager that is controlled by black-box software that allows results to be viewed in real time, statistically analyzed, and stored for later reference [9].

The Nikon 6B/6D standard visual autocollimator is a high-precision autocollimator. It has an aperture of 70 mm, 0.5 s of arc resolution within a range of 5 min of arc, and 1 s of arc resolution within a range of 30 min of arc. The manufacturer does not provide an external interface to increase its abilities; neither is it possible to change the detector, since it has a closed optical system. Given these limitations, the resolution of this instrument is insufficient for carrying out certain measurements.

In [10], the potential of a low-cost interface design was addressed, mounting a simple interface with an off-the-shelf webcam with a CMOS sensor and a wide-angle lens. This system allowed us to capture the center of the internal image formed in the autocollimator and later process the information to obtain a measurement. The accuracy and measurement uncertainties were not discussed at the time because the optics of the camera and the positioning device were not reliable enough.

In this paper, we extended our first design by replacing the webcam with a high-resolution Basler Ace camera with a charge-coupled device (CCD) sensor and calibrating the camera's position to reduce external errors [11]. This hardware design is similar to the one in [4] and [8], in the sense that the camera is also attached to the external eyepiece of the autocollimator. The experiments in [4] and [8] were restricted to analyzing the zero's long-term thermal stability of the system, while we present real measurements

0018-9456 © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.



made over a wider range of angles. Another system, an electronic reference-beam (two-mirror) autocollimator design with nanoradian sensitivity, was discussed in [5]. The software that controlled that system also corrected temperature-induced CCD displacement errors with subpixel image processing techniques similar to the ones proposed in [10]–[12].

In this paper, we also present the software's design process in detail, showing an improvement of the general subpixel edge detection algorithm discussed in [12] and [13] with a preprocessing step to reduce vignetting effects. Also, through simulation, we discuss the performance of four other detectors, including a new algorithm related to the probabilistic Hough transform. The simulation study allows the calculation of the uncertainty introduced in the measurement by the software, a matter of great importance in metrology. The performance of the five detectors was also studied by segmenting real images captured with the interface in a controlled experiment. We consider this analysis to be of great importance to the metrology community, since with slight modifications in the interface's design and in the software that powers it, digital interfaces for other visual autocollimator models can be implemented; the analysis is also relevant to the development of line detection algorithms at subpixel level. The location of curves within an image with subpixel resolution is of high interest in very different fields, such as glass width estimation [14] and high-temperature specimen contour estimation [13], [15].

As a result of the software implementation, the interface increases the native resolution of the Nikon 6D autocollimator 25-fold, without requiring any modification of the instrument. Our design improves on the design in [4] in three main points: 1) we deliver a final measurement over the range of the autocollimator, not only the zero value; 2) we calculate the uncertainty introduced by the software; and 3) our subpixel detection algorithm also compensates for vignetting effects.

In Section II, we describe the operational principles of the autocollimator and how we proceeded to perform a measurement with the visual interface. In Sections III and IV, we discuss hardware implementation issues and image processing software design. In Section V, different line detectors were tested under simulation to calculate the uncertainty that the software adds to the system, revealing, at the same time, the high power and resilience to noise and distortion of the interface's line detector. Section VI describes an experiment made at the Centro de Metrología Dimensional (CEMETRO) Laboratory, where interface measurements were compared against measurements from a certified electronic level. In Section VII, the results are presented, and resolution and uncertainties are discussed as well. Section VIII presents the conclusions and future work.

II. SYSTEM CONFIGURATION

A. Autocollimator

An autocollimator is an optical instrument used for measuring small angular displacements (at the seconds-of-arc level) [see Fig. 1(a)]. It can perform measurements without making contact with the measured object. To perform a measurement, the autocollimator works together with a reflecting surface E,

Fig. 1. Correspondence between the image and measurement. (a) Image of the reticle scale. (b) Measuring process scheme.

whose distance to the autocollimator has no influence on the measurement. Its measuring characteristics are expressed in [11] and its calibration ratio is given by

tan(2 · α) = d / f (1)

where f is the focal distance.

B. Measurement With Visual Interface

In Fig. 1(a), an image of the autocollimator's reticle scale can be seen. In Fig. 1(b), the measurement process scheme is presented, as well as the variables pitch (By), yaw (Bx), and distance of calibration (Δxy), which need to be calculated to obtain the measurement with the visual interface.

Using Fig. 1(b) to explain the process, the procedure to perform an automated measurement is as follows.

1) Establish the distance Δxy between the centers of the divisions of the reticle at subpixel level, together with its uncertainty estimation.
2) Associate a coordinate system (x, y) with the image reticle.
3) Identify the crosshair lines at subpixel level and measure the distances By and Bx between the center of the lines forming the cross and each axis.

For the Nikon 6D autocollimator, the distance between consecutive divisions of the scale represents 60 s of arc (1 div = 1 min). Once the value Δxy of pixels/division is obtained, the observed angle (e.g., pitch) in seconds of arc can be obtained using

αy = 60 · By / Δxy. (2)
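As an illustration of the conversion in (2), a minimal Python sketch; the function name and the sample pixel values are ours, not part of the paper's software.

```python
def pitch_arcsec(b_y_px: float, delta_xy_px: float) -> float:
    """Observed pitch per (2): one reticle division spans 60 s of arc,
    so alpha_y = 60 * B_y / Delta_xy."""
    return 60.0 * b_y_px / delta_xy_px

# e.g. a crosshair displaced 14.5 px on a scale calibrated at 58 px/division
print(pitch_arcsec(14.5, 58.0))  # 15.0 s of arc
```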

III. DESIGN CONSIDERATIONS

A. Alignment and Distortion

Imperfections in positioning and aligning the camera with the autocollimator's telescope can cause distortions and blur in the CCD image of the reticle. Besides, determining when the camera is in focus is a delicate point [16], [17], since there is a tradeoff between picture sharpness and the number of pixels used to determine the position of the lines at subpixel level.



Fig. 2. Image set. (a) Reticle scale image. (b) Crosshair line image.

For those reasons, an alignment is performed following the procedure proposed in Section II-B of [11]. This alignment is the first step to be carried out before the vignetting effect is reduced in software.

B. Capturing the Reticle Scale Images

The Nikon 6D is a dark-field autocollimator: the darker the ambient light, the better the precision attained by the operator in placing the crosshair lines. However, this situation is not optimal for obtaining a good image of the reticle. Scale marks appear distorted and faint in comparison with the crosshair lines, leading to errors in the determination of the Δxy value. However, the observation of the scale and the crosshair lines can be made separately, with no movements of the camera's positioning system, by simply changing the ambient light. Thus, the last step in the system calibration strategy is to take images of the reticle [see Fig. 2(a)] with the ambient lights on to maximize the contrast between the clear, bright background and the dark lines of the reticle.

C. Capturing the Crosshair Line Images

To perform the measurements by determining the displacement of the crosshair lines against the reference scale, the lights in the room were turned off, capturing only images with the crosshair lines [see Fig. 2(b)], since the background (and the reticle scale) is dark and the cross itself is bright.

IV. IMAGE PROCESSING

In this section, the optimal algorithm (Gaussian detector GD) is described in detail, along with the preprocessing necessary for reducing the vignetting effect. Then, in Section V, it will be compared with other detectors in a simulation.

The technique reported in [12] and early versions of our software [10], [11] explain how to find the centroid of each line with the Hough transform and average the centroid locations to calculate the position of each pattern. This method has disadvantages for high-sensitivity, low-frequency measurements. The CCD pixels were measured to be both nonlinear and noisy at low intensity, but the Hough matrix weights all pixels equally, which amplifies the noise in the low-intensity pixels and obscures the better sensitivity available from the high-intensity pixels.

We developed a new data processing algorithm to reduce these effects. Our algorithm identifies lines by finding local maxima and fitting the cross-section peaks with a Gaussian function. This algorithm uses robust statistics to fit the data, obtaining good localization even with noise and motion blur (see Fig. 3 for a schematic diagram of the method). Nevertheless, to increase fitting accuracy with small samples, the imagery must first be corrected for the vignetting effect caused by the autocollimator's lenses.

Fig. 3. Schematic diagram of the proposed method for subpixel line detection. (A) Image of the scene. (B) Intensity levels of the cross sections to the edge line. (C) and (D) Determination of the center of the cross sections to the edge line using Gaussian and parabolic fitting, respectively.

A. Reducing Vignetting Effects by Averaging and Filtering

The vignetting effect refers to a position-dependent loss of lighting in the output of an optical system, mainly due to the blocking of a part of the incident ray bundle by the effective size of the aperture stop, resulting in a gradual fading out of an image at points near its periphery [18]. This particular pattern is shown in Fig. 6, which is a representation of the real image shown in Fig. 1(a).

This effect distorts the imagery taken by the system, introducing a smooth displacement in the centroid of the lines. Fig. 4 shows a 3-D section of the reticle with eight scale marks that appear distorted by this effect: the eight intensity peaks sit above a parabolic background that behaves as an offset. Given the smooth character of the background, the Savitzky–Golay filter is an ideal method to reduce the offset. This filter is designed for smoothing images [19]; it increases the signal-to-noise ratio (S/N) without greatly distorting the signal. This is achieved by fitting successive subsets of adjacent data points with a second-degree polynomial using the method of linear least squares.
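The Savitzky–Golay step can be illustrated with a minimal pure-Python sketch. A production implementation would use a library routine (e.g., SciPy's savgol_filter); the 5-point, second-degree window and its standard coefficients (−3, 12, 17, 12, −3)/35 are an illustrative choice, not the paper's parameters.

```python
def savgol5(y):
    """Second-degree, 5-point Savitzky-Golay filter; endpoints are left
    unfiltered for brevity."""
    c = (-3, 12, 17, 12, -3)  # standard coefficients, divided by 35
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(cj * y[i + j - 2] for j, cj in enumerate(c)) / 35.0
    return out

# A degree-2 polynomial passes through unchanged: the filter fits
# second-degree polynomials exactly, which is why it preserves smooth
# signal shape while suppressing high-frequency noise.
quad = [x * x for x in range(8)]
print(savgol5(quad))
```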

Filtering is a step of great importance when using fitting methods, since background noise can greatly reduce the accuracy of the subpixel location of the edges inside the detected pixel. In Fig. 5, one of the cross sections of Fig. 4 is shown. The eight segments in red are severely affected by vignetting, producing errors in the final detection. After the filtering (curve in blue), the original offsets of the segments are notably reduced.

Fig. 4. Vignetting effect over a center piece of the reticle image (3-D intensity function).

Fig. 5. Cross section of the image in Fig. 4. The original curve with vignetting effect in red and the same cross section after filtering in blue.

Fig. 6. Scheme representing how the incoming light appears in the picture.

B. Further Vignetting Correction

After filtering, a residual background could still remain (see Fig. 7). The effect of this offset on the position of the lines in the images can be quantified in the following way: assuming a Gaussian line profile like the one in (4), with a residual background yB = sB · x + bB after the filtering process, the centroid parameter is shifted in the first-order approximation by

ΔB = sB · σ² (3)

where sB is the slope of the residual background and σ is the width of the Gaussian function. The ΔB value will be very important for the uncertainty discussion in Section VII-C.

Fig. 7. Residual background yB = sB · x + bB affecting the position and height of each Gaussian function.

C. Subpixel Line Detection (Gaussian Detector GD)

The cross sections of smooth intensity lines can be modeled with a bell or parabola shape, as shown in Fig. 3 [12], [13]. For these models, the determination of the line center at subpixel level is given by a single parameter when a Gaussian function is considered, or by a combination of parameters in the case of the parabola shape.

The center of the line at pixel level is detected by analyzing the intensity matrix in search of maximum intensity values. Around each of these pixels, a linear neighborhood LN (cross section) orthogonal to each detected line position is extracted (see Fig. 3), and the location of the maximum of the Gaussian function SV fitted to the cross section is obtained from (4). This function has three parameters: the centroid b (the center of the line), and the pair a and σ, which give the bell's height and width, respectively

SV = a · e^(−(x−b)²/σ²),  x ∈ LN. (4)

In [20], the second-order polynomials

SV = A1·x² + A2·x + A3,  x ∈ LN (5)

are also used for fitting when only a few samples are available in the neighborhood, as a result of the camera's coarse resolution. What is more, blurred edges have also been sharpened by these subpixel methods fitting second-order polynomials [12], [13], [15]. In our case, the second-order polynomials are very sensitive to the size of the neighborhood as well as to the symmetry of the neighborhood around the center of the line at pixel level. In Fig. 8, we show a discrete edge curve, defined in a neighborhood of N = 44 pixels around a detected line center, with Gaussian and quadratic fittings over asymmetric neighborhoods. For the quadratic fitting, we observe that the small centered neighborhood gives a good estimate of the subpixel position of the edge point, but the asymmetric neighborhood provides a biased estimate of the subpixel location. On the other hand, the Gaussian fitting proved to be insensitive to the size and symmetry of the neighborhood.
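As a compact stand-in for the full least-squares Gaussian fit of GD (this is not the paper's code), the centroid of a Gaussian cross section can be recovered from just three samples around the coarse maximum via a log-parabola fit, since the logarithm of a Gaussian is a quadratic.

```python
import math

def gaussian_subpixel_peak(y, i):
    """Fit log(y) with a parabola through samples i-1, i, i+1; for an exact
    Gaussian profile this recovers the centroid b analytically."""
    lm, l0, lp = math.log(y[i - 1]), math.log(y[i]), math.log(y[i + 1])
    return i + 0.5 * (lm - lp) / (lm - 2.0 * l0 + lp)

# Sample an exact Gaussian centered at b = 5.3 and recover the centroid
b, sigma = 5.3, 2.0
y = [math.exp(-((x - b) ** 2) / sigma**2) for x in range(11)]
i = max(range(11), key=y.__getitem__)
print(gaussian_subpixel_peak(y, i))  # 5.3 up to float rounding
```

The paper's detector fits the whole neighborhood instead of three points, which is what buys its robustness to noise and to neighborhood asymmetry.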



Fig. 8. Second-order polynomial approximations to the cross-sectional curve. Red line: approximation made using a small symmetric neighborhood L (discarded values in green) centered on the coarse edge pixel. Cyan line: approximation made using a biased neighborhood L (marked in blue). Purple line: Gaussian fitting computed with all data.

Fig. 9. (a) Segment of the real line. (b) Simulated line. (c) Simulated image.

V. SIMULATION

The aim of this simulation is to show which is the optimal algorithm for our image processing interface, as well as to calculate the uncertainty added to the measurement by the software. In view of this aim, 100 images were created using the pseudocode shown in Section V-A, following the line model described in [12]. Each image contains one line, displaced in parallel by a centesimal step (the chosen subpixel value) from the previous image. In Fig. 9(a) and (b), we can observe a line segment extracted from a real captured image and a simulated line, respectively. In this way, the real lines of both the cross and the scale of a Nikon 6B/6D autocollimator are represented [12]. Using these simulated data, and since we worked in a controlled environment, it was possible to understand, analyze, and improve the performance of each simulated detector.

The equation that was used to generate the Gaussian line in an image I formed by a matrix (Nx, Ny) was the following:

I(i, j) = round(A · e^(−(i0 − j)²/(2·Σ²))),  i0 = Ny/2 ∈ N. (6)

The values of c (position of the simulated line) to be estimated are i0 + k, with step k = 1/100. The round function is the round-to-the-nearest-integer function. The width of the line, given by Σ, allows us to build a line according to the number of pixels that the line occupies in the captured images, where

0 ≤ I(i, j) ≤ 255 and I(i, j) ∈ N. (7)

Fig. 10. Parametric space of a straight line.
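The synthesis in (6) can be sketched as follows. The parameter defaults, image size, and the subpixel offset k/100 (matching the pseudocode's sub-pixel step) are illustrative assumptions, not the paper's exact settings.

```python
import math

def simulated_line(Nx, Ny, A=255, Sigma=2.0, k=0):
    """Vertical Gaussian line of amplitude A and width Sigma, centered at
    subpixel column c = i0 + k/100 and rounded to 8-bit integer levels."""
    i0 = Ny / 2
    c = i0 + k / 100.0  # subpixel line position to be estimated
    return [[round(A * math.exp(-((c - j) ** 2) / (2 * Sigma**2)))
             for j in range(Ny)] for _ in range(Nx)]

img = simulated_line(4, 16, k=37)
assert all(0 <= v <= 255 for row in img for v in row)  # constraint (7)
print(max(map(max, img)))
```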

A. Algorithm

To create the n = 100 images of a straight line with spatial resolution (Nx, Ny), a 3-D matrix is formed following the pseudocode below.

n = 100;
for i = 1 : Nx
    if d ~= 0
        in = in - 1/d;        % line inclination
    end
    for j = 1 : Ny
        for k = 1 : n
            ss = i0 + k/n;    % sub-pixel step
            G(j, i, k) = round(A * exp(-(ss + in - j)^2 / (2 * c^2)));
        end
        ss = 0;
    end
end

Then, to add noise in accordance with the S/N ratio values sought, the MATLAB function imnoise is utilized.

B. Hough Detector (HD)

The Hough transform maps the binary border image's discrete space, which is made up of pixels, into the parametric space (see Fig. 10), which is a function of the variables θ and ρ that define a straight line

ρ = x · cos(θ) + y · sin(θ). (8)

In the usual transform [21], [22], the data contained in the gray levels are almost entirely lost by thresholding when the binary image is computed. Thresholding and edge detection can both be excluded if a new parametric space is created. This parametric space must include all the information provided by each gray level [23]. The algorithm designed in [23] detects bands with only one gray value. This approximation is inadequate for metrological measurements with autocollimators, in which the position of a line has to be obtained at subpixel precision (what is important is not the line band but its central position).

1) Gray Hough Parameter Counting Space: Based on (8), with an image space given by (x, y, G), in which G is the gray level corresponding to each point (x, y) of the image, a new parametric space is formed

ρ(G) = x · cos(θ(G)) + y · sin(θ(G)). (9)

This parametric space defines a mapping f: (x, y, G) → H(ρ, θ, G), which builds Hi accumulators that correspond to each gray level. All the accumulators combine into a density function whose maximum values correspond to the center of each line. 2^n accumulators, unified in a 4-D matrix, are created, n being the quantization bit number of the camera.

Fig. 11. Gray Hough parameter counting space.

In Fig. 11, as an example, the local maximum values of the Hn accumulator are shown. The color black corresponds to the region where a straight line is most likely to be found for gray level n. The following algorithm summarizes the procedure.

1) Initialize each accumulator H(ρ, θ, G) to 0.
2) For each pixel (xi, yi, Gi) and each θj = 0° → 179°:
3) calculate ρj(Gi) = xi · cos(θj(Gi)) + yi · sin(θj(Gi));
4) H(ρj, θj, Gi) = H(ρj, θj, Gi) + 1.
5) Create the final matrix HT = H1 + · · · + Hn.
6) Compute the points with the highest degree of probability in the parametric plane.
7) Map these points onto the image plane and obtain the density function center defined by them.
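The steps above can be sketched in Python. The bin counts, toy image, and helper names are ours; the paper's implementation (accumulator sizes, quantization) may differ.

```python
import math

def gray_hough(img, n_theta=180, n_rho=64):
    """Gray-level Hough accumulation per (9): every nonzero pixel votes in
    the accumulator H_i of its own gray level; step 5 sums them into H_T."""
    rho_max = math.hypot(len(img), len(img[0]))
    levels = {}  # one (rho, theta) accumulator per gray level (steps 1-4)
    for y, row in enumerate(img):
        for x, g in enumerate(row):
            if g == 0:
                continue
            acc = levels.setdefault(g, [[0] * n_theta for _ in range(n_rho)])
            for t in range(n_theta):
                th = math.pi * t / n_theta
                rho = x * math.cos(th) + y * math.sin(th)
                r = int((rho + rho_max) * (n_rho - 1) / (2 * rho_max))
                acc[r][t] += 1
    # step 5: H_T = H_1 + ... + H_n
    return [[sum(a[r][t] for a in levels.values()) for t in range(n_theta)]
            for r in range(n_rho)]

# Toy image: a vertical line at x = 2 drawn with two gray levels; the
# summed accumulator peaks at the bin for (rho = 2, theta = 0).
img = [[0, 0, 200, 0],
       [0, 0, 100, 0],
       [0, 0, 200, 0]]
ht = gray_hough(img)
```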

C. Probabilistic Detector (PD)

According to the sampled line profile [see Fig. 3(B)], another detector was created based on the probabilistic weighted mean. Each profile value is a sample xi whose amplitude gives its weight (probability distribution) P(xi). With cv denoting each orthogonal cut and N the number of points per cut, we can express the following discrete variable:

⟨x⟩ = ∑_{i=1}^{N} xi · P(xi), x = (1, . . . , N)′ (10)

P = (P(x1), . . . , P(xN))′, P(xi) = cvi / ∑_{i=1}^{N} cvi. (11)
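Equations (10) and (11) amount to a probability-weighted mean of the sample positions along one orthogonal cut. A minimal sketch in Python (illustrative; the `cut` vector holds the cvi amplitudes):

```python
import numpy as np

def pd_center(cut):
    """Probabilistic detector, Eqs. (10)-(11): the center of one
    orthogonal cut is the weighted mean <x> = sum x_i * P(x_i),
    with P(x_i) = cv_i / sum(cv_i)."""
    x = np.arange(1, len(cut) + 1, dtype=float)   # positions x = (1, ..., N)'
    P = cut / cut.sum()                           # Eq. (11)
    return float(np.sum(x * P))                   # Eq. (10)
```

For the symmetric cut [0, 1, 2, 1, 0], `pd_center` returns 3.0, the middle position.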

D. Maximum Value Detector (MD)

A detector that finds the maximum of the vertical cuts shown in Fig. 3(B) is defined by the following algorithm.

Fig. 12. Regression line y = a · x + b that intercepts the center of the line formed by different gray levels. Each pixel has an intensity value I(x, y), which is a function of the spatial coordinates (x, y).

1) Given an image I(M, N).
2) Generate N samplings cv of the line.
3) Obtain the maxima Mcvi of the cutting functions.
4) Detect the center ci = (∑_{i=1}^{N} Mcvi)/N.
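A sketch of these four steps, under our interpretive assumption that the Mcvi averaged in step 4 are the positions of the column maxima:

```python
import numpy as np

def md_center(img):
    """Maximum-value detector: take the argmax of each vertical cut
    (column) and average the positions, c_i = (sum M_cv_i) / N."""
    maxima = img.argmax(axis=0)   # row index of the brightest pixel per column
    return float(maxima.mean())
```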

E. Weighted Least Squares Detector (WLSD)

Using the whole image intensity matrix, three vectors are formed that determine the regression line y = a · x + b intercepting the center of the sought line (see Fig. 12). The parameter b gives the center of each line. The weighted sum of squared residuals s (12) is minimized, and the result of this process gives the regression line parameters a and b

s = ∑_{i=1}^{N} Wi · (yi − (a · xi + b))² (12)

where each weight Wi is given by

Wi = I(x, y) / ∑_{i=1}^{N} I(x, y). (13)
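The minimization of (12) has the usual closed form of a weighted linear regression. A sketch in which every nonzero pixel votes with the intensity weight of (13):

```python
import numpy as np

def wlsd_line(img):
    """Weighted least-squares detector: fit y = a*x + b minimizing
    Eq. (12), with intensity weights W_i from Eq. (13)."""
    ys, xs = np.nonzero(img)
    w = img[ys, xs].astype(float)
    w /= w.sum()                                          # Eq. (13)
    xm, ym = np.sum(w * xs), np.sum(w * ys)               # weighted means
    a = np.sum(w * (xs - xm) * (ys - ym)) / np.sum(w * (xs - xm) ** 2)
    b = ym - a * xm                                       # b gives the line center
    return a, b
```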

F. Simulation Results With and Without Noise

The captured images have an S/N of 13 dB. After applying the Savitzky–Golay filter, an S/N of 23 dB is obtained. For each noise level between these extremes (11 levels), 100 images with subpixel step k = 1/100 were created (a total of 1100 images). Treating the subpixel centroid values c′ estimated by each detector as a linear function of the simulated centroids c, the Pearson correlation coefficient r [24] is used as a quality measure. In Fig. 13, the results of r for the different detectors and noise-to-signal levels (N/S) are shown.

The probabilistic detector PD and the maximum value detector MD are strongly influenced by the noise level. The least squares detector WLSD is less influenced by noise than the two mentioned before, although it surpasses neither the Gaussian detector GD nor the Hough detector HD in the detections we are interested in (without noise and with S/N = 23 dB). The Hough detector obtains good values for the relevant points. However, the Gaussian detector is the most successful at locating the center of the straight line at subpixel level. On the other hand, the Hough Pearson coefficient values (rH) show that the HD is a reliable alternative method (values of r > 0.95 are acceptable).


Fig. 13. Detection characteristics according to the noise level N/S for all detectors.

Fig. 14. Tilted-line parameters. The height Hp in pixels is varied while Lp = 1624 is kept constant to obtain different α.

Fig. 15. Detection characteristics according to line tilting for the Hough and Gaussian detectors.

G. Results of Simulation With Tilted Line

The two most accurate detectors were chosen, and their behavior was studied as the simulated line was tilted. The angles studied are those generated at the original image's spatial resolution (1234 × 1624) (see Fig. 14). In Fig. 15, it can be observed that the Gaussian detector remains unaffected by the tilting, while the Hough detector deteriorates as the tilting increases. A maximum admissible tilt of α = 0.07° can be observed for the Hough detector. These data are used in the initial calibration when the Hough subpixel detector is utilized.

According to the results of this simulation, a study of the captured real images was carried out to obtain the final performance of the selected Gaussian algorithm (which was also compared with the Hough algorithm in this final step).

Fig. 16. Experimental setup with the electronic level and the autocollimator.

This final comparison can be seen in Fig. 18. We also worked with each of the remaining detectors to determine the resolution gain of each (see Fig. 19).

VI. MEASURING EXPERIMENT

A. Controlled Experiment

To validate the feasibility of the measurement method with the visual interface, a controlled experiment was designed: different angles were generated using a micrometric screw and measured simultaneously with the system under test (autocollimator + camera + software) and an electronic level. A Mahr Federal EMD-832P-48-W2 electronic level (serial number 2095-06293) was used as reference. This instrument is available at the laboratory where the experiment was performed (CEMETRO, UTN Córdoba) and is readily traceable to internationally accepted standards. The electronic level resolution is 0.1 s of arc and its accuracy is within 2%. The CEMETRO laboratory [25] has a cooling system that keeps the thermal stability within ±0.5 °C around 20 °C. The measurements were carried out under the standard lighting conditions of the laboratory (760 lux according to the Illuminating Engineering Society).

The digital interface was calibrated with a total of 200 images of the reticle, filtered and corrected. The mirror E and the electronic level N were placed on a bar that can rotate around an axis. The bar rotation was regulated by means of a micrometric screw T placed at one of its ends (see Fig. 16). The working assumption is that the electronic level N, for the kth position of screw T, generates a reference measurement of a pitch angle, so the results of the experiment were ordered pairs (Xk, αk), Xk being the electronic level reading and αk being the digital reading of the autocollimator measurement.

B. Establishing the Distance Δxy

Using the algorithm shown in Fig. 17(A) and the controlled experiment described above for capturing the images, the reticle must be calibrated, obtaining the value Δxy.

In separate procedures, we located the eight central scale marks in the vertical and horizontal directions, obtained their subpixel positions with Gaussian fitting, and estimated the autocollimator scale pitch using a simple regression model given by

pk = Δxy · k + ε, k = 1, . . . , 8, ε ∼ N(0, σ). (14)

Fig. 17. (A) Flowchart of the detection of Δxy. (B) Flowchart of the detection of crosshair line centers.

The estimated subpixel values for the autocollimator scale pitch on each axis were coincident within the 95% confidence interval. Therefore, a mean value for scale calibration was set as

Δxy = (97.31 ± 0.02) pixels/div (95% confidence). (15)

This allowed us to conclude that the scales are linear within an uncertainty margin of 0.02 pixels/div for both axes.
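The regression of (14) reduces to a first-degree polynomial fit whose slope estimates Δxy. A sketch, with `marks` standing for the subpixel positions of the eight central scale marks (an assumed input format):

```python
import numpy as np

def scale_pitch(marks):
    """Estimate the scale pitch Delta_xy as the slope of the simple
    regression p_k = Delta_xy * k + eps of Eq. (14)."""
    k = np.arange(1, len(marks) + 1)
    pitch, _ = np.polyfit(k, marks, 1)   # least-squares slope and intercept
    return pitch
```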

C. Subpixel Crosshair Lines’ Position Estimation

Applying the algorithm schematized in Fig. 17(B), the positions of the 1-D straight lines were estimated with the centroid Bk of a Gaussian fit to a cross section of the line detected at pixel level, as was done with the scale.

The value αk was the relative position with respect to centroid B1, which corresponds to the reference value (first measurement) of the visual interface and the electronic level. It was converted to seconds of arc using

αk = (Bk − B1) · 60/Δxy = (Bk − B1) · 60/97.31. (16)
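The conversion in (16) is a single proportionality: one scale division (Δxy ≈ 97.31 pixels) corresponds to 60 s of arc. A sketch:

```python
def angle_arcsec(Bk, B1, pitch=97.31):
    """Eq. (16): convert the centroid shift B_k - B_1 (in pixels) to
    seconds of arc, using the calibrated pitch of Eq. (15)."""
    return (Bk - B1) * 60.0 / pitch
```

For instance, `angle_arcsec(107.31, 10.0)` gives ≈ 60 s of arc, i.e., one full division of displacement.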

VII. RESULTS AND DISCUSSION

A. Resolution Discussion

Given the GD's Pearson coefficient values (rG) as a function of N/S, a regression curve is determined by

rG = 1 + 0.0658 · X · ln(X), X = N/S. (17)

Each rG, in accordance with this regression curve, has a quadratic mean (s) given by

s = L · (1/√12) · √(N0/(N0 − 2)) · √(1/rG² − 1) (18)

where L is the width of the simulation interval (1.25 s of arc) and N0 is the number of points used (100).

Fig. 18. Detection characteristics on the real image for the Gaussian and Hough detectors. The Gaussian detector shows less overall discrepancy, as it stays within a range of ±0.1 s of arc. The Hough detector's range is within ±0.15 s of arc.

The maximum intensity is 2^n per pixel (n being the number of bits of the camera); therefore, the best N/S relation that can be expected is 1 pixel/2^n pixels. In this way, it is possible to calculate the smallest N/S for different numbers of bits. The Rayleigh criterion (in optics) states that two lines can be distinguished if they are separated by at least the sum of their half-widths. Extending this (universally accepted) criterion to our case, we can define the expected minimum resolution as Rn = s. This quantity includes the properties of the algorithm, the spatial resolution of the camera, the relative width of the line, and the intensity resolution of the camera, that is, all the properties of the image and of its processing. For an 8-bit camera, R8 = 0.0195 s of arc. Along this line, the camera needed to increase the resolution can be chosen; e.g., with an n = 12 bit camera, R12 = 0.0060 s of arc.
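Chaining (17) and (18) with X = 1/2^n reproduces the quoted values. A sketch:

```python
import math

def expected_resolution(n_bits, L=1.25, N0=100):
    """Expected minimum resolution R_n = s for an n-bit camera:
    best N/S is X = 1/2^n, mapped through Eqs. (17) and (18)."""
    X = 1.0 / 2 ** n_bits
    rG = 1 + 0.0658 * X * math.log(X)                     # Eq. (17)
    return (L / math.sqrt(12)) * math.sqrt(N0 / (N0 - 2)) \
        * math.sqrt(1 / rG ** 2 - 1)                      # Eq. (18)
```

Here `expected_resolution(8)` ≈ 0.0195 and `expected_resolution(12)` ≈ 0.0060, matching R8 and R12 above.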

Other uncertainty contributions are the autocollimator's optics, the camera's optics, possible misalignments, and the nonlinearity of the CCD in x, y (explained in Section VII-C).

B. Comparison of Measurements Performed With the Modified Autocollimator and the Electronic Level

In this section, the measurements made with the GD are analyzed. The data (Xk, αk) for k = 1, . . . , 25 were fitted with a linear regression model

αk = a · Xk + b + ε, ε ∼ N(0, σ). (19)

The slope was a = 1.0232 ± 0.0004 (95% confidence) and the intercept was b = 0.81 ± 0.02 s of arc (95% confidence). The difference between a and unity is near 2%, a value very close to the electronic level accuracy.

The error in the fitting caused by the calibration of the electronic level was disregarded because this research focuses on the new instrument's resolution. The discrepancies are defined by (see Fig. 18)

Dk = αk − (a · Xk + b). (20)

The root-mean-square value (Drms) of the discrepancies calculated in this controlled experiment was

Drms = √(∑k Dk²/25) = 0.04″. (21)
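Equations (20) and (21) can be sketched as:

```python
import numpy as np

def discrepancy_rms(alpha, X, a, b):
    """Eqs. (20)-(21): discrepancies D_k between the measured angles
    and the fitted line, and their root-mean-square value."""
    D = alpha - (a * X + b)          # Eq. (20)
    return np.sqrt(np.mean(D ** 2))  # Eq. (21)
```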


Fig. 19. Detection characteristics on the real image for the MD, WLSD, and PD detectors. The detectors' range is within ±0.5 s of arc, very close to the instrument's resolution.

This quantity is an estimate of the instrument's accuracy (autocollimator plus camera) as defined in VIM-2008 (item 2.3, Note 3). The previous definition of resolution (Rn) satisfies Rn < Drms and can be used to estimate the resolution gain GG of the vision system with respect to that of the instrument (R = 0.5″) using the Gaussian detector

GG = R/Rn = 0.5/0.0195 = 25.64 ≈ 25. (22)

The results show an increase in the instrument's measurement resolution when the operator is replaced by an automated procedure using this electronic interface. The Hough detector, on the other hand, has Rn = 0.06″ and GH = 8.

If the same analysis is carried out for the real images with the PD, MD, and WLSD detectors, the following resolution gains are obtained: GP = 4, GM = 6, and GWLS = 7. However, the range of the discrepancies shown in Fig. 19 indicates that they are very close to the instrument's resolution, which greatly limits the use of these detectors. The conclusions of Section V-F are confirmed.

C. Uncertainty Discussion

According to the Joint Committee for Guides in Metrology [26], the combined standard uncertainty uc²(y) is given by

uc²(y) = ∑_{i=1}^{N} (∂f/∂xi)² · u²(xi). (23)

Applied to our case, using (1)

uc²(α) = (60/Δxy)² · u²(By) + (60 · By/Δxy²)² · u²(Δxy) (24)

where the value of u(Δxy) is given by

u(Δxy) = √(unoise² + uBIAS²) (25)

where unoise = 0.01″ is the uncertainty associated with the random noise, obtained directly from the regression model used for (15). Using (3), uBIAS can be estimated as

uBIAS = 2 · ΔB/(97.31 · 7) = 0.001″. (26)

Even though in our case uBIAS ≪ unoise, care must be taken in the filtering process. On the other hand, the uncertainty u(By) is only weakly affected by vignetting, because the background was dark when the By measurements were performed. For this reason, we can roughly approximate u(By) = unoise, which gives an uncertainty uc(α) = 0.015 s of arc. Comparing this result with (21), we conclude that uc(α) is underestimated, because uncertainties related to ambient conditions, autocollimator and CCD nonlinearities, and so on were not included. A more detailed analysis of this item is beyond the scope of this paper and will be included in another communication.
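The propagation in (24), with u(Δxy) from (25), can be sketched as follows; the default values echo the text, and treating unoise as expressible in the same units as u(Δxy) follows the paper's simplification:

```python
import math

def combined_uncertainty(By, u_By, pitch=97.31, u_pitch=0.01):
    """Eq. (24): combined standard uncertainty of alpha = 60*By/pitch,
    from the centroid uncertainty u(B_y) and the pitch uncertainty
    u(Delta_xy) of Eq. (25)."""
    return math.sqrt((60.0 / pitch * u_By) ** 2
                     + (60.0 * By / pitch ** 2 * u_pitch) ** 2)
```

With u_pitch = 0, the expression collapses to (60/Δxy) · u(By), the first sensitivity term alone.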

VIII. CONCLUSION

In this paper, the main characteristics of an electronic interface are described. The interface comprises a CCD camera and an image processing package custom built around the MATLAB environment. The image package includes two main procedures: one for reticle scale calibration and another to determine the position of the crosshair lines. The main image processing method was chosen through a detailed simulation.

The measurements were performed using two sets of images obtained independently from each other. The reticle scale images were used to obtain the distance between two consecutive divisions of the scale, and the crosshair line images were used in angle measurement. The procedure to obtain the reticle scale value Δxy is carried out just once, since it depends solely on camera resolution and not on the measured values.

The interface was set up to improve the resolution of measurements performed with a standard Nikon 6B/6D dark-field autocollimator. The results indicate that a 25-fold increase in resolution is feasible. The key to angle calculation is the use of filtered and corrected image sections modeled with Gaussian functions. The centroids of these functions allow for the determination of the position of each line of the scale with subpixel resolution.

The choice of this algorithm was made through a rigorous analysis using simulation. The simulation included many algorithms that were tested under different noise levels and with tilted lines, thus allowing the optimal one to be identified. Moreover, the simulation allowed for an analysis of the software's contribution to the measurement uncertainty.

To calibrate the whole range of the autocollimator (30 min of arc), it will be necessary to obtain a more accurate reference for the angle values and an improved experimental setup to generate angles covering the whole range of the instrument. Furthermore, to obtain reliable measurements at subpixel level, it is necessary to evaluate all the contributions to the uncertainty of the measured angles, with a detailed analysis of each one according to the Guide to the Expression of Uncertainty in Measurement of the International Organization for Standardization, and to introduce a temperature stability control so that the uncertainty can be less than the one produced by image processing. This will be done in a future publication.

ACKNOWLEDGMENT

The authors dedicate this paper to Dr. G. Ames, whose early passing has deeply moved them.


REFERENCES

[1] K. Li, C. Kuang, and X. Liu, "Small angular displacement measurement based on an autocollimator and a common-path compensation principle," Rev. Sci. Instrum., vol. 84, no. 1, p. 015108, 2013.

[2] A. V. Kirsanov, T. V. Barmashova, V. V. Zelenogorskii, and A. K. Potemkin, "Computer-aided two-coordinate autocollimator for measuring small angular deviations," Instrum. Experim. Techn., vol. 52, no. 1, pp. 141–143, Jan. 2009.

[3] R. Soufli et al., "Development and calibration of mirrors and gratings for the soft X-ray materials science beamline at the linac coherent light source free-electron laser," Appl. Opt., vol. 51, no. 12, pp. 2118–2128, Apr. 2012.

[4] J. Yuan and X. Long, "CCD-area-based autocollimator for precision small-angle measurement," Rev. Sci. Instrum., vol. 74, no. 3, pp. 1362–1365, 2003.

[5] T. B. Arp, C. A. Hagedorn, S. Schlamminger, and J. H. Gundlach, "A reference-beam autocollimator with nanoradian sensitivity from mHz to kHz and dynamic range of 10^7," Rev. Sci. Instrum., vol. 84, no. 9, pp. 095007-1–095007-7, Sep. 2013.

[6] S. G. Alcock et al., "The Diamond-NOM: A non-contact profiler capable of characterizing optical figure error with sub-nanometre repeatability," Nucl. Instrum. Methods Phys. Res. A, Accel. Spectrom. Detect. Assoc. Equip., vol. 616, nos. 2–3, pp. 224–228, May 2010.

[7] J.-B. Tan, L. Ao, J.-W. Cui, and W.-J. Kang, "Further improvement of edge location accuracy of charge-coupled-device laser autocollimators using orthogonal Fourier–Mellin moments," Opt. Eng., vol. 46, no. 5, pp. 057007-1–057007-12, May 2007.

[8] J. Yuan, X. Long, and K. Yang, "Temperature-controlled autocollimator with ultrahigh angular measuring precision," Rev. Sci. Instrum., vol. 76, no. 12, p. 125106, 2005.

[9] Davidson Optronics. 5-Inch-Aperture Autocollimator D-652. [Online]. Available: http://davidsonoptronics.com/products/autocollimators/d652/, accessed May 2014.

[10] C. Schurrer, A. G. Flesia, G. Bergues, G. Ames, and L. Canali, "Interfaz visual para un autocolimador Nikon 6D mediante procesamiento de imágenes con precisión sub-pixel: Un caso de estudio," Rev. Iberoamer. Autom. Informát. Ind., vol. 11, no. 3, pp. 327–336, 2014.

[11] G. Bergues, G. Ames, L. Canali, C. Schurrer, and A. G. Flesia, "External visual interface for a Nikon 6D autocollimator," in Proc. IEEE Int. Instrum. Meas. Technol. Conf. (I2MTC), May 2014, pp. 35–39.

[12] A. G. Flesia, G. Ames, G. Bergues, L. Canali, and C. Schurrer, "Sub-pixel straight lines detection for measuring through machine vision," in Proc. IEEE Int. Instrum. Meas. Technol. Conf. (I2MTC), May 2014, pp. 402–406.

[13] A. Fabijanska, "A survey of subpixel edge detection methods for images of heat-emitting metal specimens," Int. J. Appl. Math. Comput. Sci., vol. 22, no. 3, pp. 695–710, 2012.

[14] J. B. Park, J. G. Lee, M. K. Lee, and E. S. Lee, "A glass thickness measuring system using the machine vision method," Int. J. Precis. Eng. Manuf., vol. 12, no. 5, pp. 769–774, Oct. 2011.

[15] A. Fabijanska and D. Sankowski, "Computer vision system for high temperature measurements of surface properties," Mach. Vis. Appl., vol. 20, no. 6, pp. 411–421, Oct. 2009.

[16] N. T. Goldsmith, "Deep focus; a digital image processing technique to produce improved focal depth in light microscopy," Image Anal. Stereol., vol. 19, no. 3, pp. 163–167, 2011.

[17] D. Vollath, "The influence of the scene parameters and of noise on the behaviour of automatic focusing algorithms," J. Microscopy, vol. 151, no. 2, pp. 133–146, Aug. 1988.

[18] E. Hecht, Optics, 4th ed. Reading, MA, USA: Addison-Wesley, Aug. 2001.

[19] R. W. Schafer, "What is a Savitzky–Golay filter? [Lecture Notes]," IEEE Signal Process. Mag., vol. 28, no. 4, pp. 111–117, Jul. 2011.

[20] Z. Ping, Z. Wenzhen, D. Zhenyun, and Z. Wenhui, "Subpixel-precise edge extraction algorithm based on facet model," in Proc. 4th Int. Conf. Comput. Inf. Sci. (ICCIS), Aug. 2012, pp. 86–89.

[21] N. Aggarwal and W. C. Karl, "Line detection in images through regularized Hough transform," IEEE Trans. Image Process., vol. 15, no. 3, pp. 582–591, Mar. 2006.

[22] R. O. Duda and P. E. Hart, "Use of the Hough transformation to detect lines and curves in pictures," Commun. ACM, vol. 15, no. 1, pp. 11–15, Jan. 1972.

[23] R.-C. Lo and W.-H. Tsai, "Gray-scale Hough transform for thick line detection in gray-scale images," Pattern Recognit., vol. 28, no. 5, pp. 647–661, May 1995.

[24] E. B. Niven and C. V. Deutsch, "Calculating a robust correlation coefficient and quantifying its uncertainty," Comput. Geosci., vol. 40, pp. 1–9, Mar. 2012.

[25] CEMETRO Laboratory. [Online]. Available: http://www.investigacion.frc.utn.edu.ar/cemetro/laboratorio.html, accessed May 2014.

[26] Evaluation of Measurement Data—Guide to the Expression of Uncertainty in Measurement, document JCGM 100:2008.

Guillermo J. Bergues was born in Buenos Aires, Argentina, in 1994. He received the B.S. degree in electronics engineering from the National Technological University, Córdoba, Argentina, in 2010.

He is currently with the CEMETRO Laboratory, Universidad Tecnológica Nacional–Facultad Regional Córdoba, Ciudad Universitaria, Córdoba, where he is involved in research on machine vision for metrological applications. He is a Ph.D. candidate under the direction of Dr. Flesia. His current research interests include languages for machine vision and image processing techniques for metrological applications.

Luis Canali (SM'03) received the Electronics Engineering and Ph.D. degrees from Universidad Tecnológica Nacional, Córdoba, Argentina, in 1977 and 1999, respectively. He is currently a Professor with the Department of Electronics and the Chairman of the Centre for IT Research at Universidad Tecnológica Nacional. His current research interests include robotics, control of machine tools, and signal processing.

Clemar Schurrer received the Ph.D. degree in physics from the Faculty of Mathematics, Astronomy and Physics, Universidad Nacional de Córdoba, Córdoba, Argentina, in 1995. He is currently a Professor with the Centro de Metrología Dimensional, Universidad Tecnológica Nacional–Regional Córdoba.

His current research interests include angle metrology applied to surface form characterization.

Ana Georgina Flesia received the B.S. and Ph.D. degrees in mathematics from the Universidad Nacional de Córdoba, Córdoba, Argentina, in 1994 and 1999, respectively.

She completed her postdoctoral studies with the Department of Statistics, Stanford University, Stanford, CA, USA. She is currently an Associate Professor with the Mathematics, Physics and Astronomy Institute, Universidad Nacional de Córdoba, and an Adjunct Researcher with CONICET, Buenos Aires, Argentina. Her current research interests include statistical analysis of synthetic aperture radar and infrared images, computational harmonic analysis of natural images, and digital image processing.

