
Range sensing based on shearing interferometry

Gerd Häusler, J. Hutfless, Manfred Maul, and Hans Weissmann

We report an optical range sensor that combines the advantages of focus sensing and interferometry. It works for specularly reflecting objects and for diffusely reflecting objects. The sensor is simple, robust against vibrations, and has high depth resolution. The aperture is very small, hence the sensor is small and minimizes shading problems. The sensor is based on the following principle: A light spot is projected onto the object under test. The radius of the wave that is scattered at the object is measured with high accuracy by shearing interferometry.

1. Introduction

In this paper we introduce an optical range sensor based on the principle illustrated in Fig. 1. A small light spot is projected onto the object under test. From the object, a scattered wave emerges. We measure the radius of this wave in the sensor plane. This radius is the desired distance z of the illuminated object spot. For high distance resolution the radius can be measured very accurately. This measurement is performed by shearing interferometry: the object spot is virtually duplicated (for example, by a Savart plate) and the resulting interference fringe pattern is evaluated.

The system has several advantages compared with commercially available systems. These features are as follows:

Depth resolution δz can be scaled from submicrometers to millimeters, dependent on the measuring distance.

The sensor works for specularly reflecting objects and for diffusely reflecting (rough) objects.

Until now we achieved a depth resolution δz (rms) of 10 μm at a working distance of 300 mm (1:30,000) for specularly reflecting objects and of 1:5000 for diffusely reflecting objects. Reduction of speckle noise is necessary with rough objects to achieve high relative distance resolution. There is no ambiguity of results, unlike in conventional interferometry.

The apertures of illumination and observation can be very small; the occurrence of shading effects and hidden points is minimized. Compared to the Rayleigh depth of focus, a multiple superresolution in depth can be achieved.

The sensor is small and consists of only a couple of simple elements. The system utilizes common path interferometry. Hence it is robust against vibrations and air turbulence.

The authors are with the University of Erlangen-Nuremberg, Physics Institute, 1 Erwin-Rommel-Strasse, D-8520 Erlangen, Federal Republic of Germany.

Received 17 March 1988.
0003-6935/88/224638-07$02.00/0.
© 1988 Optical Society of America.

We consider 3-D data (sometimes called 2½-D) about the shape of an object much better adapted for automatic inspection than 2-D data about the local reflectance of objects. The main advantage of 3-D data over 2-D data is the invariance of the shape against variations of the illumination and against soiling of the object: 3-D data suffer much less from noise and artifacts. Furthermore, in most industrial applications, the primary question is about the shape of the object, not about the local reflectance. Nevertheless, local reflectance is mainly utilized in image processing as the source of information. The reason is that it is much easier to get a TV picture of the object into a computer than data about the shape of the object. However, from the standpoint of systems theory, it is not reasonable to invest a lot of software work in 2-D data which are spoiled by artifacts and which are incomplete with regard to the primary 3-D question of the user. Instead, one should concentrate on preprocessing the data. A very effective way of preprocessing is to acquire the data about the shape z(x,y) of the object.

Several principles of 3-D sensing are known.1 These principles are triangulation, time-of-flight measurement, focus sensing, and interferometry. Advantages and drawbacks are equally distributed over the four principles.

Most commercial 3-D sensors are based on triangulation. Triangulation utilizes observation from two different directions (for example, stereophotogrammetry) or structured illumination from one direction and observation from a different direction (for example, moiré).


Fig. 1. Range sensing by shearing interferometry: basic setup (light source, beam splitter, object, and sensor head consisting of polarizer at 45°, Savart plate, and photosensor).

Depth information is converted into a lateral position on the detector, which can easily be evaluated. With a given lateral resolution Δx on the object and a given triangulation angle θ between the direction of illumination and the direction of observation, the depth resolution δz of such a system can easily be estimated as δz = Δx/tanθ, which can be approximated for small angles θ:

δz = Δx/sinθ. (1)

We assume that the lateral resolution Δx is given by the Rayleigh limit Δx = λ/(2 sinu), where λ denotes the wavelength and sinu denotes the aperture of the imaging optics. Then we get

δz = ±λ/(2 sinθ sinu). (2)

In a conventional imaging system the edge of the pupil sees the object from a different angle θ = u than the center of the pupil. If we introduce θ = u into the depth resolution we find

δz = ±λ/(2 sin²u), (3)

which is the classical depth resolution according to the Rayleigh λ/4 criterion. This is the connection of focus sensing with triangulation.
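To make Eqs. (1)-(3) concrete, the following minimal sketch evaluates them numerically. The wavelength, aperture, and triangulation angle are illustrative assumptions, not values taken from the experiments described below.

    import math

    lam = 0.63e-6                              # wavelength in m (assumed He-Ne line)
    sin_u = 0.035                              # aperture of the imaging optics (assumed)
    sin_theta = math.sin(math.radians(20.0))   # triangulation angle theta = 20 deg (assumed)

    dx = lam / (2 * sin_u)                             # Rayleigh lateral resolution limit
    dz_triangulation = lam / (2 * sin_theta * sin_u)   # Eq. (2)
    dz_rayleigh = lam / (2 * sin_u ** 2)               # Eq. (3), i.e., theta = u

    print(dx, dz_triangulation, dz_rayleigh)
    # roughly 9 um, 26 um, and 0.26 mm, respectively

With such a small aperture the Rayleigh depth is hundreds of micrometers; a larger triangulation angle, or the interferometric evaluation described below, is needed for finer depth resolution.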

Most triangulation systems make use of a large triangulation angle θ to achieve high depth resolution. However, they have to pay for this resolution: it is impossible to look into holes, etc. Examples are described2,3 of how to achieve higher resolution with smaller apertures. However, shading as a main problem of triangulation still remains.

Methods that measure the time of flight do not suffer from hidden points. Each location that can be illuminated can be measured. The reason is that the aperture can be chosen as small as necessary to ensure the required lateral resolution on the object. However, methods that measure the time of flight of light have a resolution not much better than 1 mm because of the large velocity of light.4 If this limit could be improved, time-of-flight measurement would be a promising candidate in the competition.

Focus sensing methods make use of the variations in an image that occur when the focusing changes. There are many implementations of this principle. There are passive systems that utilize the natural structure of the object.5 The depth resolution of these methods is in the range of the Rayleigh depth [see Eq. (3)]. The resolution can largely be improved with structured illumination, in such a way that the observing optics sees a dilute intensity distribution. Even microprofilometry with resolution in the nanometer range is possible.6

Interferometry uses the wavelength of light as a measure of distance. Interferometric methods are known with a depth resolution in the angstrom range.

In a certain sense, optical profilometers are range sensors too, but they work in the submicron range. An interesting example combining several interferometric techniques is given in Ref. 7. In interferometric profilometry a narrow spot is projected onto the object and the phase of the reflected wave is compared to a reference. Unfortunately, this principle cannot be scaled for macroscopic range sensing. For macroscopic applications interferometry is usually too sensitive. The large depth sensitivity is connected with ambiguities and a lack of robustness against environmental perturbations.

A further important drawback is the following: interferometric profilometers need a reflected wave that does not suffer from speckles. This cannot be ensured in macroscopic range sensing. Considerable success has been achieved in overcoming the problems in applying interferometry. One solution is to use very large wavelengths.8 Recently, a range sensor based on two-wavelength speckle interferometry was reported9 that works on rough surfaces. However, a certain lack of robustness still remains in these methods, since they are not common path interferometers.

From the above considerations it is apparent that high (relative) sensitivity, robustness, uniqueness of results, absence of hidden points, operation on rough surfaces, and so on cannot all be achieved by one single principle.

Our method described above can be considered a combination of focus sensing and interferometry: the wavefront that emerges from the object is utilized, as in focus sensing methods. However, the wavefront variations are measured much better than λ/4. Hence a depth resolution is achieved that is much better than that given by the Rayleigh λ/4 criterion. The evaluation of the wavefront is performed by interferometry. However, we do not compare the wavefront with an external reference. Instead, shearing interferometry is utilized. The advantages are obvious: phase variations between the reference and the object wave cannot occur through small vibrations or small air turbulence. Furthermore, the object may be diffusely reflecting, since we compare the (speckled, i.e., distorted) wave with its slightly shifted version.

In the next section we discuss the method and its limitations. In Sec. III methods for the high resolution evaluation of the interference pattern are described.

II. Basic Limitations of Fringe Formation

In the following we explain the basic ideas of the principle.

A light spot is projected onto the object under test (Fig. 1). Its lateral location is determined by a scanning device. Projection can be performed with a very small aperture furnishing a large depth of focus. For the sake of simplicity we discuss in a first approach an idealized pointlike spot on the object. This case is approximated by an object that reflects the light specularly.

A spherical wave emerges from this light spot. Its curvature carries information about the distance z from the light spot to the sensor plane. The curvature of the wavefront is measured by means of a lateral shearing interferometer.

The shearing interferometer (see Fig. 1) consists of a Savart plate (orientation 0°) with a preceding polarizer under 45° orientation. The Savart plate performs the virtual duplication of the wavefront and introduces a lateral shear s between the two wavefronts. A subsequent polarizer, oriented at 45° to the optical axes of the Savart plate, makes it possible for the two orthogonally polarized waves to interfere. The interferogram in the plane of observation at distance z from the object consists of straight fringes (similar to Schuster fringes) with period p:

p = λz/s. (4)

This result is based on a quadratic approximation of the spherical wavefront, which holds for the small aperture that we utilize. Hence, for known shear s and known wavelength λ of the illuminating light, the measuring distance z can be calculated uniquely from the fringe period.
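Since Eq. (4) is linear in z, the distance follows directly from the measured fringe period. The following minimal sketch inverts Eq. (4); the numbers are illustrative assumptions, chosen to resemble the example values quoted later in the text.

    lam = 0.63e-6      # wavelength in m (assumed)
    s = 70e-6          # lateral shear in m (assumed)
    p = 2.7e-3         # measured fringe period in m (assumed value)

    z = p * s / lam    # invert Eq. (4): p = lam * z / s
    print(z)           # -> 0.3, i.e., a measuring distance of 300 mm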

The situation becomes more complicated for the general case of a rough surface and an extended light spot on the surface. In this case the wavefront in the observation plane is perturbed by speckle, in amplitude and phase. Some limiting cases can immediately be estimated.

A laser spot with diameter do is projected onto the object (at z = 0). In the plane of measurement, at distance z, we get a speckle pattern. The average speckle diameter is ds:

ds = λz/do. (5)

The speckled wave is virtually duplicated and sheared. We now have two identical speckle patterns with lateral shear s. It is well known10 that there is a significant lateral phase correlation in the speckle field, as long as the lateral distance does not exceed the diameter of one speckle. Hence, if we want to see a distinct fringe pattern in spite of the diffuse reflection at the object, the shear s must satisfy the condition

s ≤ ds. (6)

This condition can usually be satisfied: consider a distance z = 300 mm and a spot diameter do = 200 μm. The spot diameter is assumed to be large, since we do not know where the object is; instead, we want to measure its distance. Hence, we must take defocusing into account in the projection step. With the given values of z and do and with λ = 0.63 μm, a maximum shear of 1.9 mm is possible. We can work with shears of, for example, 70 μm.

Figure 2 illustrates condition (6). It shows the fringe patterns for different amounts of shear. For small shears, the phases of the fringes within different speckles are strongly correlated. This is not the case for a shear in the range of the speckle diameter. Hence, in the latter case the contrast of the fringes is very low.

Substituting (6) into Eqs. (5) and (4) we get an equivalent condition:

do < p. (7)

The spot diameter has to be smaller than the period of the fringes. This is plausible: each illuminated object location x generates its own fringe pattern centered at x. All these differently shifted patterns have to be superimposed coherently with statistical phases. Sufficient contrast can only be achieved if the spot diameter is smaller than the fringe period.

Equation (6) or (7) is necessary for a distinct fringe pattern in the sensor plane. Of course, we must keep in mind that the fringe location suffers from a small statistical shift, even for s << ds (or for do << p), due to the nonperfect correlation between the sheared wavefronts. Hence speckle noise will afflict the distance resolution. This problem is discussed in more detail in Ref. 11. Here we describe the results of experiments only.

III. High Resolution Fringe Evaluation

We investigated two techniques for the evaluation of the shearing interferogram:

(1) Fourier transformation of the interferogram.


Fig. 2. Fringe patterns for a diffusely reflecting object, for a constant shear, and different speckle size: (a) large speckles; (b) medium speckles; (c) speckles approximately as large as the shear.

The mean fringe frequency appears as the maximum of the spectrum. This technique is described in Sec. III.A.

(2) The phases of the fringes are measured directly at a number of points by a heterodyne technique. This is discussed in Sec. III.B.

A. Fourier Transform Speckle Shearing Interferometry

Distance z is encoded in the mean frequency of the fringes in the speckle shearing interferogram, according to Eq. (4). The maximum value of the Fourier spectrum furnishes the fringe frequency with high accuracy. There is an averaging over all statistical phase errors.

In the first experiment we extract the information from the shearing interferogram with a CCD line camera. The line is oriented perpendicular to the fringes. The evaluation of the intensity signal is performed in two steps:

(1) The mean intensity signal is subtracted and the fast Fourier transform (FFT) of the zero-mean signal is calculated. The maximum of the spectrum indicates the number of fringes of the shearing interferogram that are observed within the field of view. The resulting distance resolution, however, is much too low for practical use.

(2) Calculation of intermediate frequencies: the FFT of the first step delivers sampled values at discrete frequencies. However, the maximum amplitude of the spectrum is usually located somewhere between the sampling points. Hence, we have to interpolate the FFT and find the amplitude at intermediate frequencies.

The distance between adjacent frequency samples is inversely proportional to the width of the signal. Hence, we get the desired intermediate frequencies if we artificially make the width of the signal larger. We regard the observed part of the shearing interferogram, with N sampled values, as a small portion within a much larger area with M sampled values. This larger area is essentially empty, with the exception of the first N sampled values (the measured interferogram). As a result, the discrete Fourier transform (DFT) of the extended signal displays denser sampling. Thus the frequency where the spectrum has its maximum, and hence the mean fringe frequency, can be determined more accurately by a factor of M/N.

Obviously, for M >> N with N ≈ 1000, a direct implementation would require an unrealistically high amount of hardware and computation time. We minimize computation time by an iterative procedure.

We start with an FFT of the M = N samples of the detector signal. The first estimate v0(1) of the fringe frequency is the location of the maximum of the FFT. In the next step we double the frequency resolution by doubling M to 2N. But we do not perform this by an FFT; only two frequency samples are calculated by a DFT. These are the values vl(2) and vr(2) just left and right of the first estimate v0(1). Of these three values, v0(1), vl(2), and vr(2), the one with the highest DFT modulus is taken as the second estimate and as the initial value v0(2) for the next iteration. By this oversampling concept the resolution can be increased by a factor of 2^n by performing only n iterations.
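The following sketch implements this iterative oversampling in Python with NumPy. It is a minimal illustration of the concept described above, not the authors' original implementation; details such as the handling of the dc term and the number of iterations are assumptions.

    import numpy as np

    def fringe_frequency(signal, iterations=7):
        # Estimate the mean fringe frequency (in cycles per record of N samples)
        # by iterative oversampling of the Fourier transform.
        x = np.asarray(signal, dtype=float)
        N = len(x)
        x = x - x.mean()                      # step (1): subtract the mean intensity
        n = np.arange(N)

        def dft_modulus(nu):
            # DFT modulus of the zero-mean signal at a (possibly non-integer) frequency nu
            return np.abs(np.sum(x * np.exp(-2j * np.pi * nu * n / N)))

        spectrum = np.abs(np.fft.rfft(x))
        nu = float(np.argmax(spectrum[1:]) + 1)   # first estimate: FFT maximum (dc excluded)
        step = 0.5                                # doubling M to 2N samples the half-integer frequencies
        for _ in range(iterations):
            nu = max([nu - step, nu, nu + step], key=dft_modulus)
            step /= 2                             # each iteration doubles the frequency resolution
        return nu

After n iterations the frequency estimate is refined by a factor of 2^n relative to the FFT bin spacing, which mirrors the behavior shown in Fig. 3.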

Figure 3 illustrates the development of the resolution with subsequent iterations for two different distances. It should be mentioned that the resolution of the first iteration is so low that the two different distances lead to the same initial frequency v0(1).

Now we briefly report some experimental results of range sensing by Fourier transform shearing interferometry. Figure 4 shows results for specularly reflecting objects. Figure 4(a) displays the variation of the measured distance while the object was longitudinally shifted over a distance of 2 mm.


Fig. 3. High resolution evaluation of the fringe pattern by iterative oversampling of the Fourier transform (frequency estimate vs oversampling step 2^n). After seven iterations, the resolution is increased by a factor of 2^7.

Fig. 4. Experimental results of range sensing of a specularly reflecting object: (a) measured distance z vs actual z displacement and (b) deviations from the regression line taken from (a). The maximum deviation is ±20 μm at a distance z of about 300 mm. With standard deviation σ ≈ 10 μm, we get a relative depth resolution of σ/z ≈ 1/30,000. The measuring aperture was only 0.035.

To show the errors, Fig. 4(b) displays the deviation from the actual position. The maximum deviation is ±20 μm at a distance of about z = 330 mm. The standard deviation is about σ = 10 μm. Note that the aperture of observation in this experiment is only 0.035. Hence we have a fiftyfold superresolution compared with the corresponding Rayleigh depth of resolution, which is about ±0.5 mm. The relative resolution is σ/z ≈ 1/30,000.

For diffusely reflecting objects we cannot expect such a large resolution, according to the considerations of Sec. II.


Fig. 5. Experimental results of range sensing for diffusely reflecting objects. To reduce speckle errors, the object spot was moved across the object by a distance of 1.5 mm during each measurement. (a) Measured distance vs actual displacement and (b) deviation from the regression line taken from (a).

Indeed, experiments with speckled interferograms performed in the same way as described above displayed only a poor resolution, in the range of σ/z ≈ 1:500. It turns out that with speckled interferograms the resolution cannot be improved beyond the limit given by the Rayleigh depth.11

Hence, for improved resolution it is necessary to reduce speckles. The main source of speckle is spatial coherence. There are essentially three ways to reduce spatial coherence.

(1) We move the object spot over a small area of the object and temporally average the varying speckle patterns in the sensor plane. Of course, we then lose some lateral resolution on the object. However, this method is very simple and effective.

(2) We introduce temporally varying aberrations in the pupil of the projection lens (boiling pupil) and temporally average in the sensor plane. This can easily be implemented by blowing hot air through the pupil.

(3) We use a small thermal light source instead of a laser. With a high pressure mercury lamp, spatial coherence and speckle are strongly reduced and interference fringes are maintained.

Figure 5 displays results for diffusely reflecting objects in a representation equivalent to that of Fig. 4. For each measuring point, speckles are averaged while moving the (focused) light spot along a distance of 1.5 mm on the object. From Fig. 5(b) we see that the maximum deviation from the actual position is in the ±150-μm range at a distance of 280 mm. The relative (rms) resolution σ/z is about 1/3000. It should be mentioned that there is much room for improvement, for example, by moving the spot not only along a line but within a 2-D area on the object and by making use of the information in the whole detector plane (not only along one line, as has been done so far).



Fig. 6. High resolution fringe evaluation by heterodyning: basic setup (polarizer at 45°, Savart plate, quarter-wave plate, rotating analyzer, photodetectors, and phase measurement).


B. High Resolution Fringe Evaluation by Heterodyning

High resolution fringe evaluation is essential with our method because the wavefront changes only slightly for small changes in object distance. The oversampling concept works satisfactorily and can in principle utilize all the information in the pupil plane. However, this method takes a couple of seconds for each measurement without the use of a special purpose processor.

For specularly reflecting objects, or after speckle reduction, heterodyning is a more appropriate evaluation method. In common path interferometry the mutual temporal modulation of the two interfering waves is possible via a rotating polarizer.12

We use the device of Fig. 6. The running fringes are generated by a rotating polarizer acting on a circularly polarized wave. With f rotations per second, each photoelectric sensor sees an intensity I(t) temporally varying with frequency 2f.

There is a phase shift φs between the two sensor signals, dependent on the object distance z and the distance b between the sensors: φs = 2πsb/(λz). We measure this phase shift with high resolution (electrical reproducibility π/50,000). With the known constants s, b, and λ, we can calculate the distance z from the measured phase φs.
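As a minimal numeric sketch, the relation φs = 2πsb/(λz) can be inverted for the distance z. The detector separation and the measured phase below are illustrative assumptions; only the wavelength and shear resemble values mentioned earlier in the text.

    import math

    lam = 0.63e-6      # wavelength in m (assumed)
    s = 70e-6          # lateral shear in m (assumed)
    b = 5e-3           # separation of the two photodetectors in m (assumed)
    phi = 11.64        # measured phase shift in rad (assumed)

    z = 2 * math.pi * s * b / (lam * phi)   # invert phi = 2*pi*s*b/(lam*z)
    print(z)                                # -> about 0.30 m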

In Fig. 7 the result of an experiment with a specularly reflecting object is displayed in a manner similar to that of Figs. 4 and 5. The relative (rms) resolution is 1:22,000. It is somewhat smaller than in the experiment of Fig. 4. The reason is that with heterodyning we take less information from the interference field than with the Fourier transformation. Hence, the influence of local errors on the fringe position is stronger. Furthermore, the rotation of the analyzer is a source of error. The mechanical rotation could be avoided by utilizing a Zeeman laser.13

Fig. 7. Experimental results of heterodyne evaluation with a specularly reflecting object: (a) measured object distance (mm) vs actual displacement (mm); (b) deviation from the actual position (mm) vs actual object position (mm).

Experiments with diffusely reflecting objects and speckle reduction by a boiling pupil furnished a resolution (rms) of 0.19 mm (1:2000) at a working distance of 400 mm (see Fig. 8). Experiments with a thermal source yield a relative depth resolution (rms) of 1:5500. Referring to the small aperture used, an approximately fivefold superresolution could be achieved with the boiling pupil. With the thermal source, a twenty-fivefold superresolution was achieved.

IV. Conclusions

Range sensing by shearing interferometry is possible for specularly reflecting and diffusely reflecting objects. Depth resolution can be very high in the first case. For diffusely reflecting objects a high depth resolution can also be achieved. However, reduction of speckle noise in the interferogram is necessary.


Fig. 8. Experimental results of heterodyne evaluation for a diffusely reflecting object (measured object distance in mm vs lateral displacement in relative units). To study the influence of speckle on the reproducibility, several measurements are performed at different locations of the surface but at a constant object distance. The small squares show the statistical variation without speckle reduction; the circles indicate measurements with speckle reduction by a boiling pupil.


The method works with very small apertures, and there is almost no hidden point problem. It is robust, since no external reference is necessary, and it is small and simple.

References
1. T. C. Strand, "Optical Three-Dimensional Sensing for Machine Vision," Opt. Eng. 24, 33 (1985).
2. M. Halioua and H. C. Liu, "Optical Sensing Techniques for 3-D Machine Vision," Proc. Soc. Photo-Opt. Instrum. Eng. 665, 150 (1986).
3. W. Dremel, G. Häusler, and M. Maul, "Triangulation with Large Dynamical Range," Proc. Soc. Photo-Opt. Instrum. Eng. 665, 182 (1986).
4. M. Taylor and D. A. Jackson, "High Precision Non-Contacting Optical Level Gauge," Opt. Acta 33, 1571 (1986).
5. G. Häusler and E. Körner, "Simple Focusing Criterion," Appl. Opt. 23, 2468 (1984).
6. R. Brodmann, "In-Process Optical Metrology for Precision Machining," Proc. Soc. Photo-Opt. Instrum. Eng. 802, 165 (1987).
7. G. E. Sommargren, "Optical Heterodyne Profilometry," Appl. Opt. 20, 610 (1981).
8. O. Kwon, J. C. Wyant, and C. R. Hayslett, "Rough Surface Interferometry at 10.6 μm," Appl. Opt. 19, 1862 (1980).
9. A. F. Fercher, H. Z. Hu, and U. Vry, "Rough Surface Interferometry with a Two-Wavelength Heterodyne Speckle Interferometer," Appl. Opt. 24, 2181 (1985).
10. J. W. Goodman, "Statistical Properties of Laser Speckle Patterns," in Laser Speckle and Related Phenomena, J. C. Dainty, Ed. (Springer-Verlag, Berlin, 1984), p. 9.
11. G. Häusler and J. Herrmann, "Range Sensing by Shearing Interferometry: Influence of Speckle," Appl. Opt. 27, 4631 (1988).
12. H. Z. Hu, "Polarization Heterodyne Interferometry Using a Simple Rotating Analyzer. 1: Theory and Error Analysis," Appl. Opt. 22, 2052 (1983).
13. H. Takasaki, N. Umeda, and M. Tsukiji, "Stabilized Transverse Zeeman Laser as a New Light Source for Optical Measurement," Appl. Opt. 19, 435 (1980).
