
Proceedings of the 2000 IEEE

International Conference on Robotics and Automation

San Francisco, California • April 2000

Landmark Selection for Terrain Matching

Clark F. Olson

Jet Propulsion Laboratory, California Institute of Technology

4800 Oak Grove Drive, Mail Stop 125-209, Pasadena, CA 91109

http://robotics.jpl.nasa.gov/people/olson/homepage.html

Abstract

We describe techniques to optimally select landmarks in order to perform mobile robot localization by matching terrain maps. The method is based upon a maximum-likelihood robot localization algorithm that efficiently searches the space of possible robot positions. We use a sensor error model to estimate the probability distribution of the terrain expected to be seen from the current robot position. The estimated distribution is compared to a previously generated map of the terrain, and the optimal landmark is selected by minimizing the predicted uncertainty in the localization. This approach can also be used to generate a sensor uncertainty field for use by the robot's planning component. Experiments indicate that landmark selection not only reduces the localization uncertainty but also improves the likelihood of success.

1 Introduction

In the localization process, a robot must decide what landmarks to use in order to determine where it is. Robots that use sensors with a limited field-of-view (for example, a rover with stereo cameras) must decide how to position the sensor(s) in order to optimize the ability of the robot to perform localization.

Several recent papers have discussed strategies for sensor placement or landmark selection for use in navigation or localization. A common approach is to consider which landmarks, from a pre-determined set of landmarks, will yield the best localization result. Sutherland and Thompson [11] developed one of the earliest methods for landmark selection. They applied heuristic functions to select a landmark triple, from the set of such triples, that is likely to yield a good localization result. Greiner and Isukapalli [2] learn a function to select landmarks that minimize the expected localization error. A related technique is given by Thrun [13], who trains a neural network to learn landmarks that optimize the localization uncertainty.

Yeh and Kriegman [14] select the subset of features from a set of possible features that minimizes a Bayesian cost of localization. Deng et al. [1] select a set of landmarks in order to minimize the cost of sensing over a path segment. Murphy et al. [6] first select candidate landmark triples using heuristics. The highest ranked candidate (and others, if necessary) are tested through experimentation with a robot. Sim and Dudek [9] consider image locations with high edge density as possible landmarks, which are represented using an appearance-based method. Landmarks are detected by matching in the image subspace and the resulting estimates are combined in a robust manner. Little et al. [4] find stable landmarks by first detecting image corners. The corners that lie on depth discontinuities are eliminated using stereo vision.

Each of these papers considers a problem where landmarks are selected from a pre-determined set of possible landmarks. Research that does not assume a pre-determined set of landmarks includes work by Simhon and Dudek [10]. They choose regions in which good metric maps can be established according to a distinctiveness measure. Grudic and Lawrence [3] learn a mapping between an image and the robot location, but they do not address the problem of where to best place the camera to obtain the image.

Little of the research to date can be successfully applied to localization in unstructured outdoor terrain, which is the problem that we address. We describe a technique that selects the best landmark to view for localization knowing only an elevation map of the terrain and an estimate of the robot's position. We assume that the robot is equipped with a limited field-of-view (FOV) range sensor, such as sonar or stereo cameras. Our method selects the position to aim the range sensor in order to optimally perform localization in the unstructured three-dimensional terrain.

The landmark selection technique that we use is based upon performing uncertainty estimation using a maximum-likelihood localization method [7, 8]. Prior to performing localization, the robot analyzes the terrain in the global map to select a localization target, which is the position in the terrain that the robot senses in order to generate a local map to compare against the global map. We desire a location that has distinctive terrain and thus allows the localization to be performed with a low uncertainty. This assumes that the error in the robot position is not so large that the localization target will be outside of the view of the robot when it attempts to sense this location. Active vision techniques can be used if, after the robot attempts to sense the localization target, no distinctive terrain is seen.

© 2000 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

The first step in determining the localization target is estimating the error present in the global map and the error expected from sensing the terrain at the robot's current position. These errors are encoded in a probability map of the terrain expected to be seen by the robot. Each cell in this map contains an estimate of the probability that the cell will be seen as occupied by the robot if the robot performs sensing with the cell in the field-of-view. By treating this probability map as a terrain map, we can apply previously developed uncertainty estimation techniques [7] to predict the uncertainty that will occur for any target in the probability map. The location with the lowest predicted uncertainty is selected as the localization target.

In addition to improving localization, these techniques can be applied to determining a sensory uncertainty field for the robot. The sensory uncertainty field is a concept introduced by Takeda et al. [12] that measures the expected distribution of errors in the robot position as the robot moves through some environment, performing sensing at some interval in order to improve localization. Given the uncertainty estimation and target selection methods, we can determine the expected localization uncertainty for any robot position in the environment.

These techniques have been applied to localization for Rocky 7, which is a research prototype for Mars exploration and science operations. In addition to body-mounted stereo cameras on the front and back, Rocky 7 has a stereo pair of cameras on a retractable mast that allows it to survey the terrain. See Fig. 1. We thus concentrate on localization using stereo range data. However, most of this discussion applies equally well to other range sensors. Experiments on Rocky 7 and with synthetic data indicate that the landmark selection not only decreases the robot's localization uncertainty, but also increases the probability of achieving a qualitatively correct localization result.

2 Terrain matching

Figure 1: The Rocky 7 Mars rover prototype in the JPL Mars yard with its mast deployed.

The basic localization technique that we use is to compare a map generated at the current robot position (the local map) to a previously generated map of the environment (the global map) [8]. This technique is reviewed here.

We generate both the local map and the global map (which may be the combined result of previous local maps) using stereo vision on-board the robot. The range image is converted into a digital elevation map under the assumption that we know the robot orientation through other sensors, although this restriction can be removed, if desired. To further simplify the problem, we use a high-pass filter on the heights so that the search for the robot position needs to be performed only in the x and y directions. The generated representation is then converted into a three-dimensional occupancy grid, as illustrated by the sketch below.
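As a rough illustration of this preprocessing (a minimal sketch, not the rover's software; the choice of a uniform filter and the cell size, filter width, and grid depth are all assumed values):

```python
import numpy as np
from scipy import ndimage

def elevation_to_occupancy(elev, cell=0.04, filter_size=9, n_z=32):
    """Convert an elevation map (meters) into a high-pass-filtered
    3-D occupancy grid. All parameter values are illustrative."""
    # High-pass filter: subtract a local mean so that low-frequency
    # terrain trends drop out and matching is needed only in x and y.
    highpass = elev - ndimage.uniform_filter(elev, size=filter_size)

    # Quantize the residual heights into n_z vertical cells.
    z_idx = np.clip(((highpass - highpass.min()) / cell).astype(int),
                    0, n_z - 1)

    # Mark the surface cell of each column as occupied.
    occ = np.zeros(elev.shape + (n_z,), dtype=bool)
    rows, cols = np.indices(elev.shape)
    occ[rows, cols, z_idx] = True
    return occ
```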

2.1 Map similarity measure

We formulate the map matching problem in terms of maximum-likelihood estimation. A convenient set of measurements that can be used for this problem are the distances from the occupied cells in the local map to their closest occupied cells in the global map. Denote these distances $D_1^X, \ldots, D_n^X$ for the robot position $X$. The likelihood function for the robot position can be formulated as the product of the probability distributions of these distances [8]. For convenience, we work with logarithms:

$$\ln L(X) = \sum_{i=1}^{n} \ln p(D_i^X) \qquad (1)$$
For the uncertainty estimation to be accurate, it is important that we use a probability distribution function (PDF) that closely models the sensor uncertainty. This can be accomplished using a PDF that is the weighted sum of two terms [7]:

$$p(d) = \alpha\, p_1(d) + (1 - \alpha)\, p_2(d) \qquad (2)$$

The first term describes the error distribution when the cell is an inlier (in the sense that the terrain position under consideration in the local map also exists in the global map). In this case, $d$ is a combination of the errors in the local and global maps at this position. In the absence of additional information with respect to the sensor error, we approximate $p_1(d)$ as a normal distribution:

$$p_1(d) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-d^2 / 2\sigma^2} \qquad (3)$$

The second term describes the error distribution when the cell is an outlier. In this case the position represented by the cell in the local map does not appear in the global map (e.g. due to range shadows or stereo outliers). In practice, we have found that modeling this term as a constant is both convenient and effective [7]:

$$p_2(d) = K \qquad (4)$$

Although $p_2(d)$ is not a probability distribution (it does not integrate to one), using the expected probability density for a measurement generated by a random outlier point yields excellent results:

$$K = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p(d)^2 \, dx \, dy \qquad (5)$$

This value can be estimated quickly by examining the Euclidean distance transform of the map.

In Equation (2), $\alpha$ is the probability that any particular cell in the local map is an inlier. For our occupancy grids, we assume that this value is relatively large ($\alpha = 0.95$). In practice, the localization is insensitive to the value of this variable. Finally, $\sigma$ is the standard deviation of the measurements that are inliers. This value can be determined from the characteristics of the sensor, or it can be estimated empirically by examining real data, which is the method that we have used for localization on Rocky 7.
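To make the measure concrete, here is a minimal Python sketch of Equations (1)-(4) for a single candidate pose (not the code used on Rocky 7; the array conventions and the default value of K are assumptions):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def log_likelihood(local_occ, global_occ, alpha=0.95, sigma=1.0, K=1e-3):
    """ln L(X) of Eq. (1) for one candidate pose, using the two-term
    PDF of Eqs. (2)-(4). Both arguments are boolean occupancy grids of
    the same shape, local_occ already shifted into the candidate pose."""
    # Distance from each cell to the closest occupied global-map cell
    # (the Euclidean distance transform mentioned for Eq. (5)).
    dist = distance_transform_edt(~global_occ)

    # D_i^X: distances for the occupied cells of the local map.
    d = dist[local_occ]

    # Inlier term: normal distribution, Eq. (3).
    p1 = np.exp(-d**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

    # Weighted sum with the constant outlier term, Eqs. (2) and (4).
    return np.sum(np.log(alpha * p1 + (1.0 - alpha) * K))
```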

2.2 Uncertainty estimation

We determine the uncertainty in the localization estimate by fitting a parameterized surface to the likelihood function in the neighborhood of the highest peak [7]. Since the likelihood function measures the probability that each position in the pose space is the actual robot position, the uncertainty in the localization is measured by the rate at which the likelihood function falls off from the peak.

We assume that the likelihood function can be approximated as a normal distribution in the neighborhood around the peak location. Fitting such a normal distribution to the computed likelihoods yields both an estimated variance in the localization estimate and a subpixel estimate of the peak location. While the approximation of the likelihood function as a normal distribution may not always be ideal, it yields a good fit to the local neighborhood around the peak, and our experimental results indicate that very accurate results can be achieved under this assumption.

We fit the peak in the likelihood function with:

$$k\, e^{-\frac{1}{2(1-\rho^2)}\left[\left(\frac{x-\mu_x}{\sigma_x}\right)^2 - 2\rho\left(\frac{x-\mu_x}{\sigma_x}\right)\left(\frac{y-\mu_y}{\sigma_y}\right) + \left(\frac{y-\mu_y}{\sigma_y}\right)^2\right]}, \qquad (6)$$

where $k = \frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}}$, $\mu_x$ and $\mu_y$ represent the subpixel position estimate, $\sigma_x$ and $\sigma_y$ are the standard deviations along the axes, and $\rho$ describes the orientation of the axes with respect to the global coordinate frame. The function is fit using the peak value and the eight neighboring values using a least-squares criterion in the log-likelihood domain.
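Since a normal distribution is a quadratic in the log domain, this fit reduces to linear least squares on a bivariate quadratic. A minimal sketch under that observation (our function and variable names, not the paper's; the 3x3 patch of log-likelihoods is assumed to be centered on the discrete peak):

```python
import numpy as np

def fit_peak(logL):
    """Fit z = ax^2 + by^2 + cxy + dx + ey + f to a 3x3 patch of
    log-likelihoods, i.e. a Gaussian in the log domain as in Eq. (6).
    Returns the subpixel peak offset and a 2x2 covariance estimate."""
    assert logL.shape == (3, 3)
    ys, xs = np.mgrid[-1:2, -1:2]
    x, y, z = xs.ravel(), ys.ravel(), logL.ravel()

    # Least-squares fit of the quadratic coefficients (9 eqns, 6 unknowns).
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]

    # Peak: where the gradient of the quadratic vanishes.
    H = np.array([[2 * a, c], [c, 2 * b]])        # Hessian of ln L
    offset = np.linalg.solve(H, -np.array([d, e]))

    # For ln L = const - (1/2) p' Sigma^-1 p, the Hessian is -Sigma^-1.
    cov = np.linalg.inv(-H)
    return offset, cov
```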

In addition to estimating the uncertainty in the localization estimate, we can use the likelihood scores to estimate the probability of a failure to detect the correct position of the robot [7]. This is particularly useful when the terrain yields few landmarks or other references for localization and thus many positions appear similar to the robot.

3 Probability mapping

In order to predict the uncertainty achievable by sensing some location (or combination of locations), we make a probabilistic prediction of the appearance of the terrain to the sensor. Each cell in this probabilistic map stores a probability estimate that the cell will be seen as occupied in the sensed map. We call this the probability mapping of the terrain. This mapping should encompass both the errors present in the generation of the global map and the errors expected in the new local map.

For the case of stereo vision, Matthies [5] has found that the errors are well approximated by a two-dimensional normal distribution with the major axis aligned away from the cameras. We thus convolve the global map with two normal distributions, one representing the error in the global map and one representing the error in the local map. Note, however, that the error in the map is a function of the location being sensed. The expected error grows with the square of the distance to the camera. We thus allow the width of the normal distributions to vary with the position in the environment.

Our position-variant spreading function is:

$$N(x, y, i, j) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\, e^{-\frac{1}{2(1-\rho^2)}\left[\left(\frac{x}{\sigma_1}\right)^2 - 2\rho\left(\frac{x}{\sigma_1}\right)\left(\frac{y}{\sigma_2}\right) + \left(\frac{y}{\sigma_2}\right)^2\right]} \qquad (7)$$

$$\sigma_1 = E_{\sigma_1}[x+i, y+j] \qquad \sigma_2 = E_{\sigma_2}[x+i, y+j] \qquad \rho = \frac{\sigma_1}{\sigma_2}\tan\theta[x+i, y+j]$$

In this equation, $E_{\sigma_1}[x+i, y+j]$ and $E_{\sigma_2}[x+i, y+j]$ are the expected standard deviations at the location $(x+i, y+j)$, and $\theta[x+i, y+j]$ is the orientation of the distribution (i.e. the direction of the sensor position with respect to $(x+i, y+j)$ when the map is created).

To estimate the error in the global map, we use:

$$P_G(i, j) = \sum_{x=-W}^{W} \sum_{y=-W}^{W} M(x+i, y+j)\, N(x, y, i, j), \qquad (8)$$

where $M(x, y)$ is the global map, $2W+1$ is the size of the convolution window, and $N(x, y, i, j)$ is the distribution described above. Incorporating the expected error in the local map, we get:

$$P(i, j) = \sum_{x=-W}^{W} \sum_{y=-W}^{W} P_G(x+i, y+j)\, N(x, y, i, j). \qquad (9)$$

Of course, the instances of $N(x, y, i, j)$ in (8) and (9) will be somewhat different, since the expected standard deviations and orientations will be different for the points in the global map versus the local map.
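A direct (unoptimized) sketch of the position-variant blur of Equation (8), anticipating the rotationally symmetric simplification adopted in Section 4; the callable `sigma_of(i, j)` supplying the expected standard deviation at each cell is a hypothetical stand-in for the sensor error model:

```python
import numpy as np

def probability_map(M, sigma_of, W=5):
    """Eq. (8) with a rotationally symmetric Gaussian whose width
    sigma_of(i, j) varies with the location being sensed. Brute-force
    double loop for clarity; the paper uses separable 1-D passes."""
    H, Wd = M.shape
    P = np.zeros_like(M, dtype=float)
    xs, ys = np.meshgrid(np.arange(-W, W + 1), np.arange(-W, W + 1))
    for i in range(H):
        for j in range(Wd):
            s = sigma_of(i, j)   # expected std. deviation at this cell
            N = np.exp(-(xs**2 + ys**2) / (2 * s**2)) / (2 * np.pi * s**2)
            # Clip the window at the map border and take the matching
            # slice of the kernel (kernel index = offset + W).
            lo_i, hi_i = max(i - W, 0), min(i + W + 1, H)
            lo_j, hi_j = max(j - W, 0), min(j + W + 1, Wd)
            win = M[lo_i:hi_i, lo_j:hi_j]
            ker = N[lo_i - i + W:hi_i - i + W, lo_j - j + W:hi_j - j + W]
            P[i, j] = np.sum(win * ker)
    return P
```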

4 Landmark selection

Given the probability mapping of the terrain, we can now estimate the uncertainty that will result from pointing the range sensor at some location in the environment and performing localization using the visible terrain. This is performed by treating the corresponding terrain patch in the probability mapping as the local map and using the uncertainty estimation equations described above.

In our implementation, we approximate the general normal distributions used to model the sensor error as rotationally symmetric 2-D normal distributions. While the error due to stereo vision is much greater along the direction parallel to the camera axes, error in the robot's knowledge of its orientation will yield additional errors in the perpendicular direction. Furthermore, our experiments indicate that the precise shape of the distribution does not have a large effect on the landmark selected. The use of rotationally symmetric normal distributions makes the function separable, and we can thus perform the convolutions efficiently by treating the x and y directions sequentially. On the other hand, it is crucial to use a wider and flatter distribution at locations further from the sensor, in order to model the increase in error with distance. We must thus continue to vary the distribution as a function of the location in the space.

We can make the computation even more efficient by discretizing the space of allowable standard deviations and pre-computing the normal distributions corresponding to them. In our implementation, we select ten standard deviations (related by powers of $\sqrt{2}$). For each position in the space, we select the distribution whose standard deviation is closest to the desired value. This approximation allows the probability map to be computed quickly on demand for a region of the terrain map.
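A sketch of this discretization (the base standard deviation, window size, and nearest-match rule are assumed values; only the powers-of-$\sqrt{2}$ spacing and the count of ten come from the paper):

```python
import numpy as np

def make_kernel_bank(sigma_min=0.5, n=10, W=5):
    """Pre-compute 1-D Gaussian kernels for n standard deviations
    related by powers of sqrt(2). Rotational symmetry makes the 2-D
    blur separable into two such 1-D passes."""
    sigmas = sigma_min * np.sqrt(2.0) ** np.arange(n)
    xs = np.arange(-W, W + 1)
    kernels = [np.exp(-xs**2 / (2 * s**2)) for s in sigmas]
    kernels = [k / k.sum() for k in kernels]   # normalize each kernel
    return sigmas, kernels

def nearest_kernel(sigma, sigmas, kernels):
    """Pick the pre-computed kernel whose sigma is closest (in log
    space, since the bank is geometric) to the desired value."""
    idx = int(np.argmin(np.abs(np.log(sigmas) - np.log(sigma))))
    return kernels[idx]
```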

Finally, we use dynamic programming to compute the likelihood function from Section 2.1 for each of the terrain regions considered as a possible landmark for performing localization. This is performed at the optimal localization position and the neighboring locations in the pose space in order to apply the uncertainty estimation techniques from Section 2.2. The terrain landmark yielding the lowest localization uncertainty is selected as the localization target.

5 Results

We have tested these techniques in several experiments using real and synthetic data. An example using a synthetically generated elevation map is shown in Fig. 2. This case models a scenario where the robot is moving in terrain consisting of rocks of various sizes, like the terrain a rover would encounter on the surface of Mars. The positions near large rocks are considered to be good targets, as shown by the uncertainty scores represented in Fig. 2(b).

Figure 2: Landmark selection example. The boxed x's are the (interchangeable) robot beginning and ending positions. The selected target region is marked by a larger box. (a) Digital elevation map. (b) Estimated uncertainties for landmarks centered at each location. (c) Three-dimensional map, with the start, target, and finish positions marked.

The target that is chosen is a position that contains not only a large rock, but also smaller rocks that are useful in performing the localization.

In order to test the localization performance when using the target selection techniques, we simulated localization problems by sampling local maps from the distribution specified by the probability map of the terrain and then performing localization against the global map. Our experiment selected robot positions at random from the terrain. We next performed target selection and, finally, localization using the selected target. In addition, we tested localization using the target at the position directly between the robot starting and ending positions, and eight other targets on an evenly spaced grid around this position.
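The sampling step might be implemented as follows (a sketch of the simulation step under assumed conventions, not the actual test harness):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_local_map(P):
    """Draw a simulated local map from the probability map P: each
    cell is seen as occupied with its predicted probability."""
    return rng.random(P.shape) < P
```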

The results of this experiment are dramatic. When target selection was used in 1000 trials using the terrain shown in Fig. 2, localization found the qualitatively correct position in 97.8% of the trials. However, when target selection was not used, the localization succeeded in only 29.5% of the trials, since much of the terrain provides little useful information for localization. In addition, the successful cases were 15.3% more accurate when target selection was used. This experiment thus demonstrates a case where target selection is not only useful in reducing the localization uncertainty, but also critical in obtaining the correct qualitative position.

6 Sensor uncertainty field

Our approach can be used to generate a sensor uncertainty field [12] for a known terrain map. This field is the expected distribution of error in the sensed robot position as a function of the robot location. While, in general, the uncertainty will depend on the path taken to each position, we consider the uncertainty as a function of only the robot position.

We can, of course, compute the sensor uncertainty field using a brute-force method, where the best landmark is selected for each location of the robot and the resulting expected uncertainty is stored for each. Unfortunately, this process is computationally expensive. Note, however, that the uncertainties change slowly as the robot position that is examined is moved in the pose space. Our strategy is therefore to first sample the pose space at a coarse resolution and then examine locations of interest, such as those that yield low uncertainties, at a finer resolution.
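A sketch of this coarse-to-fine strategy (the coarse spacing and the fraction of coarse samples that get refined are assumed values; `expected_uncertainty` stands in for the target selection procedure of Section 4):

```python
import numpy as np

def uncertainty_field(shape, expected_uncertainty, coarse=8, keep=0.2):
    """Coarse-to-fine evaluation of the sensor uncertainty field.
    expected_uncertainty(pos) runs landmark selection for a robot at
    grid cell pos and returns the predicted localization uncertainty."""
    H, W = shape
    field = np.full((H, W), np.nan)

    # Pass 1: sample the pose space at a coarse resolution.
    for i in range(0, H, coarse):
        for j in range(0, W, coarse):
            field[i, j] = expected_uncertainty((i, j))

    # Pass 2: refine at full resolution only around the most promising
    # (lowest-uncertainty) coarse samples.
    cutoff = np.nanquantile(field[::coarse, ::coarse], keep)
    for i in range(0, H, coarse):
        for j in range(0, W, coarse):
            if field[i, j] <= cutoff:
                for ii in range(i, min(i + coarse, H)):
                    for jj in range(j, min(j + coarse, W)):
                        if np.isnan(field[ii, jj]):
                            field[ii, jj] = expected_uncertainty((ii, jj))
    return field
```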

Figure 3: Sensor uncertainty field generated for the terrain in Fig. 2. Bright values correspond to low uncertainties.

Figure 3 shows an example where a sensor uncertainty field was generated for the terrain shown in Fig. 2. As expected, lower uncertainties occur near large rocks. However, the uncertainty is increased at the location of the rocks themselves, since we use a method where a rock is not useful for localization if the robot is directly on top of it.

7 Summary

We have described a method to select the sensing location for performing mobile robot localization through matching terrain maps. The localization method that we use constructs a likelihood function in the space of possible robot positions. The uncertainty is estimated for localization using a local map by fitting a normal distribution to the generated likelihood function. We select the best landmark for localization by minimizing the expected uncertainty in the robot localization. In order to predict the uncertainty obtained by localization using various landmarks, our method constructs a probabilistic representation of the terrain expected to be sensed at any position in the global map. Treating the patches of this "probability mapping" of the terrain as a local map allows the uncertainty expected by sensing the terrain patch to be estimated using the surface fitting techniques. We have applied this technique to robot localization in rocky terrain with excellent results.

Acknowledgements

The research described in this paper was carried out by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This work is an element of the Long Range Science Rover project, which is developing technology for future Mars missions. It is funded as part of the NASA Space Telerobotics Program, through the JPL Robotics and Mars Exploration Technology Office.

References

[1] X. Deng, E. Milios, and A. Mirzaian. Landmark selection strategies for path execution. Robotics and Autonomous Systems, 17(3):171-185, May 1996.

[2] R. Greiner and R. Isukapalli. Learning to select useful landmarks. IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, 26(3):437-449, June 1996.

[3] G. Z. Grudic and P. D. Lawrence. A nonparametric learning approach to vision based mobile robot localization. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 724-729, 1998.

[4] J. J. Little, J. Lu, and D. R. Murray. Selecting stable image features for robot localization using stereo. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1072-1077, 1998.

[5] L. Matthies and S. A. Shafer. Error modeling in stereo navigation. IEEE Transactions on Robotics and Automation, 3(3):239-248, June 1987.

[6] R. R. Murphy, D. Hershberger, and G. R. Blauvelt. Learning landmark triples by experimentation. Robotics and Autonomous Systems, 22(3-4):377-392, 1997.

[7] C. F. Olson. Subpixel localization and uncertainty estimation using occupancy grids. In Proceedings of the International Conference on Robotics and Automation, volume 3, pages 1987-1992, 1999.

[8] C. F. Olson and L. H. Matthies. Maximum-likelihood rover localization by matching range maps. In Proceedings of the International Conference on Robotics and Automation, volume 1, pages 272-277, 1998.

[9] R. Sim and G. Dudek. Mobile robot localization from learned landmarks. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1060-1065, 1998.

[10] S. Simhon and G. Dudek. Selecting targets for local reference frames. In Proceedings of the IEEE Conference on Robotics and Automation, pages 2840-2845, 1998.

[11] K. T. Sutherland and W. B. Thompson. Localizing in unstructured environments: Dealing with the errors. IEEE Transactions on Robotics and Automation, 10(6):740-754, December 1994.

[12] H. Takeda, C. Facchinetti, and J.-C. Latombe. Planning the motions of a mobile robot in a sensory uncertainty field. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(10):1002-1017, October 1994.

[13] S. Thrun. Bayesian landmark learning for mobile robot localization. Machine Learning, 33(1):41-76, October 1998.

[14] E. Yeh and D. J. Kriegman. Toward selecting and recognizing natural landmarks. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 1, pages 47-53, 1995.
