
Robotic Mapping

6.834 Student Lecture

Itamar Kahn, Thomas Lin, Yuval Mazor

Outline

Introduction (Tom)

Kalman Filtering (Itamar)
J.J. Leonard and H.J.S. Feder. A computationally efficient method for large-scale concurrent mapping and localization. In J. Hollerbach and D. Koditschek, editors, Proceedings of the Ninth International Symposium on Robotics Research, Salt Lake City, Utah, 1999.

Hybrid Mapping Approaches (Yuval)
S. Thrun, W. Burgard, and D. Fox. A real-time algorithm for mobile robot mapping with applications to multi-robot and 3D mapping. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), San Francisco, CA, 2000. IEEE.

Conclusion (Tom)

Vision / Steps

Truly autonomous mobile robots

Sense the environment
Acquire models of the environment
Reason
Act on the environment

State of the Art

20 years of research

Do well on static, structured, limited-size environments

Difficulty with dynamic, unstructured, large-scale environments

Simulated versus Real-life

What is Robotic Mapping?

Acquiring spatial models of physical environments with robots


Paul Newman's mobile robot mapping, MIT

What is Robotic Mapping?

Sensors with different limitations

Cameras, Sonar, Lasers, Radar, Compasses, GPS

Main Challenges

Noise

High Dimensionality

Correspondence Problem

Changing Environments

Robotic Exploration Planning

Challenges - Noise

Measurement errors accumulate over time

Odometry error will accumulate and throw off an entire map [Thrun 2002]

Challenges - High Dimensionality

3-D visual maps can require millions of numbers

Challenges - Correspondence Problem

Do these sensor readings from different times correspond to the same object?

Is the blue object the same one it sensed earlier, or is it a different object that seems to be in the same location because of accumulated sensor noise?

[Thrun 2002]

Challenges - Changing Environments

Moving furniture, moving doors

Even faster: Moving cars, moving people

Hard to distinguish sensor noise from moving items

Challenges - Robotic Exploration Planning

How robots should explore using incomplete maps

Today's Methods

All Probabilistic

Better models uncertainty, sensor noise

Kalman Filtering (Itamar will present), Hybrid Methods (Yuval will present)

EM, Occupancy Grids, Multi-Planar Maps (not presenting)

Decoupled Stochastic Mapping

A Computationally Efficient Method for Large-Scale Concurrent Mapping and Localization

John J. Leonard and Hans Jacob S. Feder, MIT, 2000

Robotic Mapping Problem

• Identify features in the environment
  – E.g., landmarks, distinctive objects or shapes in the environment, etc.

• Estimate the robot location in reference to the features

• Correct for noise (error in estimation) contributed by the sensors and controls

Acquire a spatial model of a robot’s environment

What is DSM?

• SM: Use Extended Kalman Filtering (EKF) to build a map through spatial relationships of features

  – PROBLEM: EKF-based solutions are O(n²), where n is the number of features
    • Results from the number of correlations between the vehicle and the features

  – SOLUTION: Break the environment into submaps and apply SM only within each submap

Feature based approach to Concurrent Mapping and Localization (CML)

What is a Feature?

• Determine the relevant visual features
  – These may be specific to the environment to be mapped (e.g., walls in a room, obstacles in an underwater environment, etc.)

A map is obtained by defining the visual features' dynamics and observation function


F[4]: the whole wall is a single feature in the map

Overview

• Kalman and Extended Kalman Filters

• Conventional Stochastic Mapping

• Decoupled Stochastic Mapping

• Algorithm Testing

Kalman Filter Mini Tutorial

• The mini tutorial is an adaptation of a tutorial presented at ACM SIGGRAPH 2001 by Greg Welch and Gary Bishop (UNC).

– The slides of the tutorial are available at http://www.cs.unc.edu/~tracker/ref/s2001/kalman/index.html

– More information (papers, software, links, etc.) is available at http://www.cs.unc.edu/~welch/kalman/index.html

Kalman Filter

• KF operates by
  – Predicting the new state and its uncertainty
  – Correcting with the new measurement

• IN: noisy data --> OUT: less noisy data

Kalman Filter Example: 2D Position-Only (e.g., 2D Tablet)

Process Model (state = state transition · previous state + process noise):

$$\begin{bmatrix} x_k \\ y_k \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_{k-1} \\ y_{k-1} \end{bmatrix} + \begin{bmatrix} \tilde{x}_{k-1} \\ \tilde{y}_{k-1} \end{bmatrix}, \qquad x_k = A x_{k-1} + w_{k-1}$$

Measurement Model (measurement = measurement matrix · state + measurement noise):

$$\begin{bmatrix} u_k \\ v_k \end{bmatrix} = \begin{bmatrix} H_x & 0 \\ 0 & H_y \end{bmatrix} \begin{bmatrix} x_k \\ y_k \end{bmatrix} + \begin{bmatrix} \tilde{u}_k \\ \tilde{v}_k \end{bmatrix}, \qquad z_k = H x_k + v_k$$

Kalman Filter Example: Preparation and Initialization

State transition: $A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$

Process noise covariance: $Q = E[w w^T] = \begin{bmatrix} Q_{xx} & 0 \\ 0 & Q_{yy} \end{bmatrix}$

Measurement noise covariance: $R = E[v v^T] = \begin{bmatrix} R_{xx} & 0 \\ 0 & R_{yy} \end{bmatrix}$

Initialization: $\hat{x}_0 = H^{-1} z_0$ (state at $t_0$), $P_0$ = error covariance estimate at $t_0$

Kalman Filter Example: Predict and Correct

Predict:

$$\hat{x}_k^- = A \hat{x}_{k-1} \quad \text{(predict the next state)}$$

$$P_k^- = A P_{k-1} A^T + Q \quad \text{(predicted error covariance)}$$

Correct:

$$K = P_k^- H^T (H P_k^- H^T + R)^{-1} \quad \text{(Kalman gain: minimizes the a posteriori error covariance)}$$

$$\hat{x}_k = \hat{x}_k^- + K (z_k - H \hat{x}_k^-) \quad \text{(correct for the discrepancy between predicted and actual measurement)}$$

$$P_k = (I - K H) P_k^- \quad \text{(corrected state and error covariance)}$$

Kalman Filter

Predict:

(1) Project the state ahead: $\hat{x}_k^- = A \hat{x}_{k-1}$

(2) Project the error covariance ahead: $P_k^- = A P_{k-1} A^T + Q$

Correct:

(1) Compute the Kalman gain: $K = P_k^- H^T (H P_k^- H^T + R)^{-1}$

(2) Update the estimate with measurement $z_k$: $\hat{x}_k = \hat{x}_k^- + K (z_k - H \hat{x}_k^-)$

(3) Update the error covariance: $P_k = (I - K H) P_k^-$
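Below is a minimal runnable sketch of this predict/correct cycle for the 2D position-only example. The noise magnitudes, the unit tablet gains in H, and the simulated measurement stream are illustrative assumptions, not values from the lecture.

```python
# Minimal Kalman filter sketch for the 2D position-only example.
import numpy as np

A = np.eye(2)                      # state transition (static position)
H = np.diag([1.0, 1.0])            # measurement matrix [Hx 0; 0 Hy] (gains assumed = 1)
Q = np.diag([1e-4, 1e-4])          # process noise covariance (assumed)
R = np.diag([1e-2, 1e-2])          # measurement noise covariance (assumed)

x = np.zeros(2)                    # state estimate at t0
P = np.eye(2)                      # error covariance estimate at t0

def kf_step(x, P, z):
    """One Kalman filter iteration: predict, then correct with measurement z."""
    # Predict: project the state and the error covariance ahead.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Correct: Kalman gain, state update, covariance update.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Feed a stream of noisy (x, y) readings through the filter.
rng = np.random.default_rng(0)
true_pos = np.array([1.0, 2.0])
for _ in range(50):
    z = true_pos + rng.normal(scale=0.1, size=2)
    x, P = kf_step(x, P, z)
print("estimate:", x)              # converges toward true_pos
```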

Kalman Filter Example: Extend to 2D Position-Velocity

Process model (state transition · state):

$$A = \begin{bmatrix} 1 & 0 & dt & 0 \\ 0 & 1 & 0 & dt \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad x = \begin{bmatrix} x \\ y \\ dx/dt \\ dy/dt \end{bmatrix}$$

Measurement model (measurement matrix · state):

$$H = \begin{bmatrix} H_x & 0 & 0 & 0 \\ 0 & H_y & 0 & 0 \end{bmatrix}$$
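The position-velocity extension changes only the model matrices; the same predict/correct code from the sketch above applies. A short sketch follows, with an assumed time step dt and unit measurement gains.

```python
# Position-velocity model matrices; only A and H change from the 2D example.
import numpy as np

dt = 0.1                                   # assumed sample period
# State is [x, y, dx/dt, dy/dt]; position integrates velocity over dt.
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
# Only position is observed (Hx = Hy = 1 assumed here for illustration).
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
```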

Kalman Filter

• But, the Kalman filter is not enough!
  – Only matrix operations allowed (only works for linear systems)
  – Measurement is a linear function of state
  – Next state is a linear function of previous state
  – Can't estimate non-linear variables (e.g., gain, rotation, projection, etc.)

Extended Kalman Filter

• Nonlinear Process (Model)
  – Process dynamics: A becomes a(x)
  – Measurement: H becomes h(x)

• Filter Reformulation
  – Use functions instead of matrices
  – Use Jacobians to project forward, and to relate measurement to state (first-order Taylor expansion)

Predict:

(1) Project the state ahead: $\hat{x}_k^- = f(\hat{x}_{k-1}, u_k, 0)$

(2) Project the error covariance ahead: $P_k^- = A_k P_{k-1} A_k^T + W_k Q_{k-1} W_k^T$

Extended Kalman Filter

Correct:

(1) Compute the Kalman gain: $K_k = P_k^- H_k^T (H_k P_k^- H_k^T + V_k R_k V_k^T)^{-1}$

(2) Update the estimate with measurement $z_k$: $\hat{x}_k = \hat{x}_k^- + K_k (z_k - h(\hat{x}_k^-, 0))$

(3) Update the error covariance: $P_k = (I - K_k H_k) P_k^-$

• A is the Jacobian matrix of partial derivatives of f with respect to x
• W is the Jacobian matrix of partial derivatives of f with respect to w
• H is the Jacobian matrix of partial derivatives of h with respect to x
• V is the Jacobian matrix of partial derivatives of h with respect to v
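A rough sketch of one EKF iteration, under the simplifying assumption that the noise Jacobians W and V are identity. The range-and-bearing example at the bottom is purely illustrative and not from the lecture.

```python
# Minimal EKF sketch: A and H are replaced by functions f, h and their Jacobians.
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One EKF iteration for x_k = f(x_{k-1}) + w, z_k = h(x_k) + v."""
    # Predict: propagate the state through f, the covariance through the Jacobian A.
    x_pred = f(x)
    A = F_jac(x)                      # Jacobian of f w.r.t. the state
    P_pred = A @ P @ A.T + Q          # W is taken as identity in this sketch
    # Correct: linearize h around the prediction.
    Hk = H_jac(x_pred)                # Jacobian of h w.r.t. the state
    K = P_pred @ Hk.T @ np.linalg.inv(Hk @ P_pred @ Hk.T + R)  # V taken as identity
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new

# Toy nonlinear example: range-and-bearing measurement of a static 2D position.
f = lambda x: x                                        # static state dynamics
F_jac = lambda x: np.eye(2)
h = lambda x: np.array([np.hypot(x[0], x[1]),          # range
                        np.arctan2(x[1], x[0])])       # bearing
def H_jac(x):
    r = np.hypot(x[0], x[1])
    return np.array([[x[0] / r,      x[1] / r],
                     [-x[1] / r**2,  x[0] / r**2]])

x, P = np.array([1.0, 1.0]), np.eye(2)
Q, R = 1e-4 * np.eye(2), np.diag([0.05, 0.01])
z = np.array([np.sqrt(2.0), np.pi / 4])                # one (noise-free) reading
x, P = ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R)
```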

Stochastic Mapping

• Size-varying Kalman filter

• Add to and update the representation

• Build a map through spatial relationships

Use SM to generate maps (solve CML)

Stochastic Mapping

• Estimated locations of the robot and of the features in the map

• Estimated error covariance

$$x(k) = \begin{bmatrix} x_r(k)^T & x_f(k)^T \end{bmatrix}^T \quad \text{where } x_r = \begin{bmatrix} x_r & y_r & v \end{bmatrix}^T$$

and $x_f(k)^T = \begin{bmatrix} x_1(k)^T & \ldots & x_N(k)^T \end{bmatrix}$, such that each feature $x_i = \begin{bmatrix} x_i & y_i \end{bmatrix}^T$

$$P(k) = \begin{bmatrix} P_{rr}(k) & P_{rf}(k) \\ P_{fr}(k) & P_{ff}(k) \end{bmatrix}$$
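To make the state layout concrete, here is a small sketch of the augmented state vector and covariance for one vehicle plus N point features. The initialization values are placeholders, and the feature-augmentation step omits the vehicle-feature cross terms that a full SM implementation would carry.

```python
# Sketch of the stochastic-mapping state layout: vehicle + N features
# stacked into one vector, with a full covariance over all of them.
import numpy as np

x_r = np.array([0.0, 0.0, 0.0])          # vehicle state (placeholder values)
features = [np.array([2.0, 1.0]),        # x_1 = [x_1, y_1]
            np.array([5.0, -3.0])]       # x_2 = [x_2, y_2]

# x(k) = [x_r(k)^T, x_1(k)^T, ..., x_N(k)^T]^T
x = np.concatenate([x_r] + features)

# P(k) holds P_rr, P_ff and the vehicle-feature cross terms P_rf, P_fr.
n = len(x)
P = 0.1 * np.eye(n)                      # placeholder initialization

def add_feature(x, P, new_feat, init_cov=1.0):
    """Augment the state and covariance when a new feature is observed.
    (Cross terms with the existing state are left at zero in this sketch.)"""
    x_aug = np.concatenate([x, new_feat])
    P_aug = np.zeros((len(x_aug), len(x_aug)))
    P_aug[:len(x), :len(x)] = P
    P_aug[len(x):, len(x):] = init_cov * np.eye(len(new_feat))
    return x_aug, P_aug

x, P = add_feature(x, P, np.array([7.0, 2.0]))
print(x.shape, P.shape)                  # (9,) (9, 9) -> O(n^2) storage and updates
```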

Stochastic Mapping

• The dynamic model of the robot is given by

$$x(k+1) = f(x(k), u(k)) + d_x(k)$$

where $u(k)$ is the control input

• The observation model for the system is given by

$$z(k) = h(x(k)) + d_z(k)$$

Augmented Stochastic Mapping

• Given these assumptions, an extended Kalman filter (EKF) is employed to estimate the state $\hat{x}$ and covariance $P$.

Decoupled Stochastic Mapping

• Stochastic Mapping: complexity O(n²)

• Solution: DSM
  – Divide the environment into multiple submaps
  – Each submap has a vehicle position estimate and a set of feature estimates (see the sketch below)
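As a rough illustration of the submap idea (not the paper's actual partitioning scheme), the sketch below assigns features to square grid-cell submaps, each holding its own small state and covariance, so that every update touches only one submap.

```python
# Rough DSM-style sketch under an assumed regular-grid division of the world.
import numpy as np

SUBMAP_SIZE = 10.0   # assumed side length of a square submap region

class Submap:
    def __init__(self):
        self.x = np.zeros(3)          # local vehicle estimate
        self.P_rr = np.eye(3)         # local vehicle covariance
        self.features = []            # feature estimates owned by this submap

submaps = {}

def submap_key(position):
    """Map a world position to the grid cell (submap) covering it."""
    return (int(position[0] // SUBMAP_SIZE), int(position[1] // SUBMAP_SIZE))

def submap_for(position):
    key = submap_key(position)
    if key not in submaps:
        submaps[key] = Submap()
    return submaps[key]

# Updates only touch the current submap, so cost scales with the number of
# features per submap rather than with the total map size.
current = submap_for(np.array([12.5, 3.0]))
current.features.append(np.array([14.0, 4.0]))
```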

Decoupled Stochastic Mapping

Dependencies are local

[Figure: map of landmarks and its inverse covariance matrix; the map is divided into 4 submaps]

DSM: Divide the map into smaller submaps

How do we move from map to map?

Cross-map relocation (between submaps A and B)

Cross-map updating (between submaps A and B)

Single-pass vs. Multi-pass DSM

Decoupled Stochastic Mapping

• Vehicle travels to a previously visited area: cross-map relocation. The vehicle estimate from the current submap A is combined with the stored feature estimates of submap B:

$$\hat{x}^B(k) = \begin{bmatrix} \hat{x}_r^A(k) \\ \hat{x}_f^B(j) \end{bmatrix}, \qquad P^B(k) = \begin{bmatrix} P_{rr}^A(k) + P_{rr}^B(j) & P_{rf}^B(j) \\ P_{fr}^B(j) & P_{ff}^B(j) \end{bmatrix}$$

Decoupled Stochastic Mapping

• Facilitate spatial convergence by bringing more accurate vehicle estimates from lower to higher maps: cross-map updating

Using an EKF, estimate the vehicle location in submap B: the vehicle state $\hat{x}_r^A(k)$ and covariance $P_{rr}^A(k)$ from submap A serve as the measurement $z$, and submap B's stored state and covariance serve as the prediction for the state in B.

Methods Comparison

Full covariance ASM

Single-pass DSM

Multi-pass DSM

Testing


Limitations

• Sensor noise modeled by Gaussian process

• Limited map dimensionality

Hybrid Approaches

A Real-Time Algorithm for Mobile Robot Mapping with Applications to Multi-Robot and 3D Mapping

Sebastian Thrun, Carnegie Mellon University
Wolfram Burgard, University of Freiburg
Dieter Fox, Carnegie Mellon University

Overview

• Concurrent mapping and localization using 2D laser range finders

• Mapping: Fast scan-matching

• Localization: Sample-based probabilities

• Motivation: 3D-Maps and large cyclic environments

Benefits

• Computation is all real-time

• Builds 3D maps

• Handles cycles in a map

• Accurate map generation in the absence of odometric data

Mapping Basics

• A map is a collection of sensor scans, o, and robot positions (poses), s

• For every time t, a new data scan and pose are added to the map:

$$m_t = \{ \langle o_\tau, \hat{s}_\tau \rangle \mid \tau = 0, 1, \ldots, t \}$$
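A tiny sketch of this representation: the map is just a growing list of (scan, pose) pairs, with one pair appended per time step.

```python
# Map as a growing collection of <scan, pose> pairs.
from typing import List, Tuple
import numpy as np

Scan = np.ndarray          # e.g. an array of range readings
Pose = np.ndarray          # e.g. [x, y, theta]

map_mt: List[Tuple[Scan, Pose]] = []

def add_to_map(scan: Scan, pose: Pose) -> None:
    """m_t = m_{t-1} U {<o_t, s_t>}"""
    map_mt.append((scan, pose))
```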

Map Likelihood

$$P(m \mid d^t) \propto P(m) \int \cdots \int \prod_{\tau=0}^{t} P(o_\tau \mid m, s_\tau) \prod_{\tau=1}^{t} P(s_\tau \mid a_{\tau-1}, s_{\tau-1}) \; ds_1 \cdots ds_t$$

where $d^t = \{ o_0, a_0, o_1, a_1, \ldots, o_t \}$ is the data (scans and controls)

• The most likely map: $\operatorname{argmax}_m P(m \mid d^t)$
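For intuition, the sketch below evaluates the log of the integrand for one fixed pose sequence (the full posterior would integrate over all pose sequences). The measurement and motion log-likelihood functions are assumed placeholders supplied by the caller.

```python
# Log map likelihood for a single fixed pose sequence (no integration over poses).
def map_log_likelihood(m, scans, poses, actions,
                       measurement_loglik, motion_loglik, log_prior=0.0):
    """log P(m) + sum_t log P(o_t | m, s_t) + sum_t log P(s_t | a_{t-1}, s_{t-1})."""
    total = log_prior
    for o_t, s_t in zip(scans, poses):
        total += measurement_loglik(o_t, m, s_t)                        # perception model
    for t in range(1, len(poses)):
        total += motion_loglik(poses[t], actions[t - 1], poses[t - 1])  # motion model
    return total
```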

Mapping

Posterior over the pose s after executing action a from s’:

• The PDF has an elliptical/banana shape

PDF Intuition

• If a scan shows free space it is unlikely that future scans will show obstacles in that space

• Darker regions indicate lower probability of an obstacle

Maximizing Map Likelihood

• Goal: Find the most-likely map given all the data the robot has seen

• Infeasible to maximize in real-time
• Two possibilities:

– Have the robot stop and calculate after every scan (not real-time)

– Assume map is correct, add new data (large error growth)

Background Methods

• Incremental Localization

• Expectation Maximization

Incremental Localization (IL)

• Assume previous map and localizations are accurate

• Append new sensor scans to the old map
• Localize based on the updated map
• Can be done in real-time
• Fails in cyclic environments as error grows unbounded

Incremental Localization

• IL never corrects old errors based on new information

• Errors can grow unbounded

• While traversing a cycle in a map, error growth leads the robot to “get lost” and the map breaks down

Expectation Maximization (EM)

• Store scans and pose data probabilistically

• Search through all possible previous maps (from times 0-t) and find the most likely maps

• After each scan, or a set number of scans, recalculate

Expectation Maximization

• Can handle cyclic environments

• Batch algorithms - not real-time

Goal

• Combine IL and EM in a real-time algorithm that can handle maps with cycles

• Use posterior estimation like in EM

• Incremental map construction with maximum likelihood estimators as in IL

Conventional Incremental Map

• Given a scan and odometry reading, determine the most likely pose.

• Use that pose to increment the map. Never go back to change it.

$$\hat{s}_t = \operatorname{argmax}_{s_t} P(s_t \mid o_t, a_{t-1}, \hat{s}_{t-1})$$

$$m_{t+1} = m_t \cup \{ \langle o_t, \hat{s}_t \rangle \}$$
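A hedged sketch of this conventional incremental scheme: sample candidate poses around the odometry guess, keep the single best-scoring one, and freeze it into the map. The scan-matching score function and the candidate-sampling scheme are placeholders, not the paper's method.

```python
# Conventional incremental mapping: one-shot argmax over candidate poses.
import numpy as np

def incremental_update(map_mt, scan, odometry, prev_pose, score_pose,
                       n_candidates=200, search_radius=0.5):
    """Return (new_map, pose); the chosen pose is never revised later."""
    rng = np.random.default_rng()
    # Naive pose composition for the dead-reckoned guess (poses as [x, y, theta]);
    # a real implementation would compose poses with the proper rotation.
    guess = prev_pose + odometry
    # Sample candidate poses around the guess and keep the best-scoring one.
    candidates = guess + rng.uniform(-search_radius, search_radius,
                                     size=(n_candidates, len(prev_pose)))
    best = max(candidates, key=lambda s: score_pose(scan, s, map_mt))
    return map_mt + [(scan, best)], best
```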

Conventional Incremental Map

• This approach works in non-cyclic environments

• Pose errors necessarily grow

• Past poses cannot be revised

• Search algorithms cannot find solutions to close loops

Incremental Map Problem

Posterior Incremental Mapping

• Basic premise: Use Markov localization to compute the full posterior over robot poses

• Probability distribution over poses based on sensor data:

$$Bel(s_t) = P(s_t \mid d^t, m_{t-1})$$

Posterior Incremental Mapping

• Posterior is where the robot believes it is.

• Can be incrementally updated over time

• Updated pose and map:

$$Bel(s_t) \propto P(o_t \mid s_t, m_{t-1}) \int P(s_t \mid a_{t-1}, s_{t-1}) \, Bel(s_{t-1}) \, ds_{t-1}$$

$$\hat{s}_t = \operatorname{argmax}_{s_t} Bel(s_t), \qquad m_{t+1} = m_t \cup \{ \langle o_t, \hat{s}_t \rangle \}$$
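A sample-based sketch of this update, in the spirit of the particle implementation described below: each sample is pushed through an assumed motion model, reweighted by an assumed scan-likelihood against the current map, and the best-weighted sample becomes the pose that is added to the map.

```python
# Sample-based (particle) version of the Bel(s_t) update.
import numpy as np

def posterior_step(particles, scan, action, map_mt,
                   sample_motion, scan_likelihood):
    rng = np.random.default_rng()
    # Prediction: draw s_t ~ P(s_t | a_{t-1}, s_{t-1}) for every sample.
    moved = np.array([sample_motion(p, action) for p in particles])
    # Correction: weight each sample by P(o_t | s_t, m_{t-1}).
    weights = np.array([scan_likelihood(scan, s, map_mt) for s in moved])
    weights /= weights.sum()
    # Resample to represent Bel(s_t); keep the most likely pose for the map.
    idx = rng.choice(len(moved), size=len(moved), p=weights)
    s_hat = moved[np.argmax(weights)]
    return moved[idx], s_hat, map_mt + [(scan, s_hat)]
```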

Posterior Incremental Mapping

• Use the posterior belief to determine the most likely pose

• Uncertainty grows during a loop

• The robot has a larger window to search to close the loop

Implementation Details

• Take samples of posterior beliefs

• Saves computation and is easier to generalize

• Use gradient descent on each sample to find the pose that maximizes the likelihood function

Backwards Correction

• When a loop closes successfully, we can go back and correct our pose estimates

• Distribute the error Δs_t among all poses in the loop (see the sketch below)

• Use gradient descent for all poses in the loop to maximize likelihood

$$\Delta s_t = s_t - \hat{s}_t$$
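A minimal sketch of the error-distribution step, assuming the simplest possible scheme (linear interpolation of Δs_t along the loop); the subsequent per-pose gradient descent from the paper is not shown.

```python
# Spread the loop-closure error over all poses in the loop (assumes >= 2 poses).
import numpy as np

def distribute_loop_error(loop_poses, delta_s_t):
    """Shift pose i by (i / (n-1)) * delta_s_t along the loop."""
    n = len(loop_poses)
    corrected = []
    for i, pose in enumerate(loop_poses):
        corrected.append(pose + (i / (n - 1)) * delta_s_t)
    return corrected
```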

Handling a Cycle

Multi-Robot Extensions

• Using posterior estimation extends naturally to environments with multiple robots

• Each robot need not know any other robot’s initial pose

• BUT every robot must localize itself within the map of an initial Team Leader robot

Multi-Robot Extensions

• Use Monte Carlo Localization

• Initially any location is likely

• Posterior estimation localizes the robot in the Team Leader’s map
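A small sketch of the start-up step: particles are spread uniformly over the Team Leader's map, since initially any location is equally likely. The map extent here is an assumed placeholder; the particles would then be updated with the same sample-based posterior step sketched earlier.

```python
# Monte Carlo Localization start-up: uniform particles over the leader's map.
import numpy as np

def init_particles(n, x_range=(0.0, 50.0), y_range=(0.0, 50.0)):
    """Uniform particles (x, y, theta) over an assumed map extent."""
    rng = np.random.default_rng()
    x = rng.uniform(*x_range, n)
    y = rng.uniform(*y_range, n)
    theta = rng.uniform(-np.pi, np.pi, n)
    return np.stack([x, y, theta], axis=1)

particles = init_particles(1000)   # later refined exactly as in posterior_step
```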

Results - Cycle Mapping

• Ground rules:
  – Every scan used for localization
  – Scans appended to the map every two meters

• Random odometric errors (30˚ or 1 meter)

• Odometric error grows large during the cycle, but stays within an acceptable range of the "true" pose

• Posterior estimation finds the true pose and corrects prior beliefs

Mapping without Odometry

• Same as before but with no odometric data

• Traversing the cycles leads to very large error growth

• Once again, on cycle completion the errors are found and fixed

• Final map is virtually identical to map generated with odometric data

Limitations

• Non-optimal

• Nested cycles

• Dynamic environments

• Changing the map backwards in time can be dangerous

• Pseudo-Real Time

Brief Comparison

                     Kalman Filtering      Hybrid Methods
Representation       landmark locations    point obstacles
Sensor Noise         Gaussian              any
Map Dimensionality   limited               unlimited
Dynamic Env's        limited               no

Scenario 1 - Infinite Corridor at Night

Which algorithm is better for a robot mapping the Infinite Corridor late at night, when one janitor is walking around?

Vote: Kalman Filtering / Hybrid Approaches / Don't Know

Scenario 1 - Infinite Corridor at Night

Changing environment problem

Kalman - good! (Itamar will explain)
  The Infinite Corridor has few features
  Can handle the janitor (limited dynamics)

Hybrid - bad! (Yuval will explain)
  Can't handle dynamic environments

Scenario 2 - Airport Parking Lot

Which algorithm is better for a robot mapping an airport parking lot with hundreds of cars but no people?

Vote: Kalman Filtering / Hybrid Approaches / Don't Know

Scenario 2 - Airport Parking Lot

High dimensionality problem

Kalman - bad! (Itamar will explain)
  Only handles limited map dimensionality

Hybrid - good! (Yuval will explain)
  Nothing is moving
  Handles unlimited map dimensionality

Scenario 3 - Amusement Park

Which algorithm is better for a robot mapping a busy amusement park during Christmas?

Vote: Kalman Filtering / Hybrid Approaches / Don't Know

Scenario 3 - Amusement Park

Both fail

Kalman - bad! (Itamar will explain)
  Only handles limited dynamics

Hybrid - bad! (Yuval will explain)
  Can't handle such a dynamic environment

Almost no algorithms learn meaningful maps in such a dynamic environment

Recap

The Mapping Problem

Main Challenges

Kalman Filtering

Hybrid Methods

Comparison

Contributions

Provided overview of robotic mapping

Presented Kalman Filtering in depth

Presented Hybrid Methods in depth