Page 1: COMP 417 – Jan 12th, 2006

COMP 417 – Jan 12th, 2006

Guest Lecturer: David Meger
Topic: Camera Networks for Robot Localization

Page 2: COMP 417 – Jan 12th, 2006

Introduction

Who am I?
Overview: Camera Networks for Robot Localization
What / Where / Why / How (technical stuff)

Page 3: COMP 417 – Jan 12th, 2006

Introduction - Hardware

Page 4: COMP 417 – Jan 12th, 2006

Intro - What

Previously: Localization is a key task for a robot. It’s typically achieved using the robot’s sensors and a map.

Can “the environment” help with this?

Page 5: COMP 417 – Jan 12th, 2006

Typical Robot Localization

Page 6: COMP 417 – Jan 12th, 2006

Sensor Networks

Page 7: COMP 417 – Jan 12th, 2006

Sensor Networks

Page 8: COMP 417 – Jan 12th, 2006

Intro - Where

In cases where there is sensing already in the environment, we can invert the direction of sensing.

Where is this true?
Buildings with security systems
Public transportation areas (metro)
More and more large cities (scary but true)

Page 9: COMP 417 – Jan 12th, 2006

Intro – Why

Advantages:
In many cases the sensors already exist
Many robots operating in the same place can all share the same sensors
Computation can be done at a powerful central computer, saving robot computation
Interesting research problem

Page 10: COMP 417 – Jan 12th, 2006

Intro – How

As the robot appears in images, we can use 3-D vision techniques to determine its position relative to the cameras

What do we need to know about the cameras to make this work?
Can we assume we know where the cameras are?
Can we assume we know the camera properties?

Page 11: COMP 417 – Jan 12th, 2006

Problem

Can we use images from arbitrary cameras placed in unknown positions in the environment to help a robot navigate?

Page 12: COMP 417 – Jan 12th, 2006

Proposed Method

1. Detect the robot
2. Measure the relative positions
3. Place the camera in the map
4. Move robot to the next camera
5. Repeat

Page 13: COMP 417 – Jan 12th, 2006

Detection – An algorithm to detect these robots?

Page 14: COMP 417 – Jan 12th, 2006

Detection (cont’d)

Computer Vision techniques attempt detection of (moving) objects:
Background subtraction or image differencing
Image templates
Color matching
Feature matching

A robust algorithm for arbitrary robots is likely beyond current methods
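As a concrete illustration of the simplest technique above (image differencing), here is a small OpenCV sketch; the camera index, threshold, and minimum blob area are arbitrary illustrative values, and this is not the detector used in the system described in the lecture.

```python
import cv2

# Minimal image-differencing sketch. Camera index, threshold and blob-size
# values are illustrative only.
cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

for _ in range(300):                                    # a few seconds of video
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)                 # pixels that changed since the last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)         # merge nearby changed pixels into blobs
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving = [c for c in contours if cv2.contourArea(c) > 500]
    # 'moving' holds candidate moving objects -- possibly the robot, possibly anything
    # else -- which is exactly why this alone is not robust for arbitrary robots.
    prev_gray = gray

cap.release()
```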

Page 15: COMP 417 – Jan 12th, 2006

Detection – Our Method

Page 16: COMP 417 – Jan 12th, 2006

ARTag Markers
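The ARTag detection library itself is not shown in this transcript; as a rough stand-in, the sketch below detects the very similar ArUco fiducial markers using OpenCV's contrib aruco module (4.7+ API). The marker dictionary and image file name are assumptions for illustration.

```python
import cv2

# Detect ArUco fiducials as a stand-in for ARTag markers
# (requires opencv-contrib-python >= 4.7).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

image = cv2.imread("frame_from_network_camera.png")   # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
corners, ids, _rejected = detector.detectMarkers(gray)
# 'ids' identifies each marker seen and 'corners' gives its four image corners with
# sub-pixel accuracy -- the kind of reliable, uniquely-identified image points the
# robot-detection step needs.
```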

Page 17: COMP 417 – Jan 12th, 2006

Proposed Method

1. Detect the robot
2. Measure the relative positions
3. Place the camera in the map
4. Move robot to the next camera
5. Repeat

Page 18: COMP 417 – Jan 12th, 2006

Position Measurement

Question: Can we determine the 3-D position of an object relative to the camera from examining 2-D images?

Hint: start from the introduction to Computer Vision from last time

Page 19: COMP 417 – Jan 12th, 2006

Pinhole Camera Model
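The slide image is not reproduced in this transcript; for reference, the standard pinhole projection it illustrates maps a point (X, Y, Z) in camera coordinates to image coordinates (x, y) through the focal length f:

```latex
\[
  x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}
\]
```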

Page 20: COMP 417 – Jan 12th, 2006

Camera Calibration

An image depends on BOTH scene geometry and camera properties

For example, zooming in and out and moving the object closer and farther have essentially the same effect

Calibration means determining relevant camera properties (e.g. focal length f)

Page 21: COMP 417 – Jan 12th, 2006

Projective Calibration Equations
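The equation slide itself is not in the transcript; the standard form it refers to (and which the next slides call A and T) relates a homogeneous image point to a homogeneous world point through the intrinsic matrix A and the extrinsic transformation T = [R | t], up to a scale factor s:

```latex
\[
  s\,\tilde{m} = A\,T\,\tilde{M},
  \qquad
  A = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix},
  \qquad
  T = \begin{pmatrix} R & t \end{pmatrix}
\]
```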

Page 22: COMP 417 – Jan 12th, 2006

Coordinate Transformation

Page 23: COMP 417 – Jan 12th, 2006

Calibration Equations

The matrix AT is 3×4 and fully describes the geometry of image formation

Given known object points M and the corresponding image points m, it is possible to solve for both A and T

How many points are needed?
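As a sketch of how that solve can be set up: the product P = AT has 12 entries (11 up to scale), and each correspondence contributes two linear equations, so at least 6 points in general position suffice. The direct linear transform (DLT) below is one standard way to do it; it is illustrative, not necessarily the exact algorithm used in the lecture's system.

```python
import numpy as np

def dlt_projection_matrix(M, m):
    """Estimate the 3x4 projection matrix P = A T from n >= 6 point correspondences.

    M: (n, 3) array of known 3-D object points.
    m: (n, 2) array of the corresponding 2-D image points.
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(M, m):
        # Each correspondence gives two linear equations in the 12 entries of P.
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    design = np.asarray(rows, dtype=float)
    # The solution (up to scale) is the right singular vector associated with the
    # smallest singular value of the design matrix.
    _, _, Vt = np.linalg.svd(design)
    return Vt[-1].reshape(3, 4)
```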

Page 24: COMP 417 – Jan 12th, 2006

Calibration Targets

Page 25: COMP 417 – Jan 12th, 2006

3-Plane ARTag Target

Page 26: COMP 417 – Jan 12th, 2006

Position Measurement Conclusion

With enough image points whose 3-D locations are known, measurement of the coordinate transformation T is possible

The process is more complicated than traditional sensing, but luckily, we only need to do it once per camera
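Once the intrinsics are known, the per-view transformation T can be recovered from a single image of the target. A minimal OpenCV sketch; the numbers are made-up placeholders standing in for real marker detections and calibration results.

```python
import cv2
import numpy as np

# Illustrative placeholder data: four known target corners (metres, target frame)
# and where they were detected in the image (pixels). Real values would come from
# the marker detector and the one-time calibration.
object_points = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.2, 0], [0, 0.2, 0]], dtype=np.float64)
image_points = np.array([[320, 240], [420, 242], [418, 340], [322, 338]], dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)  # intrinsics A
dist = np.zeros(5)                                                         # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
# (R, tvec) is the rigid transform T taking target/robot coordinates into camera
# coordinates -- the relative position measurement used when placing a camera in the map.
```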

Page 27: COMP 417 – Jan 12th, 2006

Proposed Method

1. Detect the robot
2. Measure the relative positions
3. Place the camera in the map
4. Move robot to the next camera
5. Repeat

Page 28: COMP 417 – Jan 12th, 2006

Mapping Camera Locations

Given the robot’s position, a measurement of the relative position of the camera allows us to place it in our map

Question: What affects the accuracy of this type of relative measurement?
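Concretely, placing a camera means composing the robot's current map pose with the camera-relative measurement, so any error in the robot's pose passes straight through into the camera's mapped position. A minimal planar (SE(2)) sketch with illustrative numbers:

```python
import math

def compose(pose, delta):
    """Compose a planar pose (x, y, theta) with a relative measurement in its own frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

robot_in_map = (2.0, 1.0, math.pi / 2)     # where the map says the robot is
camera_in_robot = (0.5, -0.3, math.pi)     # relative measurement from the image
camera_in_map = compose(robot_in_map, camera_in_robot)
# Any error in robot_in_map transfers directly into camera_in_map, which is why the
# accuracy of this step depends on how well the robot is localized at measurement time.
```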

Page 29: COMP 417 – Jan 12th, 2006

Proposed Method

1. Detect the robot
2. Measure the relative positions
3. Place the camera in the map
4. Move robot to the next camera
5. Repeat

Page 30: COMP 417 – Jan 12th, 2006

Robot Motion

A robot moves by using electric motors to turn its wheels. There are numerous strategies for each of the important aspects:
Physical design
Control algorithms
Programming interface
High-level software architecture

Page 31: COMP 417 – Jan 12th, 2006

Nomad Scout

Page 32: COMP 417 – Jan 12th, 2006

Differential Drive Kinematics
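The kinematics slide itself is not reproduced here; the standard differential-drive dead-reckoning update it describes can be sketched as follows (wheel travel in metres, wheelbase an assumed value):

```python
import math

def diff_drive_update(pose, d_left, d_right, wheelbase):
    """Dead-reckon a new (x, y, theta) pose from the travel of each wheel."""
    x, y, th = pose
    ds = (d_right + d_left) / 2.0          # distance travelled by the robot's centre
    dth = (d_right - d_left) / wheelbase   # change in heading
    # Integrate along the arc, using the mid-point heading as a first-order approximation.
    return (x + ds * math.cos(th + dth / 2.0),
            y + ds * math.sin(th + dth / 2.0),
            th + dth)

pose = (0.0, 0.0, 0.0)
pose = diff_drive_update(pose, d_left=0.10, d_right=0.12, wheelbase=0.35)
```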

Page 33: COMP 417 – Jan 12th, 2006

Odometry Position Readings

Page 34: COMP 417 – Jan 12th, 2006

Robot Motion - Specifics

Robot control is accomplished using an in-house application, Robodaemon

Allows “point and shoot” motion, not continuous control

Graphical and programmatic interfaces to query robot odometry, send motion commands, and collect sensor data

Page 35: COMP 417 – Jan 12th, 2006

Proposed Method

1. Detect the robot
2. Measure the relative positions
3. Place the camera in the map
4. Move robot to the next camera
5. Repeat

Are we done?

Page 36: COMP 417 – Jan 12th, 2006

Challenges

In general, it's impossible to know the robot or camera positions exactly. All measurements have error

What should the robot do if the cameras can’t see the whole environment?

I didn’t say anything about how the robot should decide where to go next

More?

Page 37: COMP 417 – Jan 12th, 2006

Mapping with Uncertainty

Given exact knowledge of the robot’s position, mapping is possible

Given a pre-built map, localization is possible

What if neither is present? Is it realistic to assume they will be? If so, when?

Page 38: COMP 417 – Jan 12th, 2006

Uncertainty in Robot Position

In general, kinematics equations do not exactly predict robot locations

Sources of error:
Wheel slippage
Encoder quantization
Manufacturing artifacts
Uneven terrain
Rough/slippery/wet terrain
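To make the effect concrete, the short simulation below corrupts each wheel's measured travel with a little zero-mean noise (standing in for slippage and encoder quantization) and dead-reckons a straight-line run; the noise level and geometry are illustrative assumptions, not measured values for any particular robot.

```python
import math
import random

def dead_reckon(pose, d_left, d_right, wheelbase=0.35):
    """Standard differential-drive dead-reckoning update."""
    x, y, th = pose
    ds = (d_right + d_left) / 2.0
    dth = (d_right - d_left) / wheelbase
    return (x + ds * math.cos(th + dth / 2.0),
            y + ds * math.sin(th + dth / 2.0),
            th + dth)

random.seed(0)
true_pose = (0.0, 0.0, 0.0)
odom_pose = (0.0, 0.0, 0.0)
for _ in range(200):                                 # 200 forward steps of 5 cm each (10 m total)
    true_pose = dead_reckon(true_pose, 0.05, 0.05)
    noisy_l = 0.05 + random.gauss(0.0, 0.002)        # per-wheel slip/quantization noise
    noisy_r = 0.05 + random.gauss(0.0, 0.002)
    odom_pose = dead_reckon(odom_pose, noisy_l, noisy_r)

drift = math.hypot(odom_pose[0] - true_pose[0], odom_pose[1] - true_pose[1])
print(f"position drift after 10 m of travel: {drift:.2f} m")
```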

Page 39: COMP 417 – Jan 12th, 2006

Typical Odometry Error

Page 40: COMP 417 – Jan 12th, 2006

Simultaneous Localization and Mapping (SLAM)

When both the robot and map features are uncertain, both must be estimated

Progress can be made by viewing measurements as probability densities instead of precise quantities
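One common way to write down "measurements as probability densities" is the recursive Bayes filter over the joint state x_t (robot pose plus camera/map positions), with controls u_t and measurements z_t; most SLAM algorithms are approximations of this recursion:

```latex
\[
  p(x_t \mid z_{1:t}, u_{1:t}) \;\propto\;
  p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, u_t)\,
  p(x_{t-1} \mid z_{1:t-1}, u_{1:t-1})\, \mathrm{d}x_{t-1}
\]
```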

Page 41: COMP 417 – Jan 12th, 2006

SLAM Progress

Page 42: COMP 417 – Jan 12th, 2006

SLAM (cont’d)

A large portion of the work in robotics over the last 5-10 years has involved localization and SLAM; results are now very good indoors with good sensing

These methods apply to our system

More on this later in the course, or after class today if you’re interested

Page 43: COMP 417 – Jan 12th, 2006

Motion Planning

The mapping framework described is dependent on the robot's motion:
The robot must pass in front of a camera in order to collect any images
Numerous points are needed for each camera to perform calibration
SLAM accuracy is affected by the order of camera visitation

Page 44: COMP 417 – Jan 12th, 2006

Local and Global Planning

Local: how should the robot move while in front of one camera, to collect the set of calibration images?

Global: in which order should the cameras be visited?

Page 45: COMP 417 – Jan 12th, 2006

Local Planning

Modern calibration algorithms are quite good at estimating from noisy data, but there are some geometric considerations:
Field of view
Detection accuracy
Singularities in the calibration equations

Page 46: COMP 417 – Jan 12th, 2006

Local Planning

We must avoid configurations where all of the collected points lie in a linear sub-space of R³

For example, a set of images of a single plane moved only through translation gives all co-planar points
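A quick way to check for this degenerate case during data collection is to test whether the 3-D points gathered so far actually span all of R³: after centering, their coordinate matrix should have rank 3. A small numpy sketch with illustrative points:

```python
import numpy as np

def spans_3d(points, tol=1e-6):
    """Return True if the 3-D points do not all lie in a plane (or a line)."""
    P = np.asarray(points, dtype=float)
    centered = P - P.mean(axis=0)
    return np.linalg.matrix_rank(centered, tol=tol) == 3

# Coplanar example (all points on the z = 0 plane): rank is only 2.
plane = [(x, y, 0.0) for x in range(3) for y in range(3)]
print(spans_3d(plane))                      # False -> degenerate for calibration
print(spans_3d(plane + [(0.0, 0.0, 1.0)]))  # True  -> points leave the plane
```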

Page 47: COMP 417 – Jan 12th, 2006

Projective Calibration Equations

Page 48: COMP 417 – Jan 12th, 2006

Global Planning

Camera positions are estimated by relative measurements from the robot

This information is only as accurate as our knowledge about the robot

“Re-localizing” is our only way to reduce error

Page 49: COMP 417 – Jan 12th, 2006

Distance / Accuracy Tradeoff

Returning to well-known cameras helps our position estimates but causes the robot to travel farther than necessary

An intelligent strategy is needed to manage this tradeoff

Some partial results exist so far; this is work in progress

Page 50: COMP 417 – Jan 12th, 2006

Review

Using sensors in the environment, we can localize a robot

In order to use previously uncalibrated and unmapped cameras, a robot can carry out exploration and SLAM

This needs to be done only once, and afterwards accurate localization is possible

Page 51: COMP 417 – Jan 12th, 2006

Future Work

Better global motion planning strategies

Integrate other sensing (especially if the cameras have blind spots)

Lose the targets?

Other types of ubiquitous sensing (wireless, motion detection, etc.)

