Collaborative Visual SLAM Framework for a Multi-Robot System
Nived Chebrolu, David Marquez-Gamez and Philippe Martinet
7th Workshop on Planning, Perception and Navigation for Intelligent Vehicles
Hamburg, Germany
28th September, 2015
1 / 27
Motivation for a collaborative system
Multi-robot system for disaster relief operations¹
¹ Picture taken from project SENEKA, Fraunhofer IOSB
Contribution of this paper
System for Collaborative Visual SLAM
Perception sensor
Monocular Camera
Main components of the system
Monocular Visual SLAM
Place Recognition System
Merging Maps
Collaborative SLAM Framework
Monocular visual SLAM
Goal: Given a sequence of images, obtain the trajectory of the camera and the structure/model of the environment.
MonoSLAM, PTAM, DTAM
Large-Scale Direct Visual SLAM
LSD-SLAM Output
Monocular SLAM: System Overview
Tracking → Depth Estimation → Map Optimization
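The tracking stage of a direct method like LSD-SLAM aligns the current image against a keyframe by minimizing a photometric error. As a much-simplified illustration (not the LSD-SLAM implementation), the following 1-D toy recovers a translation between two intensity profiles with Gauss-Newton; the function name and setup are invented for this sketch:

```python
import numpy as np

def track_shift(ref, cur, iters=20):
    """Recover t such that cur(x + t) ~= ref(x) by Gauss-Newton
    on the photometric error -- a 1-D toy analogue of direct alignment."""
    idx = np.arange(len(ref), dtype=float)
    t = 0.0
    for _ in range(iters):
        warped = np.interp(idx + t, idx, cur)   # warp current image by t
        grad = np.gradient(warped)              # intensity gradient (Jacobian of warp)
        r = warped - ref                        # photometric residuals
        H = grad @ grad                         # Gauss-Newton "Hessian" (scalar here)
        if H < 1e-12:
            break
        t -= (grad @ r) / H                     # Gauss-Newton update
    return t

# Synthetic check: a smooth intensity profile shifted by 3.5 samples
s = np.linspace(0.0, 4.0 * np.pi, 200)
ref = np.sin(s) + 0.5 * np.sin(2.0 * s)
idx = np.arange(200, dtype=float)
cur = np.interp(idx - 3.5, idx, ref)            # ref delayed by 3.5 samples
t_hat = track_shift(ref, cur)
```

In the real system the unknown is a 6-DoF (or 7-DoF) pose rather than a scalar shift, and the warp uses the estimated semi-dense depth map, but the residual/Jacobian/update structure is the same.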
Monocular Visual SLAM
Place Recognition System
Merging Maps
Collaborative SLAM Framework
Place Recognition System: Context
Where?
Is the place already visited?
FAB-MAP Approach
Overlap Detection Scheme
Experimental Results - A Simple Scenario
Image Num.  P(Seen)  P(New)
1           0.991    0.001
2           0.085    0.910
3           0.922    0.002
4           0.991    0.001
5           0.911    0.131
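FAB-MAP itself scores each query image against known places (and a virtual "new place") with a Chow-Liu-tree Bayesian observation model. The sketch below is only a hand-rolled stand-in for that decision, using cosine similarity between bag-of-words histograms; the function, the prior value, and the histograms are all assumptions for illustration:

```python
import numpy as np

def place_probabilities(query, places, p_new_prior=0.1):
    """Toy stand-in for FAB-MAP's place-recognition decision:
    score a query bag-of-words histogram against known places plus a
    virtual 'new place' candidate, then normalize to probabilities."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    sims = np.array([cos(query, p) for p in places])
    # Reserve p_new_prior of the mass for the 'unseen place' hypothesis
    scores = np.append((1.0 - p_new_prior) * sims, p_new_prior)
    probs = scores / scores.sum()
    return probs[:-1], probs[-1]   # P(seen at each place), P(new)

# Hypothetical word histograms for two known places and a query
places = [np.array([5.0, 0.0, 1.0, 0.0]), np.array([0.0, 4.0, 0.0, 2.0])]
query = np.array([4.0, 1.0, 1.0, 0.0])    # visually close to place 0
p_seen, p_new = place_probabilities(query, places)
```

A query sharing most of its visual words with a stored place drives P(seen) for that place up and P(new) down, mirroring the pattern in the table above.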
Monocular Visual SLAM
Place Recognition System
Merging Maps
Collaborative SLAM Framework
Merging Maps: Context
What is the transformation between two views?
Procedure for Merging Maps
1. Initial estimate using Horn's method
2. Refined estimate using direct image alignment
3. Final refined estimate using ICP
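The first step has a closed-form solution. Horn's original derivation uses unit quaternions; the SVD (Kabsch) formulation below reaches the same optimum for the rigid case, and a scale factor can be added for monocular Sim(3) maps. A minimal sketch with a synthetic check (the function name and test data are ours, not the paper's code):

```python
import numpy as np

def horn_alignment(P, Q):
    """Closed-form rigid alignment: find R, t minimizing sum ||R p_i + t - q_i||^2.
    P, Q: (N, 3) arrays of corresponding 3-D points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)        # centroids
    H = (P - cP).T @ (Q - cQ)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: recover a known rotation about z and a translation
rng = np.random.default_rng(0)
P = rng.normal(size=(30, 3))
a = 0.7
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ Rz.T + t_true
R, t = horn_alignment(P, Q)
```

This gives the coarse transform between the two maps, which the direct-image-alignment and ICP stages then refine.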
Experimental Results: Input
(a) First Image (b) Second Image
(c) Depth map for first image
(d) Depth map for second image
Experimental Results: Output
(e) Before Applying Transformation
(f) After Applying Transformation
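The residual misalignment left after the initial transform is what the final ICP stage removes. A minimal point-to-point ICP sketch, reusing the closed-form alignment inside the loop (brute-force nearest neighbours for a toy cloud; real systems use k-d trees, and all names here are ours):

```python
import numpy as np

def icp(P, Q, iters=20):
    """Minimal point-to-point ICP: match each point in P to its nearest
    neighbour in Q, solve the closed-form rigid alignment, repeat, and
    accumulate the total transform (R_tot, t_tot)."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    Pc = P.copy()
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences
        d = np.linalg.norm(Pc[:, None, :] - Q[None, :, :], axis=2)
        M = Q[d.argmin(axis=1)]
        # Closed-form rigid alignment of Pc onto its matches
        cP, cM = Pc.mean(axis=0), M.mean(axis=0)
        H = (Pc - cP).T @ (M - cM)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cM - R @ cP
        Pc = Pc @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# Synthetic check: a small perturbation, as left over after the coarse alignment
rng = np.random.default_rng(1)
P = rng.normal(size=(40, 3))
a = 0.05
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
t_true = np.array([0.02, -0.01, 0.03])
Q = P @ Rz.T + t_true
R_est, t_est = icp(P, Q)
```

Because ICP only converges locally, it is applied last, after Horn's method and direct alignment have brought the maps close together.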
Monocular Visual SLAM
Place Recognition System
Merging Maps
Collaborative SLAM Framework
Overall Scheme
Figure: Overall scheme of our collaborative SLAM system
Experimental Results
Case study: Experimental Settings
Robotic Platform: Two TurtleBots.
Sensor: uEye monocular camera with wide-angle lens.
Images: 640 × 480 pixels @ 30 Hz.
Environment: 20 m × 20 m indoor area (semi-industrial).
Computation: Core 2 Duo laptop.
Software: ROS, OpenCV, g2o library.
At Instance 1
(a) Robot R1
(b) Robot R2
At Instance 2
(c) Robot R1
(d) Robot R2
At Instance 3
(e) Robot R1
(f) Robot R2
Global Map
(g) Combined Trajectory (h) Combined Depth Map
Global map computed at the central server.
Summary
A collaborative visual SLAM framework with:
1. A monocular SLAM process for each robot.
2. Detection of scene overlap amongst several robots.
3. Global map computation fusing measurements from all robots.
4. A feedback mechanism for global information to be communicated back to each robot.
Scope For Future Work
1. Investigate the advantage of the feedback in terms of localization accuracy and map quality.
2. Toward a decentralized system: direct robot-to-robot communication.
3. Adapting to a hybrid team of robots (e.g. UAVs and ground robots).
Thank you
Thank you very much for your attention!
Q&A