Juan Saez Pons, Prof. Dr. Hanspeter A. Mallot
ROBUST VISUAL NAVIGATION USING OPTICAL FLOW FIELDS IN DYNAMICAL ENVIRONMENTS
Cognitive Neuroscience, University of Tübingen, Germany
This work focuses on visual navigation for autonomous mobile robot tasks. The main effort lies in finding efficient solutions for different navigation tasks related to mobile robots, such as scene-based homing [1], obstacle avoidance, path integration, and more.
The basis of the work is the estimation of an optimal optical flow field using the well-studied Lucas-Kanade method [2]. Afterwards, independent motion detection is applied for use in outdoor environments. Once ego-motion has been estimated, the different navigation tasks mentioned above can be carried out.
Introduction
Optical Flow & Motion Detection
Optical flow is the apparent motion of brightness patterns in the image. Optical flow field estimation remains an open problem after over 25 years of effort. Dozens of methods have been proposed, each with its own strengths and weaknesses. According to the literature, the Lucas-Kanade method is among the most accurate and reliable. An improved and more efficient form of the Lucas-Kanade algorithm is employed to obtain the point correspondences (OF).
E(u,v) = \sum_{(x,y)\in ROI} \big( F(x+u,\, y+v) - G(x,y) \big)^2 \approx \sum_{(x,y)\in ROI} \big( u\, F_x(x,y) + v\, F_y(x,y) + F_t(x,y) \big)^2

where F and G are consecutive images, (u,v) is the flow vector estimated over the region of interest (ROI), F_x and F_y are the spatial gradients, and F_t is the temporal gradient.
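As a sketch of how the minimization above is solved in practice: setting the derivatives of E with respect to u and v to zero yields a 2x2 linear system over the ROI. The function below is an illustrative implementation of that closed-form solve, assuming the gradients have already been computed (e.g. by finite differences); names are my own, not from the original work.

```python
def lucas_kanade_roi(Fx, Fy, Ft):
    """Estimate the flow (u, v) minimising E(u, v) over one region of interest.

    Fx, Fy, Ft: flat lists of spatial/temporal gradients at each ROI pixel.
    Returns (u, v), or None if the system is degenerate (aperture problem).
    """
    a = sum(fx * fx for fx in Fx)                # sum of Fx^2
    b = sum(fx * fy for fx, fy in zip(Fx, Fy))   # sum of Fx*Fy
    c = sum(fy * fy for fy in Fy)                # sum of Fy^2
    d = -sum(fx * ft for fx, ft in zip(Fx, Ft))  # -sum of Fx*Ft
    e = -sum(fy * ft for fy, ft in zip(Fy, Ft))  # -sum of Fy*Ft
    det = a * c - b * b
    if abs(det) < 1e-12:                         # untextured ROI: no unique solution
        return None
    # Cramer's rule on the 2x2 normal equations.
    u = (c * d - b * e) / det
    v = (a * e - b * d) / det
    return u, v
```

For gradients generated by a true constant flow, the solver recovers that flow exactly, since the linearised brightness-constancy assumption then holds without error.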
Outdoor Extension
Vision for outdoor navigation presents a number of challenges, including real-time processing, independently moving objects, and long-term changes of illumination and scene configuration. Real-time processing of images with a large field of view has been addressed by a coarse-to-fine implementation of the Lucas-Kanade algorithm for motion detection. Input images are represented as a pyramid, i.e. a stack of downsampled versions at different resolution levels. Motion is initially detected on the coarsest level using the standard algorithm, so a coarse estimate of the image flow can be obtained quickly. This coarse estimate is subsequently refined using the finer levels of the resolution pyramid. As a result, real-time estimation of image flow has been obtained in both natural and synthetic video sequences.
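The pyramid scheme can be sketched as follows. This is an illustrative outline, not the authors' implementation: `estimate_flow` stands in for one run of a Lucas-Kanade-style solver at a given level, and the 2x2 block-average downsampling is one common choice.

```python
def halve(img):
    """2x2 block-average downsampling of a 2D image (list of rows)."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1] +
              img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
             for c in range(w)] for r in range(h)]

def build_pyramid(img, levels):
    """Stack of downsampled images; pyr[0] is finest, pyr[-1] coarsest."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(halve(pyr[-1]))
    return pyr

def coarse_to_fine(pyr_f, pyr_g, estimate_flow):
    """Refine a flow estimate from the coarsest level down to the finest.

    estimate_flow(F, G, u, v) returns a correction (du, dv) to the current
    guess at one level (a placeholder for the Lucas-Kanade solve).
    """
    u = v = 0.0
    for level in range(len(pyr_f) - 1, -1, -1):   # coarsest level first
        du, dv = estimate_flow(pyr_f[level], pyr_g[level], u, v)
        u, v = u + du, v + dv
        if level > 0:
            u, v = 2 * u, 2 * v                   # flow doubles at the finer scale
    return u, v
```

The doubling step is what lets a cheap coarse estimate bound the search range at the expensive fine levels.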
For a moving observer, the detection of independently moving objects is complicated by the ubiquitous presence of self-generated image motion. In principle, this can be solved by exploiting the well-known geometric structure of optic flow fields in static environments: if ego-motion is known, independent motion can be detected by finding deviations between the measured and the predicted flow field.
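A minimal sketch of that residual test, assuming the ego-motion-predicted flow is already available; the threshold and names are illustrative choices, not values from the original work:

```python
import math

def detect_independent_motion(measured, predicted, threshold=1.0):
    """Indices where measured flow deviates from the ego-motion prediction.

    measured, predicted: lists of (u, v) flow vectors, one per pixel/patch.
    A residual larger than `threshold` flags an independently moving point.
    """
    moving = []
    for i, ((mu, mv), (pu, pv)) in enumerate(zip(measured, predicted)):
        residual = math.hypot(mu - pu, mv - pv)  # Euclidean flow deviation
        if residual > threshold:
            moving.append(i)
    return moving
```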
Ego-motion Estimation
Visual Navigation Tasks
Given a set of filtered optical flow vectors, the next step is to derive an estimate of the robot's incremental motion over the last few frames and to integrate it over time to obtain a global estimate of the robot's position. The steps are: (1) summarize the derivation of the mapping between image pixels and points in the world; (2) decompose the incremental motion into a rotation and a translation and estimate each separately; (3) combine this information to recover the robot's motion in a world coordinate frame.
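One simplified way to picture step (2): under a small pure pan (yaw) rotation, the rotational component adds an approximately constant horizontal flow to every pixel, while the translational component varies across the image. Estimating the rotation as the mean horizontal flow and subtracting it then isolates the translational part. This is an illustrative simplification under that small-angle assumption, not the decomposition used in the original work.

```python
def split_rotation_translation(horizontal_flow):
    """Split horizontal flow into a constant (rotational) part and the rest.

    horizontal_flow: list of horizontal flow values across the image.
    Returns (rotation_estimate, translational_residuals).
    """
    rot = sum(horizontal_flow) / len(horizontal_flow)  # constant rotational part
    trans = [f - rot for f in horizontal_flow]         # residual translational part
    return rot, trans
```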
OBSTACLE AVOIDANCE
The basic idea is that discontinuities in the optical flow field signal the presence of obstacles, in contrast to traditional obstacle detection techniques, which rely, for example, on proximity sensors.
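The discontinuity cue can be sketched as a simple neighbour-difference test along one image row; the threshold is an illustrative parameter, not a value from the original work:

```python
def flow_discontinuities(flow_row, threshold=0.5):
    """Column indices where adjacent flow magnitudes jump sharply.

    flow_row: list of flow magnitudes along one image row. A large jump
    between neighbouring columns suggests a depth edge, i.e. an obstacle
    boundary.
    """
    return [i for i in range(1, len(flow_row))
            if abs(flow_row[i] - flow_row[i - 1]) > threshold]
```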
PATH INTEGRATION
This is an active computation that continuously updates a homing vector coding distance and direction relative to the starting point. Path integration is commonly based on odometry; here, the system supplies visual odometry derived from optical flow fields.
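A minimal sketch of such a path integrator, accumulating the (rotation, distance) increments that visual odometry would supply into a continuously available homing vector; the Cartesian accumulator and names are my own illustrative choices:

```python
import math

class PathIntegrator:
    """Dead-reckon a pose from odometry increments and report a homing vector."""

    def __init__(self):
        self.x = self.y = self.theta = 0.0

    def step(self, dtheta, ds):
        """Feed one increment of visual odometry: rotate, then move forward."""
        self.theta += dtheta
        self.x += ds * math.cos(self.theta)
        self.y += ds * math.sin(self.theta)

    def homing_vector(self):
        """Distance and world-frame bearing from the robot back to the start."""
        return math.hypot(self.x, self.y), math.atan2(-self.y, -self.x)
```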
VISUAL HOMING
The act of returning to a goal position by comparing the currently viewed image with a snapshot taken at the goal is known as visual homing. According to [3], it is possible to match the image intensity or the gradient of the intensity (also known as optical flow) instead of matching whole images for the visual homing task.
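As a toy illustration of snapshot matching (not the matching scheme of [3]): for a 1-D panoramic intensity signature, the circular shift that best aligns the current view with the stored snapshot approximates the heading correction toward home.

```python
def best_shift(current, snapshot):
    """Circular shift of `current` minimising the SSD to `snapshot`.

    current, snapshot: equal-length 1-D panoramic intensity signatures.
    """
    n = len(current)

    def ssd(shift):
        # Sum of squared differences after circularly shifting `current`.
        return sum((current[(i + shift) % n] - snapshot[i]) ** 2
                   for i in range(n))

    return min(range(n), key=ssd)
```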
References
[1] M.O. Franz, B. Schölkopf, H.A. Mallot and H.H. Bülthoff. Where did I take that snapshot? Scene-based homing by image matching. Biological Cybernetics, 79:191-202, 1998.
[2] B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. International Joint Conference on Artificial Intelligence, pp.674-679, 1981.
[3] A. Vardy and R. Möller. Biologically plausible visual homing methods based on optical flow techniques. Connection Science, Special Issue: Navigation, 2005.
(a) Obstacle Avoidance (b) Path Integration
Real-time estimation of image flow in a synthetic video sequence