
Automatic Camera Calibration Using Pattern Detection for Vision-Based Speed Sensing

Neeraj K. Kanhere
Dr. Stanley T. Birchfield
Department of Electrical Engineering

Dr. Wayne A. Sarasua, P.E.
Department of Civil Engineering

College of Engineering and Science
Clemson University

Introduction

Traffic parameters such as volume, speed, and vehicle classification are fundamental for:

Intelligent Transportation Systems (ITS)

Traffic impacts of land use

Traffic engineering applications

Transportation planning

Collecting traffic parameters

Different types of sensors can be used to gather data:

Inductive loop detectors and magnetometers

Radar- or laser-based sensors

Piezos and road tube sensors


Problems with these traditional sensors:

Data quality deteriorates as highways reach capacity: inductive loop detectors can merge closely spaced vehicles into a single count, and piezos and road tubes can miscalculate spacing

Motorcycles are difficult to count regardless of traffic conditions

Machine vision sensors

Proven technology

Capable of collecting speed, volume, and classification

Several commercially available systems

Uses virtual detection

Benefits of video detection:

Provides rich visual information for manual inspection

No traffic disruption for installation and maintenance

Covers wide area with a single camera

Why tracking?

Tracking enables prediction of a vehicle’s location in consecutive frames (see the sketch after this list)

Can provide more accurate estimates of traffic volumes and speeds

Potential to count turn-movements at intersections

Detect traffic incidents
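A minimal sketch of such frame-to-frame prediction, assuming constant velocity and nearest-neighbor matching of detections to tracks; the data layout, the matching rule, and the 40-pixel gate are illustrative assumptions, not the system's actual tracker:

import numpy as np

def predict_and_match(tracks, detections, max_dist=40.0):
    """Constant-velocity prediction with nearest-neighbor association.

    tracks: list of dicts with 'pos' and 'vel' (image pixels and pixels/frame).
    detections: list of (x, y) vehicle centroids from the current frame.
    """
    unmatched = list(detections)
    for t in tracks:
        predicted = t['pos'] + t['vel']              # expected location this frame
        if not unmatched:
            t['pos'] = predicted                     # coast when nothing is detected
            continue
        dists = [np.linalg.norm(predicted - np.asarray(p, float)) for p in unmatched]
        j = int(np.argmin(dists))
        if dists[j] < max_dist:                      # accept the closest detection
            new_pos = np.asarray(unmatched.pop(j), float)
            t['vel'] = new_pos - t['pos']
            t['pos'] = new_pos
        else:
            t['pos'] = predicted                     # no plausible match; keep predicting
    # remaining detections start new tracks
    tracks += [{'pos': np.asarray(p, float), 'vel': np.zeros(2)} for p in unmatched]
    return tracks

Matching detections to predictions rather than re-detecting from scratch is what allows volumes and speeds to be accumulated per vehicle rather than per detection zone.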

Current systems use localized detection within detection zones, which can be prone to errors when camera placement is not ideal.

Initialization problem

Partially occluded vehicles appear as a single blob

Contour and blob tracking methods assume isolated initialization

Depth ambiguity makes the problem harder

Our previous work

Feature segmentation: Vehicle Base Fronts

Results of feature-tracking

[Cascade diagram: candidate sub-windows pass through Stage 1, Stage 2, and Stage 3 before reaching Detection; rejected sub-windows are discarded at each stage]

Pattern recognition for video detection

Viola and Jones, “Rapid object detection using a boosted cascade of simple features”, CVPR 2001

Calibration not required for counts

Immune to shadows and headlight reflections

Helps in vehicle classification

Boosted cascade vehicle detector (BCVD)
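For illustration, a minimal OpenCV sketch of running a boosted cascade detector on a video, in the spirit of the Viola-Jones framework cited above; the cascade file and video name are hypothetical placeholders, and this is a generic analogue rather than the BCVD itself:

import cv2

# Hypothetical file names; a vehicle cascade would be trained offline
# (e.g., with opencv_traincascade) on labeled vehicle / non-vehicle windows.
detector = cv2.CascadeClassifier("vehicle_cascade.xml")
cap = cv2.VideoCapture("highway.avi")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each returned box is (x, y, w, h); sub-windows rejected by an early
    # cascade stage are discarded cheaply, which keeps detection fast.
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()

Because the detector responds to the appearance of the vehicle itself, shadows and headlight reflections do not trigger detections, and counts can be produced without any calibration.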

Need for pattern detection

Feature segmentation:
• Works under varying camera placement
• Eliminates false counts due to shadows, but headlight reflections are still a problem
• Handles lateral occlusions but fails in case of back-to-back occlusions

Pattern detection:
• Needs a trained detector for significantly different viewpoints
• Does not get distracted by headlight reflections
• Handles back-to-back occlusions but has difficulty handling lateral occlusions

Pattern detection based tracking

Why automatic calibration?Why automatic calibration?

Fixed view cameraFixed view cameraFixed view cameraFixed view camera Manual set-upManual set-upManual set-upManual set-up

PTZ CameraPTZ CameraPTZ CameraPTZ Camera

Why automatic calibration?Why automatic calibration?

PTZPTZPTZPTZ

Calibration approaches

Both start from image-world correspondences.

Direct estimation of the projective transform:
• Goal is to estimate the 11 elements of a 3x4 matrix M which transforms points in 3-D to the 2-D image plane
• Harder to incorporate scene-specific knowledge

Estimation of parameters for the assumed camera model:
• Goal is to estimate camera parameters such as focal length f, camera height h, and pose angles Φ and θ
• Easier to incorporate known quantities and constraints
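As a worked restatement of the two options in standard projective-camera notation (the symbols K, R, t are the usual intrinsic and extrinsic factors, assumed here rather than taken from the slides): a world point (X, Y, Z) maps to an image point (u, v) through

\[
\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= M \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
\qquad M = K\,[R \mid t] \in \mathbb{R}^{3\times 4}.
\]

Since M is defined only up to the scale λ, it has 11 independent elements; each image-world correspondence contributes two equations, so a direct linear estimate needs at least six known points. The parametric alternative instead builds M from a few physical unknowns (f, h, Φ, θ), which is why known quantities such as lane width or camera height are easy to impose as constraints.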

Manual calibration

Bas and Crisman (1997)
Lai (2000)
Fung et al. (2003)
Kanhere et al. (2006)

Automatic calibration

Song et al. (2006):
• Known camera height
• Needs background image
• Depends on detecting road markings

Dailey et al. (2000):
• Avoids calculating camera parameters
• Based on assumptions that reduce the problem to 1-D geometry
• Uses parameters from the distribution of vehicle lengths

Schoepflin and Dailey (2003):
• Uses two vanishing points
• Lane activity map is sensitive to spill-over
• Correction of the lane activity map needs a background image

[Figure: lane activity map with peaks at lane centers]

Our approach to automatic calibration

[Flow diagram: each input frame is passed to the BCVD; its detections are matched (correspondence) against existing vehicles, and unmatched detections start new vehicles; tracking data from the resulting trajectories feeds RANSAC estimation of the vanishing point in the direction of travel (VP-0); frames in which vehicles show strong gradients feed estimation of the perpendicular vanishing point (VP-1); the two vanishing points give the calibration, from which speeds are computed]
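A minimal sketch of the RANSAC step in that diagram, assuming each tracked trajectory (for VP-0) or strong-gradient edge on a vehicle (for VP-1) has already been reduced to a homogeneous image line; the function names and the 5-pixel inlier tolerance are illustrative assumptions, not the paper's implementation:

import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y), e.g. two positions
    of the same tracked vehicle in consecutive frames."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def point_line_dist(pt, line):
    """Perpendicular distance from a 2-D point to a homogeneous line."""
    a, b, c = line
    return abs(a * pt[0] + b * pt[1] + c) / np.hypot(a, b)

def ransac_vanishing_point(lines, iters=500, tol=5.0, seed=0):
    """Pick the intersection that the largest number of lines pass near."""
    rng = np.random.default_rng(seed)
    best_vp, best_count = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(lines), size=2, replace=False)
        vp = np.cross(lines[i], lines[j])        # candidate intersection
        if abs(vp[2]) < 1e-9:                    # the two lines are (nearly) parallel
            continue
        vp = vp[:2] / vp[2]
        count = sum(point_line_dist(vp, l) < tol for l in lines)
        if count > best_count:
            best_vp, best_count = vp, count
    return best_vp, best_count

VP-0 would then come from lines fitted along vehicle trajectories, and VP-1 from lines fitted along strong gradients (such as bumper or windshield edges) on the detected vehicles; the RANSAC consensus makes the estimate robust to the occasional badly tracked vehicle.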

• Does not depend on road markings
• Does not require scene-specific parameters such as lane dimensions
• Works in the presence of significant spill-over (low camera height)
• Works under night-time conditions (no ambient light)

Automatic calibration algorithm
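A sketch of the standard two-vanishing-point geometry, under the common assumptions of zero roll, square pixels, and the principal point at the image center (signs depend on the image-axis convention, and this is not necessarily the exact derivation used in the talk). With VP-0 = (u_0, v_0) along the direction of travel and VP-1 = (u_1, v_1) in the perpendicular road direction, both measured from the principal point:

\[
f = \sqrt{-(u_0 u_1 + v_0 v_1)}, \qquad
\Phi = \arctan\!\left(\frac{-v_0}{f}\right), \qquad
\theta = \arctan\!\left(\frac{u_0 \cos\Phi}{f}\right).
\]

The camera height h fixes the remaining scale and can be recovered from one known length on the road, such as a lane width or the mean vehicle length. Once f, Φ, θ, and h are known, image points can be back-projected onto the road plane, and speed follows from the ground distance a tracked vehicle covers between frames multiplied by the frame rate.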

Results for automatic camera calibration

Let’s see a demo

Conclusion

A real-time system for detection, tracking and classification of vehicles

Automatic camera calibration for PTZ cameras, which eliminates the need to manually set up detection zones

Pattern recognition helps eliminate false alarms caused by shadows and headlight reflections

Can easily incorporate additional knowledge to improve calibration accuracy

Quick setup for short term data collection applications

Future work

Extend the calibration algorithm to use lane markings when available for faster convergence of parameters

Develop an on-line learning algorithm that incrementally “tunes” the system for a better detection rate at a given location

Evaluate the system at a TMC for long-term performance

Extend classification to four classes

Handle intersections (including turn-counts)

Thank you

For more info please contact:

Dr. Stanley T. Birchfield
Department of Electrical Engineering
stb at clemson.edu

Dr. Wayne A. Sarasua, P.E.
Department of Civil Engineering
sarasua at clemson.edu