
Edge Detection

Lecture 2: Edge Detection

Jeremy Wyatt

Visual pathway

The striate cortex

Eye-cortex mapping has certain properties

Neighbouring areas in the retina are approximately mapped to neighbouring areas in the cortex

Half the image in each half of the cortex

Middle of retinal image on the outer edge of the relevant half of the cortex

The mapping is spatially distorted

Hypercolumns & Hyperfields

(Figure: a hypercolumn and its surface, with dimensions of 3-4 mm and 0.5-1 mm marked.)

• Each hypercolumn processes information about one area of the retina, its hyperfield.

• 400-600 columns in each hypercolumn.

• Each column has its own receptive field.

• All the cells in one column are excited by line stimuli of the same orientation.


Cells within a column

For example, light on the right and dark on the left of a cell's receptive field causes that cell to be excited

The lower the contrast, the lower the excitation

Different cells in a single column respond to different patterns with the same orientation

Orientation across columns

Different columns are tuned to different orientations

Adjacent columns are tuned to similar orientations

Cells can be excited to different degrees


Slabs and Hyperfields

Each hypercolumn is composed of about 20 slabs of columns

Each slab is tuned to one orientation

Each column in a slab is centred on a different portion of the hyperfield

But each column takes input from the whole hyperfield

(Figure: slabs within a hypercolumn and the columns within each slab.)

Learning

We learn the orientation selectivity of cells in the early months of life

This has been shown by depriving animals of certain orientations of input

(Figure: the sole visual input presented during deprivation, and the orientations subsequently present in the cortex.)

Edge detection in machines

How can we extract edges from images?

Edge detection is finding significant intensity changes in the image

Images and intensity gradients

The image is a function mapping coordinates to intensity

The gradient of the intensity is a vector

We can think of the gradient of the image f(x, y) as having an x and a y component:

G[f(x, y)] = \begin{bmatrix} G_x \\ G_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}

magnitude: M(G) = \sqrt{G_x^2 + G_y^2}

direction: \alpha(x, y) = \tan^{-1}(G_y / G_x)
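As a rough illustration of these formulas (not part of the original lecture), here is a minimal sketch assuming the image is held as a 2D NumPy array of intensities; np.gradient's finite differences stand in for the derivative estimates, and arctan2 replaces \tan^{-1}(G_y / G_x) so that G_x = 0 is handled safely.

```python
import numpy as np

def gradient_magnitude_direction(f):
    """Given a grayscale image f as a 2D array, return the gradient
    magnitude M(G) = sqrt(Gx^2 + Gy^2) and direction alpha = atan2(Gy, Gx)
    at every pixel."""
    # np.gradient differentiates along axis 0 (rows) then axis 1 (columns);
    # here columns are treated as x and rows as y.
    Gy, Gx = np.gradient(f.astype(float))
    magnitude = np.sqrt(Gx**2 + Gy**2)
    direction = np.arctan2(Gy, Gx)  # avoids dividing by zero when Gx == 0
    return magnitude, direction
```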

Approximating the gradient

Our image is discrete with pixels indexed by i and j

We want G_x and G_y to be estimated in the same place

G_x \approx f[i, j+1] - f[i, j], with 1 \times 2 mask:

-1  1

G_y \approx f[i, j] - f[i+1, j], with 2 \times 1 mask:

 1
-1

Example image (a diagonal step edge), with rows i, i+1 and columns j, j+1 marked in the original figure:

1 1 1 1
0 1 1 1
0 0 1 1
0 0 0 1
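A minimal sketch of these two pixel differences (an illustration, not the lecture's code), assuming the image is a 2D array f indexed as f[i, j] with rows i and columns j:

```python
import numpy as np

def gradient_at_pixel(f, i, j):
    """Forward-difference estimates from the slide:
    Gx = f[i, j+1] - f[i, j]   (difference along a row)
    Gy = f[i, j]   - f[i+1, j] (difference down a column)
    Valid when i+1 and j+1 lie inside the image."""
    Gx = float(f[i, j + 1]) - float(f[i, j])
    Gy = float(f[i, j]) - float(f[i + 1, j])
    # Note: Gx is centred between columns j and j+1, while Gy is centred
    # between rows i and i+1, so the two estimates refer to slightly
    # different points -- the problem the 2x2 masks below fix.
    return Gx, Gy
```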

Approximating the gradient

So we use 2x2 masks instead

For each mask of weights, multiply each pixel under the mask by the corresponding weight and sum the results

(Applied to the same example image as before.)

G_x mask:

-1  1
-1  1

G_y mask:

 1  1
-1 -1
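A small sketch of the 2x2 masks (again an illustration, with the same f[i, j] indexing assumed); both G_x and G_y are now computed from the same 2x2 block of pixels using the multiply-and-sum rule above.

```python
import numpy as np

# 2x2 masks from the slide
GX_MASK = np.array([[-1, 1],
                    [-1, 1]], dtype=float)
GY_MASK = np.array([[ 1,  1],
                    [-1, -1]], dtype=float)

def gradient_2x2(f, i, j):
    """Multiply the 2x2 block of pixels whose top-left corner is (i, j)
    by each mask's weights and sum, giving Gx and Gy for that block."""
    block = f[i:i + 2, j:j + 2].astype(float)
    Gx = float(np.sum(block * GX_MASK))
    Gy = float(np.sum(block * GY_MASK))
    return Gx, Gy
```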

Other edge detectors

Roberts:

G_x mask:      G_y mask:
 1  0           0 -1
 0 -1           1  0

Sobel:

G_x mask:          G_y mask:
-1  0  1            1  2  1
-2  0  2            0  0  0
-1  0  1           -1 -2 -1
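One way to sketch applying these masks (an assumption of this example, not part of the lecture) is with scipy.ndimage.correlate, which performs exactly the multiply-and-sum rule above without the mask flip a true convolution would apply; the 'nearest' border mode is just one possible choice.

```python
import numpy as np
from scipy.ndimage import correlate

# Roberts masks from the slide (2x2, diagonal differences)
ROBERTS_GX = np.array([[1,  0],
                       [0, -1]], dtype=float)
ROBERTS_GY = np.array([[0, -1],
                       [1,  0]], dtype=float)

# Sobel masks from the slide (3x3, extra weight on the centre row/column)
SOBEL_GX = np.array([[-1, 0, 1],
                     [-2, 0, 2],
                     [-1, 0, 1]], dtype=float)
SOBEL_GY = np.array([[ 1,  2,  1],
                     [ 0,  0,  0],
                     [-1, -2, -1]], dtype=float)

def edge_magnitude(f, gx_mask, gy_mask):
    """Apply a pair of masks at every pixel and combine the two responses
    into a gradient magnitude."""
    f = f.astype(float)
    gx = correlate(f, gx_mask, mode='nearest')
    gy = correlate(f, gy_mask, mode='nearest')
    return np.sqrt(gx**2 + gy**2)

# e.g. sobel_magnitude = edge_magnitude(image, SOBEL_GX, SOBEL_GY)
```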

Convolution

This process of sliding a mask of weights over the image and summing the weighted pixel values at each position is called convolution, and it is very general

image:

0 1 1 3 4 5 4 5 6 7 8
0 0 2 3 3 4 5 4 6 4 5
0 0 4 6 3 5 4 7 2 4 3
0 0 0 4 4 3 5 5 4 6 4
0 0 0 3 5 2 6 7 3 4 5
0 0 0 0 5 5 6 7 8 9 8
0 0 0 0 4 3 4 5 6 7 5

mask:

-1  0  1
-2  0  2
-1  0  1
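A sketch of sliding this mask over the example image above (the loop is deliberately naive; in practice a library routine would be used, and no mask flipping is performed here).

```python
import numpy as np

# Example image and mask from the slide
image = np.array([
    [0, 1, 1, 3, 4, 5, 4, 5, 6, 7, 8],
    [0, 0, 2, 3, 3, 4, 5, 4, 6, 4, 5],
    [0, 0, 4, 6, 3, 5, 4, 7, 2, 4, 3],
    [0, 0, 0, 4, 4, 3, 5, 5, 4, 6, 4],
    [0, 0, 0, 3, 5, 2, 6, 7, 3, 4, 5],
    [0, 0, 0, 0, 5, 5, 6, 7, 8, 9, 8],
    [0, 0, 0, 0, 4, 3, 4, 5, 6, 7, 5],
], dtype=float)

mask = np.array([[-1, 0, 1],
                 [-2, 0, 2],
                 [-1, 0, 1]], dtype=float)

def apply_mask(image, mask):
    """Slide the mask over every position where it fits; at each position,
    multiply the covered pixels by the mask weights and sum them."""
    mh, mw = mask.shape
    h, w = image.shape
    out = np.zeros((h - mh + 1, w - mw + 1))
    for i in range(h - mh + 1):
        for j in range(w - mw + 1):
            out[i, j] = np.sum(image[i:i + mh, j:j + mw] * mask)
    return out

response = apply_mask(image, mask)  # Sobel Gx response, shape (5, 9)
```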

(Figure: original image; result after the Sobel G_x mask; the result thresholded at 30; and thresholded at 100.)

What do these do?

(Figure: result after the Roberts masks, thresholded at 5 and at 20.)
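The thresholds in these figures keep only the strongest responses. A minimal sketch, assuming a gradient-magnitude array has already been computed (sobel_magnitude below is a placeholder name, e.g. from the Sobel example above):

```python
import numpy as np

def threshold_edges(magnitude, threshold):
    """Mark a pixel as an edge (1) when its gradient magnitude exceeds the
    threshold, otherwise background (0). Raising the threshold keeps
    fewer, stronger edges."""
    return (magnitude > threshold).astype(np.uint8)

# e.g. edges_30  = threshold_edges(sobel_magnitude, 30)
#      edges_100 = threshold_edges(sobel_magnitude, 100)
```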

Noise

It turns out we will need to remove noise

There are many noise filters

We can implement most of them using the idea of convolution again

e.g. Mean filter

1/9  1/9  1/9
1/9  1/9  1/9
1/9  1/9  1/9
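A sketch of the 3x3 mean filter, again using scipy.ndimage.correlate for the multiply-and-sum step (the 'nearest' border handling is an arbitrary choice of this example):

```python
import numpy as np
from scipy.ndimage import correlate

# Every weight is 1/9, so each output pixel becomes the average of the
# 3x3 neighbourhood around it, which smooths out noise.
MEAN_MASK = np.full((3, 3), 1.0 / 9.0)

def mean_filter(f):
    """Smooth a grayscale image by correlating it with the 1/9 mask."""
    return correlate(f.astype(float), MEAN_MASK, mode='nearest')
```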

Reading

R. C. Jain, Chapter 5: Edge Detection