Extraction of edges and straight lines
Waleed Abrar (895195)
Multimedia Signal Processing (seminar)
April 1, 2014
Abstract
This essay discusses a technique for extracting straight lines from an image, based on the paper by Burns, J. B., Hanson, A. R. & Riseman, E. M. (1986), "Extracting Straight Lines". The main revolutionary idea in that paper was to use gradient orientation, along with gradient magnitude, to determine which pixels should be grouped into a line-support region (LSR). Using orientation and magnitude together with some additional parameters that Burns, Hanson and Riseman identified during their experiments, line localization is fast and the likelihood of false detections is reduced. In the later stages of this paper I add a small experiment, based on their model, to check the parameters they describe and to observe their visual effect on edge detection, which is the preliminary step for line detection.
Background:
For the extraction of lines from an image, an image must first be available, so let us start with the process of image acquisition (Fig. 1).
Figure 1: The process of image acquisition
As the figure shows, capturing an image requires a source of illumination, which throws light on the object. The imaging system captures the light reflected by the object, and the intensity or brightness at a spatial coordinate is determined by two factors: the illumination component i(x, y) and the reflectance component r(x, y), so that

f(x, y) = i(x, y) · r(x, y), where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1.
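As a minimal numeric sketch of this illumination-reflectance model (the function name and sample values below are hypothetical, chosen only for illustration):

```python
def image_intensity(illumination, reflectance):
    """f(x, y) = i(x, y) * r(x, y): combine the illumination and reflectance components."""
    assert illumination > 0, "0 < i(x, y) < infinity"
    assert 0 < reflectance < 1, "0 < r(x, y) < 1"
    return illumination * reflectance

# Moderate illumination on a surface reflecting half the incident light:
print(image_intensity(900.0, 0.5))  # 450.0
```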
Two further steps complete the process of image acquisition: sampling, which digitizes the spatial coordinates, and quantization, which determines the number of intensity variations or grey levels. The figure below illustrates the effects of quantization and sampling in detail.
Figure 2: Explaining the process of sampling and quantization
As you can clearly see in the image, moving from left to right the number of samples decreases, and with it the amount of information; similarly, in the second column, moving from left to right the number of intensity variations decreases.
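Quantization can be sketched in a few lines; the bucket arithmetic below is a simplified illustration of reducing 8-bit grey values to fewer levels, not the exact method behind any particular figure:

```python
def quantize(pixels, levels):
    """Requantize 8-bit grey values (0-255) down to `levels` distinct grey levels."""
    step = 256 // levels                         # width of each grey-level bucket
    return [(p // step) * step for p in pixels]

row = [0, 37, 64, 100, 128, 200, 255]
print(quantize(row, 4))  # [0, 0, 64, 64, 128, 192, 192]
print(quantize(row, 2))  # [0, 0, 0, 0, 128, 128, 128]
```

Fewer levels collapse nearby intensities into the same value, which is exactly the loss of intensity variation seen in the figure.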
Classification of processes in digital image processing
Now we are ready to define an image, which can easily be represented as a two-dimensional signal f(x, y), where x and y are the spatial coordinates and the value of the function f is the intensity or grey level. As far as classification is concerned, we can generally divide image-processing tasks into three types according to the processes involved: low-level, mid-level and high-level processes.
Low-level processes are those where both the input and the output are images. Mid-level processes are those where the input is an image but the output consists of attributes extracted from the image rather than an image itself. High-level processes are those mostly related to machine vision and cognition. Table 1 explains the concept in detail.
LOW-LEVEL: the input is an image with noise; the output is the image without the noise.
MID-LEVEL: the input is an image; the output is the image's edges, i.e. attributes of the image.
HIGH-LEVEL: the input is an image; the output is detection results and feedback to a microcontroller.
Table 1: Classification of the processes related to an image
Operations related to images:
Mainly three kinds of operations are applied in the processing of images. This concept is necessary because these operations are among the most important steps when extracting edges or straight lines. The three kinds of operations are:
A) Point/pixel operations:
The output value at a specific coordinate depends only on the input value at (x, y) of the specified image.
Fig 3: Blue is the original point in the input image; red is the enhanced point in the output
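A classic point operation is the image negative; this small sketch assumes 8-bit grey values and is purely illustrative:

```python
def negative(pixels):
    """Point operation: each output value depends only on the input value at the same (x, y)."""
    return [255 - p for p in pixels]

print(negative([0, 100, 255]))  # [255, 155, 0]
```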
B) Local operations:
The output value at any point p(x, y) depends on the input values in a neighbourhood of that pixel, not on the single pixel alone.
Fig 4: Blue marks the original point together with its neighbours in the input image; red is the output point
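A 3x3 mean filter is a typical local operation; the sketch below handles interior pixels only and uses hypothetical values:

```python
def mean3x3(img, x, y):
    """Local operation: output at (x, y) is the average of the 3x3 neighbourhood."""
    total = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            total += img[y + dy][x + dx]
    return total // 9

img = [[10, 10, 10],
       [10, 100, 10],
       [10, 10, 10]]
print(mean3x3(img, 1, 1))  # 20: the bright outlier is averaged with its neighbours
```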
C) Global operations:
The output value at point (x, y) depends on the whole image.
Fig 5: Blue marks the original point in the input image; red is the output point, which depends on the entire image
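Contrast stretching is a simple global operation, since the image-wide minimum and maximum enter every output value; a minimal sketch with hypothetical values:

```python
def stretch(pixels):
    """Global operation: every output value depends on the whole image (its min and max)."""
    lo, hi = min(pixels), max(pixels)
    return [255 * (p - lo) // (hi - lo) for p in pixels]

print(stretch([50, 100, 150]))  # [0, 127, 255]
```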
Introduction:
Neighbours and connectivity are the most important concepts whenever any kind of information is extracted from an image; the neighbours of a pixel are simply the pixels surrounding it. There is always some difficulty when performing these operations at the image boundaries, and for that reason, depending on the type of neighbourhood, some border rows and columns are skipped; another option is to pad the image with zeroes so that the operations work at the boundaries as well. In this paper I will only explain 4- and 8-neighbour connectivity, although there are many more schemes. When a point P(x, y) is connected to the 4 other points vertically and horizontally adjacent to it, these are called its 4-neighbours, as the figure below shows.
Similarly, when the point P(x, y) is connected to the 8 other points vertically, horizontally and diagonally adjacent to it, they make up its 8-neighbours, as shown in the picture below.
A point is connected if it satisfies the connectivity required by the particular method, e.g. 4-connectivity or 8-connectivity.
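The two neighbourhood definitions can be sketched as follows (coordinates only; boundary handling is omitted for brevity):

```python
def neighbours(x, y, connectivity=4):
    """Return the 4- or 8-neighbours of pixel (x, y)."""
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]               # horizontal/vertical
    diag = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    return n4 if connectivity == 4 else n4 + diag

print(len(neighbours(5, 5, 4)), len(neighbours(5, 5, 8)))  # 4 8
```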
Boundary and region:
Let R be a subset of pixels. We call R a region if R is a connected set of pixels, where the connectivity can be any scheme, e.g. 4- or 8-connectivity. Similarly, the boundary of a region R is the set of pixels in the region that have one or more neighbours that do not belong to R. The following image illustrates the concept more clearly.
Fig 6: Illustrating the concept of region and boundary with a cricket-field example
Discontinuities in the image:
There are mostly three types of grey-level discontinuity in a digital image: points, lines and edges.
A point is detected by thresholding: if a value exists above the threshold, we can flag it as an anomaly in the image. Lines are continuations of such anomalies with a particular orientation, and edges are sets of connected pixels that lie on the boundary between two regions.
Types of edges:
There are many types of edges in an image, but I will discuss the two most important types: the ideal edge and the ramp edge.
An ideal edge is one where the values change abruptly at a particular location, whereas a ramp edge is the normal, real-life edge, where the transition is gradual from where the edge starts until where it ends. The following image explains the two edge types in detail.
In the left image one can see a sudden, sharp change from dark pixels to light pixels, and localization of the edge is fairly easy; as the accompanying graph shows, the pixel value jumps abruptly from its minimum to its maximum. In the image on the right, by contrast, the change from dark to light pixels is very gradual, there is a large region where the pixels blend together, and edge localization is hard; the slow rise of the slope illustrates the problem.
Edge detection methods and problems:
There are many methods to detect edges in an image, such as the Canny detector, but I explain the two most basic methods in this paper, as these are the ones used by Burns and Hanson.
The two most basic methods of edge detection are the first and second derivatives. Using the first derivative we get a peak where the edge starts and where it ends. Using the second derivative we can get more information about the edge: not only can we detect the edge from its maxima and minima, but the rise and fall of the peaks also tell us whether the change is from brighter pixels to darker ones or from darker pixels to lighter ones.
The image above shows a ramp edge, as can be seen from the plot on its right side. When we take the first derivative, the location of the edge appears as a constant positive plateau, whereas if we take the second derivative of the same image we get a relative maximum and minimum at the ends of the ramp, and the direction of the peaks also tells the direction of change: since the first peak is positive and the second peak is in the negative y direction, the pixels change from dark to bright, as seen in the image above.
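The behaviour of the two derivatives on a ramp edge can be reproduced with simple finite differences on a 1-D intensity profile (the values below are hypothetical):

```python
ramp = [0, 0, 1, 2, 3, 4, 5, 5, 5]  # dark region, ramp, bright region

first = [ramp[i + 1] - ramp[i] for i in range(len(ramp) - 1)]
second = [first[i + 1] - first[i] for i in range(len(first) - 1)]

print(first)   # [0, 1, 1, 1, 1, 1, 0, 0]: a constant positive plateau along the ramp
print(second)  # [1, 0, 0, 0, 0, -1, 0]: peaks mark the ramp's two ends
```

The positive second-derivative peak followed by the negative one is the signature of a dark-to-bright transition described above.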
The problem in localizing the edge starts when even a little noise is added. We can use an image to illustrate the effect, as shown below.
From the figure we can easily see that when there is no noise, localization is very easy (the top-right picture, with noise 0). Then Gaussian noise of 0.1 is added to the image and edge localization becomes difficult (noise 0.1); at 1.0 and 10 the edge is nearly impossible to find. If one uses these two derivative methods to detect edges in such an image, the results are totally wrong, so for these methods to be usable the image should be normalized or smoothed to remove the noise before applying the derivatives for edge localization.
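The benefit of smoothing before differentiation can be seen on a toy 1-D profile with a single noise spike (the values are hypothetical, and a 3-sample moving average stands in for Gaussian smoothing):

```python
signal = [0, 0, 0, 4, 0, 0, 5, 5, 5, 5]  # a noise spike at index 3, a real edge at 5 -> 6

def smooth(s):
    """3-sample moving average; a cheap stand-in for Gaussian smoothing."""
    return [s[0]] + [(s[i - 1] + s[i] + s[i + 1]) / 3 for i in range(1, len(s) - 1)] + [s[-1]]

raw = [abs(signal[i + 1] - signal[i]) for i in range(len(signal) - 1)]
sm = smooth(signal)
smd = [abs(sm[i + 1] - sm[i]) for i in range(len(sm) - 1)]

print(max(raw[:5]), raw[5])                      # 4 5: the spike's response rivals the edge's
print(round(max(smd[:5]), 2), round(smd[5], 2))  # 1.33 1.67: smoothing suppresses the spike
```

After smoothing, the derivative response at the real edge clearly dominates the response at the noise spike.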
Simple line detection methods:
As you know, a line can be represented by the equation y = mx + c, where m is the gradient or slope and c is the y-intercept, the point where the line crosses the y axis. Convolving the image with a simple mask can help to detect the lines in a particular image.
If the image is convolved with the horizontal mask shown above, the horizontal lines in the image are intensified: the mask doubles their pixel values and suppresses the other pixels, which do not follow the same pattern. The same holds for the other masks: the vertical mask will enhance the vertical lines in the image and suppress the other lines.
The image above shows the effect of line detection: the top image is the original, the bottom-left image is convolved with the −45 degree mask, and one can see that lines at that orientation are enhanced. The bottom-right image is then thresholded on pixel value: only pixels above the threshold pass through the filter, while the other, low responses are blocked by the mask. In this way one can detect lines in the image.
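The horizontal line mask and its response can be sketched as follows (a one-pixel-thick line with hypothetical values; correlation at a single position stands in for full convolution):

```python
H_MASK = [[-1, -1, -1],
          [ 2,  2,  2],
          [-1, -1, -1]]  # classic horizontal line-detection mask

def apply_mask(img, mask, x, y):
    """Correlate the 3x3 mask with the image, centred at (x, y)."""
    return sum(mask[j][i] * img[y + j - 1][x + i - 1]
               for j in range(3) for i in range(3))

img = [[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [9, 9, 9, 9, 9],   # a horizontal line of bright pixels
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]

print(apply_mask(img, H_MASK, 2, 2))  # 54: strong positive response on the line
print(apply_mask(img, H_MASK, 2, 1))  # -27: suppressed just off the line
```

Thresholding these responses, as described above, keeps only the strong on-line values.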
The process of straight-line detection proposed by Burns and Hanson:
The algorithm they proposed is very effective: even though it was published in 1986, many modern algorithms still use their method, because it is not only effective but also computationally very fast, and the number of false alarms is very low. The algorithm is explained below.
Magnitude and direction:
The gradient is the directional change in the intensity of the pixels. The gradient magnitude is obtained from the components G(x) along the x axis and G(y) along the y axis using Pythagoras' theorem, |G| = sqrt(G(x)² + G(y)²); in practice it is often approximated by the sum |G(x)| + |G(y)| to save the number of calculations needed to compute the magnitude over the whole image. In simple words, it is the amount of change in the x-y directions.
Similarly, the gradient orientation is obtained by taking tan⁻¹ of G(y) divided by G(x), where G(x) and G(y) are the gradients in the x and y directions; in simple words, the angle of the change.
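Both quantities, and the cheap magnitude approximation, can be computed as follows (a generic illustration, not the authors' implementation):

```python
import math

def gradient(gx, gy):
    """Gradient magnitude (Pythagoras), its |Gx| + |Gy| approximation, and orientation."""
    magnitude = math.sqrt(gx * gx + gy * gy)
    approx = abs(gx) + abs(gy)                     # cheaper: no square root needed
    orientation = math.degrees(math.atan2(gy, gx))
    return magnitude, approx, orientation

mag, approx, theta = gradient(3.0, 4.0)
print(mag, approx, round(theta, 2))  # 5.0 7.0 53.13
```

Using atan2 rather than a plain arctangent keeps the orientation correct in all four quadrants.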
The algorithm works by first computing the gradient magnitude and orientation of the image; it then creates coarsely quantized orientation buckets, which serve as labels indicating which bucket each pixel belongs to.
Figure: showing buckets with labels.
The buckets here are at 45-degree intervals, since 45 × 8 = 360, and in the second step each pixel is assigned the number of the bucket it belongs to. Then a connected-component algorithm is applied to create line-support regions from the pixels, and based on some additional attributes defined in the paper, such as steepness, orientation and the length of the pixel segments, a line is formed. The whole process is explained in detail in the following figure.
(a) shows the intensity profile plot with gradient magnitude, orientation and steepness. (b) explains the process of assigning buckets to the pixels: a pixel's bucket is only reassigned when we have more than 50% confidence, otherwise the bucket number does not change. (c) shows the process of applying CCA (connected component analysis) to obtain the line-support regions. (d) explains the process of fitting a line within the line-support region by planar intersection.
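The basic 45-degree bucket assignment of step (b) can be sketched as below; the 50%-confidence reassignment rule is omitted, and the bucket labels 1-8 are an assumption for illustration:

```python
def bucket(orientation_deg):
    """Assign a gradient orientation to one of eight 45-degree buckets, labelled 1-8."""
    return int(orientation_deg % 360) // 45 + 1

print(bucket(10), bucket(50), bucket(100), bucket(359))  # 1 2 3 8
```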
CCA (connected component analysis) process:
There are basically two passes in CCA. During the first pass all the elements are labelled: every pixel is scanned and checked to see whether it belongs to the background; if it does not, its neighbours are checked and a new label is assigned to the pixel if it is not already labelled, otherwise the existing (parent) label is kept. The figure below explains the process.
During the second pass, which can be called aggregation, all pixels are scanned again to check their labels: if a pixel's label belongs to an existing equivalence list it is added to that list, otherwise it is assigned to a new one. The figure explains the second pass of the CCA algorithm.
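The two-pass procedure can be sketched as follows; this is a generic two-pass labelling with union-find over 4-connectivity, not the authors' exact implementation:

```python
def label_components(img, bg=0):
    """Two-pass connected component labelling (4-connectivity).

    Pass 1 assigns provisional labels and records label equivalences;
    pass 2 (aggregation) rewrites each pixel with its root label.
    """
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}                                    # union-find over provisional labels

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):
        for x in range(w):
            if img[y][x] == bg:
                continue
            up = labels[y - 1][x] if y > 0 else 0
            left = labels[y][x - 1] if x > 0 else 0
            cand = [l for l in (up, left) if l]
            if not cand:                           # no labelled neighbour: new component
                parent[next_label] = next_label
                labels[y][x] = next_label
                next_label += 1
            else:                                  # keep the parent (smallest) label
                labels[y][x] = min(cand)
                if len(cand) == 2 and up != left:  # record that the two labels touch
                    parent[find(max(cand))] = find(min(cand))

    for y in range(h):                             # second pass: aggregation
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
print(label_components(img))  # [[1, 1, 0, 2], [0, 1, 0, 2], [0, 0, 0, 2]]
```

In the Burns-Hanson setting the "foreground" test would compare bucket numbers of neighbouring pixels rather than a background value.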
The following is a snippet of the result obtained by the CCA algorithm:
Line creation process and attributes of lines:
Some of the attributes discovered while using orientation along with magnitude to improve line detection are: contrast, which is the intensity change across the edge in the image; width, which is essentially the size of the interval of the profile across which the bulk of the intensity change occurs, so that the line agrees with the edge; and steepness, which is the slope within each interval.
(Figure: the 2-D input image, the CCA result, and the aggregated distinct classes produced as output.)
The image above explains the process of drawing the line after localization. In (a), top left, the gradient region is formed by grouping the pixels. In (b), top right, the pixels included in the line-support region are highlighted by dots. Then in (c) and (d), the straight line is obtained by intersecting a weighted planar fit to the intensities with a horizontal plane representing their average. Finally, the line is overlaid on those pixels.
Results:
The table below explains the results obtained by applying Burns and Hanson's techniques.
A) In this image all elements are taken that have line support ≥ 0.5, i.e. at least 50% of the elements fall in the same bucket.
B) In this image the additional parameter gradient steepness is used, with gradient steepness > 2.5 GL (grey levels), so you can see that the darkness in the image is lessened.
C) Length ≥ 5, or steepness greater than 10 GL.
D) Only segments with steepness > 10 GL are shown in this picture; the straight lines are now more visible, as the low-contrast responses are blocked.
E) All line segments with length ≥ 5 are shown; now mostly only the long lines are visible.
F) In this image only orientations of 3-28 degrees are passed; the rest are blocked.
G) In this image only orientations of 165-177 degrees are passed; the rest are blocked.
H) In this image only orientations of 81-95 degrees are passed; the rest are blocked. With their process we can not only extract lines from the image but also filter them by orientation.
My experiment:
I did a small experiment to observe the effect of the parameters identified in their paper on a live video feed, and to see how they affect edge detection and line detection. For my experiment I used ActionScript 3 and Pixel Bender, as precompiled libraries are available for them. [2]
Figure Description
In this figure the left side is the original feed from the camera at 3 frames per second, and the high-frequency components from the edges are allowed to pass; the edges are bold as well.
[2] Author: eugene; the libraries can be found at http://blog.inspirit.ru/?p=297
In this figure 3 frames per second are again used and the image is binarized, so a lot of dark region is seen, and only the extremely high frequencies are allowed to pass.
In this figure the left image is the input and the right one is the result: all the low frequencies are blocked and only the extremely high-frequency components are allowed to pass.
This figure (left: input, right: output) shows the effect of normalizing the contrast. Edges appear everywhere, because the information is gained by normalization of the contrast values.
This figure shows bold edges. As explained earlier for the ramp edge, the slope from where the edge starts to where it ends is taken as a whole for edge formation, which is why we get bold edges.
Applications of edge and line detection:
Some of the applications of line and edge extraction are:
Robotics and machine vision
Autonomous road followers
GIS systems
Missile technologies, and many more.
Conclusions:
This paper follows a new direction in line extraction by focusing on gradient orientation along with gradient magnitude. While finding the line-support regions, other interesting attributes related to line detection are also identified, which enhance the existing process of edge and line detection and allow those operations to be performed with great accuracy. Much modern research is still based on the work of Burns and Hanson, because their algorithm for straight-line detection is not only accurate but computationally efficient as well. My experiment helped me to understand the process of extracting edges based on those parameters, to see their results, and to alter them to observe their effects.
Bibliography
Burns, J. B., Hanson, A. R., & Riseman, E. M. (1986). Extracting Straight Lines.
Gonzalez, R. C., & Woods, R. E. (2002). Digital Image Processing (2nd ed.). New Jersey, USA: Prentice Hall.