
A LANE MARKER DETECTION ALGORITHM SUITABLE FOR EMBEDDED REAL-TIME SYSTEMS

Wilfried Kubinger, Austrian Research Centers - ARC, Vienna, Austria, email: [email protected]

Christoph Loibl, Univ. of Appl. Sciences Technikum Wien, Vienna, Austria, email: [email protected]

Jürgen Kogler, Austrian Research Centers - ARC, Vienna, Austria, email: [email protected]

ABSTRACT
We present the design of a lane marker detection algorithm suitable for embedded real-time systems. The algorithm was particularly designed for fast execution. The achieved execution time was about 11 ms for a search window of 200x90 pixels on a single state-of-the-art Digital Signal Processor (DSP). The algorithm was integrated into an embedded real-time stereo vision sensor. This sensor was part of the sensor suite of the autonomous vehicle RED RASCAL of the team SciAutonics / Auburn Engineering. RED RASCAL participated in the National Qualification Event (NQE) of the DARPA Urban Challenge 2007 in Victorville, CA, USA. The embedded stereo vision sensor performed very well and detected lane markers and lane borders with high reliability.

KEY WORDS
Computer Vision, Embedded Systems, Robotics, Lane Marker Detection

1. Introduction

Autonomous driving involves many applications, most of them complex. One of them is Computer Vision, which is often used for the navigation of autonomous vehicles. Algorithms and systems in this field help to identify the drivable road and to avoid obstacles and other critical or dangerous surfaces on the vehicle's trajectory. Autonomous driving requires real-time systems in which the execution times of the algorithms are fast enough for the autonomous vehicle to adjust course and speed accordingly [1].

This paper presents the design of a lane marker detection algorithm suitable for embedded real-time systems. The algorithm is an extension of the algorithm presented in [2] and [3], with a partial redesign and several improvements in design and implementation to meet the required performance.

The improved algorithm was integrated into an embedded real-time stereo vision sensor. This sensor was part of the sensor suite of RED RASCAL and contributed to the autonomous driving capabilities of this vehicle. RED RASCAL is a vehicle that was developed for participation in the DARPA Urban Challenge 2007, a competition of unmanned autonomous vehicles in the USA (www.grandchallenge.org), and is described in section 3. More details about the competition itself are given in section 2. These sections are followed by section 4, where our embedded stereo vision system is described in detail, including the implementation and optimization of the lane marker detection algorithm. Section 5 summarizes the achieved results. Section 6 concludes the paper and presents a short outlook on future work.

2. DARPA Urban Challenge

The DARPA Urban Challenge 2007 was the third in a series of competitions organized by the Defense Advanced Research Projects Agency (DARPA) in the USA to foster the development of autonomous robotic ground vehicle technology for various, mostly military, applications [4].

The focus of the 2007 event shifted from the desert to an urban environment. The area selected was the (abandoned) housing area at the former George Air Force Base in Victorville, CA, USA. The challenge for the participating teams was to complete a complex 60-mile urban course with live traffic in less than six hours. During the course, the vehicles had to cope with traffic intersections, live traffic, road following, 4-way stops, lane changes, three-point turns, U-turns, blocked roads, and pulling in and out of a parking lot [5]. The vehicles reached speeds of up to 30 mph during the race, which illustrates why real-time performance was essential for the systems used. Vehicle restrictions were given by the DARPA officials: the vehicle had to weigh between 2000 lbs and 30000 lbs and had to have a wheelbase of 6 ft or more; the height was limited to 12 ft and the width to 9 ft.

Figure 1. RED RASCAL (left) and the Team SciAutonics / Auburn Engineering (right)


Of the 89 teams which initially applied, 35 teams were invited to enter the National Qualification Event (NQE) and to compete in a rigorous eight-day vehicle testing period. The teams had to demonstrate the safety features and robust driving of their vehicles in different situations and scenarios. Of these 35 teams, only 11 managed to qualify for the final event [6, 7, 8].

3. The Vehicle RED RASCAL

The team SciAutonics / Auburn Engineering successfully participated in the Grand Challenges 2004 and 2005 with their ATV-based vehicle RASCAL [9].

For 2007, the team built a new vehicle based on the lessons learned from the former challenges. The new vehicle was a modified Ford Explorer adapted for "drive-by-wire". The vehicle remained "street legal" and readily drivable on public highways. It was called RED RASCAL to reflect its close relation to its predecessor. The vehicle and the team SciAutonics / Auburn Engineering are shown in Fig. 1.

3.1 Vehicle Architecture

The high-level architecture of RED RASCAL is shown in Fig. 2. The vehicle is at the bottom. On the lower right, sensors send data up to a series of processing stages. On the lower left, actuators on the vehicle receive commands from the control logic. In the center is the Vehicle State Estimator, which continually makes optimal estimates of the vehicle's position, velocity, heading, roll, pitch, and yaw, based on inputs from the GPS (Global Positioning System), IMU (Inertial Measurement Unit), vehicle sensors, and obstacle sensors. The vehicle state estimates are used by all parts of the control system, including the Path Planner.

At the top of the diagram is the Mission State Estimator. It determines what phase of a mission the vehicle is in and identifies when unique behaviors such as passing or parking are required. When the vehicle is driving in an urban environment, the key challenge to be addressed is situational awareness. Not only must the vehicle follow a path and avoid moving obstacles, it must also demonstrate correct behavior depending on the situation. The estimator treats each type of situation as a state and uses state diagrams to cope with behaviors and transitions [10].
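As an illustration only: the paper does not publish the actual states or transitions of the Mission State Estimator, so the following minimal C sketch merely shows how such a situation-based state machine could be structured. The state names, the situation inputs, and the transition rules are hypothetical.

```c
/* Hypothetical mission states; the real estimator in [10] defines its own set. */
typedef enum {
    STATE_LANE_FOLLOWING,
    STATE_INTERSECTION,
    STATE_PASSING,
    STATE_PARKING
} mission_state_t;

/* Simplified situation inputs; in the real system these come from perception. */
typedef struct {
    int at_intersection;    /* approaching a 4-way stop or intersection */
    int obstacle_ahead;     /* slower or stopped vehicle blocking the lane */
    int parking_requested;  /* mission file asks for a parking maneuver */
} situation_t;

/* One step of a hypothetical transition function. */
mission_state_t next_state(mission_state_t s, const situation_t *in)
{
    switch (s) {
    case STATE_LANE_FOLLOWING:
        if (in->at_intersection)   return STATE_INTERSECTION;
        if (in->obstacle_ahead)    return STATE_PASSING;
        if (in->parking_requested) return STATE_PARKING;
        return STATE_LANE_FOLLOWING;
    case STATE_INTERSECTION:
        return in->at_intersection ? STATE_INTERSECTION : STATE_LANE_FOLLOWING;
    case STATE_PASSING:
        return in->obstacle_ahead ? STATE_PASSING : STATE_LANE_FOLLOWING;
    case STATE_PARKING:
        return in->parking_requested ? STATE_PARKING : STATE_LANE_FOLLOWING;
    }
    return STATE_LANE_FOLLOWING;
}
```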

3.2 Sensor Suite

Sensors were needed for both localization (GPS, IMU, etc.) and perception (obstacle and road detection). A strategy of redundancy was employed to provide measurements from some sensors when others were not available or in the case of the failure of a particular sensor. The main sensor for vehicle localization was a single-antenna Navcom SF-2050 DGPS (Differential GPS) receiver with Starfire satellite-based corrections provided by Navcom. A Honeywell HG1700 tactical-grade six-degree-of-freedom IMU was used to measure translational accelerations and angular rates. A NovAtel Beeline RT20 dual-antenna GPS system was used for obtaining the initial vehicle orientation, as well as longitudinal and lateral velocities [10].

Figure 2. RED RASCAL overall vehicle architecture

The vehicle used two types of environmental sensors for obstacle and vehicle avoidance: LIDAR (Light Detection and Ranging) sensors and the embedded stereo vision sensor. The capabilities of these sensors overlapped to provide the redundancy desired to protect against sensor failure and to improve the reliability of measurements. The embedded stereo vision system was used as the near- and mid-range (13 ft to 65 ft) obstacle sensing system.

4. Embedded Stereo Vision Sensor

Besides detecting obstacles up to a distance of 65 ft, the embedded stereo vision sensor also provided information about lane markers and lane borders. However, the calculations needed for the algorithm were very time consuming, and it was difficult to fulfill the real-time requirements. The original algorithm needed about 45 ms per frame, leaving only 45 ms for the obstacle detection algorithm. Therefore, we decided to redesign the algorithm to significantly improve its performance and to use the freed processor budget for obstacle detection to increase its robustness and reliability.

4.1 System Concept of the Sensor

The stereo vision sensor consisted of a pair of Basler A601f monochrome cameras with a resolution of 656 (H) x 491 (V) pixels and a quantization of 8 bits/pixel. Both cameras were connected by a 400 Mbit FireWire interface to the embedded vision system. The frame rate of the sensor was 10 fps (frames per second) to cope with the real-time requirements of the vehicle. Of the available calculation time of 100 ms, about 90 ms were allocated to the vision algorithms and the remaining 10 ms were used by system tasks of the embedded vision system.

The embedded system was based on a Texas Instruments TMS320C6414 Digital Signal Processor (DSP) running at 1 GHz. The operating system was DSP/BIOS II from Texas Instruments. The embedded vision system was responsible for the synchronous acquisition of both images, for the execution of the computer vision algorithms, and for the communication with the vehicle's central computer via an Ethernet interface using UDP (User Datagram Protocol) sockets [11]. The stereo head of the sensor was mounted on the dashboard, while the DSP box was located in the trunk of the autonomous vehicle.
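The paper only states that detection results were published over UDP sockets; the message layout and field names below are hypothetical, and the sketch uses POSIX sockets rather than the DSP/BIOS network API actually used on the sensor.

```c
/* Minimal POSIX-socket sketch of publishing one detected line over UDP.
 * LaneMessage, the receiver address, and the port are assumptions; the
 * actual sensor ran on DSP/BIOS with its own network stack. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

typedef struct {
    float r;        /* line distance parameter from the Hough transform */
    float theta;    /* line angle parameter (radians)                    */
} LaneMessage;

int send_lane_message(const LaneMessage *msg, const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port   = htons((unsigned short)port);
    inet_pton(AF_INET, ip, &dst.sin_addr);

    ssize_t n = sendto(fd, msg, sizeof(*msg), 0,
                       (struct sockaddr *)&dst, sizeof(dst));
    close(fd);
    return (n == (ssize_t)sizeof(*msg)) ? 0 : -1;
}
```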

Figure 3. The Embedded Stereo Vision Sensor integrated into RED RASCAL (left – stereo head, right – DSP box).

The stereo vision sensor included a debugging interface for real-time logging of the left sensor input image. Extracted obstacles, lane markers, lane borders, and some internal states were also logged for field-testing and evaluation.

4.2 Obstacle Detection Algorithm

The main task of the embedded stereo vision sensor was the detection of obstacles, lane markers, and lane borders in front of the vehicle. For obstacle detection, a fast stereo matching method was used. Furthermore, the bouncing of the vehicle was predicted and compensated in the stereo images to improve the detection and classification of obstacles. A detailed description of this obstacle detection algorithm was presented in [9] and [11].

4.3 Lane Marker Detection Algorithm

This lane marker detection algorithm was an extension of the algorithm presented in [2] and [3]. For the extraction of lane markers and lane borders, only the left camera image was used. The original version of this algorithm was able to detect lane boundaries, but it was slow and also had problems with weak edges. To overcome these performance bottlenecks and quality problems and to push the algorithm to real-time performance, we redesigned it. The layout of the new and improved version of the algorithm is shown in Fig. 4.

For the extraction of lanes, five independent steps were necessary. First, all potential edges were enhanced using a Finite Impulse Response (FIR) filter [12]. In the second step, the image was converted into a binary image; the threshold for this operation was calculated automatically for each image using the Otsu algorithm [13]. In the next step, the binary image was searched for lines, which represented possible lane markers, using the Hough transform [14]. Local maxima in the Hough matrices were an indicator for lines with specific parameters; here, it was important to identify candidates for possible lanes in the image. In the last step, all found lines were analyzed by a decision engine, which eliminated those that could not be part of the lane boundaries.

The difference equation of an FIR filter of n-th order is

y[k] = \sum_{i=0}^{n} \beta_i \, u[k-i] \qquad (1)

where y[k] is the output signal, u[k-i] is the input signal at position k-i, and \beta_i denotes the n filter coefficients. Since we were looking mainly for vertical lines (we were interested in straight lanes in front of the vehicle, not in lanes crossing our way), Equ. 1 can be rewritten as

P_{out} = \sum_{i=0}^{n} \beta_i \, P_{in}[x-i, y] \qquad (2)

where x is the column index and y is the row index. To achieve a very fast implementation we decided to use a 3rd-order filter with coefficients \beta_0 = -1, \beta_1 = 0, and \beta_2 = 1. Therefore, the calculation was reduced to a single subtraction:

P_{out}[x, y] = P_{in}[x, y] - P_{in}[x+2, y] \qquad (3)

Fig. 5 presents sample results of the FIR filter applied to a street scene.
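As a minimal illustration of Equ. 3, the following C sketch applies the simplified filter to an 8-bit grayscale image. The row-major buffer layout and the use of the absolute difference for the output are our assumptions, not details given in the paper.

```c
#include <stdlib.h>

/* Sketch of the simplified edge-enhancement step of Equ. 3: the filter
 * reduces to a single subtraction per pixel. Here the absolute difference
 * is stored (one possible choice) so that both rising and falling edges
 * survive the later thresholding. */
void fir_edge_filter(const unsigned char *in, unsigned char *out,
                     int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int d = 0;
            if (x + 2 < width)
                d = (int)in[y * width + x] - (int)in[y * width + x + 2];
            out[y * width + x] = (unsigned char)abs(d);
        }
    }
}
```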

The goal of the auto-threshold block was to convert the gray-scale image into a binary image. All pixels with gray values below a threshold were set to zero, all others were set to one. The most important operation was to find a new optimal threshold for each image; based on this threshold, the image could then be converted into a binary one.

We used a technique introduced in [13] to calculate an optimal threshold. By using this threshold, the image was divided into two classes and the within-class variance \sigma_w^2(t) was minimized:

\sigma_w^2(t) = w_1(t)\,\sigma_1^2(t) + w_2(t)\,\sigma_2^2(t) \qquad (4)

The weights w_i are the probabilities of the two classes separated by a threshold t, and \sigma_i^2 are the variances of these classes. This optimal threshold t was used to divide the image into a class containing potential lane border elements and a background class:

T(P(x, y)) = \begin{cases} 1 & \text{if } P(x, y) \ge t \\ 0 & \text{if } P(x, y) < t \end{cases} \qquad (5)

The thresholding was implemented using the packed data processing feature of the DSP [15]. Using that processing method, four pixels were processed in parallel and the operations were sped up significantly. Fig. 6 presents the result of thresholding an image with an automatically generated threshold using Otsu's method.

Figure 4. Overview of the lane marker detection algorithm.
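The following C sketch shows one common way to implement Otsu's threshold selection [13] together with the binarization of Equ. 5. It is a plain scalar illustration with our own variable names, not the packed-data DSP code; Otsu's criterion of minimizing the within-class variance (Equ. 4) is computed here in its equivalent form of maximizing the between-class variance.

```c
/* Otsu threshold selection for an 8-bit grayscale image. */
int otsu_threshold(const unsigned char *img, int n_pixels)
{
    long hist[256] = {0};
    for (int i = 0; i < n_pixels; i++)
        hist[img[i]]++;

    double sum = 0.0;                        /* total intensity sum */
    for (int v = 0; v < 256; v++)
        sum += (double)v * hist[v];

    double sum_b = 0.0, w_b = 0.0, best_var = -1.0;
    int best_t = 0;
    for (int t = 0; t < 256; t++) {
        w_b += hist[t];                      /* class-1 weight       */
        if (w_b == 0.0) continue;
        double w_f = (double)n_pixels - w_b; /* class-2 weight       */
        if (w_f == 0.0) break;
        sum_b += (double)t * hist[t];
        double m_b = sum_b / w_b;            /* class-1 mean         */
        double m_f = (sum - sum_b) / w_f;    /* class-2 mean         */
        double var = w_b * w_f * (m_b - m_f) * (m_b - m_f);
        if (var > best_var) { best_var = var; best_t = t; }
    }
    return best_t;
}

/* Binarization according to Equ. 5: 1 for P(x,y) >= t, 0 otherwise. */
void binarize(const unsigned char *in, unsigned char *out, int n_pixels, int t)
{
    for (int i = 0; i < n_pixels; i++)
        out[i] = (in[i] >= t) ? 1 : 0;
}
```

On the DSP, the comparison of Equ. 5 was applied to four packed pixels at a time [15]; the scalar loop above only shows the logic.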

Figure 5. Image before (left) and after (right) the FIR filter block.

The Hough transform is a feature extraction technique, classically used for the identification of lines, but it has also been extended to identify arbitrary shapes. Each line can be represented by the parameters r and Θ [14]. Each point in the original image corresponds to a sinusoidal curve in the 2-dimensional Hough space (r, Θ). Therefore, all found edge pixels (edge elements) contribute additional sinusoidal curves to the Hough space. In the Hough space, intersections are accumulated and local maxima are searched for. A local maximum represents a potential line in the original image. In Fig. 7, an example of a Hough space, some found maxima, and the corresponding lines in the original image are shown. A more detailed description of the Hough transform and the local maxima search can be found in [2] and [3].
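The following C sketch illustrates the accumulation step of the Hough transform for lines as described above, assuming the binary image from the thresholding stage (pixel value 1 marks an edge element). The discretization (1-degree angle steps, 1-pixel r bins) and the accumulator layout are illustrative choices, not the values used on the DSP.

```c
#include <math.h>
#include <string.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

enum { N_THETA = 180 };   /* theta sampled in 1-degree steps over [0, pi) */

/* Fill the accumulator acc[n_r][N_THETA]; r is shifted by r_max so that
 * negative distances map to valid rows. For a 200x90 search window,
 * n_r of roughly 2*sqrt(200^2 + 90^2) bins is sufficient. */
void hough_transform(const unsigned char *binary, int width, int height,
                     unsigned short *acc, int n_r)
{
    int r_max = n_r / 2;
    memset(acc, 0, (size_t)n_r * N_THETA * sizeof(acc[0]));

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (!binary[y * width + x])
                continue;                          /* only edge elements vote */
            for (int t = 0; t < N_THETA; t++) {
                double theta = t * M_PI / N_THETA;
                int r = (int)lround(x * cos(theta) + y * sin(theta));
                if (r >= -r_max && r < r_max)
                    acc[(r + r_max) * N_THETA + t]++;  /* accumulate one vote */
            }
        }
    }
}
```

Finding the local maxima then amounts to scanning the accumulator for cells whose vote count exceeds a threshold and dominates their neighborhood; the details of the search used on the DSP are described in [2] and [3].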

Figure 6. Original image (left), after the FIR filter block (middle), and after thresholding using Otsu's method (right).

Finally, the Decision Engine analyzed all found lines and eliminated those which were not part of a lane. Assuming that the vehicle was on the lane, the line parameters could be limited as shown in Fig. 8. Based on a large number of field experiments with the vehicle, we experimentally determined the best values for α and d:

\alpha \le 15^{\circ} \qquad (6)

d \ge \frac{w}{4} \qquad (7)

where w = 7 ft is the width of our vehicle RED RASCAL. We used a method introduced in [16] to transform the planar road scene into the image. This model of the lane border and the given parameters proved useful for our application.
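The paper does not give the decision rules beyond the two constraints above, so the following C sketch only illustrates how candidate lines might be filtered against Equ. 6 and Equ. 7. The RoadLine representation and the assumption that the lines have already been transformed into road coordinates (via the method of [16]) are ours.

```c
#include <math.h>

/* Candidate line in road coordinates: alpha_deg is the angle relative to
 * the vehicle heading, d_ft the lateral distance from the vehicle
 * centerline in feet. Both fields are assumptions for this sketch. */
typedef struct {
    double alpha_deg;
    double d_ft;
} RoadLine;

#define VEHICLE_WIDTH_FT 7.0   /* w: width of RED RASCAL */

/* A line is accepted as a lane border only if it satisfies both limits. */
int is_lane_border(const RoadLine *line)
{
    return fabs(line->alpha_deg) <= 15.0 &&             /* Equ. 6 */
           fabs(line->d_ft) >= VEHICLE_WIDTH_FT / 4.0;  /* Equ. 7 */
}

/* Compact in place; returns the number of lines kept. */
int decision_engine(RoadLine *lines, int n)
{
    int kept = 0;
    for (int i = 0; i < n; i++)
        if (is_lane_border(&lines[i]))
            lines[kept++] = lines[i];
    return kept;
}
```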

5. Results

We integrated the new modules into our embedded stereo vision sensor. Furthermore, we profiled the code to compare the achieved performance with that of the original implementation presented in [2, 3]. The results of the profiling are shown in Tab. 1. As shown there, the algorithm is now approximately four times faster than before, thanks to the new method of edge extraction and thresholding. For future optimizations, both the Hough transform and the find-local-maxima method are the main candidates, since they now consume more than 90% of the overall processor budget.


Figure 7. Image after thresholding (left), resulting maxima in Hough space (middle), and corresponding lines in the original image (right).

Figure 8. Model of a Lane Border, used by the Decision Engine.

During the qualification runs for the Urban Challenge event at Victorville, CA, USA, the embedded stereo vision sensor was used for obstacle and lane marker detection. Fig. 9 presents an exemplary output of detected lane markers and obstacles on a road with live traffic.

Due to an accident in the second test run, which led to serious mechanical damage to the vehicle, RED RASCAL was not able to qualify for the final event on November 3, 2007, the Urban Grand Challenge. However, our embedded stereo vision sensor performed very well during all pre-tests and test runs. The sensor proved to be a valuable assistance system for autonomous vehicles.

6. Conclusion

This paper presented a lane marker detection algorithm used in an embedded stereo vision sensor. The algorithm was designed to detect lane markers and lane borders in front of an autonomous vehicle and, furthermore, had to fulfill real-time requirements. The algorithm was integrated, as part of the sensor, into the autonomous vehicle RED RASCAL of the team SciAutonics / Auburn Engineering. The vehicle participated in the National Qualification Event (NQE) of the DARPA Urban Challenge 2007 in Victorville, CA, USA. The embedded stereo vision sensor performed very well and detected lane markers and lane borders with high reliability.

Figure 9. Results of Lane Marker Detection. The rectangle marks the 200x90 region-of-interest, where lane markers are searched for.

Since most of the processing power is now consumed by the Hough transform and the search for local maxima, a deeper analysis of both blocks would make sense in order to reduce the processing time. The Hough transform itself could be replaced by another transform, such as the Radon transform, to improve both the performance and the quality of the line detection (e.g., to better cope with noise artifacts).

Additionally, we have recently started two new projects on advanced driver assistance systems for next-generation intelligent cars. In these projects we plan to disseminate and extend the technology used here for next-generation intelligent sensor devices.


Module              New approach   Old approach
FIR filter          0.489 ms
Otsu method         0.226 ms       34.5 ms (combined)
Thresholding        0.023 ms
Hough transform     6.938 ms       6.9 ms
Find local maxima   3.400 ms       3.4 ms
Total               11.076 ms      44.8 ms

Table 1. Comparison of the performance of the modules of the lane marker detection algorithm. The old approach used a different way of edge detection; thus, only the combined calculation time for that part (edge enhancement and thresholding) can be given. The decision engine was not used in the original algorithm, therefore no comparison is made for it.

Acknowledgments

The authors would like to thank Mr. Daniel Baumgartner and Mr. Christoph Schonegger from the University of Applied Sciences Technikum Wien, Austria, and all members of the SciAutonics / Auburn Engineering team, Thousand Oaks, CA, USA, for their support during this project.

The research leading to these results has received funding from the European Community's Sixth and Seventh Framework Programmes (FP6/2003-2006, FP7/2007-2013) under grant agreements no. FP6-2006-IST-6-045350 (robots@home) and no. ICT-216049 (ADOSE).

References

[1] R. Behringer, V. Sundareswaran, B. Gregory, R. Elsley, B. Addison, W. Guthmiller, R. Daily, and D. Bevly, "The DARPA Grand Challenge - Development of an Autonomous Vehicle," in Proceedings of the IEEE Symposium on Intelligent Vehicles (IV'04), Parma, Italy, June 14-17, 2004.

[2] J. Kogler, H. Hemetsberger, B. Alefs, W. Kubinger, and W. Travis, "Embedded Stereo Vision System for Intelligent Autonomous Vehicles," in Proceedings of the IEEE Intelligent Vehicles Symposium (IV'06), Tokyo, Japan, June 13-15, 2006.

[3] J. Kogler, H. Hemetsberger, W. Kubinger, and S. Borbely, "Model-based Design of an Embedded Vision Application: A Field Report," in Proceedings of the Fourth IASTED International Conference on Signal Processing, Pattern Recognition, and Applications (SPPRA'07), Innsbruck, Austria, February 14-16, 2007, pp. 233-238.

[4] S. Gourley, "DARPA 'Urban Challenge' results show the way forward," BATTLESPACE C4ISTAR TECHNOLOGIES, pp. 13-20, 2007.

[5] DARPA, "Urban Challenge Rules," http://www.darpa.mil/grandchallenge/, 2007.

[6] D. Ferguson, C. Baker, M. Likhachev, and J. Dolan, "A Reasoning Framework for Autonomous Urban Driving," in Proceedings of the 2008 IEEE Intelligent Vehicles Symposium (IV'08), Eindhoven, The Netherlands, June 4-6, 2008, pp. 775-780.

[7] A. Broggi, A. Cappalunga, C. Caraffi, S. Cattani, S. Ghidoni, P. Grisleri, P. Porta, M. Posterli, P. Zani, and J. Beck, "The passive sensing suite of the TerraMax autonomous vehicle," in Proceedings of the 2008 IEEE Intelligent Vehicles Symposium (IV'08), Eindhoven, The Netherlands, June 4-6, 2008, pp. 769-774.

[8] J. Ziegler, M. Werling, and J. Schroder, "Navigating car-like Robots in unstructured Environments using an Obstacle sensitive Cost Function," in Proceedings of the 2008 IEEE Intelligent Vehicles Symposium (IV'08), Eindhoven, The Netherlands, June 4-6, 2008, pp. 787-791.

[9] W. Travis, D. Daily, D. Bevly, K. Knoedler, R. Behringer, H. Hemetsberger, J. Kogler, W. Kubinger, and B. Alefs, "SciAutonics-Auburn Engineering's Low-Cost High-Speed ATV for the 2005 DARPA Grand Challenge," Journal of Field Robotics, vol. 23, no. 8, pp. 579-597, 2006.

[10] D. Bevly and J. Porter, "Development of the SciAutonics / Auburn Engineering Autonomous Car for the Urban Challenge," SciAutonics, LLC and Auburn University College of Engineering Technical Report, June 1, 2007.

[11] H. Hemetsberger, J. Kogler, W. Travis, R. Behringer, and W. Kubinger, "Reliable Architecture of an Embedded Stereo Vision System for a Low Cost Autonomous Vehicle," in Proceedings of the IEEE Intelligent Transportation Systems Conference (ITSC'06), Toronto, Canada, September 17-20, 2006, pp. 945-950.

[12] R. Gonzalez and R. Woods, Digital Image Processing, 2nd Edition, Prentice Hall, Pearson Education International, 2002.

[13] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.

[14] R. Duda and P. Hart, "Use of the Hough Transformation to Detect Lines and Curves in Pictures," Communications of the ACM, vol. 15, pp. 11-15, 1972.

[15] C. Zinner, W. Kubinger, and R. Isaacs, "PfeLib: A Performance Primitives Library for Embedded Vision," EURASIP Journal on Embedded Systems, vol. 2007, no. 1, pp. 70-83, 2007.

[16] E. Dickmanns, Dynamic Vision for Perception and Control of Motion, Springer-Verlag London, 2007.


