
Advanced Driver Assistance System

Final Project Report

Spring Semester 2017

Full Report

By

Drew DeVos

Derek Isabelle

Jordan Tunnell

Department of Electrical and Computer Engineering

Colorado State University

Fort Collins, Colorado 80523

Project advisors: Dr. Sudeep Pasricha, Dr. Thomas Bradley, Graduate Student Vipin Kumar Kukkala

Approved by: Dr. Sudeep Pasricha

Abstract

There has been a major shift in the car industry in recent years. Almost all new vehicles

come equipped with some system that will assist the driver. The reason for this is that ADAS

systems can save lives. Systems such as automatic braking, smart cruise control, and even full

autonomous control give the user additional points of safety while driving. By developing tools

and systems that alert the driver of possible hazards, many accidents can potentially be avoided.

This paper outlines the continuation of the work done by last year’s EcoCAR3 ADAS (Advanced

Driver Assistance System) team at Colorado State University. The scope of the project

encompasses one subsystem, namely ADAS, from the larger EcoCAR3 competition guidelines.

The goal of EcoCAR3 is to develop a 2016 Chevrolet Camaro into a hybrid-electric car,

maintaining performance while increasing fuel efficiency. ADAS goals encompass developing a

standalone, real-time system for detecting and tracking various roadway objects and features as

well as developing a simplified version for an embedded processor, to be used in the vehicle later.

ADAS project goals were to continue to build upon the computer vision algorithms designed last

year and ultimately to implement a subset of the algorithms on a laptop system compatible with a

custom designed stereo vision camera system. In addition, the project also focused on porting and

developing code for the provided NXP S32V board. Under the supervision of Dr. Sudeep Pasricha

and Vipin Kumar, the ECE graduate student advisor for EcoCAR, the team implemented computer

vision methods that assist drivers to be safer on the road.

The team designed and calibrated a Stereo Vision Camera system that provides estimation

of distance for road objects. Using the MATLAB stereo vision toolkit, the two video feeds, once calibrated, rectified, and converted into a point cloud, provide depth information for captured

images. The team experimented with rig implementations and camera spacing to provide a suitable

setup. Raw driving footage was collected and a ground truth database was developed. The team

also gathered data from other sources online. This data was used for training and testing object

detectors which utilize various machine learning algorithms. The team combined algorithms for

vehicle, sign, pedestrian, and lane detection into a single unified application designed to record a

video and annotate it with driving data at 5 frames per second. Lane detection was implemented

on the NXP S32V board by compiling MATLAB code into MEX code and tailoring the code to suit the embedded platform.

After experimenting with two stereo camera setups, it was determined that the best tradeoff for accurate capture was to use the Stereolabs ZED stereo vision camera over a custom implementation using two 1080p Logitech webcams. The ZED camera provided hardware

frame synchronization which was determined to be highly valuable even though the ZED’s range

lies 5m short of the competition target. Experimentation and discussions with computer vision

experts led the team to experiment with alternate methods for object detection outside of the

methods suggested by the competition. Some work with convolutional neural networks was done

to improve the accuracy obtained from cascade object detection. Time was spent profiling runtime

performance and efforts were made to increase efficiency by reducing computational bottlenecks.

The team is currently preparing for the final competition in May 2017. The final year for the

EcoCAR3 ADAS project will take place next year with an emphasis on further application

refinement and vehicle integration.

Contents

A. Introduction .............................................................................................................................................. 1

A.1 Overview of ADAS ............................................................................................................................... 1

A.2 EcoCar3 and ADAS .............................................................................................................................. 2

A.3 Deliverable Description for ADAS ....................................................................................................... 2

B. Summary of Previous Work ...................................................................................................................... 3

B.1 Color Space Conversion ...................................................................................................................... 3

B.1.1 Program Flow ............................................................................................................................... 4

B.2 Filtering and Blob detection ............................................................................................................... 4

B.3 Lane Detection .................................................................................................................................... 5

C. Design........................................................................................................................................................ 5

C.1 Stereo Rig ............................................................................................................................................ 5

C.1.1 Physical Design ............................................................................................................................. 5

C.1.2 Calibration .................................................................................................................................... 7

C.1.3 Distance Estimations .................................................................................................................... 8

C.1.4 Video Capture and Ground Truth Labeling .................................................................................. 9

C.2 Object Detection ............................................................................................................................... 10

C.2.1 Vehicle Classifier Training .......................................................................................................... 11

C.2.2 Pedestrian Classifier Training ..................................................................................................... 12

C.2.3 Traffic Sign Detection ................................................................................................................. 12

C.2.4 Lane Detection ........................................................................................................................... 13

C.3 Unified Real-time Application ........................................................................................................... 14

C.4 Embedded System ............................................................................................................................ 15

D. Future Work and Conclusion .................................................................................................................. 16

D.1 Work Until Competition ................................................................................................................... 16

D.2 Continuation ..................................................................................................................................... 16

D.3 Future Work Recommendation ........................................................................................................ 16

D.3.1 Optimizing Current Code ........................................................................................................... 16

D.3.2 Port to embedded system ......................................................................................................... 16

D.3.3 Real Time System ....................................................................................................................... 16

D.3.4 Add Driving ................................................................................................................................ 17

D.4 Conclusion ........................................................................................................................................ 17

D.5 Lessons Learned ............................................................................................................................... 17

E. References............................................................................................................................................... 19

F. Bibliography ............................................................................................................................................ 20

G. Acknowledgments .................................................................................................................................. 20

Appendix A – Abbreviations ........................................................................................................................ 21

Appendix B – Budget ................................................................................................................................... 22

Appendix C - Project Plan Evolution ........................................................................................................... 24

Appendix D – Ethics in ADAS ....................................................................................................................... 29

Appendix E – FMEA ..................................................................................................................................... 31

Appendix F – DTVC ...................................................................................................................................... 32

Appendix G – Car Classification .................................................................................................................. 37

Appendix H – Code Profile .......................................................................................................................... 38

List of Figures

Figure 1 - 3D Representation of HSV Color Space (left) and YCbCr Color Space (right) .......................... 3

Figure 2 - Program Flow ............................................................................................................................... 4

Figure 3 - Custom Built Stereo Vision System ............................................................................................. 5

Figure 4 - ZED camera rig .............................................................................................................................. 6

Figure 5 - Backup Logitech Camera Rig ......................................................................................................... 6

Figure 6 - Stereo Vision Checkerboard Calibration ...................................................................................... 7

Figure 7 - Unrectified Image ......................................................................................................................... 8

Figure 8 - Rectified Image ............................................................................................................................ 8

Figure 9 - Disparity Map ................................................................................................................................ 9

Figure 10 - 3D Point Cloud ............................................................................................................................ 9

Figure 11 – Cascade Classifier Model Diagram ......................................................................................... 10

Figure 12 - Vehicle Detection 1 ................................................................................................................... 11

Figure 13 - Vehicle Detection 2 ................................................................................................................... 11

Figure 14 - Pedestrian Detection with False Positives ................................................................................ 12
Figure 15 - Pedestrian Detection .................................................................................................................. 12

Figure 16 - Stop Sign Detection .................................................................................................................. 13
Figure 17 - Color Thresholding .................................................................................................................... 13

Figure 18 - Lane Detection 1 ........................................................................................................................ 14
Figure 19 - Lane Detection 2 ........................................................................................................................ 14

Figure 20 - Unified Application Output showing vehicle detection and distance coloring ........................ 14

Figure 21 - Algorithm for determining isInLane .......................................................................................... 15

Figure 22 - Old Timeline .............................................................................................................................. 24

Figure 23 - Example Sprint from Jira Board ................................................................................................ 27

Figure 24 - Example Backlog from Jira Board.............................................................................................. 27

Figure 25 - Example Sprint Report from Jira ............................................................................................... 28

Figure 26 - Commit graph from GitLab ....................................................................................................... 28

Figure 27 - Git Repository Log ..................................................................................................................... 28

Figure 28 - Work Flow ............................................................................................................................... 32

Figure 29 - Car Orientations ....................................................................................................................... 37

Figure 30 - Profile Summary Matlab ........................................................................................................... 38

Figure 31 - MATLAB function bottlenecks .................................................................................................. 39

Figure 32 - Profile Stereovision ................................................................................................................... 39

List of Tables

Table 1 - Stereo Vision Camera Calibration Test Example .......................................................................... 8

Table 2 - Estimated Budget ......................................................................................................................... 22

Table 3 - Actual Spending .......................................................................................................................... 23

Table 4 - Team Due Dates and Major Deliverables .................................................................................... 25

Table 5 - Team Timeline ............................................................................................................................. 26

Table 6 - FMEA .......................................................................................................................................... 31

Table 7 - Test1 Vehicle Detection .............................................................................................................. 33

Table 8 - Test2 Distance Estimation ........................................................................................................... 34

Table 9 - Test3 Lane Detection ................................................................................................................... 35

Table 10 - Test4 NXP Board ...................................................................................................................... 36


A. Introduction

A.1 Overview of ADAS

The Advanced Driver Assistance System, or ADAS, is an embedded system used

mainly as a safety device for automobile drivers. It is also the first step to self-driving, self-

sufficient cars, much like what Google and Tesla are developing now. Features of ADAS systems

include Smart Cruise Control, Automatic Braking, Lane Departure Alert, and many others. To

accomplish these goals computer vision is needed to detect and track various objects one would

see while operating a vehicle. In many ADAS systems today, a multitude of sensors are used to

develop the features mentioned above, including LIDAR, radar, infrared, and stereo cameras. However, the only sensor that can determine what an object is with any degree of success is the camera; the other sensors are used to determine speed and distance.

In all ADAS systems, this data is then compiled and analyzed in real time, and the necessary information is communicated to the driver in the form of buzzers, a HUD, and other graphical means. The focus of this project is to utilize the camera to determine what the objects are, and with a

stereo vision camera, determine how far they are from the system.

Automobile accidents are a severe hazard on today’s roads. They frequently cause damage

to property, vehicles, and, most unfortunately, people. Accidents occur quickly and cause thousands of injuries and deaths every year. According to the National Safety Council, motor vehicle crashes are the primary cause of death for children and young adults ages 5 to 24,

as well as the #2 cause of death for adults. These fatalities are often caused by distracted drivers

or even drivers under the influence of drugs such as alcohol. Designing a tool that will mitigate

these fatalities and alert drivers of any potential hazards is a top priority for this project. The goal

for this project is to make a robust PC system that can operate in real time. This PC component

will utilize computer and stereo vision to detect and identify key objects such as lanes, pedestrians,

signs, and vehicles.

An embedded system provided by the EcoCAR3 competition will then be utilized to run a

subset of the vision algorithms developed. The use of the embedded system is ideal because of its

low cost and lower power usage.

Driving can be dull, especially when driving for long periods of time. That is why audio

feedback can be beneficial. A sharp beeping sound or light indication can cut through potentially

monotonous noises and wake the driver from daydreaming. Accurate visual feedback will also

make the driver more aware of their surrounding environment and alleviate some worry of crashing

into something. If this is not enough to cause the driver to drive safely, ADAS will eventually be

able to determine the driver is not reacting in a proper and timely manner and try to correct the

situation on its own. This is far down the timeline of the project and won’t be implemented anytime

soon. For more on safety and ADAS systems, see Appendix D.

Computer Vision unfortunately still has some limitations. The only sensor that can classify

an object is a camera, so we are bound by the operating condition of the sensor itself. Several

notable issues include hazardous weather conditions and lighting. At times such as dawn and dusk, sunlight shining directly into the lens of the camera will render an image that is useless. Similar conditions occur when entering and exiting a tunnel. Snow is an example of weather that can completely block necessary information such as lane markings and even important

signs. It is critical that current ADAS systems know their limitations and rely on software that can

determine these hazards to avoid potentially fatal misidentifications.

A.2 EcoCar3 and ADAS

EcoCAR3 is a competition sponsored by the U.S. Department of Energy, GM, Argonne National Laboratory, MATLAB, NXP, and others to design the car of the future. Each competition is split into four-year cycles; we are currently in the third year of the third EcoCAR competition.

The goal of EcoCAR3 is to develop an energy efficient high performance vehicle. Given a Camaro,

the CSU team has developed a hybrid electric vehicle and is competing against 15 other teams.

This is the second year of ADAS in the EcoCAR3 competition. As a part of the competition we

are given a portion of the budget.

A.3 Deliverable Description for ADAS

As a part of the EcoCAR3 competition, the ADAS team was required to submit two deliverables. The first major deliverable for the EcoCAR competition was the Stereo Vision Tool check, due on January 19th. The second was the hardware tool check, due in May of this year, where the goal is to refine the PC application and port code over to the competition-provided board.

The requirements for the upcoming May 14th hardware tool check are as follows [1]:

Vehicle and Lane Identification

o Identify each vehicle in the roadway with a green bounding box, the estimated distance to the vehicle, and a unique and persistent number label.

o If any vehicle is ahead in the same lane as the test vehicle, change the bounding box color to yellow.

o If a vehicle is ahead in the same lane and is less than 25 meters away, change the bounding box color to red.

Pedestrian and Sign Identification

o Identify each pedestrian in the frame with a blue bounding box, the estimated distance to the pedestrian, and a unique numbered label.

o Identify each stop, yield, or speed limit sign in view with a white bounding box, the estimated distance to the sign, and a unique numbered label.

System Fault Detection

o If either camera is obscured by a foreign object at any point during the test, display text at the bottom left corner of the screen noting which camera is obscured.

S32V Lane Identification

o Overlay lane markings for the current lane with a solid blue line starting from the bottom of the frame.

o Overlay the school name on the bottom left corner of the screen.

o Save a copy of the video to a flash drive to be collected.


B. Summary of Previous Work

B.1 Color Space Conversion

In computer vision, we often want to convert from an RGB color model to one that

separates color information from intensity. A primary reason for this is to filter the image from

different lighting conditions. In the HSV and YCbCr color spaces, hue holds the color information

for an HSV image, while Cb (blue shift) and Cr (red shift) hold the color information for a YCbCr

image. As mentioned before, the separation of image intensity from the color information is very

important for analyzing an image without worrying about how the intensity component will affect the pixels. Many algorithms in computer vision remove the image intensity component altogether,

thus removing the effect of changing lighting conditions. Once the color information is isolated, identifying

colors or objects is more easily done. Each color space has its own unique properties, though in

general most of them are designed to separate color information from image intensity. As seen in Figure 1 below on the left, hue primarily accounts for the change in color, where the arrow shows the direction in which the H component increases. Saturation and value are generally seen as the intensity, determining the shade of the color. For the image on the right in Figure 1, we can see that the Y component directly controls the brightness of the image and is often referred to as the grayscale component of the image. Thus the

other two components represent the color information of the image itself. Further details for this

deliverable can be found in chapter 2.
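As a minimal sketch of this separation in MATLAB (the file name and variable names here are illustrative, not taken from the team's code):

    % Convert an RGB frame to HSV and YCbCr and separate color from intensity.
    rgb   = imread('frame.png');          % any RGB driving frame
    hsv   = rgb2hsv(rgb);                 % H, S, V channels in [0, 1]
    ycbcr = rgb2ycbcr(rgb);               % Y (luma), Cb, Cr
    hue       = hsv(:,:,1);               % color information in HSV
    intensity = ycbcr(:,:,1);             % brightness / grayscale component
    cb        = ycbcr(:,:,2);             % blue-difference chroma
    cr        = ycbcr(:,:,3);             % red-difference chroma

Thresholding hue or the chroma channels, rather than the raw RGB values, is what makes the later color-based detection steps less sensitive to lighting.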

Figure 1 - 3D Representation of HSV Color Space (left) and YCbCr Color Space (right)


B.1.1 Program Flow

Figure 2 - Program Flow

B.2 Filtering and Blob detection

Blob detection plays a very important role in ADAS as it allows the detection of

important objects in a given frame. The most important of these objects include other vehicles,

people, and signs. After the detection of these objects we know the dimensions of the bounding

box, the position of the center of this bounding box, and how many objects are on the screen.

The dimensions of the box might be used to estimate distance: smaller for more distant objects and larger for closer objects. The change in these dimensions could give an idea of whether the object is moving toward or away from our vehicle. If the rectangle is taller than it is wide, the object might be a person; if it is wider than tall or closer to a square, it may be a vehicle. The distance

from the center of the screen, in conjunction with the same distance from the previous frame, can

also be used to give information about the speed and direction of the object. This information

could also be used to determine if a smaller object is actually part of a larger object. Finally, the

number of objects is important as it allows us to more accurately track objects from one frame to

the next, or to predict the performance of the system; a larger number of objects will take up


more resources. If all of these pieces of information are used together, a fairly robust object

detection system can be created. Further details for this deliverable can be found in chapter 2.
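A minimal sketch of extracting these blob properties in MATLAB (the area threshold and the person/vehicle aspect heuristic are illustrative, not the team's tuned values):

    % Given a binary mask BW from thresholding, measure each blob.
    stats = regionprops(BW, 'BoundingBox', 'Centroid', 'Area');
    stats = stats([stats.Area] > 200);              % drop tiny noise blobs
    for k = 1:numel(stats)
        bb     = stats(k).BoundingBox;              % [x y width height]
        aspect = bb(4) / bb(3);                     % taller than wide suggests a person
        fprintf('Blob %d: center (%.0f, %.0f), aspect %.2f\n', ...
                k, stats(k).Centroid(1), stats(k).Centroid(2), aspect);
    end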

B.3 Lane Detection

The idea behind thresholding an image, in computer vision, is to extract important features

of a video frame. Regarding ADAS, this can be anything from stop signs, to brake lights, to lane

lines, to any other number of wanted features. For this particular activity the idea is to extract only

the lane lines from the input video and return a binary output video in which the lane lines are the only thing shown in white in each frame. If this is done correctly, the resulting image is a good starting

point for determining the position of the car relative to the surrounding environment. Further

details for this deliverable can be found in chapter 2.
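A minimal sketch of such a thresholding step, assuming white lane paint and illustrative threshold values that would need tuning per footage:

    % Keep bright, low-saturation pixels (typical of white lane paint) and
    % ignore the upper part of the frame, which is mostly sky.
    hsv  = rgb2hsv(frame);
    mask = hsv(:,:,3) > 0.8 & hsv(:,:,2) < 0.2;     % high value, low saturation
    mask(1:round(size(mask,1)/2), :) = false;       % zero out the top half
    imshow(mask);                                   % lane lines appear as white blobs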

C. Design

C.1 Stereo Rig

Stereo vision is the process of extracting 3-D information from multiple 2-D views of a scene. This 3-D information is estimated by using a pair of images (a stereo pair) and matching points between the two views to estimate the relative depth of the scene. Stereo vision is very important to ADAS

systems, where it is used to estimate the actual distance or range of objects of interest from the

camera.

Initially the EcoCAR competition required that a custom stereo rig be built. This was to ensure that any calibration or software was created by the team. After the January 19th stereo vision tool check, the competition decided that pre-built stereo vision cameras could be used, as long as the software behind the camera was written by the team. Ultimately it was decided to use the ZED stereo vision camera with a custom mount.

C.1.1 Physical Design

The custom rig was designed based on the suggestions from the EcoCar fall workshop.

The parts used to create the rig were a Vanguard multi-mount, two 1080p Logitech cameras, and a tri-suction-cup base. One requirement for the stereo vision rig is that it is easily interchanged between different vehicles. The current design of the custom rig allows for easy transition between cars, along with easy transition between different camera setups. This camera system

can be seen in the figure below.

Figure 3 - Custom Built Stereo Vision System

In the end, a ZED stereo vision camera is going to be used for several reasons.

First of all, one problem with our custom rig was that the cameras would take pictures at slightly

different times. An attempt at threading was made to solve this problem, but in the end it was


decided that specific hardware would be needed in the cameras. Another problem with the

custom stereo vision rig was that it was large and didn’t fit in all vehicles well. The ZED stereo

camera solved both of these problems. It has built-in synchronization to ensure that each camera takes a picture

at the same time, and is also small enough to fit on any windshield.

In order to mount the camera on the windshield a higher-grade suction cup was

purchased. A mount was also 3D printed to fit on that suction cup. This setup can be seen in

the below pictures.

Figure 4 - ZED camera rig

In April, the team encountered an unexpected setback when the ZED stereo camera permanently

failed. The cause has been attributed to static discharge. Due to shipment delays the team

decided to create a more suitable rig to be used with the 1080p Logitech cameras. This rig can be

seen below.

Figure 5 - Backup Logitech Camera Rig


C.1.2 Calibration

Once the distance between the two cameras has been determined, the calibration of the stereo vision rig only has to be done once. The steps for calibrating the cameras are as follows:

1.) Use the calibration checkerboard pattern and capture images from the left and right cameras.

- The more pictures taken, the more accurate the calibration.

- The calibration checkerboard size, along with its smoothness, will also affect the calibration accuracy. Some example checkerboard images can be seen below.

Figure 6 - Stereo Vision Checkerboard Calibration

2.) Use the built-in MATLAB tool for stereo vision calibration to take the images and create a calibration file.

3.) Lower the mean pixel error by removing image pairs that have a large mean pixel error, and recalibrate.

4.) Export the stereo vision parameters file for use. A programmatic sketch of this flow is shown below.
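This sketch follows the programmatic MATLAB calibration workflow; the interactive stereoCameraCalibrator app performs the same steps, and the folder names and square size here are illustrative:

    % Detect checkerboard corners in matched left/right image pairs.
    leftImages  = imageDatastore('calib/left');
    rightImages = imageDatastore('calib/right');
    [imagePoints, boardSize] = detectCheckerboardPoints( ...
        leftImages.Files, rightImages.Files);
    squareSize  = 25;                                   % checkerboard square size in mm
    worldPoints = generateCheckerboardPoints(boardSize, squareSize);
    stereoParams = estimateCameraParameters(imagePoints, worldPoints);
    showReprojectionErrors(stereoParams);               % drop high-error pairs and re-run
    save('stereoParams.mat', 'stereoParams');           % exported parameters file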

One point of focus was checking the distance estimation for different widths of camera

lenses on our custom rig. This was checked by calibrating the camera for each width, and testing

the distance estimation in a controlled environment. A range of 0-35 meters was marked in 5

meter increments down a hallway, and objects were placed at different points to test the distance

estimation. A single image was taken from both cameras and was rectified and used to create a

disparity map and a point cloud.

The two main parameters that needed to be changed for this experiment were the block size

and the disparity range for the disparity map. It was decided that the smallest possible block size was the optimal choice. While this did introduce a significant amount of noise into the disparity map, it gave the best estimation for distance; the best distance estimation thus comes at the cost of more noise. The disparity range needed to be adjusted to match the offset between the same point in the two unrectified images.

The distance estimation at intermediate points was overall within ±5% for stationary objects. The main effect of changing the camera spacing was on the minimum and maximum distance that could be


physically estimated. As the width of the camera lenses increased the maximum distance

increased and the minimum distance decreased.

Table 1 - Stereo Vision Camera Calibration Test Example

The final decision to use the ZED camera has led us to a fixed distance estimation. The

two cameras on the ZED are a fixed distance apart, so the distance estimation is locked in at a maximum of 25 meters.

C.1.3 Distance Estimations

Once the stereo rig system is calibrated the calibration information is then used to rectify

the images. The goal of this is to line up each pixel so that a disparity map can be created. An

example of unrectified and rectified images can be seen below.

Figure 7 - Unrectified Image

Figure 8 - Rectified Image

As mentioned above the point of calibration is to align the stereo pair so that we can then

create the 3D representation of the space captured. This is done by creating a disparity map which


gives us the depth information we need to estimate distance. Because the real-world size of the calibration checkerboard was noted during calibration, the depth values can be expressed as approximate distances in meters. An example of the

3d point cloud and disparity map can be seen below.

Figure 9 - Disparity Map

Figure 10 - 3D Point Cloud

The final step is to draw a bounding box over the object that needs to be detected and look up the corresponding distance in the 3D point cloud.
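A minimal sketch of this chain (rectify, compute a disparity map, reconstruct the scene, then read the depth inside a detected bounding box); the block size, disparity range, and bounding box values are illustrative, and the units assume the checkerboard square size was given in millimeters:

    % I1, I2 are a captured left/right frame pair; stereoParams is the exported calibration.
    load('stereoParams.mat', 'stereoParams');
    [J1, J2]  = rectifyStereoImages(I1, I2, stereoParams);      % align the stereo pair
    dispMap   = disparity(rgb2gray(J1), rgb2gray(J2), ...
                          'BlockSize', 5, 'DisparityRange', [0 64]);
    xyzPoints = reconstructScene(dispMap, stereoParams);        % 3-D points in calibration units
    bbox = [640 360 120 90];                                    % example detection [x y w h]
    Z    = xyzPoints(bbox(2):bbox(2)+bbox(4), bbox(1):bbox(1)+bbox(3), 3);
    Zv   = Z(~isnan(Z) & Z > 0);
    distMeters = median(Zv) / 1000;                             % mm to meters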

C.1.4 Video Capture and Ground Truth Labeling

The provided MATLAB ground truth labeler app was used to create the ground truth data

for the recorded footage. There are 10 labeled sessions containing the path to the footage that is

labelled. These labelling sessions have the required vehicle bounding boxes, along with at

minimum two stop signs, two speed signs, and two pedestrians labelled.

One issue that was encountered with the ground truth labeler app was the hard-coded pixel

width and height values in GroundTruthLabelerClass.m. This MATLAB file had 640x480

hardcoded, so when a labeling session was made it would scale the bounding boxes to that size

and location. The footage captured with the stereo rig was recorded at 1280x720, so modifications to GroundTruthLabelerClass.m needed to be made. These alterations to the code did

not affect the methods that analyzed and created the SAS data, and only altered how the bounding

boxes were created when using the labeler.


C.2 Object Detection

The detection or classification of objects in an image is an important part of any competent

ADAS system. Object classification allows an ADAS system to be aware of the types of objects

that surround the vehicle during driving. For the past January 19th tool check, as well as the current

version of the real time system for the competition in May, cascade object detection is used.

A cascade object detector is a piece of software that takes an input image and outputs

coordinates corresponding to bounding boxes which tell the location of certain objects in an image.

The objects are generally labeled by drawing the bounding box back into the original image. The

detector works by focusing on a small rectangular subsection of an image called a window. The

window slides across the image until every subsection has been searched for the object of interest.

For each window, the classifier searches for objects in stages. The first stage of a classifier looks

for simple patterns in the image window which can be something as simple as the upper portion

of the window being generally darker than the bottom. If the classifier finds the simple pattern,

then the region is analyzed using the next stage in the cascade. If a region fails to pass a classifier

stage, then the window moves along and the algorithm does not spend any more time or resources

analyzing that image region. Each stage of the cascade looks for increasingly complex features,

with the final stage making the final object classification. Figure 11 shows an abstract model of a

cascade classifier with its many stages.

There are different types of patterns (also known as features) that can be analyzed at a

classifier stage. Both Haar features and HOG (Histogram of Oriented Gradients) features were

tried during classifier development by the ADAS team. Since it is difficult to algorithmically

determine an object in an image, machine learning techniques must be utilized in order to train the

classifier stages to recognize similarities among images of the object of interest.
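A minimal sketch of running such a trained detector on a single frame; the detector file name is illustrative:

    % Load a trained cascade detector and annotate one frame with its detections.
    detector = vision.CascadeObjectDetector('vehicleRearDetector.xml');
    frame  = imread('frame.png');
    bboxes = step(detector, frame);                 % one [x y w h] row per detection
    out = frame;
    if ~isempty(bboxes)
        out = insertObjectAnnotation(frame, 'rectangle', bboxes, ...
                                     1:size(bboxes,1), 'Color', 'green');
    end
    imshow(out);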

Using convolutional neural networks (CNNs) for object detection is something that is currently being investigated. TensorFlow is an open-source Python/C++ package that has built-in functions used for training CNNs. We hope to use residual networks such as ResNet, pre-trained on datasets like ImageNet. These pre-trained networks have been trained on thousands of images and use a much larger feature set; for example, ResNet produces a 2048-element feature vector, which is much deeper than any of the cascade object detectors we are currently using. Based on research papers we have read, this method should be much more accurate and produce far fewer false positives.

Figure 11 – Cascade Classifier Model Diagram


C.2.1 Vehicle Classifier Training

A vehicle classifier was trained using the MATLAB provided

trainCascadeObjectDetector() function and a custom collection of input data. Positive and negative vehicle images were gathered from two major sources: publicly available online datasets and data

gathered during live driving in Fort Collins, CO. The Stanford Cars Dataset [2] provided the team

with 16,185 positive vehicle images in a wide variety of orientations complete with annotated

ground truth information. Approximately 5,000 of these images were further categorized by the

CSU team by orientation of the object in the image and used in training. For example, vehicle

fronts were labeled with an orientation of 1, and vehicle rears were labeled with an orientation of

2. This was done because training one classifier for all orientations of the car led to erroneous

results. Additionally, each of the 5,000 images was manually assigned, by ADAS team members, a usability rating on a scale from 0 to 3 which indicated whether the image represented a realistic driving

scenario. A conversion script was written to transform the Stanford annotations to a suitable format

for use with trainCascadeObjectDetector(). In addition to 463 positive and 864 negative images

gathered during live driving, 453 negative images were used with permission from Pinz [3] and

Veit [4].

Classifiers were trained in many long-running batches. Each batch trained 10 to 150

classifiers by permuting training parameters such as true positive rate (TPR), negative samples

factor (NSF), false alarm rate (FAR), and feature type. It was found that decreasing the FAR helped

to eliminate extraneous detections but also significantly reduced the total number of detections

making the classifiers too cautious. Similar side-effects were noticed when increasing the NSF.

Initially, too few negative images were thought to be the cause of poor classifier performance so

the negative images were increased. It was determined early on that the classifier should focus on

vehicle rears since classifiers trained on mixed vehicle orientations produced seemingly random

classifications. Positive images were increased in subsequent training runs with a goal of gathering at minimum 526 positive images for orientation 1. This number was derived from the fact that the CSU Year 2 ADAS team used a moderately accurate classifier known to have been trained with 526 positive images.
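As a hedged sketch, a single training run of this kind might look like the following; the parameter values and file names are illustrative, not the team's actual settings:

    % positiveInstances: struct array with imageFilename and objectBoundingBoxes fields;
    % 'negativeImages/' is a folder of images containing no vehicles.
    trainCascadeObjectDetector('vehicleRearDetector.xml', positiveInstances, ...
        'negativeImages/', ...
        'FalseAlarmRate', 0.1, ...        % FAR: lower means fewer but more cautious detections
        'NegativeSamplesFactor', 2, ...   % NSF
        'TruePositiveRate', 0.995, ...    % TPR
        'NumCascadeStages', 15, ...
        'FeatureType', 'HOG');            % or 'Haar'
    detector = vision.CascadeObjectDetector('vehicleRearDetector.xml');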

The evaluateObjectDetector() function provided in the GroundTruthLabeler app was used

to generate sensitivity and specificity (SAS) data. Then the viewSAS function was used to display

the classifier metrics. A script was created to automate

the process of running the evaluation and tabulating the classifier metrics. Some example images of our vehicle classification can be seen below.

Figure 12 - Vehicle Detection 1

Figure 13 - Vehicle Detection 2


C.2.2 Pedestrian Classifier Training

The work with pedestrian detection has been strongly influenced by the research paper Pedestrian Detection: An Evaluation of the State of the Art by Piotr Dollár [5][6]. In this

paper, Piotr Dollar gathered a large pedestrian dataset including both negative and positive images,

created ground truth information, and tested 16 pre-trained classifiers.

The CSU ADAS team got in touch with Piotr Dollar and were granted permission to use

this data set to train some of our own classifiers. Piotr Dollar also provided a MATLAB toolkit

that allowed quick training of this data set to create an aggregate channel feature object detector

for pedestrians [5][6]. This object detector is an evolution of the Viola Jones algorithm [5][7].

One thing that was discovered later was that the MATLAB computer vision toolkit includes

a function named detectPeopleACF(frame). This function carries out the aggregate channel feature

detection that Piotr Dollar implemented in his paper from 2012. This function detects pedestrians

at a very high rate, but it does introduce a lot of false bounding boxes.

Additional logic was investigated to reduce the number of these false bounding boxes. One type of object that creates a lot of false positives is trees. One simple logic addition to reduce false

positives was checking the height of the bounding box and making sure it was a reasonable height

for a standard pedestrian. Another piece of logic that was added was reducing the region of interest

for the algorithm to check. By removing the ground and sky, any false positives due to clouds or

the hood of the car were removed.
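A minimal sketch of this filtering; the height limits and region-of-interest fractions are illustrative guesses rather than the team's tuned values:

    % Detect pedestrians, then discard boxes with implausible heights and boxes
    % that fall outside a road-level region of interest.
    [bboxes, scores] = detectPeopleACF(frame);
    h = bboxes(:,4);
    keep = h > 60 & h < 400;                       % plausible pedestrian heights in pixels
    roiTop    = round(0.3 * size(frame,1));        % ignore the sky
    roiBottom = round(0.9 * size(frame,1));        % ignore the hood of the car
    boxTop = bboxes(:,2);
    keep = keep & boxTop > roiTop & (boxTop + h) < roiBottom;
    bboxes = bboxes(keep, :);
    scores = scores(keep);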

The detection rate is relatively high, but the false positive rate is also still too high for this

classifier. Based on manually reviewing the output video, some deductions were made. First, the true positive rate could be increased if the distance at which a pedestrian can be detected were increased. This will be difficult to do, as the minimum size for bounding boxes is already set to the lowest it can be. The second observation is that trees are the main cause of false

positives. To fix this a new classifier must be trained with more negative images involving trees.

Figure 14 - Pedestrian Detection with False Positives

Figure 15 - Pedestrian Detection

C.2.3 Traffic Sign Detection

For sign detection, several methods were explored to find an efficient method for detecting

stop and speed signs. The first method explored was training a cascade classifier much like the

steps taken in vehicle detection. As mentioned above, stop and speed sign images were collected

and trained using various parameters. The second method was to filter the image by color looking

for the red stop sign and the white speed sign. Once the bounding boxes were found the Optical

Character Recognition (OCR) was run over the bounding boxes to look for the speed limit text plus the limit number, and for the word stop.

For the object detection method, a few hundred images of stop signs and speed signs were

captured outside. The training image labeler app provided by MATLAB was then used to assign


bounding boxes over each stop and speed sign respectively. Once obtained, the positive and

negative images were passed to the trainCascadeObjectDetector() function, which produced an .xml file.

For the color thresholding and OCR method the first step was to create a binary video of

just the stop sign and speed sign location. This was done using the color thresholder app. For stop

signs it was easy to threshold the image based on the red color of the sign. After stitching together several stop sign images in various lighting conditions, the YCbCr color space seemed to be the best to filter in; the Cr channel was used to threshold the image to obtain a reasonable binary image of the stop signs. For

speed signs thresholding for the white sign didn’t work reliably, because the white sign couldn’t

be filtered by a specific channel given the color spaces RGB, HSV, and YCbCr.

To get any reliability with the OCR method some morphological operations needed to be

performed. This is because the OCR method uses the default binarization method from MATLAB.

In order to read the text properly, imerode(), imdilate(), and imtophat() were used to filter the image and remove background variations. The text was then read and filtered based on the given

confidence score.
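A minimal sketch of this thresholding, morphology, and OCR chain; the Cr threshold, structuring-element sizes, and confidence cutoff are illustrative and would need tuning per footage:

    % Threshold the Cr channel for red-ish regions, clean the mask, then OCR each region.
    ycbcr = rgb2ycbcr(frame);
    BW    = ycbcr(:,:,3) > 150;                     % strong red-difference (stop sign candidates)
    BW    = imerode(BW, strel('disk', 2));          % remove speckle noise
    BW    = imdilate(BW, strel('disk', 4));         % restore candidate regions
    stats = regionprops(BW, 'BoundingBox');
    for k = 1:numel(stats)
        roi     = imcrop(frame, stats(k).BoundingBox);
        results = ocr(roi);
        words   = results.Words(results.WordConfidences > 0.7);
        if any(strcmpi(words, 'STOP'))
            fprintf('Stop sign candidate in region %d\n', k);
        end
    end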

Overall, the OCR and filtering method was too dependent on the specific footage. With the goal being to create a reliable and robust system, sign detection is currently performed by a cascade classifier. However, for speed signs, reading the speed limit text is still being developed.

Figure 16 - Stop Sign Detection

Figure 17 - Color Thresholding

Yield signs and speed signs have also been trained using the above cascade object detection

method. Road sign data for this training was provided by The LISA Traffic Sign Dataset [8].

C.2.4 Lane Detection

For lane detection, we used several methods to detect the lines in the road. The first step was to crop the input video to the road region, since the sky and anything not related to the road is not needed for detecting lane lines. The next step was to filter the image and saturate the values to prepare the image for line detection. The lane lines were then found with the Hough peak detection method, which finds the pixels with the highest gradients, indicating sharp lines in the image. A tracking object is then placed to track lane lines from frame to frame, making the function faster, since it only needs to look for lines nearest the last detected location. A function was modified to determine if the driver is departing left or right based on the slope of the lines; the lane lines are converted from Hough notation to Cartesian coordinates to get the lane departure information. Color thresholding was used to determine the type of lane marking on the road, such as dotted, solid, yellow, or white.
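A minimal sketch of the Hough step on a binary lane mask BW (such as the output of the thresholding step in section B.3); the peak count, fill gap, and minimum length values are illustrative:

    % Find dominant line segments in the lane mask and overlay them on the frame.
    [H, theta, rho] = hough(BW);
    peaks = houghpeaks(H, 4, 'Threshold', 0.3 * max(H(:)));
    lines = houghlines(BW, theta, rho, peaks, 'FillGap', 20, 'MinLength', 40);
    for k = 1:numel(lines)
        p1 = lines(k).point1;  p2 = lines(k).point2;
        slope = (p2(2) - p1(2)) / (p2(1) - p1(1));   % slope sign feeds the departure logic above
        frame = insertShape(frame, 'Line', [p1 p2], 'Color', 'blue', 'LineWidth', 3);
    end
    imshow(frame);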


Figure 18 - Lane Detection 1

Figure 19 - Lane Detection 2

C.3 Unified Real-time Application

The May 2017 year-end competition specified certain requirements to be tested during a

live driving demonstration. See “Deliverable Description for ADAS” section above for details on

competition specific requirements. Accomplishing this task meant combining lane, vehicle,

pedestrian, and sign detection into a single unified application capable of doing each of these

things in real-time. The competition specified a minimum of 5 frames per second. For the

upcoming demonstration, the stereo rig will be placed in a vehicle and driven on a closed course

complete with signs and dummy vehicles. The unified application was written in MATLAB and

then profiled. See appendix H for details on “Code Profiling”. Continued efforts are being made

to port as much of this code from MATLAB to Python using OpenCV to speed up the unified

application in time for the competition. The figure below shows an example output frame

generated by the unified application.

Figure 20 - Unified Application Output showing vehicle detection and distance coloring

As shown above, vehicles have been boxed and labeled with unique tracking numbers. Bounding

boxes are colored green by default and yellow if the vehicle is determined to be in the same lane

as the ADAS vehicle. Boxes are colored red if the vehicle is both in the same lane and less than

25m ahead. Due to poor lighting conditions in the frame shown above, only one lane line is


shown on screen. However, from this image it is easy to see that vehicles detected above this line

are considered out of the lane, while vehicles under the line are labeled either yellow or red

depending on their perceived distance. More generally, a vehicle can be determined to be in the

lane using the method described in the figure below.

Figure 21 - Algorithm for determining isInLane

As shown in the figure, each lane line can be described as a point (x1, y1) and slope (m). A point

(x, y) is said to be in the lane if y > m (x - x1) + y1 for both lane lines.
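A minimal MATLAB sketch of this check; the function and struct field names are illustrative, not taken from the team's code:

    % Returns true if point (x, y) lies below both lane lines, each line given as
    % a point (x1, y1) and slope m (image y grows downward, so below means larger y).
    function inLane = isInLane(x, y, leftLine, rightLine)
        belowLeft  = y > leftLine.m  * (x - leftLine.x1)  + leftLine.y1;
        belowRight = y > rightLine.m * (x - rightLine.x1) + rightLine.y1;
        inLane = belowLeft && belowRight;
    end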

C.4 Embedded System

The EcoCAR competition provides an S32V microcontroller unit to be used as an embedded system in the vehicle. The goal for this embedded system is to implement and optimize lane tracking, along with porting some of the other PC algorithms over to the board. A vision SDK is also

provided by the competition, which utilizes different aspects of the architecture such as the image

signal processor, the APEX vector processing unit, and the ARM processor.

The competition also provided a development environment specifically for the board. This

IDE consists of the standard coding view, and a graphical view that lets the user visualize the

vision SDK. This IDE also includes a testing and debugging tool, which directly interfaces with

the board. This tool makes debugging programs that use multiple processors in the architecture

easier to handle.

The main challenge of the board is using ApexCV included in the S32V vision SDK. Using

this standard development kit it is possible to optimize a lot of the algorithms for the specific

hardware on the board. For example, the SDK has functions to perform color conversions with

the ISP, or perform low-power, parallel processing on image data with the APEX vector processing unit.


For the scope of this year's May competition, the only algorithm required for the board is lane detection. The board will be fed a live video, and the algorithm should overlay blue lines over the current lane. This video will then be saved to a USB drive for later grading.

D. Future Work and Conclusion

D.1 Work Until Competition

The final competition of this year is in Milford, Michigan and Washington, DC. The event starts on May 14th and will go until the 23rd of May, where the team will demonstrate the real-time system described above. Currently we have two separate systems: one uses MATLAB, while the other uses Python and OpenCV. We are currently adjusting several parameters on the MATLAB system as a backup for the competition in case the Python system isn't developed enough.

D.2 Continuation

This project will continue next year as the last iteration of EcoCAR3. In year four the team will be required to refine and adjust the algorithms developed to meet a new goal set by the EcoCAR3 competition. The continuation will also require refinement of the real-time system and the embedded system.

D.3 Future Work Recommendation

This section is a recommendation to the next ADAS team of what to prioritize next year. Points of focus include optimizing the real-time system in MATLAB and Python, adding additional functionality to the embedded system, such as object detection, and, lastly, adding an autonomous driving framework to the detection system.

D.3.1 Optimizing Current Code

The real-time system is currently being designed to be a 5 Hz system. The next step would be to increase the capture resolution and frequency; a 10-15 Hz system should be the target moving forward. This increased frame rate will improve distance estimation and make the next system more accurate.

D.3.2 Port to embedded system

For the embedded system, the team will need to continue to port the existing code to the board using the NXP IDE. The team will also need to continue developing existing and new algorithms for the board.

D.3.3 Real Time System

The team should continue to develop the real-time system in Python and add CNN support, all of which should be integrated into the current Python system.


D.3.4 Add Driving

A framework should be added to support system intervention, such as brake-by-wire and drive-by-wire, so that next year's team will be able to easily add these functions into the system if they are needed.

D.4 Conclusion

Everybody is on the move nowadays, and a large portion of the population drives large metal

objects capable of over 100 miles per hour. With people allowing themselves to be distracted by

things such as cell phones, the merits of this system are powerful. It has the potential to prevent

fatal car accidents that occur far more often than they should. ADAS can be an effective tool for decreasing the car accident rate. Currently, companies like Google, Lexus, and EcoCAR3's sponsor GM are all developing technology for these types of systems, with Google leading the

way.

Computer vision is an expansive field, and there is no universal way to solve specific tasks

like object detection or feature extraction. In general, computer vision algorithms are context

specific. For ADAS, the objects being detected do not widely vary, but the objects and features

need to be detected quickly, consistently and from the greatest distance possible. False positives

may make the driver doubtful of the system or worse cause an accident by alerting the driver when

there is no danger. Likewise, missing something important like a potential hazard in the path of

the driver’s vehicle is just as bad if not worse.

Our system currently has all the basics of ADAS that we set out to achieve at the beginning

of the semester. We can identify and track lanes and detect cars and other objects in each frame at a frame rate of 5 Hz. This system works in real time as individual subsystems but still needs to be implemented as a unified application in Python and MATLAB, which will be completed by the May competition.

D.5 Lessons Learned

There are numerous technical lessons we learned throughout the semester. First

of all, we learned about different object detection algorithms. Initially we used cascade object

detectors with feature sets like haar and hog. When we asked professor Bruce Draper, a

computer science professor who specializes in computer vision about cascade object detection he

told us there was better solutions. In the last couple of years’ convolutional neural nets have

really taken over the computer vision field. We talked to Professor Draper later in the second

semester, so we haven’t been able to investigate this as much as we would have liked. This is

something that next year’s ADAS team should investigate more. Another technical lesson we

learned this semester is the use of other coding languages that are good for computer vision. For

most of the semester we used MATLAB. While MATLAB does have very good computer

vision libraries, the speed at which it runs some functions is too slow. Currently we are

investigating Python, and have found that using OpenCV packages with Python has resulted in much faster computation times compared to MATLAB.

As far as planning goes, one thing that we learned and changed as the project developed was the use of tools like JIRA and GitLab. JIRA boards were an easy way for us to organize our

work and assign tasks to each member of the group. Being able to organize our work into two


week sprints helped with decision making along the way. GitLab, an alternative to GitHub, was the repository service that we used to keep track of our code. The reason we used GitLab instead of GitHub is GitLab's option to host private repositories. Having version control, and also

a way for all of us to work on code without disrupting each other, made our team workflow go a

lot more smoothly. See figures in appendix for examples of working with Jira and Gitlab.

Lastly, something else we learned is to always be prepared for the unexpected. Near the

end of our semester, both our ZED camera and a solder joint on our NXP board broke. We are unsure if the ZED camera will be shipped to us before competition, and we are currently

working on a replacement system. As for the board, luckily some employees from NXP got back

to us quickly and we are going to be able to fix it.


E. References

[1] Year 3 Event Rules. Rep. no. Revision A. Chicago: Argonne National Laboratory, n.d. Print.

[2] Jonathan Krause, Michael Stark, Jia Deng, Li Fei-Fei, 3D Object Representations for Fine-

Grained Categorization, 4th IEEE Workshop on 3D Representation and Recognition, at

ICCV 2013 (3dRR-13). Sydney, Australia. Dec. 8, 2013.

[3] Opelt, A., A. Pinz, M. Fussenegger, and P. Auer. "Generic Object Recognition with

Boosting." IEEE Transactions on Pattern Analysis and Machine Intelligence 28.3 (2006):

416-31. July 2004. Web. 1 Dec. 2016.

[4] T. Veit, J.-P. Tarel, P. Nicolle and P. Charbonnier, "Evaluation of Road Marking Feature

Extraction", in Proceedings of 11th IEEE Conference on Intelligent Transportation

Systems (ITSC’08), Beijing, China, October 12-15, 2008.

[5] P. Dollár, C. Wojek, B. Schiele and P. Perona, Pedestrian Detection: An Evaluation of the

State of the Art PAMI, 2012.

[6] P. Dollár, C. Wojek, B. Schiele and P. Perona, Pedestrian Detection: A Benchmark

CVPR 2009, Miami, Florida.

[7] Viola, P., and M. Jones. "Rapid Object Detection Using a Boosted Cascade of Simple

Features." Proceedings of the 2001 IEEE Computer Society Conference on Computer

Vision and Pattern Recognition. CVPR 2001 (n.d.): n. pag. Web.

[8] Andreas Møgelmose, Mohan M. Trivedi, and Thomas B. Moeslund, "Vision based Traffic

Sign Detection and Analysis for Intelligent Driver Assistance Systems: Perspectives and

Survey," IEEE Transactions on Intelligent Transportation Systems, 2012.

20


G. Acknowledgments

Special thanks to our advisors, Dr. Pasricha and Dr. Bradley, and to our graduate student advisor, Vipin Kumar Kukkala.


Appendix A – Abbreviations

ADAS – Advanced Driver Assistance System

ARM – Advanced RISC Machine

GM – General Motors

HSV – Hue Saturation Value

HUD – Heads Up Display

LiDAR – Light Detection and Ranging

MATLAB – Matrix Laboratory

PC – Personal Computer

RADAR – Radio Detection and Ranging

RGB – Red Green Blue

SONAR – Sound Navigation and Ranging

YCbCr – Luma, Blue-Difference, Red-Difference

OCR – Optical Character Recognition

HOG – Histogram of Oriented Gradients


Appendix B – Budget

Our current total budget estimate is roughly $2,500.00 - $3,000.00. The two primary sources of funding are the ECE budget and the EcoCAR3 funding. The estimated expenses account for the laptop, the cameras needed for stereo vision, the embedded sensors, storage, and enclosure needs. As seen below in Tables 2 and 3, we are currently still under budget, with most of the parts we need already paid for. The only remaining foreseen expenses are sensors for the board and an enclosure, all of which will fit within the amount remaining. We have yet to spend any of the ECE budget, which will provide additional funding if needed.

Estimated Budget

Equipment | Quantity | Price
Nvidia GPU Laptop | 1 | $2,000.00
USB 3.0 Cameras | 2 | $100.00 x 2 = $200.00
Mounting Equipment | 1 | $200.00
Buzzer for Board | 1 | $25.00
Enclosure for Board | 1 | $200.00
Miscellaneous | 1 | $100.00
Total Estimated Cost | | $2,725.00
Senior Design Amount | | +$827

Table 2 - Estimated Budget

Actual Spending

Equipment | Quantity | Price | Method of Payment
MSI Laptop | 1 | $2,000.00 | EcoCAR3 Budget
Logitech Webcam | 2 | $135.94 | EcoCAR3 Budget
Vanguard Multi Mount | 1 | $74.99 | EcoCAR3 Budget
3-Cup Suction Mount | 2 | $27.98 | Jordan Personal
Miscellaneous | 1 | $31.05 | Jordan Personal
External Storage, 3 TB Seagate | 1 | $99.99 | EcoCAR3 Budget
USB 3.0 Extension | 1 | $5.99 | Jordan Personal
RAM Mount | 2 | $62.98 | Jordan Personal
3D Printing Material | 1 | $20.00 | Drew Personal
USB-TTL Serial Cable | 2 | $41.00 | Drew Personal
Current Cost | | $2,499.99 |

Table 3 - Actual Spending


Appendix C - Project Plan Evolution

Project Plan and Deliverables

The first team plan, created on September 5 and shown in Figure 22, was a basic timeline that did not carry much weight, since the team had not all agreed on those dates. Not having a more detailed plan caused the team to fall behind. On October 16 the team developed a timeline using sticky notes, which became what is shown in Tables 4 and 5. This format allowed the team to stay on track and gauge where we needed to be at all times. We plan to follow this format and timeline as the next semester approaches.

For the software work, we found that adhering to a waterfall method of development was not very feasible, because the software we developed with would change and would alter our method of development. We therefore moved to an agile approach, using Jira to outline what we needed to get done for the project. Figures of our development process can be seen below; the last images in this section show the GitLab repository where we developed our code and managed multiple revisions.

Figure 22 - Old Timeline


Team Roles and Due Dates

Table 4 - Team Due Dates and Major Deliverables


Team Gantt Chart

Table 5 - Team Timeline


Figure 23 - Example Sprint from Jira Board

Figure 24 - Example Backlog from Jira Board


Figure 25 - Example Sprint Report from Jira

Figure 26 - Commit graph from GitLab

Figure 27 - Git Repository Log


Appendix D – Ethics in ADAS

Often, when designing a new system, we first consider whether the current technology is adequate, and we tend not to think about the ethics involved until the system is already designed. Computer aid and automation are not new; we have been slowly integrating computers into our everyday lives. These technological innovations provide cost savings and energy efficiency, but most importantly safety. Certainly, no other computer automation system has presented an ethical dilemma as clearly as complete, or even partial, vehicle automation. Where a software error at a bank may cause the loss of a few million dollars, a software error in a car can cost many people their lives. While driving, a person must sometimes make an ethical decision that could cost someone's life, and designing a computer to make these decisions can have major consequences. The first question that must be asked is whether such a system will be safer and more ethical.

According to the U.S. Department of Transportation, 90 percent of traffic crashes are caused by human error. An Advanced Driver Assistance System (ADAS) is an attempt to reduce the number of accidents that occur. Alerts, lane keep assistance, and even full automation are methods to reduce driver error and thereby reduce crashes, injuries, and deaths. However, depending on how these systems are designed, they can make a vehicle less safe than it was without any automation.

The first step for our project is to clearly state its capabilities. We are currently designing a system whose only purpose is to alert the driver; identifying pedestrians, cars, lane departures, and signs is the focus of this system. We are not implementing anything that will take control of the vehicle, so we have focused on the ethics of false alerts and inaccuracies in our algorithms. It is also important that users of our system understand its limitations and realize that the driver is still expected to maintain alertness on the road.

In the design process of our project, we needed to determine what error rates are tolerable for an alert system: the driver should be able to rely on our system, but not be fully dependent on it. This is done by educating the driver about the capabilities of the system so they fully understand what it can and cannot do. Computer vision still has very clear limitations, and these limitations need to be made clear to any user of the designed system. While some other systems include extra technology to mitigate these limitations, our system does not.

Another aspect of design is the testing of our system. While it may work adequately in certain driving scenarios, it is not guaranteed to be flawless in every environment. For our system to be ethically sound, it needs to be tested under as many driving scenarios as possible to mitigate unexpected errors.

One ethical consideration is protecting the intellectual property (IP) that is provided to us by our sponsors. The prime example is the NXP embedded system that was donated to us. Some aspects of the board's architecture and SDK are IP belonging to NXP, and it is our responsibility not to release any of this information.

As part of the EcoCAR3 competition, the system design is constrained by competition guidelines that prohibit certain hardware and software. It is important that we understand these guidelines and make sure that we stay within the rules defined; if not, we may be unfairly scored, benefiting from something the other teams did not have access to.


Lastly, because we are using multiple libraries from many different sources, we must make sure that we honor the provided license agreements. It is important that we do not use code developed by another person without the appropriate permission and without crediting them accordingly; doing so would be unfair to the original author and, in many cases, could leave us vulnerable to legal action.


Appendix E – FMEA

FMEA Risk Analysis (columns: Risk/Failure; Severity; Occurrence; Detection; Risk Priority Number (RPN); Risk Treatment; Who and What Triggers the Treatment)

1. Component delivery delay
   Severity 5, Occurrence 2, Detection 10, RPN 10
   Risk treatment: Use prebuilt stereo vision to mitigate slowdown in algorithm design.
   Trigger: If the parts are ordered, or the parts do not arrive on the estimated delivery date, reorder parts or contact the distributor.

2. Hardware purchased doesn't work with the desired system
   Severity 9, Occurrence 1, Detection 10, RPN 9
   Risk treatment: Negotiate with vendors to receive a refund and order the desired parts.
   Trigger: The team will need to reorder the parts.

3. Missing deadlines
   Severity 9, Occurrence 1, Detection 10, RPN 9
   Risk treatment: Proper project management.
   Trigger: Team.

4. Stereo vision camera frame offset
   Severity 2, Occurrence 2, Detection 8, RPN 4
   Risk treatment: Create a calibration technique if necessary.
   Trigger: User recognition; the user/developer fixes it.

5. Misread of frame due to various weather conditions
   Severity 8, Occurrence 5, Detection 3, RPN 40
   Risk treatment: Develop methods that detect weather-related errors.
   Trigger: Conditional code with functions to detect the condition and alter the parameters of the computer vision algorithms.

6. Pedestrian not recognized
   Severity 8, Occurrence 5, Detection 8, RPN 40
   Risk treatment: Develop proximity automatic detection.
   Trigger: Pedestrian doesn't fit the normal test case; error in input frame(s).

7. Camera mount not properly fitting in the car
   Severity 9, Occurrence 3, Detection 10, RPN 27
   Risk treatment: Create a new mount, or alter the current mount.
   Trigger: Improper installation.

8. Buzzer interface failure
   Severity 6, Occurrence 4, Detection 9, RPN 24
   Risk treatment: Order a new buzzer to work with the board.
   Trigger: The team's improper order.

Table 6 - FMEA
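As a side note on the arithmetic above, the Risk Priority Number values in Table 6 are consistent with RPN = Severity x Occurrence (for example, 8 x 5 = 40 for the weather and pedestrian risks) rather than the more common Severity x Occurrence x Detection product. A minimal Python sketch of that bookkeeping, assuming the Severity x Occurrence convention and copying the values from Table 6, is shown below.

# Minimal sketch of the risk ranking; assumes RPN = Severity x Occurrence,
# which is consistent with the values listed in Table 6.
risks = [
    ("Component delivery delay", 5, 2, 10),
    ("Hardware incompatible with desired system", 9, 1, 10),
    ("Missing deadlines", 9, 1, 10),
    ("Stereo camera frame offset", 2, 2, 8),
    ("Frame misread in bad weather", 8, 5, 3),
    ("Pedestrian not recognized", 8, 5, 8),
    ("Camera mount does not fit in car", 9, 3, 10),
    ("Buzzer interface failure", 6, 4, 9),
]

# Rank risks by the priority number used in Table 6 (severity * occurrence).
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, severity, occurrence, detection in ranked:
    print(f"{name}: RPN = {severity * occurrence}")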


Appendix F – DTVC

As seen in the figures above, our goal this year is to develop an ADAS system that can track pedestrians, vehicles, and lanes. The success of the EcoCAR3 ADAS team in developing effective and efficient recognition and tracking systems is highly dependent on the quantity and quality of the tests performed. As part of the rubric given to us, the competition will focus on how the team tested and modified parameters to reach a live tracking and distance estimation system.

Figure 28 - Work Flow

Our test plan involves design, development, test, and optimization. A large portion of the project will be testing and then optimizing our algorithms, which involves gathering test data in different conditions (described in the plans below). Some initial limits were set, but those values may change as the test and optimization cycles continue; these limits can be seen in the test plans below.

Due to the nature of our project, most of what we develop can be tested the same way, with the main categories being object detection, distance and velocity estimation, and porting algorithms to the embedded system. Below are four representative field tests which describe how the performance of the ADAS system will be measured.


Test 1: Performance Test of Vehicle Detection (using the PC)

Prerequisite Tasks:
1. Classifier trained with training images.
2. Camera rig set up to capture live footage.
3. Write a script to compare against ground truth information. A detection is considered correct if the bounding boxes cover 80% of the pixels (an illustrative version of this check follows Table 7).

Design Requirements:
1. Detects vehicles on freeway/highway.
2. Detects vehicles on freeway/highway while the road curves.
3. Detects vehicles through changing light conditions.
4. Detects vehicles on city streets.
5. Detects vehicles when changing lanes behind a vehicle.

Setup Procedure:
1. Gather test footage/photos from an online database or through personal video capture in a vehicle with the camera rig set up.
2. Create a ground truth database for all gathered test footage/photos. Use the competition-provided MATLAB software to create bounding boxes around vehicles for each frame/photo.
3. For tests with gathered footage/photos, create a script to compare ground truth information between algorithm-generated bounding boxes and manually created bounding boxes.
4. For live testing, have the camera rig set up with the PC and two people in the car.
5. Test with multiple sets of test images that fulfill the design requirements.

Performance Requirements:
Database footage/photo testing:
1. True Positive % and True Negative % >= 90%
2. False Positive % and False Negative % <= 10%
Live testing:
1. True Positive % and True Negative % >= 90%
2. False Positive % and False Negative % <= 10%
Frame rate:
1. 15 < frame rate < 20 fps at 720p resolution

Action Plan (based on test results):
1. If true positive or true negative rates are not above 90%:
   - Modify classifier parameters (type of classifier, classifier training images).
   - Increase image resolution.
2. If the frame rate does not fall within the bounds:
   - Modify the number of stages in the classifier.
   - Modify the classifier to only look for certain angles of vehicles.
   - Parallelize the classifier to handle multiple angles with multiple threads.
   - Decrease image resolution.

Table 7 - Test 1: Vehicle Detection
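As referenced in the prerequisite tasks, a detection is scored against the manually labeled ground truth by pixel coverage. The following is an illustrative Python sketch of that 80% coverage check, with boxes given as (x, y, width, height); it stands in for, and is not, the competition-provided MATLAB tooling.

# Illustrative check of the "bounding box covers 80% of pixels" criterion.
# Boxes are (x, y, width, height); this is a sketch, not the competition MATLAB tool.
def overlap_area(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(w, 0) * max(h, 0)

def is_true_positive(detected, ground_truth, coverage=0.80):
    gt_area = ground_truth[2] * ground_truth[3]
    return overlap_area(detected, ground_truth) >= coverage * gt_area

# Example: a detection shifted slightly from the labeled vehicle still counts.
print(is_true_positive((105, 60, 200, 150), (100, 50, 200, 150)))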


Test 2: Performance Test of Vehicle Distance Estimation (using the PC)

Prerequisite Tasks:
1. Stereo camera rig functional.
2. Stereo camera rig calibrated.
3. Object detection can identify and bound each receding vehicle in each frame (90% true positive rate).

Design Requirements:
1. Calculate the distance to each vehicle (any method may be used; see the sketch following Table 8).
2. Annotate each bounding box with the distance in meters.
3. Estimates beyond 30 meters need not be accurate to within 5%.

Setup Procedure:
Static test:
1. Park two vehicles a known distance apart.
2. Place the camera rig in one vehicle.
3. Capture footage.
4. Run the distance estimation algorithm on the captured footage back in the lab.
Dynamic test:
1. Place the camera rig in the vehicle.
2. Follow another test vehicle down a closed-off road.
3. Estimate distance in real time with the laptop algorithm.
4. Capture the "true" distance using a radar gun or similar device.
5. Log the results.
6. Analyze the results back in the lab (MATLAB).

Performance Requirements:
Static test:
1. Estimation is within 5% of the actual distance (for distances < 30 meters).
Dynamic test:
1. Estimation is within 5% of the actual distance (for distances < 30 meters).
2. Estimates must be made on each frame.
3. Frame rate must be within 15-20 fps at 720p resolution.

Action Plan (based on test results):
1. If the estimate is not within 5%:
   - Increase image resolution.
   - Change the inter-camera distance of the stereo vision system.
   - Calibrate the cameras with pictures that have a higher confidence score.
2. If the frame rate is too low (dynamic test only):
   - Decrease image resolution.
   - Optimize object detection (see Test 1 Action Plan).
   - Adjust the pixel region the algorithm covers.

Table 8 - Test 2: Distance Estimation
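For context on the accuracy criterion above: one common way to estimate distance from a calibrated stereo pair is Z = f * B / d, where f is the focal length in pixels, B is the camera baseline, and d is the disparity. The sketch below applies that relation together with the 5% pass/fail check; the focal length, baseline, and disparity values are made up for illustration and are not our calibration results.

# Sketch of stereo distance estimation (Z = f * B / d) and the 5% acceptance check.
# The focal length, baseline, and disparity values below are illustrative only.
def stereo_distance_m(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

def within_tolerance(estimate_m, truth_m, tolerance=0.05, max_range_m=30.0):
    # Estimates beyond 30 m are not required to meet the 5% bound.
    if truth_m >= max_range_m:
        return True
    return abs(estimate_m - truth_m) / truth_m <= tolerance

estimate = stereo_distance_m(focal_px=700.0, baseline_m=0.12, disparity_px=4.2)
print(estimate, within_tolerance(estimate, truth_m=20.0))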


Test 3: Performance Test of Lane Tracking (using the PC)

Prerequisite Tasks:
1. Test footage of lanes recorded at varying clarity (solid white, solid faded white, obstructions such as snow, no paint).
2. Camera rig set up to capture live footage.

Design Requirements:
1. Highlight lanes on the road and output the pixel width.
2. Detect lanes through changing light conditions.
3. Detect lanes on freeway/highway.
4. Detect lanes on freeway/highway while the road curves.
5. Detect lanes on city streets.
6. Detect lane changes and highlight them in red.
7. Determine which direction the lane change is made.

Setup Procedure:
1. Gather footage of the needed lanes, both still images and live video.
2. Develop a ground truth database for all gathered test footage/photos. Use the competition-provided MATLAB software to create bounding boxes around lanes for each frame/photo.
3. Create a script to compare the developed algorithm against the ground truth information. A detection is considered correct if the bounding boxes cover 80% of the pixels.
4. Live test with two people in the car; track the frame rate and analyze the video after capture.

Performance Requirements:
Database footage/photo testing:
1. True Positive % and True Negative % >= 90%
2. False Positive % and False Negative % <= 10%
3. Detect missing lanes with 50% success.
Live testing:
1. True Positive % and True Negative % >= 90%
2. False Positive % and False Negative % <= 10%
3. Detect missing lanes with 40% success.
Frame rate:
1. 15 < frame rate < 20 fps at 720p resolution

Action Plan (based on test results):
1. If true positive rates are not above 90%:
   - Adjust the parameters of the lane detection algorithm.
   - Use a different algorithm (Hough transform vs. Canny edge detection; see the sketch following Table 9).
   - Adjust the color space to remove the effects of lighting conditions.
2. If the frame rate does not fall within the bounds:
   - Use lane tracking, with detection only every 5 frames, to increase performance.
   - Adjust the pixel region the algorithm covers.

Table 9 - Test 3: Lane Detection
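The action plan above mentions choosing between Hough-transform and Canny-based processing; the sketch below shows the general Canny-plus-probabilistic-Hough style of lane detection in OpenCV, restricted to the lower half of the image. The thresholds and file names are placeholders rather than our tuned parameters.

# Minimal Canny + probabilistic Hough lane-line sketch (placeholder thresholds).
import cv2
import numpy as np

frame = cv2.imread("road_frame.jpg")            # placeholder file name
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Restrict the search to the lower half of the image, where lane paint appears.
mask = np.zeros_like(edges)
mask[edges.shape[0] // 2:, :] = 255
edges = cv2.bitwise_and(edges, mask)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=40, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)

cv2.imwrite("lanes_annotated.jpg", frame)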


Test 4: Performance Test of Lane Detection on the NXP Board vs. the PC Component

Prerequisite Tasks:
1. Lane detection implemented and tested on the PC component.
2. Lane detection algorithm ported over to the NXP board.

Design Requirements:
1. Highlight lanes on the road and output the pixel width.
2. Detect lanes through changing light conditions.
3. Detect lanes on freeway/highway.
4. Detect lanes on freeway/highway while the road curves.
5. Detect lanes on city streets.
6. Detect lane changes and highlight them in red.
7. Determine which direction the lane change is made.

Setup Procedure:
1. Gather footage of the needed lanes, both still images and live video.
2. Test footage of lanes recorded at varying clarity (solid white, solid faded white, obstructions such as snow, no paint).
3. Develop a ground truth database for all gathered test footage/photos. Use the competition-provided MATLAB software to create bounding boxes around lanes for each frame/photo.
4. For tests with gathered footage/photos, create a script to compare ground truth information between algorithm-generated bounding boxes and manually created bounding boxes.
5. For live testing, have the camera rig set up with the PC and two people in the car.
6. Test with multiple sets of test images.

Performance Requirements:
Database footage/photo testing:
1. True Positive % and True Negative % >= 90%
2. False Positive % and False Negative % <= 10%
Live testing:
1. True Positive % and True Negative % >= 90%
2. False Positive % and False Negative % <= 10%
Frame rate (a simple timing sketch appears after Table 10):
1. Frame rate must be at least 90% of the PC component's acceptable frame rate.
2. 13 < frame rate < 18 fps at 720p resolution
Power requirement:
1. Power < 50 Watts

Action Plan (based on test results):
1. If true positive or true negative rates are not above 90%:
   - Adjust the parameters of the lane detection algorithm.
   - Use a different algorithm (Hough transform vs. Canny edge detection).
   - Adjust the color space to remove the effects of lighting conditions.
2. If the frame rate does not fall within the bounds:
   - Use the APEX vector processing unit to parallelize some aspects of the algorithm.

Table 10 - Test 4: NXP Board
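The frame rate bounds used in Tests 1 through 4 (for example, 15-20 fps on the PC and 13-18 fps on the board) can be checked with a simple timing loop such as the sketch below; process_frame is a placeholder for whichever detection pipeline is under test, and test_drive.mp4 is a placeholder file name.

# Sketch of measuring average frames per second against the acceptance bounds.
# process_frame() is a placeholder for the detection pipeline under test.
import time
import cv2

def measure_fps(video_path, process_frame, max_frames=300):
    cap = cv2.VideoCapture(video_path)
    frames, start = 0, time.time()
    while frames < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        process_frame(frame)
        frames += 1
    cap.release()
    return frames / (time.time() - start)

fps = measure_fps("test_drive.mp4",
                  process_frame=lambda f: cv2.cvtColor(f, cv2.COLOR_BGR2GRAY),
                  max_frames=100)
print(f"{fps:.1f} fps; within PC bounds: {15 < fps < 20}")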


Appendix G – Car Classification

Rules for Manually Classifying Positive Vehicle Images

Orientation:

front, back, side left, side right, passing left, passing right, oncoming left, oncoming right

Figure 29 - Car Orientations

Usability: 0 1 2 3

The usability is a measure of how useful the image will be for training for realistic driving

scenarios.

0 = This image should not be included in any training set.

1 = poor

2 = good

3 = great
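To make the labeling rules above concrete, a hypothetical record for one classified image might look like the sketch below. The field names and file name are illustrative only and do not reflect the actual format of our ground truth database.

# Hypothetical label record for one manually classified vehicle image.
# Field names and file name are illustrative only.
ORIENTATIONS = {"front", "back", "side left", "side right",
                "passing left", "passing right", "oncoming left", "oncoming right"}

label = {
    "image": "frame_000123.png",   # placeholder file name
    "orientation": "oncoming left",
    "usability": 2,                # 0 = exclude, 1 = poor, 2 = good, 3 = great
}

assert label["orientation"] in ORIENTATIONS
assert label["usability"] in (0, 1, 2, 3)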


Appendix H – Code Profile

In this section we analyzed the code for our real-time system to see where we would need to optimize. The figures below show that computing the stereo distance estimation is a major bottleneck. We have compiled the MATLAB code to C as a .mex file to increase performance, and we are currently implementing the ZED SDK so that the stereo vision distance estimation can be constructed with GPU support.
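Because the profiling figures below identify the stereo distance estimation as the main bottleneck, the following is a rough OpenCV sketch of that stage (semi-global block matching on rectified left/right frames). It stands in for, and is not, our MATLAB or ZED SDK implementation; the file names and matcher parameters are placeholders.

# Rough OpenCV sketch of the disparity stage that profiling identified as the bottleneck.
# Placeholder file names and matcher parameters; not the MATLAB or ZED SDK implementation.
import cv2

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype("float32") / 16.0  # SGBM output is fixed-point

# Depth can then be recovered as Z = f * B / disparity for calibrated cameras.
print(disparity.shape, disparity.max())

Moving this step onto the GPU, as the ZED SDK allows, is what we expect to relieve the bottleneck.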

Figure 30 - Profile Summary (MATLAB)


Figure 31 - MATLAB function bottlenecks

Figure 32 - Profile Stereovision

