
An on-board vision based system for drowsiness detection in automotive drivers

Anirban Dasgupta • Anjith George • S. L. Happy • Aurobinda Routray • Tara Shanker

A. Dasgupta • A. George • S. L. Happy • A. Routray: Indian Institute of Technology Kharagpur, Kharagpur, India (e-mail: [email protected]). T. Shanker: DeitY, New Delhi, India

Published online: 3 August 2013

© Indian Institute of Technology Madras 2013

Int J Adv Eng Sci Appl Math (April–September 2013) 5(2–3):94–103. DOI 10.1007/s12572-013-0086-2

Abstract This paper proposes a system for on-board monitoring of the loss of attention of an automotive driver, based on the PERcentage of eye CLOSure (PERCLOS). The system has been developed considering practical on-board constraints such as illumination variation, poor illumination, free movement of the driver's face and limitations of the algorithms. A novel framework for PERCLOS computation is reported in this paper. The system consists of an embedded processing unit, a camera, a near-infrared (NIR) lighting system, a power supply, a set of speakers and a voltage regulation unit. The image based algorithm uses PERCLOS as an indicator of the loss of attention of the driver. The authenticity of PERCLOS as an indicator of drowsiness has been validated using EEG signals.

Keywords Real-time algorithm • PERCLOS • On-board testing • NIR lighting • Electroencephalography

1 Introduction

A major cause of road accidents is the loss of attention in automotive drivers, which has been a concern for transportation safety over the years. Hence, on-board monitoring of the alertness level of the driver is necessary to prevent such occurrences. The work in [1] highlights such attempts. The alertness level of drivers can be assessed using different measures such as the electroencephalogram (EEG) [2–4], ocular features [5], blood samples [5], speech [6, 7] and skin conductance [8]. The EEG based method has been reported to be the most authentic cue for estimating the reduction in alertness level [2, 3]. However, being a contact based method, it is impractical to implement on-board. In this paper, image cues have been considered for implementation, by virtue of being non-contact. The EEG based method has been used for off-board validation of the vision based algorithm. The first contribution of this paper is the development of a novel framework for estimating the PERcentage of eye CLOSure (PERCLOS) in real time, thereby overcoming certain issues prevailing in the literature. The second contribution is the validation of the algorithm using EEG signals through off-board experiments.

Several approaches have been reported for developing systems for on-board monitoring of the alertness level of automotive drivers. The work in [9] reports a system named Driver AssIsting System (DAISY) as a monitoring and warning aid for the driver in longitudinal and lateral control on German motorways. The warnings are generated based on knowledge of the behavioural state and condition of the driver. However, this is a model based approach, and proper on-board testing and validation have not been discussed. Singh and Papanikolopoulos [10] proposed a non-invasive vision-based system for the detection of the fatigue level of drivers. The system uses a camera that points directly towards the driver's face and monitors the driver's eyes in order to detect micro-sleeps. The system is reported to detect the eyes at 10 fps and track them at 15 fps. The accuracy of the system is about 95 % for small head movements, with a tolerance on head tilt of up to 30°. However, in a practical driving scenario the driver's head movement degrades the performance of this system. The Robotics Institute at Carnegie Mellon University has developed a drowsy driver monitor named Copilot [11], which is a video-based system that estimates PERCLOS. However, robustness against illumination variation, head rotation etc. has not been addressed. The Copilot system uses the bright-pupil/dark-pupil method for the detection of the eyes, which requires active illumination. A similar system called Driver Assistance System (DAS) has been developed by the group at the Australian National University [12]. It uses a dashboard-mounted head-and-eye-tracking system to monitor the driver. The Distillation algorithm has been used to monitor the driver's performance. Feedback on deviation in lane tracking is provided to the driver through force feedback to the steering wheel, proportional to the lateral offset estimated by the lane tracker. However, in the systems [10–12], issues such as illumination variation and head rotation have not been adequately addressed.

Smith et al. [13] described a system in which the driver's head is detected using colour predicates and tracked using optical flow. The system is reported to be robust against face rotations as well as head and eye occlusions. Eye occlusions are detected via the likelihood of rotational occlusion of the eyes. The algorithm gives an approximate 3D gaze direction with a single camera. The size of the head is assumed constant across persons, so the gaze direction can be estimated without prior knowledge of the distance between the head and the camera. However, the accuracy of the system is illumination dependent. Zhu and Ji [14] presented a real-time system for on-board monitoring of the driver using active infrared illumination. Visual cues such as PERCLOS, Average Eye Closure per Second (AECS), 3D head orientation and facial expressions are fused in a Bayesian framework to obtain an estimate of the fatigue level. Bergasa et al. [15] developed a system which monitors the driver's fatigue level using six parameters: PERCLOS, eye closure duration, blink frequency, head nodding frequency, face position and fixed gaze. These cues are combined using a fuzzy classifier to obtain the alertness level of the driver. The bright-pupil/dark-pupil method is used for pupil detection. However, the method, being based on active illumination, requires special hardware design for robust performance. In [16], Eriksson and Papanikolopoulos proposed a system to locate and track the eyes of the driver. They used a symmetry-based approach to locate the face for subsequent detection and tracking of the eyes. Template matching has been used for eye state classification. Sacco and Farrugia [17] presented a real-time system for classifying the alertness of drivers. The Viola–Jones framework has been used for the detection of the eyes and mouth, followed by tracking of these features using template matching. The final decision about the fatigue level is taken by considering three parameters: PERCLOS, average eye closure interval and the degree of mouth opening. An SVM is used to fuse these three features. However, the performance under head rotation has not been considered. Moreover, the detection of features at night and the effect of illumination have not been addressed in this work.

From the literature review, it is apparent that the existing systems have limitations with respect to illumination variation, head rotation and poor illumination when assessing driver fatigue. The present system is a solution which addresses these existing problems in the assessment of the driver's level of fatigue.

In this paper, a vision based system is proposed which has been tested systematically under on-board conditions for both day and night driving. The framework for estimating the PERCLOS value has been modified from previous frameworks to make the system robust against illumination variation, head rotation, poor illumination etc. The system comprises a single board computer running on an Intel Atom processor, a camera, a set of speakers, a near-infrared (NIR) lighting system, a power supply and a voltage regulation unit. The embedded system has 1 GB RAM and executes at a clock speed of 1.6 GHz. The vision based algorithm has been cross-validated off-board using EEG signals.

This paper is organised as follows. Section 2 describes the vision based algorithm for estimating the state of alertness of the driver. Section 3 briefly describes the on-board testing and Sect. 4 presents the results. Section 5 describes the EEG based validation of the image based algorithm. Section 6 describes the limitations of the system. Section 7 concludes the paper.

2 Vision based algorithm

The vision based algorithm is based on the PERCLOS [18] value of the driver. PERCLOS may be defined as the percentage of eyelid closure over the pupil over a given time interval, as shown below.

P = \frac{E_c}{E_t} \times 100\,\%        (1)

In (1), P is the PERCLOS value, E_c is the closed-eye count over a predefined interval, while E_t is the total eye count in the same interval. The literature [18] states that a higher value of P indicates a higher loss of attention and vice versa. In this paper, the PERCLOS value is computed every minute over a running average window of 3 min duration [19]. Based on a threshold of 15 % [15], the driver's alertness state is classified as drowsy or alert.

It is evident from the definition of PERCLOS that its estimation must be preceded by face and eye detection. The following sections describe the methodology for face and eye detection.


2.1 Face detection

The face detection is carried out using a classifier based on Haar-like features [20]. A robust classifier is trained using the optimal sets of parameters obtained from our earlier work [21]. A real-time framework is proposed in which the runtime performance has been improved by down-sampling the frames. An optimal down-sampling scale factor of 5 for this specific application, with a camera resolution of 640 × 480, has been obtained experimentally as a trade-off between accuracy and runtime. Taking the approximate distance of the driver from the camera and the minimum face size after down-sampling into consideration, the algorithm has been optimised so that the Haar classifier performs accurately with low time complexity.
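A minimal sketch of this detection step is given below, using OpenCV's stock frontal-face Haar cascade as a stand-in for the retrained classifier of [21]; the scale factor of 5 follows the text, while the detectMultiScale parameters are assumptions.

```python
import cv2

DOWNSAMPLE = 5  # experimentally chosen trade-off reported in the paper

# Assumption: the standard OpenCV frontal-face cascade stands in for the
# authors' retrained classifier [21].
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame_gray):
    """Detect the largest face on a down-sampled frame and map it back."""
    small = cv2.resize(frame_gray, None, fx=1.0 / DOWNSAMPLE, fy=1.0 / DOWNSAMPLE)
    faces = face_cascade.detectMultiScale(small, scaleFactor=1.1, minNeighbors=3)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])
    # rescale the bounding box back to the original 640 x 480 resolution
    return tuple(int(v * DOWNSAMPLE) for v in (x, y, w, h))
```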

2.1.1 Detection of in-plane rotated faces

The classifier mentioned above was found to be robust for frontal faces. However, the issue of detecting rotated faces still prevailed. To achieve robustness in the detection of in-plane rotated faces, an affine transformation [22] of the input frame has been used. The idea is to rotate the frame so as to align the face as frontal. The rotation matrix R is obtained from the angle of rotation \theta of the image matrix as shown in (2). If a point on the input image is A = (x, y)^T and the corresponding point on the affine-transformed image is B = (x', y')^T, then B can be related to A using (3).

R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}        (2)

A = RB        (3)

If a frontal face is not detected, a rotation matrix R with \theta = ±15° is created to transform the image using (3). It is implicit that when a face is detected in a rotated image, there is a high probability that the face will also be detected by rotating a few subsequent frames by the same angle. To reduce the time complexity, the algorithm is optimised such that, when a face is detected while rotating the image by an angle \theta_s, the next frame is first rotated directly by \theta_s and then searched for a face. If no face is detected, the search continues at rotation angles \theta_s ± \theta, \theta_s ± 2\theta and \theta_s ± 3\theta. Thus, the algorithm was found to be robust for every possible in-plane rotation.

The affine transformation is applied only when a frontal face is not detected by the Haar classifier, thereby preventing an appreciable loss in runtime owing to the computational burden of the transform.
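A sketch of this rotation search is shown below. It assumes the detect_face helper from the earlier face detection sketch and uses OpenCV's affine warp; the bookkeeping of the last successful angle is illustrative rather than the authors' exact logic.

```python
import cv2

STEP = 15  # degrees, as in the paper

def rotate(frame_gray, angle_deg):
    """Rotate the frame about its centre by angle_deg (affine transform)."""
    h, w = frame_gray.shape[:2]
    R = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(frame_gray, R, (w, h))

def detect_rotated_face(frame_gray, last_angle=0):
    """Try the previously successful angle first, then widen the search."""
    candidates = [last_angle]
    for k in (1, 2, 3):
        candidates += [last_angle + k * STEP, last_angle - k * STEP]
    for angle in candidates:
        face = detect_face(frame_gray if angle == 0 else rotate(frame_gray, angle))
        if face is not None:
            return face, angle          # remember this angle for the next frame
    return None, last_angle
```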

2.1.2 Detection of off-plane rotated faces

The affine transformation based method was found to aid the Haar classifier in detecting in-plane rotated faces. The detection of off-plane rotated faces has been achieved using a perspective transformation, which is a robust and computationally inexpensive method for aligning the head under small off-plane rotations [23]. It is a mapping of one view of a face to another view. The off-plane rotated image I(x', y') can be related to its original frontal image I(x, y) using

x' = \frac{a_0 x + a_1 y + a_2}{a_3 x + a_4 y + 1}        (4)

y' = \frac{a_5 x + a_6 y + a_7}{a_3 x + a_4 y + 1}        (5)

The coefficients a_0–a_7 are estimated using the non-linear least squares method as in [23]. Once the coefficients are obtained, the desired perspective mapping is applied. In the driving context, off-plane rotations are observed rarely and are restricted to within ±30°, and the perspective transformation is found to compensate for such rotations.
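The sketch below illustrates such a perspective correction. It assumes that a set of facial landmark correspondences between the current view and a frontal template is available from some external tracker, and it uses OpenCV's findHomography as a stand-in for the non-linear least squares fit of [23]; the RANSAC option is an added assumption.

```python
import numpy as np
import cv2

def align_off_plane(frame_gray, src_pts, dst_pts):
    """Warp the current view towards a frontal view using a perspective map.

    src_pts, dst_pts: (N, 2) arrays of corresponding points (N >= 4), e.g. eye
    and mouth corners in the current frame and in a frontal template.  The
    homography has eight free coefficients, matching a0..a7 in Eqs. (4)-(5)
    with the last entry normalised to 1.
    """
    H, _ = cv2.findHomography(np.float32(src_pts), np.float32(dst_pts),
                              method=cv2.RANSAC)
    h, w = frame_gray.shape[:2]
    return cv2.warpPerspective(frame_gray, H, (w, h))
```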

2.1.3 Performance of face detection

The face detection algorithm is found to provide a hit rate of around 95 % with a false alarm rate of 2 % under on-board conditions. Once the face is detected, a region of interest (ROI) is selected as the upper half of the detected face region, based on the morphology of the face, for eye detection. This reduces the search space for the detection of the eyes.

2.2 Eye detection

For the estimation of PERCLOS, accurate detection and classification of the eye state is necessary. As reported in the literature, eye detection may be carried out using pixel-to-edge information [24], optimal wavelet packets and radial basis functions [25], combined binary edge and intensity information [26], projection functions [27], principal component analysis (PCA) [28], classifiers based on Haar-like features [29], support vector machines (SVM) [30], local binary patterns (LBP) [29] and other methods [31–33]. Block LBP histogram features are more efficient at extracting information than global histogram based LBP [34]. In this paper, block LBP histogram features have been used for real-time eye detection owing to their illumination invariance and computational simplicity.


2.2.1 LBP features

The LBP operator labels the pixels of an image by thresholding the neighborhood of each pixel and considering the result as a binary number. The use of LBP features for object detection has been detailed in [34]. In the present work, the LBP texture descriptors have been used to build several local descriptions of the eye and combine them into a global description. These local feature based methods are more robust against variations in pose or illumination than holistic methods. The procedure for obtaining the feature vector from local LBP histograms is given in Table 1. In the detection phase, the LBP histogram features are obtained in the localized eye region and then projected into the PCA feature space to obtain the energy components.

For evaluating the performance of the algorithm, 300 prerecorded eye images are used for training. The algorithm is tested on the whole database. Figure 1 shows some eye localization results using block LBP features.
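The following sketch follows the recipe of Table 1 using off-the-shelf LBP and PCA implementations (scikit-image and scikit-learn). The interpretation of the 5 × 4 sub-block layout, the LBP neighbourhood (P = 8, R = 1) and the histogram range are assumptions; only the 50 × 40 resize, the 16-bin histograms and the 40 principal components come from the table.

```python
import numpy as np
import cv2
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

GRID_ROWS, GRID_COLS = 5, 4   # sub-block layout (assumed reading of "5 x 4")
N_BINS = 16                   # 16-bin histogram per block, as in Table 1
N_COMPONENTS = 40             # 40 principal components, as in Table 1

def block_lbp_histogram(eye_img_gray):
    """Concatenate per-block 16-bin LBP histograms of a 50 x 40 eye image."""
    img = cv2.resize(eye_img_gray, (50, 40))
    lbp = local_binary_pattern(img, P=8, R=1, method="default")
    feats = []
    for row in np.array_split(lbp, GRID_ROWS, axis=0):
        for block in np.array_split(row, GRID_COLS, axis=1):
            hist, _ = np.histogram(block, bins=N_BINS, range=(0, 256))
            feats.append(hist.astype(np.float32))
    return np.concatenate(feats)

def train_eye_subspace(train_images):
    """Fit PCA on the block-LBP features of the training eye images."""
    X = np.vstack([block_lbp_histogram(im) for im in train_images])
    return PCA(n_components=N_COMPONENTS).fit(X)

def reconstruction_error(pca, feature_vec):
    """Detection criterion: a small reconstruction error marks an eye-like sub-window."""
    w = pca.transform(feature_vec.reshape(1, -1))
    recon = pca.inverse_transform(w)
    return float(np.linalg.norm(recon - feature_vec))
```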

2.3 Eye state classification

For accurate estimation of PERCLOS, the detected eye needs to be accurately classified as open or closed. Since this is a binary classification problem, an SVM [31], reported to be a robust binary classifier, has been used for the purpose. The weights obtained as described in the eye detection section, along with the ground truth (open or closed), are used for training the SVM. The energy components are fed into the SVM, which returns the class label.

The training of the SVM has been carried out with 460 eye images captured under normal and NIR illumination. Some training images are shown in Fig. 2. The testing is carried out using another 1,700 images taken from the same database. The block LBP histogram feature transform is carried out on the training images, and PCA is then carried out on the LBP feature vectors. The weight vector corresponding to each sample is found by projecting the sample onto the PCA subspace. These weights, along with the ground truth, are used for SVM training. The training is carried out with different kernels, and the classification results are shown in Table 2. The results reveal that the performance of the SVM is consistent for normal and NIR illuminated images. The algorithm was cross-validated using EEG signals, as reported in an earlier work [3].

The real-time algorithm is coded on a single board computer with an Intel Atom processor, with a 1.6 GHz clock speed and 1 GB RAM. The overall processing speed is found to be 12 fps, which is acceptable for accurate estimation of PERCLOS. The algorithm was first tested under laboratory conditions. However, on-board conditions are quite different from laboratory conditions, and therefore testing in practical driving scenarios is necessary. Hence, two types of on-board tests were performed, separately for day and night, to test the robustness of the algorithm in actual driving conditions. The only difference in the night driving tests was the use of NIR lighting.
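A sketch of the eye-state classifier in scikit-learn is given below. The mapping of the "Quadratic" and "Polynomial" kernels of Table 2 to polynomial kernels of degree 2 and 3, and the variable names W_train, y_train etc., are assumptions; the inputs are the PCA weight vectors produced by the block-LBP stage.

```python
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

# Assumed inputs: W_train / W_test are PCA weight vectors from the block-LBP
# stage; y_train / y_test are ground-truth labels (1 = open, 0 = closed).
def train_eye_state_svm(W_train, y_train, kernel="linear", degree=3):
    # "Quadratic" and "Polynomial" rows of Table 2 are taken here as poly
    # kernels of degree 2 and 3 -- an assumption about the original setup.
    clf = SVC(kernel="linear") if kernel == "linear" else SVC(kernel="poly", degree=degree)
    return clf.fit(W_train, y_train)

def evaluate(clf, W_test, y_test):
    """Return tp, tn, fp, fn and the tpr/fpr percentages reported in Table 2."""
    tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(W_test)).ravel()
    tpr = 100.0 * tp / (tp + fn)
    fpr = 100.0 * fp / (fp + tn)
    return tp, tn, fp, fn, tpr, fpr
```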

Once the eyes have been classified as open or closed, the PERCLOS value is computed using (1). A 66.67 % overlapping time window of 3 min has been used to compute the PERCLOS value every minute, and in the event it exceeds 20 %, an alarm is sounded.

2.4 Overall schematic

The overall schematic is shown in Fig. 3. The PERCLOS value is computed at 1 min intervals after the eye is localized and the eye state is classified in real time. The face hit rate is also taken as an input, and in the event of a low face hit rate, the driver is alerted to pay attention to the road. Figure 4 shows the overall block diagram of the system, while Fig. 5 shows the process flow for estimating PERCLOS.

3 On-board testing

3.1 System description

The alertness monitoring system consists of a single USB camera with a maximum frame rate of 30 fps at a resolution of 640 × 480 pixels. The camera is placed on the steering column, just behind the steering wheel, to obtain the best view of the driver's face. The camera placement was finalized after extensive testing to obtain the best results.

Table 1 Algorithm: block LBP

Training phase
1. Eye images are resized to 50 × 40 resolution and each image is divided into sub-blocks of size 5 × 4
2. The LBP feature values of the sub-blocks are found and a 16-bin histogram of each block is calculated
3. The histograms of the blocks are concatenated to form a global feature descriptor
4. PCA is carried out on the feature vectors and the 40 eigenvectors with the largest eigenvalues are selected

Detection phase
1. The ROI from the face detection stage is obtained
2. For each sub-window the block LBP histogram is found and projected into the eigenspace
3. The sub-window with the minimum reconstruction error is found
4. If the reconstruction error is less than a threshold, it is considered a positive detection


The SBC is powered at 12 V DC from the car supply. Drawing power from the car battery has the advantage of removing the need for a separate power source, which would add to the cost of the system as well as the space requirement. The typical current drawn from the input source is about 1,200 mA, and the approximate typical power drawn from the supply is 15 W. A voltage regulator unit, comprising an LM317 IC along with some resistors, a capacitor and an inductor, is used at the input to remove high voltage spikes from the car supply. The voltage regulation unit, along with a 1.5 A fuse, ensures protection of all other units. The regulator unit specifications were obtained after measuring the input current and voltages under actual operating conditions. The processing unit comprises an Intel Atom N450 processor with a clock speed of 1.6 GHz. The motherboard of the embedded platform has an x86 architecture. The system has been ruggedised for the automotive environment, especially to protect it from the vibration and jerks of the vehicle. An NIR lighting arrangement, consisting of a matrix of 3 × 8 Gallium Arsenide LEDs, is also connected across the same supply in parallel with the embedded platform. Thorough testing showed that the 3 × 8 matrix was the minimum size sufficient to illuminate the face correctly. The NIR module is operated at 10 V DC and draws a typical current of 250 mA. The lighting system is connected through a light dependent resistor (LDR) to automatically switch on the NIR module in the absence of sufficient illumination. A seven inch LED touch screen is used to display the results. Table 3 gives the power ratings of the components.

Figure 6 shows the different modules used in the system. The speed of the algorithm was found to be 12 fps, which is acceptable for the estimation of PERCLOS.

3.2 Testing

The main objective of the testing was to evaluate the performance of the system under on-board conditions. Using the sequence of frames captured from the camera, the processor computes PERCLOS. Based on a threshold value of 15 %, a voice alarm is generated to warn the driver. The NIR system is switched on automatically using an LDR, based on the external lighting conditions. The tests were carried out on a rough road followed by a smooth road (highway). Twenty subjects took part in the tests under both day and night driving conditions. The face and eye detection was found to be accurate for both smooth and jerky road conditions. In the first test, during midday, when light fell directly on the camera lens, the camera sensor became saturated and image features were lost. The problem was rectified in subsequent tests by using a camera with a superior image sensor to remove the saturation effect.

Fig. 1 Eye localization using LBP

Fig. 2 Sample training images for eye state classification using SVM

Table 2 Detection results with SVM

Kernel function   tp    tn    fp   fn   tpr (%)   fpr (%)
Linear SVM        838   808   42   12   98.58     4.94
Quadratic         827   765   85   23   97.29     10
Polynomial        848   807   43   2    99.76     5.32

The tests were carried out on a rough road followed by a highway. The face and eye detection was found to be fairly accurate. The face detection rate, when the driver was looking straight ahead, was found to be 98.5 %, whereas the eye detection rate was 97.5 % on average for both day and night. With head rotations of the driver, the face detection rate dropped to 95 %, whereas that for the eyes was 94 %. The eye state classification was found to be 97 % accurate. The overall false alarm rate of the algorithm was found to be around 5 %. The PERCLOS values are shown on the display every minute, on a running average window basis of 3 min. The main issues faced during the driving tests were image saturation and noise due to vehicle vibration during the day, and lights coming from other vehicles during the night.

Fig. 3 The overall vision based algorithm

Fig. 4 Schematic diagram of the overall system

4 Results

The system was found to be robust in detecting the face and eyes. The face detection rate, when the driver was looking straight ahead, was found to be 98.2 %, whereas the eye detection rate was 97.1 %. With head rotations of the driver, the face detection rate dropped to 94.5 %, whereas that for the eyes was 94.2 %. The eye state classification was found to be 98 % accurate. The overall false alarm rate of the algorithm was found to be around 5.5 %. The error in PERCLOS estimation was calculated as the difference between the true PERCLOS, computed manually, and the PERCLOS value obtained from the algorithm; it is found to be approximately 4.9 %. The NIR lighting system worked well for the entire test duration.

Figure 7 shows the robustness of the system with respect to the gender of the subject, as it is found to detect female faces and eyes. The figure also reveals that the system is invariant to the height of the driver owing to the positioning of the camera. From the above on-board tests, it is evident that this system can be used as a safety precaution system which might prevent a number of road accidents due to drowsiness.

5 Validation

The validation of PERCLOS as an effective measure of the drowsiness level has been carried out using EEG signals, which are considered an authentic indicator of the same. In this work, the EEG analysis primarily comprises the parameter based approach described in [35–37]. An experiment was conducted on 13 professional drivers in the age group of 20–40 years. They were asked to drive in a simulated driving environment, and video of the facial images as well as raw EEG data were captured simultaneously. The details of the experiment are reported in an earlier publication [38].

Fig. 5 Process flow for estimating PERCLOS

Fig. 6 a Surge protector circuit. b Camera on dashboard. c NIR lighting system. d LCD screen on dashboard

Table 3 Components with power consumption

Component                   Power (W)
Embedded processing unit    9
Webcam                      1
NIR lighting system         2
LCD screen                  1
Speaker                     2


5.1 Preprocessing: artefact removal

The raw EEG data obtained in the experiment are contaminated with artifacts. Such artifacts are characterized by amplitudes in the millivolt range (whereas the actual EEG is in the microvolt range) in the frequency band of 0–16 Hz [39]. The EEG signals were recorded at a sampling frequency of 256 Hz and the analysis was concentrated on the range of 0.4–30 Hz. Thus, the raw EEG was first processed using a band pass filter with cutoff frequencies of 0.4 and 30 Hz. After this, normalization is carried out to centre the signal at zero and force a unit standard deviation. Normalization ensures the removal of any unwanted bias that may have crept into the experimental recordings. The in-band artifacts were then removed using wavelet thresholding.
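A minimal sketch of this preprocessing chain is given below using SciPy and PyWavelets. The filter order, the soft-thresholding rule and the universal threshold are assumptions; the 0.4–30 Hz band, the 256 Hz sampling rate, the normalisation and the use of wavelet thresholding follow the text.

```python
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

FS = 256.0  # sampling rate reported in the paper (Hz)

def preprocess_eeg(raw):
    """Band-pass 0.4-30 Hz, normalise, then wavelet-threshold in-band artifacts."""
    # 4th-order Butterworth band-pass (the order is an assumption)
    b, a = butter(4, [0.4 / (FS / 2), 30.0 / (FS / 2)], btype="band")
    x = filtfilt(b, a, raw)
    # zero mean, unit standard deviation
    x = (x - x.mean()) / x.std()
    # soft wavelet thresholding with a universal threshold (rule assumed)
    coeffs = pywt.wavedec(x, "db4", level=4)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, "db4")[: len(x)]
```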

5.2 Frequency decomposition

After artifact removal, the data were decomposed using a 4-level Daubechies-4 wavelet transform. Let the coefficients corresponding to the detail levels D1–D4 and the approximation A4 be denoted cD1, cD2, cD3, cD4 and cA4 respectively. The coefficients cD1, cD2, cD3 and cD4 (the last clubbed with cA4) correspond to the frequency bands β, α, θ and δ (δ1 and δ2) respectively.
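The decomposition and the band-wise relative energies used for the comparison with PERCLOS can be sketched as follows; the band naming simply follows the mapping stated above, and the relative-energy formula is an assumption about how the band energies compared in Fig. 8 were computed.

```python
import numpy as np
import pywt

def decompose_bands(eeg_clean):
    """4-level Daubechies-4 DWT; band names follow the mapping given in the text."""
    cA4, cD4, cD3, cD2, cD1 = pywt.wavedec(eeg_clean, "db4", level=4)
    return {
        "beta": (cD1,),          # beta band
        "alpha": (cD2,),         # alpha band
        "theta": (cD3,),         # theta band
        "delta": (cD4, cA4),     # delta band: cD4 clubbed with cA4
    }

def relative_energies(bands):
    """Fraction of total wavelet energy in each band."""
    energies = {name: float(sum(np.sum(c ** 2) for c in coeffs))
                for name, coeffs in bands.items()}
    total = sum(energies.values())
    return {name: e / total for name, e in energies.items()}
```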

It has already been discussed that the basic energy indices sometimes do not show a substantial change under mildly fatigued conditions, and suitable ratio parameters may be better at differentiating such fatigue levels.

An increasing trend has been observed in the entropy values with fatigue at successive stages. A closer look at the graph shows that the extensive entropies (SE and RERE) perform better than the non-extensive ones in classifying the fatigue stages.

The spatial variation of the relative energy in the different EEG bands is compared with the PERCLOS value of a subject with increasing levels of fatigue in Fig. 8; the details are available in [38]. A stage-wise analysis of the entropy values brings out the physical significance of this feature. As mentioned in the energy and ratio analysis, the first-stage EEG signal is basically composed of low-frequency δ waves. Hence the entire energy distribution is skewed towards the lower frequencies. This represents a signal with lower disorder. Therefore, the entropy values are lowest in the first stage for all the entropies. As the fatigue level increases, the energy in the α and β bands starts to increase. This leads to a flattening of the energy spectrum. A flatter energy distribution means greater disorder, which ensures higher entropy values at the later stages.

Fig. 7 Face and eye detection in female subjects

Fig. 8 Correlation of PERCLOS with EEG signals for 5 stages of increasing fatigue

The PERCLOS value was found to be coherent with the EEG signal analysis as obtained from the results.

6 Limitations

The system will show poor results for drivers wearing spectacles. The reason is the glare caused by the spectacles, resulting in a loss of image information about the state of the eye. There has been an attempt to detect the presence of sunglasses in [40]. For drivers wearing sunglasses, the system will fail to estimate PERCLOS because of the lack of information regarding the eye state.

7 Conclusion

In this paper, a robust real-time system for monitoring the loss of attention in automotive drivers has been presented. In this approach, a face detection framework based on a Haar classifier has been presented. The in-plane and off-plane rotations of the driver's face have been compensated using affine and perspective transformations of the input frame respectively. Matching of concatenated block LBP features is used to localize the eyes. Finally, an SVM is used to classify the eye states as open or closed in order to compute the PERCLOS values over a 3 min window. The overall speed of the algorithm is found to be 12 fps, which is sufficient for correctly estimating the eye state. A threshold of 15 % PERCLOS has been used for declaring the driver drowsy or alert. A voice alarm is set to warn the driver in the event of loss of attention. The overall algorithm has been validated off-board using an EEG based method. On-board testing has been carried out for both day and night driving. The system was found to be quite robust in terms of both speed and accuracy. There is further scope for research in the detection of eyes occluded by spectacles.

Acknowledgments The funds received from the Department of Electronics and Information Technology, Government of India, for this study are gratefully acknowledged. The authors would like to thank the subjects for their voluntary participation in the experiment for the creation of the database.

References

1. Arun, S., Sundaraj, K., Murugappan, M.: Driver inattention detection methods: a review. In: 2012 IEEE Conference on Sustainable Utilization and Development in Engineering and Technology (2012)
2. Matousek, M., Petersen, A.: A method for assessing alertness fluctuations from EEG spectra. Electroencephalogr. Clin. Neurophysiol. 55(1), 108–113 (1983)
3. Kar, S., Bhagat, M., Routray, A.: EEG signal analysis for the assessment and quantification of driver's fatigue. Transp. Res. 13, 297–306 (2010)
4. Routray, A., Kar, S., Bhagat, M.: EEG analysis for the assessment and quantification of driver's fatigue. Transp. Res. Part F Traffic Psychol. Behav. 13(5), 297–306 (2010)
5. Cajochen, C., Zeitzer, J.M., Czeisler, C., Dijk, D.J.: Dose–response relationship for light intensity and ocular and electroencephalographic correlates of human alertness. Behav. Brain Res. 115, 75–83 (2000)
6. Knipling, R.R., Wierwille, W.W.: Vehicle-based drowsy driver detection: current status and future prospects. In: IVHS America Fourth Annual Meeting, pp. 2–24, April 1994
7. Dhupati, L., Kar, S., Rajaguru, A., Routray, A.: A novel drowsiness detection scheme based on speech analysis with validation using simultaneous EEG recordings. In: 2010 IEEE Conference on Automation Science and Engineering (CASE) (2010)
8. Bundele, M.M., Banerjee, R.: Detection of fatigue of vehicular driver using skin conductance and oximetry pulse: a neural network approach. In: Proceedings of the 11th International Conference on Information Integration and Web-based Applications & Services (2009)
9. Onken, R.: DAISY, an adaptive, knowledge-based driver monitoring and warning system. In: Proceedings of the Intelligent Vehicles '94 Symposium (1994)
10. Singh, S., Papanikolopoulos, N.: Monitoring driver fatigue using facial analysis techniques. In: Proceedings of the IEEE/IEEJ/JSAI International Conference on Intelligent Transportation Systems (1999)
11. Ayoob, E.M., Steinfeld, A., Grace, R.: Identification of an "appropriate" drowsy driver detection interface for commercial vehicle operations. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting (2003)
12. Fletcher, L., Petersson, L., Zelinsky, A.: Driver assistance systems based on vision in and out of vehicles. In: Proceedings of the IEEE Intelligent Vehicles Symposium (2003)
13. Smith, P., Lobo, N.V., Shah, M.: Determining driver visual attention with one camera. IEEE Trans. Intell. Transp. Syst. 4(4), 205–218 (2003)
14. Zhu, Z., Ji, Q.: Real time and non-intrusive driver fatigue monitoring. In: IEEE Intelligent Transportation Systems Conference, Washington, DC (2004)
15. Bergasa, L.M., Nuevo, J., Sotelo, M.A., Barea, R., Lopez, M.E.: Real-time system for monitoring driver vigilance. IEEE Trans. Intell. Transp. Syst. 7(1), 63–77 (2006)
16. Eriksson, M., Papanikolopoulos, N.P.: Driver fatigue: a vision-based approach to automatic diagnosis. Transp. Res. Part C 9(6), 399–413 (2001)
17. Sacco, M., Farrugia, R.: Driver fatigue monitoring system using support vector machines. In: 5th International Symposium on Communications Control and Signal Processing (ISCCSP) (2012)
18. Dinges, D., Mallis, M.M., Maislin, G., Powell, J.W., National Highway Traffic Safety Administration: Evaluation of techniques for ocular measurement as an index of fatigue and the basis for alertness management (1998)
19. Dingus, T.A., Hardee, H., Wierwille, W.W.: Development of models for on-board detection of driver impairment. Accid. Anal. Prev. 19(4), 271–283 (1987)
20. Viola, P., Jones, M.J.: Robust real-time face detection. Int. J. Comput. Vis. 57(2), 137–154 (2004)
21. Gupta, S., Dasgupta, A., Routray, A.: Analysis of training of parameters in Haar. In: IEEE International Conference on Image Information Processing, Shimla (2011)
22. Mohanty, P.K., Sarkar, S., Kasturi, R.: Designing affine transformations based face recognition algorithms. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2005)
23. Lien, J.J., Kanade, T., Cohn, J.F., Li, C.: Detection, tracking, and classification of action units in facial expression. Robot. Auton. Syst. 31(3), 131–146 (2000)
24. Asteriadis, S., Nikolaidis, N., Hajdu, A., Pitas, I.: An eye detection algorithm using pixel to edge information. In: ICSSP (2006)
25. Huang, J., Wechsler, H.: Eye detection using optimal wavelet packets and radial basis functions (RBFs). Int. J. Pattern Recognit. Artif. Intell. 13(7), 1–18 (1999)
26. Song, J., Chi, Z., Liu, J.: A robust eye detection method using combined binary edge and intensity information. Pattern Recognit. 39, 1110–1125 (2006)
27. Zhou, Z., Geng, X.: Projection functions for eye detection. Pattern Recognit. 37, 1049–1056 (2003)
28. Wang, P., Green, M.B., Ji, Q., Wayman, J.: Automatic eye detection and its validation. In: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2005)
29. Hadid, A., Heikkila, J., Silven, O., Pietikainen, M.: Face and eye detection for person authentication in mobile phones. In: First ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC '07) (2007)
30. Liu, X., Xu, F., Fujimura, K.: Real-time eye detection and tracking for driver observation under various light conditions. In: IEEE Intelligent Vehicle Symposium (2002)
31. Bhowmick, B., Kumar, K.S.C.: Detection and classification of eye state in IR camera for driver drowsiness identification. In: IEEE International Conference on Signal and Image Processing Applications (2009)
32. Eriksson, M., Papanikolopoulos, N.P.: Eye-tracking for detection of driver fatigue. In: Proceedings of the International Conference on Intelligent Transportation Systems, Boston (1998)
33. Sirohey, S.A., Rosenfeld, A.: Eye detection in a face image using linear and nonlinear filters. Pattern Recognit. 34(7), 1367–1391 (2001)
34. Ahonen, T., Hadid, A., Pietikainen, M.: Face description with local binary patterns: application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 28(12), 2037–2041 (2006)
35. Jap, B.T., Lal, S., Fischer, P., Bekiaris, E.: Using EEG spectral components to assess algorithms for detecting fatigue. Expert Syst. Appl. 36(2), 2352–2359 (2009)
36. Papadelis, C., Kourtidou-Papadelis, C., Bamidis, P.D., Chouvarda, I., Koufogiannis, D.: Indicators of sleepiness in an ambulatory EEG study of night driving. In: 28th IEEE EMBS Annual International Conference, New York, Aug 30–Sep 3 (2006)
37. Papadelis, C., Chen, Z., Kourtidou-Papadelis, C., Bamidis, P.D., Chouvarda, I., Koufogiannis, D.: Monitoring sleepiness with on-board electrophysiological recordings for preventing sleep-deprived traffic accidents. Clin. Neurophysiol. 118(9), 1906–1922 (2007)
38. Kar, S., Routray, A.: Analysis of electroencephalograph signals for detecting fatigue in human drivers. PhD Dissertation (2012)
39. Nunez, P., Srinivasan, R.: Electric Fields of the Brain: The Neurophysics of EEG, 2nd edn. Oxford University Press, New York (2006)
40. Singvi, M., Dasgupta, A., Routray, A.: A real time algorithm for detection of spectacles leading to eye detection. In: 4th International Conference on Intelligent Human Computer Interaction (IHCI) (2012)
