AutoViRe: Automated Visage Recognition Attendance System
A Project Work(A80088)
Submitted
in partial fulfillment of the requirements for
the award of the degree of
Bachelor of Technology
in
Computer Science and Engineering
By
Ms. ANCHAL AGRAWAL (15261A05C3)
Mr. MOHAMMED WASEF MOHIUDDIN (15261A05F4)
Under the Guidance of
Mrs. B. Prasanthi
(Assistant Professor)
and
Mrs. M. Mamatha
(Assistant Professor)
Department of Computer Science and Engineering
MAHATMA GANDHI INSTITUTE OF TECHNOLOGY
(Affiliated to Jawaharlal Nehru Technological University)
GANDIPET, HYDERABAD – 500 075. TELANGANA (INDIA)
APRIL 2019
MAHATMA GANDHI INSTITUTE OF TECHNOLOGY
(Affiliated to Jawaharlal Nehru Technological University Hyderabad)
GANDIPET, HYDERABAD – 500 075. Telangana
CERTIFICATE
This is to certify that the project entitled “AutoViRe: Automated Visage Recognition
Attendance System” is being submitted by ANCHAL AGRAWAL bearing Roll No:
15261A05C3 and MOHAMMED WASEF MOHIUDDIN bearing Roll No: 15261A05F4 in
partial fulfillment for the award of B Tech in Computer Science and Engineering to Jawaharlal
Nehru Technological University, Hyderabad is a record of bonafide work carried out by
them under our guidance and supervision.
The results embodied in this project have not been submitted to any other University or
Institute for the award of any degree or diploma.
Supervisor Supervisor Head of the Department
Mrs. B. Prasanthi Mrs. M. Mamatha Dr. C.R.K. Reddy
Asst. Professor Asst. Professor Professor
External Examiner
DECLARATION
This is to certify that the work reported in this project titled “AutoViRe: Automated Visage
Recognition Attendance System” is a record of work done by us in the Department of
Computer Science and Engineering, Mahatma Gandhi Institute of Technology, Hyderabad.
No part of the work is copied from books/journals/the internet, and wherever a portion is taken, the same has been duly cited in the text. The report is based on work done entirely by us and not copied from any other source.
ANCHAL AGRAWAL (15261A05C3)
MOHAMMED WASEF MOHIUDDIN (15261A05F4)
ACKNOWLEDGEMENT
We would like to express our sincere thanks to Dr. K Jaya Sankar, Principal MGIT, for
providing the working facilities in college.
We wish to express our sincere thanks and gratitude to Dr. C R K Reddy, Professor and
HOD, Department of CSE, MGIT, for all the timely support and valuable suggestions during
the period of project.
We are extremely thankful to Dr. M Rama Bai, Professor, Dr. K Sreekala, Assistant
Professor, Mrs. J Sreedevi, Assistant Professor, Department of CSE, MGIT, Major Project
Coordinators for their encouragement and support throughout the project.
We are extremely thankful and indebted to our internal guides Mrs. B. Prasanthi, Assistant
Professor and Mrs. M. Mamatha, Assistant Professor, Department of CSE, for their constant
guidance, encouragement and moral support throughout the project.
Finally, we would also like to thank all the faculty and staff of CSE Department who helped us
directly or indirectly, for completing this project.
ANCHAL AGRAWAL (15261A05C3)
MOHAMMED WASEF MOHIUDDIN (15261A05F4)
TABLE OF CONTENTS
Certificate i
Declaration ii
Acknowledgement iii
List of Figures vi
List of Tables viii
Abstract ix
1. Introduction 1
1.1 Existing Attendance Systems 3
1.1.1 Basic Attendance Systems 3
1.1.2 Moderate Attendance Systems 4
1.1.3 Advanced Attendance Systems 5
1.1.4 Drawbacks of Existing Systems 5
1.2 Proposed Attendance System 7
1.3 Requirements Specifications 9
1.3.1 Software Requirements 9
1.3.2 Hardware Requirements 9
1.3.3 Developer Requirements 9
2. Literature Survey 10
2.1 Previously Implemented Techniques for Attendance Systems 10
2.2 Tabular Representation 12
3. AutoViRe: Automated Visage Recognition Attendance System 13
3.1 Modules Description 13
3.1.1 Feature Registration 13
3.1.2 Facial Recognition 15
3.1.3 Attendance Update 15
3.1.4 Email Notification 16
3.2 Design Methodology 17
3.2.1 Activity Diagram 17
3.2.2 State Chart Diagram 19
3.2.3 Class Diagram 21
3.2.4 Sequence Diagram 23
3.2.5 Collaboration Diagram 24
3.2.6 Use Case Diagram 25
4. Testing including Test Cases and Results 26
4.1 Test Cases 26
4.2 Results 27
5. Conclusions and Future Scope 31
5.1 Conclusion 31
5.2 Future Scope 31
Bibliography 32
Appendix A – Source Code 33
Appendix B – User Manual 52
LIST OF FIGURES
Figure 3.1 Feature Registration 13
Figure 3.2 Facial Recognition 14
Figure 3.3 Attendance Update 15
Figure 3.4 Facial Registration and Facial Recognition 16
Figure 3.5 Activity Diagram – Decision Making and Implementation 17
Figure 3.6 State Chart Diagram – Representation of States and their transformation 19
Figure 3.7 Class Diagram – Representation of classes and their relationships 21
Figure 3.8 Sequence Diagram – Object Interaction and Process Execution 23
Figure 3.9 Collaboration Diagram – Object Interaction and Process Execution 24
Figure 3.10 Use Case Diagram – Application’s Functionality based on Input Type 25
Figure 4.2 Feature Registration Screenshot 26
Figure 4.3 Facial Recognition Screenshot 28
Figure 4.4 Digital Attendance Sheet 29
Figure 4.5 Email Notification 30
Figure B.1 Creation of Django Project 52
Figure B.2 Databases Class 53
Figure B.3 Time_Zone Class 53
Figure B.4 Making database migrations into the project 53
Figure B.5 Running the localhost server 54
Figure B.6 Creation of Django app 54
Figure B.7 Installed_Apps class 54
Figure B.8 Making migrations into Django app 55
Figure B.9 Creation of Super User for Admin 55
Figure B.10 Running localhost server 55
Figure B.11 Admin page login for the Django localhost 56
Figure B.12 Admin project Admin Page 56
Figure B.13 Redirecting to Project Path 57
Figure B.14 Running server on command line 58
Figure B.15 Browser Display of Application Execution 59
Figure B.16 Registration based on Unique Student ID 60
Figure B.17 Facial Registration 61
Figure B.18 Dataset consisting of several registered faces 62
Figure B.19 Browser Display of training webpage 63
Figure B.20 Training display on the command line 63
Figure B.21 Completion of training 63
Figure B.22 Browser display of final testing 64
Figure B.23 Successful Face Detection based on dataset 65
Figure B.24 Final tally of attendance for the current day 66
Figure B.25 Final updated database 67
Figure B.26 Email sent to the respective parent 68
LIST OF TABLES
Table 2.1 Tabular Representation of the Background Research 12
Table 4.1 Test Cases and Outputs 26
ABSTRACT
In educational institutions, companies and various organizations, one of the main criteria is to
maintain a record of people present daily. Attendance Management hence becomes an
important goal for the progress and integrity of any organization. The system revolves around
the concept of tallying registered values in the database with the current inputs. Each input is
compared to existing values in database and if the value exists, the given candidate is marked
present. Otherwise, the candidate is marked absent.
Hardware-based attendance systems that store values in a database currently exist and are the most common record-tracking systems found in organizations. Some automatic attendance systems are already in use in many institutions; one such system is the biometric technique. Although it is automatic and a step ahead of the traditional method, it fails to meet the time constraint: students have to wait in a queue to record their attendance, which is time-consuming.
The proposed system deals with automating the attendance recording procedure in an efficient
and optimized manner. It follows a tallying procedure to record attendance and stores it in the
system. This project introduces an involuntary attendance marking system, devoid of any kind
of interference with the normal teaching procedure.
The system can be also implemented during exam sessions or in other teaching activities where
attendance is highly essential. This system eliminates classical student identification such as
calling name of the student, or checking respective identification cards of the student, which
can not only interfere with the ongoing teaching process, but also can be stressful for students
during examination sessions.
In organizations and educational institutions, attendance updating takes place continuously, and this results in dynamic changes. The system provides the administrator dynamic access to attendance tracking and maintains it efficiently.
1. Introduction
The face and its features constitute the uniqueness of an individual. In this project, the face of an individual is used to mark attendance automatically. The conventional methodology for taking attendance is to call out the name or roll number of each student and record the attendance manually. The main concern with this methodology is time consumption: on average it takes up an essential part of the time allotted for the subject.
Attendance is of prime importance for both the teacher and the students of an educational organization, so it is very important to keep a record of it. The problem arises when we consider the traditional process of taking attendance in a classroom: calling out the name or roll number of each student consumes not only time but also energy. An automated attendance system can resolve all of these problems.
To avoid these losses, an automated process based on image processing has been used in this project. The two main components used are face detection and face recognition.
Face detection is used to locate the position of the face region, and face recognition is used for marking the student's attendance. A database of all the students in the class is stored, and when the face of an individual student matches one of the faces stored in the database, the attendance is recorded.
A wide range of automatic attendance management systems is currently in use by many institutions and organizations. One such system is the biometric-based attendance system. Although it is automatic and a step ahead of the traditional method, it fails to meet the time constraint, because students have to wait in a queue to mark their attendance, which is also time-consuming.
This project introduces an involuntary attendance marking system, devoid of any kind of
interference with the normal teaching procedure. The system can be also implemented during exam
sessions or in other teaching activities where attendance is highly essential.
This system eliminates classical student identification such as calling names of the students, or
checking respective identification cards of the students, which can not only interfere with the
ongoing teaching process, but also can be stressful for students during examination.
The proposed system deals with automating the attendance recording procedure in an efficient and
optimized manner. Our proposed system shall be a Face Recognition Attendance System which
uses the basic idea of image processing which is used in many secure applications like banks,
airports etc. It follows a tallying procedure to record attendance and stores it in the system.
Section 1.1 describes the existing attendance systems and their drawbacks, Section 1.2 presents the proposed attendance system, and Section 1.3 specifies the requirements for the development of the application.
Section 2 gives the literature survey, comparing previously implemented attendance techniques.
Section 3 describes the overview of the modules and the design of the overall application.
Section 4 mentions all the test cases and their outcomes along with the execution results of the application.
Section 5 throws light on the final conclusions and future scope that we can derive from the application and its uses.
1.1 Existing Attendance Systems
Attendance is of prime importance for both the teacher and the students of an educational organization, so it is very important to keep a record of it. There are various attendance management systems that vary in complexity and feasibility. We have divided them into three categories: basic, moderate, and advanced.
1.1.1. Basic Attendance Systems
a. Manual Attendance System:
The Manual Attendance System involves the process of the faculty calling out the roll calls. If the
student is present in the class, the student physically acknowledges the roll call and says that he/she
is present. In all other cases, the faculty marks the student absent.
b. Paper Based Attendance System:
The Paper Based Attendance System is a part of the manual attendance system or could be used
for any other attendance system as well. Attendance is taken in any form and it's recorded on a
paper by writing either the absentees, or the presentees only. Usually faculties write the roll
numbers of the students that are absent or present as per convenience.
c. Timesheet Attendance System:
The Timesheet Attendance System involves recording attendance into a timesheet. A timesheet is a physical or virtual tool that allows you to record and keep track of worked time; in this case, the number of hours the student attends.
d. Token Based Attendance:
The Token based attendance involves displaying of a security token when demanded in order to
secure attendance. A security token (sometimes called an authentication token) is a small hardware
device that the owner carries to authorize access to a network service. The device may be in the
form of a smart card or may be embedded in a commonly used object such as a key fob. In the
context of students, the token is usually their identity card.
1.1.2 Moderate Attendance Systems
a. Biometric Attendance System:
The biometric attendance system works on two basic principles. First, it takes an image of a finger: the fingerprint scanner records the characteristics of each unique finger and saves them in the form of a biometric key. The scanner never stores the image of the finger itself, only a series of binary codes for verification purposes. Second, the system determines whether the pattern of ridges and valleys in this image matches the pattern of ridges and valleys in pre-scanned images.
b. Badge Monitoring Attendance System:
The Badge Monitoring Attendance System[3] is most commonly used in places where people work with radioactive materials, such as X-ray labs, nuclear centres, etc. The radioactive badge is worn by the person somewhere between the neck and the waist, such that the front faces the source of radiation.
c. Swipe Card Attendance:
The Swipe Card Attendance System works by the person swiping their card on entry to and exit from the gate, and the attendance is recorded. A swipe card must come in contact with the corresponding card reader before any transaction can take place. The transaction becomes active when the magnetic stripe on the card is moved through a console at a gate.
d. Access Card Punching Attendance System:
A punch card is a flat, stiff paper with notches cut in it that contains digital information. In a punch card attendance system, students use this punch or proximity card to clock in and/or out. To use a punch card, students just need to wave the card near a reader, which then verifies whether the correct person is logging in and/or out.
1.1.3 Advanced Attendance Systems
a. Retinal Scan-based Attendance System:
The Retinal Scan-based Attendance System makes use of retinal features and marks attendance based on retinal recognition. An eye scan or retinal scan is a biometric system that identifies a person by using the unique patterns of the retina. The human retina contains a complex pattern of blood vessels (retinal veins) through which an eye scanner device can easily identify a person and can even differentiate identical twins. To scan the human retina, the retinal scanner uses the reflection of light absorbed by the retinal veins.
b. Gait Recognition Attendance System:
The Gait Recognition Attendance System records and recognizes an individual by examining the way that individual walks, saunters, swaggers, or sashays, with up to 90 percent accuracy.
c. Facial Recognition Attendance System:
The Facial Recognition Attendance System[5] makes use of facial features such as distance
between the eyes, width of the nose, depth of the eye sockets, the shape of the cheekbones, the
length of the jaw line, etc. to recognize and mark attendance.
d. Sensor Detection Attendance System:
The Sensor Detection Attendance System uses RFID (Radio Frequency Identification)[4] to
identify individuals. A radio frequency identification reader (RFID reader) is a device used to
gather information from an RFID tag, which is used to track individual objects. Radio waves are
used to transfer data from the tag to a reader. RFID is a technology similar in theory to bar codes.
1.1.4 Drawbacks of Existing Systems:
The manual and semi-automated systems described above fall short in several respects, which become apparent when they are compared with a fully automated system:
a. Accuracy:
With an automated system, there is no human error. When you track your students' time manually, students typically report their hours after they have attended them, which increases the likelihood of inaccurate reporting. A student may not intend to misrepresent his hours; he may simply forget his actual in and out times. Or, if a student has illegible handwriting, it can be difficult to determine the actual hours attended from the roll list. With manual reporting, the organization is essentially relying on the honor system, which can be abused and can lead to time theft.
b. Increased Productivity:
Organizations that use a manual roll-call process spend several hours each day collecting time cards, re-entering illegible data by hand, faxing, phoning, and processing the roll call. With an automated time and attendance system, the roll-call process takes just minutes each period.
c. Savings:
With an automated system, you save roll-call processing hours and eliminate time theft, which means your bottom line improves.
d. Regulatory Compliance:
While an automated time and attendance system will not guarantee compliance with all student laws, the data collected through the system ensures that you have at your fingertips the information needed to comply with all labour regulations. With an automated timekeeping system, you can pull up reports quickly, which provides all the information you need if you are ever subject to an audit.
1.2. Proposed System
The proposed system “AutoViRe: Automated Visage Recognition Attendance System” overcomes
the problems of the existing systems as mentioned previously. It mainly incorporates Facial
Recognition to mark student’s attendance into the database.
1.2.1. Salient Features of the Proposed System:
a. Face-mapping:
Facial features of the student such as distance between the eyes, width of the nose, depth of the
eye sockets, the shape of the cheekbones, the length of the jaw line, etc. are registered into the
database. Students are recognized based on these stored facial features, and if a match is found,
the student is marked present and the same is updated into the database. In all other cases, the
student is recorded absent in the database.
b. Complete Automation:
The system is automated to its full potential. The algorithm runs for the first few minutes of every hour, captures the attendance of the students present in class, and restarts at the beginning of the next hour until the end of the day.
c. Immediate Update:
Once the algorithm successfully recognizes the student, the attendance for the corresponding student is updated in the database automatically, without any human intervention.
d. Three-step Management:
The entire system is efficiently split into three components. First, the facial features are stored and the model is trained. Second, the student is recognized by the "Haarcascade" facial recognition algorithm. Third, the attendance is automatically updated in the database.
e. Email to Parents:
At the end of the day, after classwork is over, an email is triggered automatically to the parents reporting the number of hours their ward attended that particular day.
f. Multiple Face Detection: Another advantage of this system is that students need not form a queue in front of the camera to be recognized and have their attendance updated. The system can recognize multiple faces at a time, and hence it saves time.
1.3 Requirements Specifications
1.3.1 Software Requirements
● Operating System: Windows Environments
● Browser: Any efficient fast-paced modern-day browser
1.3.2 Hardware Requirements
● Processor: Pentium IV Processor or higher
● Hard Disk: 400 MB minimum hard disk storage
● RAM: 512 MB or more
1.3.3 Developer Requirements
● Operating System: Windows Environments
● Languages: Python 3, HTML, CSS
● Packages: OpenCV 3.0, Matplotlib, NumPy, python-csv, Pillow
● Server Base: Local Host
● Framework: Django version 2.1.7
● RAM: 2 GB minimum
● Processor: Pentium IV Processor or higher
2. Literature Survey
2.1. Previously Implemented Techniques for Attendance Systems
2.1.1. REAL TIME LOCATING SYSTEM USING RFID FOR INTERNET OF THINGS
At present, attendance is usually recorded manually. As the number of university students within training institutes increases, recording attendance by hand requires considerable human effort to report and maintain, and human errors are consequently common in this process. In recent years, there has been an increase in the number of applications based on RFID (Radio Frequency Identification) systems. RFID technology enables automatic identification over radio waves, using passive and active electronic labels with suitable readers. In this paper, an attempt has been made to address the problem of recording lecture attendance in developing countries and to locate particular students using RFID technology. The RFID attendance implementation developed and deployed in this study is capable of eliminating the time lost during manual attendance gathering, and it gives education administrators an opportunity to capture classroom attendance statistics for sharing outcomes and for further managerial decisions.
2.1.2 IOT BASED AUTOMATIC ATTENDANCE MANAGEMENT SYSTEM
In recent days, we have seen a sudden increase in the usage of Radio Frequency Identification (RFID) systems in the fields of industrial technology, health, agriculture, transportation, etc. The Internet of Things is blooming in parallel. Therefore, using these, an attempt has been made to solve the attendance management and monitoring problem. The Attendance Management System is an implementation of the Internet of Things through a Raspberry Pi 3 and RFID technology, intended to reduce the time consumed by the traditional system of recording daily attendance in schools and institutions, so that everything in turn gets automated. An attempt has also been made to develop an Android application (app) to help students view their attendance anywhere, anytime.
2.1.3 AUTOMATED ATTENDANCE SYSTEM USING HAARCASCADE: A FACE RECOGNITION APPROACH
A face recognition system is a computer application capable of identifying a person from a digital image. One way to do so is by comparing selected facial features from the image with a facial database. Face recognition is used in different areas. To name a few, the Australian and New Zealand customs services have an automated border processing system called SmartGate that uses facial recognition: the system compares the face of the individual with the image in the e-passport microchip to verify that the holder of the passport is its rightful owner. Properly designed systems installed in airports, multiplexes and other public places can identify individuals in a crowd without passers-by even being aware of the system; other biometrics like fingerprints, iris scans and speech recognition cannot perform this kind of mass recognition. Another area includes ATM and cheque-cashing security, where the software is able to quickly verify a customer's face. After a customer consents, the ATM captures a digital image of him; the FaceIt software then generates a face print of the photograph to protect customers against identity theft and fraudulent transactions. Thus, with face recognition software there is no need for a picture ID, bank card or personal identification number (PIN) to verify a customer's identity, and the technology is already good enough to be implemented in different vertical markets such as commercial sectors, healthcare and hospitality. However, the effectiveness of facial recognition software for railway and airport security is questionable, as it struggles to perform under certain conditions such as poor lighting, sunglasses, long hair and other objects partially covering the subject's face, or when the face is turned away from the camera at an angle.
2.1.4 FACE RECOGNITION BASED ON CONVOLUTION NEURAL NETWORK
In this paper, a face recognition method based on Convolutional Neural Networks (CNN) is presented. The network consists of three convolution layers, two pooling layers, two fully-connected layers and one Softmax regression layer. The stochastic gradient descent algorithm is used to train the feature extractor and the classifier, which can extract the facial features and classify them automatically. The Dropout method is used to solve the over-fitting problem. The Convolution Architecture for Feature Extraction framework (Caffe) is used during the training and testing process. The face recognition rates on the ORL and AR face databases with this network are 99.82% and 99.78% respectively.
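The convolution and pooling layers this paper stacks can be illustrated in miniature. The toy image and kernel below are made up; real CNN layers operate on many channels with learned kernels, and the paper's network is trained in Caffe. This only demonstrates the two operations themselves.

```python
# Minimal sketch of the two layer types the paper stacks: convolution
# (really cross-correlation, as in most CNN frameworks) and max pooling.

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling over size x size windows."""
    return [[max(fmap[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

image = [[1, 2, 0, 1],      # toy 4x4 "image"
         [0, 1, 3, 1],
         [2, 1, 0, 0],
         [1, 0, 1, 2]]
edge = [[1, -1], [-1, 1]]   # toy 2x2 kernel
fmap = conv2d(image, edge)  # 3x3 feature map
pooled = max_pool(fmap)     # downsampled by 2x2 pooling
```

In the paper's network, stacks of such layers feed two fully-connected layers and a Softmax classifier.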
2.2. Tabular Representation
S.No | Year | Author(s) | Title | Techniques | Advantages | Disadvantages
1. | 2017 | T. Saimounika, K. Kishore | Real Time Locating System using RFID for Internet of Things | Uses RFID to mark student attendance. | Reduces documents and saves time. | Initial cost of installation is required.
2. | 2017 | Sri Madhu B.M, Kavya Kanagotagi, Devansh | IoT based Automatic Attendance Management System | Uses RFID to mark student attendance and is based on Raspberry Pi. | Increased security and confidentiality. | Implementation of Raspberry Pi requires a special Operating System.
3. | 2017 | Abhish Ijari, Anand Mannikeri, Vinod Kumar Gulmikar | Automated Attendance System using Haarcascade: A Face Recognition Approach | Uses facial recognition to mark students' attendance. | Reduces risk of duplicate entries or manipulations. | Camera keeps capturing video continuously for a long time.
4. | 2017 | Kewen Yan, Shaohui Huang, Yaoxian Song, Wei Liu, Neng Fan | Face Recognition Based on Convolution Neural Network | Uses facial recognition to mark students' attendance and is based on a CNN. | Increases accuracy of facial recognition. | Needs a special server to implement the CNN and is costly.
Table 2.1: Tabular Representation of the Background Research
3. AutoViRe: Automated Visage Recognition Attendance System
3.1. Modules Description
The entire system "AutoViRe: Automated Visage Recognition Attendance System" is efficiently divided into four design modules, namely Feature Registration, Facial Recognition, Attendance Update, and Email Notification.
a. Feature Registration:
The facial features, such as the distance between the eyes, width of the nose, depth of the eye sockets, shape of the cheekbones, and length of the jaw line, are registered into the database folder by using the facial-recognition Haar cascade XML file from the 'OpenCV' Python library. Once the features are registered, the model is trained using the gathered features.
Figure 3.1: Feature Registration
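The registration bookkeeping can be sketched as follows. This is a minimal stand-in, assuming each detected face crop has already been reduced to a short feature vector: the roll numbers are taken from this report, while the vector values, the `register` helper and the CSV layout are illustrative, not the actual OpenCV/trainer output.

```python
import csv
import io

def register(db, roll_no, features):
    """Store one labelled feature sample against the student's roll number."""
    db.setdefault(roll_no, []).append(list(features))

db = {}
register(db, "15261A05C3", [0.12, 0.83, 0.40])
register(db, "15261A05C3", [0.14, 0.80, 0.42])  # several samples per face
register(db, "15261A05F4", [0.90, 0.10, 0.55])

# Persist the labelled samples (a stand-in for the dataset folder on disk)
buf = io.StringIO()
writer = csv.writer(buf)
for roll_no, samples in db.items():
    for vec in samples:
        writer.writerow([roll_no, *vec])
n_rows = len(buf.getvalue().strip().splitlines())  # one row per sample
```

Keeping multiple samples per student is what lets the trainer tolerate small pose and lighting changes later.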
b. Facial Recognition:
The facial features are fetched from the database, and the face of the student is recognized by comparing it with the existing values in the database. Facial recognition is done using the 'OpenCV' Python library, particularly the 'Haarcascade' code.
Figure 3.2: Facial Recognition
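The tallying step can be sketched as a nearest-sample comparison. This is only an illustration of the matching idea: the feature vectors, the Euclidean-distance metric and the `threshold` value are assumptions for the sketch, not what the Haarcascade-based recognizer in this project actually computes.

```python
import math

def recognize(features, db, threshold=0.2):
    """Return the roll number of the closest registered sample,
    or None when no sample is within the acceptance threshold."""
    best_id, best_dist = None, float("inf")
    for roll_no, samples in db.items():
        for sample in samples:
            dist = math.dist(features, sample)  # Euclidean distance
            if dist < best_dist:
                best_id, best_dist = roll_no, dist
    return best_id if best_dist <= threshold else None

db = {
    "15261A05C3": [[0.12, 0.83, 0.40], [0.14, 0.80, 0.42]],
    "15261A05F4": [[0.90, 0.10, 0.55]],
}
match = recognize([0.13, 0.82, 0.41], db)  # close to the first student
unknown = recognize([0.50, 0.50, 0.50], db)  # far from everyone
```

A `None` result models an unregistered face, which the system treats as no attendance update for anyone.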
c. Attendance Update:
Once the facial recognition is done successfully, the attendance for the corresponding
student is updated into the database automatically, without any human intervention. If
a match is found from the existing database, the student is marked as present, in all
other cases the student is marked absent.
Figure 3.3: Attendance Update
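A minimal sketch of the update step, using SQLite for brevity (the project's actual storage is a Django-managed database; the `attendance` table and its column names are illustrative): every student starts the hour marked absent, and a successful recognition flips the record to present.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE attendance (
    roll_no TEXT, day TEXT, hour INTEGER,
    status TEXT DEFAULT 'absent',
    PRIMARY KEY (roll_no, day, hour))""")

roster = ["15261A05C3", "15261A05F4"]

def open_hour(day, hour):
    """Start the hour with every rostered student marked absent."""
    con.executemany(
        "INSERT INTO attendance (roll_no, day, hour) VALUES (?, ?, ?)",
        [(r, day, hour) for r in roster])

def mark_present(roll_no, day, hour):
    """Flip the record to present once recognition succeeds."""
    con.execute(
        "UPDATE attendance SET status = 'present' "
        "WHERE roll_no = ? AND day = ? AND hour = ?",
        (roll_no, day, hour))

open_hour("2019-04-01", 1)
mark_present("15261A05C3", "2019-04-01", 1)  # recognized face
rows = con.execute(
    "SELECT roll_no, status FROM attendance ORDER BY roll_no").fetchall()
```

Defaulting to absent means the camera only ever needs to report positive matches, which keeps the update path simple.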
d. Email Notification:
At the end of the day, an email is triggered automatically and sent to the parent, presenting the number of classes the ward attended that day.
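The notification step can be sketched with Python's standard email module. The addresses and the SMTP server named below are placeholders; only the message construction is shown, with the actual dispatch left as a comment.

```python
from email.message import EmailMessage

def build_report(parent_email, ward_name, hours_attended, total_hours):
    """Compose the end-of-day attendance summary for one parent."""
    msg = EmailMessage()
    msg["From"] = "autovire@example.edu"   # placeholder sender address
    msg["To"] = parent_email
    msg["Subject"] = f"Attendance report for {ward_name}"
    msg.set_content(
        f"{ward_name} attended {hours_attended} of {total_hours} "
        f"class hours today.")
    return msg

msg = build_report("parent@example.com", "Anchal", 5, 6)

# Dispatching (not run here) would use smtplib against the mail server:
#   import smtplib
#   with smtplib.SMTP("smtp.example.edu") as s:  # hypothetical server
#       s.send_message(msg)
```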
Summary:
Figure 3.4: Facial Registration and Facial Recognition
The entire system can be put in a nutshell as follows:
Initially, the facial features of the student are captured based on convolutional neural networks [1], extracted, and registered into the database. Then, at the time of taking attendance, the facial features are compared against the existing database images. If a match is found, the attendance for the respective student for that corresponding hour is marked as present in the database; in all other cases it is marked as absent. An email is then triggered to the student's parent, generating a report of their ward's attendance for the day.
3.2 Design Methodology
3.2.1. Activity Diagram
Figure 3.5: Activity Diagram – Decision Making and Implementation
Figure 3.5 shows the activity flow in terms of decisions and how their implementation drives the application's activity. The flow starts with capturing an input image; the capture process loops until an image suitable for facial detection and facial recognition is obtained. The captured image is pre-processed, and extraneous details and background noise are removed. The specific facial features, such as the distance between the eyes, width of the nose, depth of the eye sockets, shape of the cheekbones, and length of the jaw line, are analyzed and stored.
The captured input image, after being pre-processed and the facial features being extracted, is
verified with the existing database that consists of facial features recorded at the time of
registration of students’ details into the system.
If a match is found, the corresponding student is marked present, in all other cases, the student is
marked absent. This completes the whole process of marking of student attendance by
incorporating facial recognition.
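The activity flow above can be sketched as a small pipeline. Every stage here is a stub standing in for the real OpenCV processing; only the control flow (retry the capture until a usable image arrives, then pre-process, extract features, and verify) mirrors the diagram.

```python
def capture(frames):
    """Loop until a usable frame arrives (None models a failed capture)."""
    for frame in frames:
        if frame is not None:
            return frame
    raise RuntimeError("no usable frame")

def preprocess(frame):
    """Stub for noise/background removal."""
    return [abs(x) for x in frame]

def extract_features(frame):
    """Stub for the facial-measurement step."""
    return tuple(frame)

def verify(features, database):
    """Return the matched roll number, or None (treated as absent)."""
    return database.get(features)

database = {(1, 2, 3): "15261A05C3"}           # illustrative entry
frame = capture([None, None, [1, -2, 3]])      # two failed captures first
student = verify(extract_features(preprocess(frame)), database)
status = "present" if student else "absent"
```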
3.2.2. State Chart Diagram
Figure 3.6: State Chart Diagram - Representation of States and their transformation
Figure 3.6 shows the representation of states and how their transitions occur. The initial state deals with capturing an image suitable for facial detection and facial recognition; it loops until an input image is captured successfully.
The next state is pre-processing. In this state, extraneous details and background noise are eliminated from the captured input image. In the state after this, feature extraction is done: the specific facial features, such as the distance between the eyes, width of the nose, depth of the eye sockets, shape of the cheekbones, and length of the jaw line, are analyzed and stored.
The state after this is the verification module. The captured input image, after being pre-processed
and the facial features being extracted, is verified with the existing database that consists of facial
features recorded at the time of registration of students’ details into the system.
If a match is found, the corresponding student is marked present, in all other cases, the student is
marked absent. This completes the whole process of marking of student attendance by
incorporating facial recognition and is the terminating state of the system.
3.2.3. Class Diagram
Figure 3.7: Class Diagram – Representation of classes and their relationships
Figure 3.7 represents the classes and their relationships with each other, along with attributes and
operations. Person, Time, Student, Lecturer, Camera and Admin are the classes in this system.
The class 'Person' is a generalization of any person, such as a 'Student', 'Admin', or 'Lecturer'. 'Person' has attributes such as name, Roll No., Phone No., Aadhaar No., etc. Its operations are Login( ), Logout( ), Exit( ), and Verify( ).
The class 'Student' is a subclass of 'Person' representing any student. It has attributes such as FaceImage and state, and operations such as Upload( ) and Verify( ).
The class 'Admin' is a subclass of 'Person' representing the admin of the system, who has the highest level of access to the data in the system and the maximum power to manipulate it. It has the attribute AdminID. Its operations are Add( ), Remove( ), and Manage( ).
The class 'Lecturer' is a subclass of 'Person' representing any faculty member. It has the attribute FacultyID and the operation Manage_Student_Attendance( ).
The class ‘Time’ represents the time at which the input image is captured, along with its related
details. It is related to the class ‘Person’ and has attributes such as ClassTime, Camera and
CaptureZone, together with the operation Turn_Camera_On( ).
The class ‘Camera’ is related to the class ‘Time’ and represents the camera that captures the input
image. It has the attribute CameraID and operations such as Recognise( ), Detect( ) and
Send( ).
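The hierarchy in Figure 3.7 can be sketched directly in Python. The attribute and operation names follow the diagram; the method bodies are placeholder stubs rather than the system's real code.

```python
# Sketch of the class diagram: 'Person' generalizes 'Student', 'Admin'
# and 'Lecturer'. Bodies are illustrative stubs, not the real system.
class Person:
    def __init__(self, name, roll_no, phone_no, aadhaar_no):
        self.name = name
        self.roll_no = roll_no
        self.phone_no = phone_no
        self.aadhaar_no = aadhaar_no

    def login(self): ...
    def logout(self): ...
    def verify(self): ...


class Student(Person):
    def __init__(self, name, roll_no, phone_no, aadhaar_no, face_image=None):
        super().__init__(name, roll_no, phone_no, aadhaar_no)
        self.face_image = face_image   # FaceImage attribute
        self.state = "absent"          # state attribute

    def upload(self, image):
        # store the registered face image for later verification
        self.face_image = image


class Admin(Person):
    def __init__(self, name, roll_no, phone_no, aadhaar_no, admin_id):
        super().__init__(name, roll_no, phone_no, aadhaar_no)
        self.admin_id = admin_id

    def add(self, record): ...
    def remove(self, record): ...
    def manage(self): ...


class Lecturer(Person):
    def __init__(self, name, roll_no, phone_no, aadhaar_no, faculty_id):
        super().__init__(name, roll_no, phone_no, aadhaar_no)
        self.faculty_id = faculty_id

    def manage_student_attendance(self): ...
```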
3.2.4. Sequence Diagram
Figure 3.8: Sequence Diagram - Object Interaction and Process Execution
Figure 3.8 indicates the interaction amongst the application’s objects. There are three objects:
User, FaceDetector and System.
The ‘User’, i.e. the student, registers himself/herself into the ‘FaceDetector’ database. This data is
stored in the ‘System’. The ‘FaceDetector’ retrieves data from the ‘System’ and accesses the
input image from the ‘User’.
The ‘User’ sends the input value to the ‘FaceDetector’, and the ‘FaceDetector’ searches for a
match in the ‘System’. If a match is found, the ‘User’ is marked present; otherwise, the student is
marked absent.
3.2.5. Collaboration Diagram:
Figure 3.9: Collaboration Diagram - Object Interaction and Process Execution
Figure 3.9 illustrates the relationships and interactions among objects through the messages they
use for communication. As in the sequence diagram, there are three objects: User, FaceDetector
and System.
The ‘User’, i.e. the student, registers himself/herself into the ‘FaceDetector’ database, and this
data is stored in the ‘System’. The ‘FaceDetector’ retrieves data from the ‘System’, accesses the
input image from the ‘User’ and searches for a match in the ‘System’. If a match is found, the
‘User’ is marked present; otherwise, the student is marked absent.
3.2.6. Use Case Diagram:
Figure 3.10: Use Case Diagram – Application’s Functionality based on Input Type
Figure 3.10 denotes the functions of the application, which depend entirely on the user’s input.
There are two actors: ‘Student’ and ‘System Manager’.
The ‘Student’ registers his/her facial features and enrolls into the system; the ‘System Manager’
manages these registration details.
The ‘Student’ may enter and exit the class, and the ‘System Manager’ records attendance based
on the student’s movement in and out of class.
The ‘System Manager’ also maintains the system resources.
4. Testing: Test Cases and Results
4.1 Test Cases
Test 1 — Starting up and successfully running the server
Expected Output: The server runs successfully and system checks complete without errors.
Actual Output: System checks report syntax errors in the code and import crashes.
Remarks: Caused by inefficient code loops and improper library imports.

Test 2 — Server start-up and execution
Expected Output: The server starts up successfully and the user gives requirement-based inputs.
Actual Output: The server takes a gap of a few milliseconds to start and run.
Remarks: Make sure the server is not started abruptly.

Test 3 — Feature registration
Expected Output: Registration succeeds with the help of UIDs and the images are stored within the dataset folder.
Actual Output: No features registered, hence no dataset available for detection.
Remarks: Register faces in a well-lit environment for maximum accuracy.

Test 4 — Face (visage) detection
Expected Output: Successful detection of faces by matching against images in the given dataset.
Actual Output: Inefficient face detection and improper results shown.
Remarks: Train the system with a larger dataset.

Test 5 — Setting up the port number
Expected Output: Port set up successfully for requests and responses.
Actual Output: Port number already in use; the server does not start.
Remarks: Select an unused port number for smooth system execution.

Test 6 — Model training
Expected Output: Training completes within the specified time period.
Actual Output: Improper UID entered while registering the face, leading to inefficient results.
Remarks: Make sure inputs are given correctly before registration takes place.

Test 7 — Emails regarding attendance info
Expected Output: Emails sent successfully to the respective parents’ email IDs as soon as the schedule is complete.
Actual Output: A large number of emails sent consecutively leads to time delay.
Remarks: Emails must be grouped and sent with time gaps in between to avoid lag.

Test 8 — Django setup
Expected Output: All required specifications are followed and specified.
Actual Output: Improper path details; the server command does not execute.
Remarks: System paths differ between machines; define them accordingly.

Test 9 — Database update
Expected Output: The student’s attendance is successfully appended to the Excel sheet.
Actual Output: Database overload due to improper appending of data.
Remarks: Group and arrange the database on a timely basis.

Test 10 — OpenCV extraction
Expected Output: OpenCV installed successfully and properly imported.
Actual Output: The version of OpenCV differs from that of NumPy and the code crashes.
Remarks: All required libraries must be installed with compatible versions.

Table 4.1: Test Cases and Outputs
4.2 Results
4.2.1. Feature Registration
Figure 4.2: Feature Registration Screenshot
Figure 4.2 demonstrates the process of feature registration. The camera window opens for a few
seconds, captures 100 images from the video frames and stores them in the dataset with the
respective student ID. OpenCV uses the Haar cascade algorithm [2] to extract features.
4.2.2. Facial Recognition:
Figure 4.3: Facial Recognition Screenshot
Figure 4.3 shows the outcome of facial recognition. Python libraries including NumPy and
matplotlib help the system recognize faces by matching against the dataset. OpenCV’s
detectMultiScale() function detects faces through the camera, and the recognizer identifies them
using the existing dataset.
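The decision step behind this recognition reduces to a confidence threshold, as in the Appendix A listing (lower LBPH confidence means a closer match; the threshold of 100 is taken from that code). A minimal, standalone sketch of just that step:

```python
# Map an LBPH prediction to a display name: accept the predicted id only
# when the confidence score is under the threshold, otherwise label the
# face "Unknown". The second return value mirrors the "100 - confidence"
# match percentage computed in the appendix code.
def label_prediction(pred_id, confidence, names, threshold=100):
    if confidence < threshold:
        return names[pred_id], round(100 - confidence)
    return "Unknown", round(100 - confidence)
```

For example, a prediction of id 3 with confidence 42.0 against the names list from the appendix would be labelled 'Anchal Agrawal' with a 58% match.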
4.2.3. Attendance Update:
Figure 4.4: Digital Attendance Sheet
Figure 4.4 displays the Excel sheet in which, upon recognition of a face, attendance is marked
automatically for the respective student for that hour, without any human intervention; the sheet
acts as the database in this case.
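The sheet itself is plain CSV written with Python's csv module, as in the appendix code. The sketch below reproduces that layout in a self-contained way; the in-memory buffer and the example row are illustrative, while StudentsDB.csv is the file the system actually writes.

```python
import csv
import io

# Build an attendance sheet like Figure 4.4: a header of hour columns
# followed by one row per student. io.StringIO stands in for the real
# StudentsDB.csv file so the sketch runs without touching disk.
def write_attendance(rows, periods=6):
    header = (['Unique ID', 'Name', 'E-mail ID'] +
              ['Hour ' + str(i + 1) for i in range(periods)])
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(header)
    writer.writerows(rows)
    return buffer.getvalue()

# illustrative row: student 3 present in hours 1-5, absent in hour 6
sheet = write_attendance([[3, 'Anchal Agrawal', '', 'P', 'P', 'P', 'P', 'P', 'A']])
```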
4.2.4. Email Notification:
Figure 4.5: Email Notification
Figure 4.5 shows the email notification sent to parents at the end of the day. Every day, at the
end of the day’s class sessions, an email is sent to each student’s parents/guardians notifying
them of their ward's attendance for that day.
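The percentage in the message is derived from the hour marks recorded in the sheet. Below is a sketch of just that computation, with the greeting text mirroring the appendix code; the actual dispatch through Django's send_mail is omitted so the example stays self-contained.

```python
# Compose the daily notification body from a student's hour marks
# ('P' present, 'A' absent), as the appendix code does; sending the
# message via django.core.mail.send_mail is deliberately left out.
def attendance_email(name, hour_marks):
    periods = len(hour_marks)
    present = sum(1 for mark in hour_marks if mark != 'A')
    percent = (present * 100) / periods
    return ("Dear Parent, Your Ward " + name +
            "'s Attendance for today is: " + str(percent) + ' %')
```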
5. Conclusions and Future Scope
5.1 Conclusions
The Automated Visage Recognition attendance system has the primary goal of automating the
process of managing attendance. The system involves three main steps: registration,
authentication and update. Its fundamental aim is to eliminate the time consumed by, and the
paperwork required for, manual attendance. Technology has always helped people adapt their
routines for convenience; putting this idea into practice here removes a tedious, repetitive task
and helps the whole organization work more effectively and efficiently.
5.2 Future Scope
The system is built on a combination of several technologies, overcomes most flaws of manual
processes and thus stands apart from the existing systems. However, database management
remains an area of concern: the database needs to be pruned on a regular, cyclic basis to avoid
uncontrolled growth. Furthermore, since the system is built upon machine-learning techniques,
effective training and careful procedures must be followed to push accuracy as high as possible.
Finally, the system could be integrated with the attendance systems currently used in the
organization in order to achieve maximum efficiency.
Bibliography
[1] Kewen Yan, Shaohui Huang, Yaoxian Song, Wei Liu and Neng Fan, “Face Recognition Based
on Convolution Neural Network,” Proceedings of the 36th Chinese Control Conference.
[2] Abhish Ijari, Anand Mannikeri and Vinod Kumar Gulmikar, “Automated Attendance System
using Haarcascade: A Face Recognition Approach,” International Journal for Research in
Applied Science & Engineering Technology (IJRASET).
[3] B.M. Sri Madhu, Kavya Kanagotagi and Devansh, “IoT based Automatic Attendance
Management System,” 2017 International Conference on Current Trends in Computer, Electrical,
Electronics and Communication (CTCEEC).
[4] T. Saimounika and K. Kishore, “Real time locating system using RFID for Internet of
Things,” 2017 International Conference on Energy, Communication, Data Analytics and Soft
Computing (ICECDS).
[5] Priya Pasumarti and P. Purna Sekhar, “Classroom Attendance Using Face Detection and
Raspberry-Pi,” International Research Journal of Engineering and Technology (IRJET), e-ISSN:
2395-0056, p-ISSN: 2395-0072, Volume 05, Issue 03, March 2018.
APPENDIX A – Source Code
1. VIEWS.PY
# collects helper functions and classes that “span” multiple levels of MVC
from django.shortcuts import render
# OpenCV-Python is a library of Python bindings designed to solve computer vision problems
import cv2
# the OS module in Python provides a way of using operating-system-dependent functionality
import os
# NumPy is a basic yet powerful package for scientific computing and data manipulation in Python
import numpy as np
# the Python Imaging Library adds support for opening, manipulating, and saving many
# different image file formats
from PIL import Image
# the csv module helps you to elegantly process data stored within a CSV file
import csv
# wrapper provided to make sending email extra quick, to make it easy to test email sending
# during development, and to provide support for platforms that can’t use SMTP
from django.core.mail import send_mail
# the sleep() method suspends execution for the given number of seconds
from time import sleep


# view function handling registration, training and recognition
def AutoViRe(request):
    value = 0
    if request.method == "POST" and "student_choice" in request.POST:
        new_student = request.POST.get('student')
        print(new_student)
        if new_student == 'yes':
            value = 1
        else:
            value = 3

    if request.method == "POST" and "register" in request.POST:
        # this will return video from the webcam on your computer
        cam = cv2.VideoCapture(0)
        # set video width
        cam.set(3, 640)
        # set video height
        cam.set(4, 480)
        # a Haar cascade is basically a classifier which is used to detect
        # the object it has been trained for from the source
        face_detector = cv2.CascadeClassifier(
            'C:/Users/HP/Desktop/AutoViRe/AutoViReapp/haarcascade_frontalface_default.xml')
        print(face_detector)
        # for each person, enter one numeric face id
        face_id = int(request.POST.get('student_id'))
        print(face_id)
        print("\n [INFO] Initializing face capture. Look at the camera and wait ...")
        # initialize the individual sampling face count
        count = 0
        while True:
            # read the video frame being captured
            ret, img = cam.read()
            # convert the frame from the BGR colour space to grayscale
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            # detect the facial features described in haarcascade_frontalface_default.xml
            faces = face_detector.detectMultiScale(gray, 1.3, 5)
            print(faces)
            for (x, y, w, h) in faces:
                # draw a blue rectangle around the detected face
                cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
                count += 1
                # save the captured face into the dataset folder
                cv2.imwrite("C:/Users/HP/Desktop/AutoViRe/AutoViReapp/dataset/User." +
                            str(face_id) + '.' + str(count) + ".jpg",
                            gray[y:y + h, x:x + w])
                # cv2.imshow() displays an image in a window
                cv2.imshow('Visage Capture', img)
            # press 'ESC' to exit the video
            k = cv2.waitKey(5) & 0xff
            if k == 27:
                break
            # take 100 face samples and stop the video
            elif count >= 100:
                break
        # do a bit of cleanup
        print("\n [INFO] Exiting Program and cleanup stuff")
        # release the camera resource
        cam.release()
        # destroy all created windows
        cv2.destroyAllWindows()
        value = 2

    if request.method == "POST" and "train" in request.POST:
        # path for the face image database
        path = 'C:/Users/HP/Desktop/AutoViRe/AutoViReapp/dataset'
        # Local Binary Patterns Histogram (LBPH) is a type of visual descriptor
        # used for classification in OpenCV
        recognizer = cv2.face.LBPHFaceRecognizer_create()
        detector = cv2.CascadeClassifier(
            "C:/Users/HP/Desktop/AutoViRe/AutoViReapp/haarcascade_frontalface_default.xml")

        # function to get the images and label data
        def getImagesAndLabels(path):
            # get the paths of all the files in the folder
            imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
            # create an empty face list
            faceSamples = []
            # create an empty ID list
            ids = []
            # loop through all the image paths, loading the ids and the images
            for imagePath in imagePaths:
                # load the image and convert it to grayscale
                PIL_img = Image.open(imagePath).convert('L')
                # convert the PIL image into a numpy array
                img_numpy = np.array(PIL_img, 'uint8')
                # get the id from the image file name
                id = int(os.path.split(imagePath)[-1].split(".")[1])
                # extract the face from the training image sample
                faces = detector.detectMultiScale(img_numpy)
                # if a face is there, append it to the list along with its id
                for (x, y, w, h) in faces:
                    faceSamples.append(img_numpy[y:y + h, x:x + w])
                    ids.append(id)
            return faceSamples, ids

        print("\n [INFO] Training faces. It will take a few seconds. Wait ...")
        faces, ids = getImagesAndLabels(path)
        # train the model with the captured images for the respective ids
        recognizer.train(faces, np.array(ids))
        # save the model into trainer.yml
        recognizer.write('C:/Users/HP/Desktop/AutoViRe/AutoViReapp/trainer.yml')
        # print the number of faces trained and end the program
        print("\n [INFO] {0} faces trained. Exiting Program".format(len(np.unique(ids))))
        value = 3

    if request.method == "POST" and "recognize" in request.POST:
        recognizer = cv2.face.LBPHFaceRecognizer_create()
        recognizer.read('C:/Users/HP/Desktop/AutoViRe/AutoViReapp/trainer.yml')
        cascadePath = "C:/Users/HP/Desktop/AutoViRe/AutoViReapp/haarcascade_frontalface_default.xml"
        faceCascade = cv2.CascadeClassifier(cascadePath)
        font = cv2.FONT_HERSHEY_SIMPLEX
        # initialize the id counter
        id = 0
        # names related to ids: example ==> Sandesh: id=1, etc.
        names = ['Unknown', 'Sandesh', 'Shweta', 'Anchal Agrawal', 'Wasef Mohiuddin',
                 'Prashanti', 'Alka', 'Huda', 'Harshika']
        # initialize and start real-time video capture
        cam = cv2.VideoCapture(0)
        # set video width
        cam.set(3, 640)
        # set video height
        cam.set(4, 480)
        # define the minimum window size to be recognized as a face
        minW = 0.1 * cam.get(3)  # minimum width
        minH = 0.1 * cam.get(4)  # minimum height
        # set of values in the format (id, name, email id)
        data = [[0, 'Unknown', ''], [1, 'Sandesh', ''], [2, 'Shweta', ''],
                [3, 'Anchal Agrawal', '[email protected]'],
                [4, 'Wasef Mohiuddin', '[email protected]'],
                [5, 'Prashanti', ''], [6, 'Alka', ''], [7, 'Huda', ''],
                [8, 'Harshika', '[email protected]']]
        # number of lectures per day
        periods = 6
        fieldname = ['Unique ID', 'Name', 'E-mail ID']
        for i in range(periods):
            fieldname.append('Hour ' + str(i + 1))
        for period in range(periods):
            ret, img = cam.read()
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = faceCascade.detectMultiScale(
                # input grayscale image
                gray,
                # parameter specifying how much the image size is reduced at each image scale
                scaleFactor=1.2,
                # parameter specifying how many neighbours each candidate rectangle should
                # have to retain it; this affects the quality of the detected faces --
                # a higher value results in fewer detections but with higher quality
                minNeighbors=5,
                # minimum possible object size; objects smaller than this are ignored
                minSize=(int(minW), int(minH)),
            )
            data1 = []
            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
                # predict the user id and the confidence of the prediction, respectively
                id, confidence = recognizer.predict(gray[y:y + h, x:x + w])
                # if confidence is less than 100 ==> "0" would be a perfect match
                if confidence < 100:
                    id = names[id]
                    confidence = " {0}%".format(round(100 - confidence))
                else:
                    id = "Unknown"
                    confidence = " {0}%".format(round(100 - confidence))
                cv2.putText(img, str(id), (x + 5, y - 5), font, 1, (255, 255, 255), 2)
                # cv2.putText(img, str(confidence), (x + 5, y + h - 5), font, 1, (255, 255, 0), 1)
                cv2.imshow('AutoViRe Detection', img)
                cv2.waitKey(10) & 0xff
                data1.append(id)
            print(data1)
            data1 = list(set(data1))
            # mark the students recognized in this period as present ...
            for i in data1:
                data[names.index(i)].insert(period + 3, "P")
            # ... and everyone not recognized as absent
            for i in range(len(data)):
                if len(data[i]) != period + 4:
                    data[i].insert(period + 3, "A")
            sleep(2)
        print(data)
        cam.release()
        cv2.destroyAllWindows()
        cam = cv2.VideoCapture(0)
        cam.set(3, 640)  # set video width
        cam.set(4, 480)  # set video height
        # define the minimum window size to be recognized as a face
        minW = 0.1 * cam.get(3)
        minH = 0.1 * cam.get(4)
        # open the CSV in write mode to replace previous results
        # (append mode would keep previous results instead)
        with open('StudentsDB.csv', 'w') as writeFile:
            # write values into the csv file in dictionary form
            writer = csv.DictWriter(writeFile, fieldnames=fieldname)
            # write the headers of the csv file
            writer.writeheader()
            # write the attendance rows computed from the facial recognition output
            writer = csv.writer(writeFile)
            writer.writerows(data)
        # release the camera
        cam.release()
        # destroy all created windows
        cv2.destroyAllWindows()
        print("\n [INFO] Exiting Program and cleanup stuff")
        for i in range(len(data)):
            # body of the email
            text = "Dear Parent, Your Ward " + data[i][1] + "'s Attendance for today is: "
            # subject of the email
            subject = 'Regarding College Attendance'
            if i == 4 or i == 3 or i == 8:
                print(data[i][1])
                count = periods
                for j in range(periods):
                    if data[i][j + 3] == 'A':
                        count -= 1
                print(count)
                text += str((count * 100) / periods) + ' %'
                # send the email to the corresponding student's parent's email id
                send_mail(subject, text, '', [data[i][2]], fail_silently=False)
        # indicate the end of the program
        print("End of Visage Detection")
    return render(request, 'sample.html', {"value": value})
2. INTERFACE.HTML
{% load static %}
<!DOCTYPE html>
<html>
<head>
<title>AutoViRe</title>
<style type="text/css">
.bg
{
background:#181515 0 no-repeat fixed;
height: 100%;
width: 100%;
}
.container
{
position: absolute;
top: 50%;
left: 50%;
width: 1200px;
height: 600px;
margin-top: -300px; /* Half the height */
margin-left: -600px; /* Half the width */
/* border:7px solid #FFC000;*/
}
.s1{
position: absolute;
}
.s11{
font-family: "Raleway", sans-serif;
font-size: 35px;
color: white;
margin-left: 320px;
margin-top: 200px;
}
.s2{
position: absolute;
}
.s22{
margin-left: 550px;
margin-top: 280px;
font-family: "Raleway", sans-serif;
}
.s3{
position: absolute;
}
.button {
background-color: #181515; /* dark grey */
border: none;
font-family: "Raleway", sans-serif;
color: white;
padding: 16px 32px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 16px;
margin: 4px 2px;
-webkit-transition-duration: 0.4s; /* Safari */
transition-duration: 0.4s;
cursor: pointer;
position: absolute;
}
.button5 {
background-color: white;
color: black;
font-family: "Raleway", sans-serif;
border: 2px solid #555555;
margin-left: 535px;
margin-top: 350px;
position: absolute;
}
.button5:hover {
background-color: #555555;
color: white;
font-family: "Raleway", sans-serif;
position: absolute;
}
html { overflow-y: hidden;
overflow-x: hidden;
}
#container1{
position: absolute;
top: 50%;
left: 50%;
width: 1200px;
height: 600px;
margin-top: -300px; /* Half the height */
margin-left: -600px; /* Half the width */
/* border:7px solid #FFC000;*/
}
.r1{
position: absolute;
margin-left: 450px;
margin-top: 130px;
}
</style>
</head>
<body align="center">
<div>
<form method="POST">
{% csrf_token %}
<!-- <div class="container">
<img id="layer" src="./static/images/bgimage1.png" alt="nn" STYLE="position:absolute;
width:1200px;height:600px;">
<div class="s1">
--><h1><u>AutoViRe: Automated Visage Recognition Attendance System</u></h1>
<!-- </div> -->
</div>
{% csrf_token %}
{% if value == 0 %}
<h2><i>Do you want to register?</i><br></h2>
<input type="radio" name="student" value="yes">Yes
<input type="radio" name="student" value="no">No<br><br>
<input type="submit" name="student_choice">
{% endif %}
{% if value == 1 %}
Enter the Student ID:
<input type="number" name="student_id">
<input type="submit" name="register">
{% endif %}
{% if value == 2 %}
<h2><i>Train the model</i></h2>
<input type="submit" name="train">
{% endif %}
{% if value == 3 %}
<h2><i>Test the model</i></h2>
<input type="submit" name="recognize">
{% endif %}
</form>
</body>
</html>
3. URLS.PY
"""AutoViRe URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/2.1/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path
from AutoViReapp import views
urlpatterns = [
    path('admin/', admin.site.urls),
    path('AutoViRe', views.AutoViRe, name='AutoViRe'),
]
4. SETTINGS.PY
"""
Django settings for AutoViRe project.
Generated by 'django-admin startproject' using Django 2.1.7.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.1/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'hfwtt*cu4kq0i1g3e)0ij1s%wyx1q8m49yf^e1jwp=*b_$re*&'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['*']
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = '[email protected]'
EMAIL_HOST_PASSWORD = '*********'
EMAIL_PORT = 587
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'AutoViReapp',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'AutoViRe.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'AutoViRe.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
STATIC_URL = '/static/'
5. APPS.PY
from django.apps import AppConfig


class AutovireappConfig(AppConfig):
    name = 'AutoViReapp'
6. WSGI.PY
"""
WSGI config for AutoViRe project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'AutoViRe.settings')
application = get_wsgi_application()
7. MANAGE.PY
#!/usr/bin/env python
import os
import sys

if __name__ == '__main__':
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'AutoViRe.settings')
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)
APPENDIX B – User Manual
1. Creation of AutoViRe app on the Django framework
$ django-admin startproject AutoViRe
Figure B.1: Creation of Django project
This creates the Django project and generates: manage.py, a command-line utility that lets you
interact with the Django project in various ways; an initialization file __init__.py; a settings.py
file that deals with the app settings and configuration; a urls.py file that holds the URL
declarations for the project, a “table of contents” of your Django-powered site; and a wsgi.py
file, an entry point for WSGI-compatible web servers to serve your project.
In the settings.py file, make the following modifications.
Change the DATABASES setting to the code below:
Figure B.2: Databases class
Change the TIME_ZONE setting to the code below:
Figure B.3: Time_Zone class
Then on the command prompt, execute $ python manage.py migrate
Figure B.4: Making database migrations into the project
The migrate command looks at the INSTALLED_APPS setting and creates any necessary
database tables according to the database settings in the project’s settings.py file and the database
migrations shipped with the app. You will see a message for each migration it applies.
On the command prompt, execute $ python manage.py runserver
Figure B.5: Running the localhost server
On the command prompt, execute $ python manage.py startapp autovireapp
Figure B.6: Creation of django app
This creates the app in the project that can run the functionalities of the project.
In the settings.py file, change the INSTALLED_APPS setting to the code below:
Figure B.7: Installed_Apps class
Add ‘autovireapp’ at the end, after all the other default apps.
On the command prompt, execute $ python manage.py makemigrations autovireapp
Figure B.8: Making migrations into Django App
On the command prompt, execute $ python manage.py createsuperuser to create a user who can
log in to the admin site.
Figure B.9: Creation of Super User for Admin
On the command prompt, execute $ python manage.py runserver
Figure B.10: Running of localhost server
On the internet browser, type the URL: http://127.0.0.1:8000/admin
Figure B.11: Admin page login for the Django localhost
Now, try logging in with the superuser account you created in the previous step. You should see
the Django admin index page:
Figure B.12: Django project Admin Page
2. Redirection to Project on command line
Figure B.13 Redirecting to Project Path
Figure B.13 shows the Command Prompt in which the user navigates from the root to the project
location on the system. This is done in order to run the server on which the application is built;
only after this step can the user run and execute the application on the server.
3. Startup of Server
Figure B.14 Running Server on Command Line
Figure B.14 shows the server being started from the command line. The ‘runserver’ command
performs system checks to verify that the environment is set up properly and all configurations
are correctly maintained. It then starts the development server and displays its address.
4. Execution on Browser
Figure B.15 Browser Display of Application Execution
Figure B.15 shows the homepage of the application. This is the first page of the application and
asks users whether they want to register their face with the system. Based on the input given, the
application displays the corresponding page.
5. Unique Identification assigned to Visage
Figure B.16 Registration based on Unique Student ID
Figure B.16 shows the numeric entry asking for the unique ID under which the dataset will be
classified. Whatever entry is given, the system registers the face under that identification number,
ensuring easy, effective and efficient classification.
6. Visage Capture
Figure B.17 Facial Registration
Figure B.17 shows a face being captured by the system and stored under the unique ID given in
the previous step. Based on the frame count set in the code, snapshots are taken and stored for
each unique identifier.
7. Dataset Storage
Figure B.18 Dataset consisting of several registered faces
Figure B.18 shows the dataset containing images of all faces that have been registered into the
system. This is an integral part of the system, since it is responsible for both training and testing
the system model.
8. Training the Model
Figure B.19 Browser Display of Training Webpage
Figure B.19 shows the training webpage, which triggers the training of the registered faces within
the dataset.
Figure B.20 Training Display on the command line
Figure B.20 shows the model being trained on the command line; training improves the model’s
accuracy each time it runs, with the duration depending on the number of faces.
Figure B.21 Completion of Training
Finally, Figure B.21 shows the successful completion of training after some time, leaving the
model ready to be executed and tested.
9. Testing Model
Figure B.22 Browser Display of Final Testing
Figure B.22 shows the final webpage, which executes the main part of the application and detects
faces based on the dataset. This step also updates the database immediately and sends an email to
the respective parents.
10. Face Detection
Figure B.23 Successful Face Detection based on Dataset
Figure B.23 shows faces being detected along with their corresponding names. The database is
updated automatically behind the scenes, and this runs iteratively until the schedule for that
particular day comes to an end.
11. Attendance Capture on an hourly basis
Figure B.24 Final Tally of Attendance for the current day
Figure B.24 shows, hour by hour, the tally of which students were present in class. This is
displayed on the command line and is also updated in the database immediately for each hour.
12. Database Updating based on timely capture
Figure B.25 Final Updated Database
Figure B.25 shows the updated database with all the values marked throughout the day for the
respective students. This eliminates the extensive paperwork normally required when attendance
is managed manually.
13. Email sent to the respective parents
Figure B.26 Email sent to the Respective Parent
Figure B.26 shows the email sent to a parent indicating their ward’s attendance for the day. This,
too, is automated: it executes on its own after the database has been updated.