Int. Journal of Scientific Research in
Computer Science and Engineering An Open Access Scholarly Peer-reviewed Scientific Research Publishing Journal
ISSN: 2320-7639
Aim & Scope
Soft Computing; High Performance Computing; Engineering and Emerging Technologies; Computer Sciences & Information Technology; High Speed Networking & Information Security; Computational Sciences and Recent Technology
Volume-5, Issue-1, February 2017 Edition
IJSRCSE, Editor-in-Chief
Prof. (Dr.) N.S. Choudhari e-mail:[email protected]
Copyright © IJSRCSE
ISROSET Publication, India
www.virtualcom.in, www.isroset.org, e-mail: [email protected]
Editorial
Message from Managing Editor
International Journal of Scientific Research in Computer Science and Engineering (IJSRCSE) is one of the
leading and growing open-access, peer-reviewed, monthly scientific research journals, committed to the timely
publication of original research, survey, and tutorial contributions on the analysis and development of computing
science, engineering, information technology, and computational science. It has gained a foothold in Asia and is open
to the world. The journal is designed mainly to serve researchers and developers dealing with sustainable
computing, high-performance computing, high-speed networking and information security, engineering and emerging
technologies, and computational sciences and recent technology. Papers that provide both theoretical analysis
and carefully designed computational experiments are particularly welcome.
It is the vision of IJSRCSE to publish original and unpublished research articles, review articles, survey papers,
and refereed articles, as well as auxiliary material such as case studies, technical articles, short communications,
symposia, commentaries, perspectives, conceptual papers, and proceedings based on theoretical or experimental
work in all areas of human study, without financial restriction.
The IJSRCSE editorial board consists of several internationally recognized experts and guest editors. Wide circulation is
assured because libraries and individuals worldwide subscribe to and reference IJSRCSE. The journal has grown
rapidly to its current level of over 500 articles published and indexed. The journal is published monthly, with
distribution to librarians, universities, research centers, researchers in computing, and computer scientists. The
journal maintains strict refereeing procedures through its editorial policies in order to publish papers of only the
highest quality.
IJSRCSE publishes two types of issues: regular issues and theme-based special issues. Announcements regarding
special issues are made from time to time, and once an issue is announced as a theme-based special issue,
no regular issue is published for that period.
All papers in the online version are freely available, with open-access full-text (.pdf) content and a permanent
worldwide web link. Abstracts are indexed and available in major academic databases.
IJSRCSE is an interdisciplinary, rapidly peer-reviewed journal that gives you the flexibility to submit articles that do
not fit neatly within traditional journals. It is ideal for authors who want to quickly announce recent developments,
methods, or new products to a broad audience. Some additional benefits of publishing in IJSRCSE are:
Double-blind peer review by at least two referees, on the basis of originality, novelty, clarity,
completeness, relevance, significance, and research contribution.
A decision on your manuscript as early as possible from the date of submission.
Multimedia integration and commenting.
Usage and citation data.
Vast global outreach to thousands of users through the IJSRCSE digital library.
An individual e-certificate provided to each author.
Authors can download their full-length articles at any time.
Articles are published immediately upon receipt of the final versions.
A convenient author-pays publishing model, with a nominal article processing charge per article.
IJSRCSE is indexed in Google Scholar, DPI Digital Library, Thomson Reuters RID, ORCID, ResearchBib,
CiteSeerX, Mendeley, ResearchGate, WorldCat, SlideShare, Scribd, Academia, and many more.
www.isroset.org
ISSN: 2320-7639 © IJSRCSE, INDIA
Copyright © IJSRCSE – Volume-5, Issue-1, February 2017 Edition
All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by
any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system,
without prior written permission from the copyright owner. However, permission is not required to copy abstracts of papers,
on condition that a full reference to the source is given.
ISSN: 2320-7639
Disclaimer
The opinions expressed and figures provided in the journal IJSRCSE are the sole responsibility of the authors. The publisher
and the editors bear no responsibility in this regard. Any and all such liabilities are disclaimed. All disputes are subject to Indore
jurisdiction only.
Office Address
501- King Apartment, Navlkha Chouraha, Indore,
INDIA, Pin: 452001
Mobile: +91 9424420408
e-mail: [email protected], webpage: www.isroset.org
Honorable Editorial Board and Reviewer Members
Editor-in-Chief
Dr. Narendra S. Chaudhari
Director, MANIT, Bhopal
Prof., Computer Science & Engineering, IIT Indore, India

Associate Editor-in-Chief
Dr. Pradeep Sharma
Head, Dept. of Computer Science, Govt. Holkar Science College, DAVV, Indore
Executive Editors (Joint)
Dr. Umesh Kumar Singh
Institute of Computer Science, Vikram University, Ujjain, India
Dr. Neetesh Purohit
Indian Institute of Information Technology, Allahabad
Honorable Editorial Board/Reviewer Committee Members
Dr. D.P. Kothari
Dept. of Electrical Engineering, J B Group of Education Institutions, Hyderabad - India
Dr. Feng Liu
School of IT & Electrical Engineering, University of Queensland, Australia
Dr. Baoming Ge
Dept. of Electrical & Computer Engineering, Michigan State University, Michigan, USA
Dr. Shaligram Prajapat
International Institute of Professional Studies, DAVV, Indore- India
Dr. P.K. Paul
Raiganj University, Raiganj, Uttar Dinajpur, WB, India
Dr. Chang-Ho Kim
Dept. of Mechanical Engineering, Dong-Eui University, South Korea
Dr. A N Zaki Rashed
Dept. of Electronic Engineering, Menoufia University, Menouf- Egypt
Prof. Anuj Kumar Gupta
Dept. of CSE, RIMT Institute of Engineering & Technology, Mandi Gobindgarh- India
Dr. Trung Duong
Research Faculty at Center for Advanced Infrastructure and Transportation
Rutgers, State University of New Jersey (RU), United States
Dr. Kirti Mathur
International Institute of Professional Studies, DAVV, Indore- India
Dr. Umesh Kumar
Principal: Govt. Women’s Poly, Ranchi
Dr. Christophe Feltus
University of Namur, Namur, Belgium
Dr. Mithilsh Mittal
Govt. Holkar Science College, DAVV, Indore
Prof Ashok Sharma
MIET, Jammu University of Jammu, Jammu INDIA
Prof. Pradeep K. Sharma
Dept. of CSE, MIT Group of Institute, Ujjain, M.P. India
IJSRCSE is now indexed in the Following Databases
Thomson Reuters ResearcherID is fully integrated with Web of Science™ and provides the global research community with an invaluable index to author information.
IARC-JCRR provides access to quality-controlled open-access journals and proceedings. It is the world's largest-growing professional organization for the indexing of scholarly peer-reviewed journals and proceedings, boosting the worldwide visibility and accessibility of your scientific content.
DPI Digital Library is the world's largest abstracting and indexing professional database for peer-reviewed scientific literature: research articles, review articles, survey papers, short communications and case studies, conference articles, and book chapters.
CiteSeerX is a scientific literature digital library and search engine that focuses primarily on the literature in computer and information science. It aims to improve the dissemination of scientific literature and to provide improvements in functionality, availability, comprehensiveness, and efficiency in the access of scientific knowledge.
Microsoft Academic Search is a free search engine for academic papers and resources principally in the field of computer science, developed by Microsoft Research Asia, Beijing.
ResearchBib is an open-access, high-standard indexing database for researchers and publishers. ResearchBib freely indexes journals, research papers, calls for papers, and research positions.
WorldCat is the world's largest network of library content and services. WorldCat libraries are dedicated to providing access to their resources on the Web, where most people start their search for information.
Google Scholar is a freely accessible web search engine that indexes the full text of scholarly literature across an array of publishing formats and disciplines.
Academia.edu is a social networking website for academics. Academia.edu is a platform for academics to share research papers. The company's mission is to accelerate the world's research.
Advanced Science Index is an indexing service that indexes publishers, including publishers of scientific and art materials. It aims at the rapid evaluation and indexing of all local and international scientific and media publications.
Index Copernicus (IC) is a world-wide gateway to complex scientific information. Index Copernicus’ innovative approach to the international scientific information services is integrative, interactive and inclusive.
The arXiv (arXiv.org), a project by Cornell University Library, provides open access to over a third of a million e-prints in physics, mathematics, computer science and quantitative biology.
ResearchGate is the largest network for scientists, research professionals, and affiliated people to share papers, ask and answer questions, and find collaborators. It aims to connect researchers and make it easy for them to share and access scientific output, knowledge, and expertise.
BASE is one of the world's most voluminous search engines especially for academic open access web resources. BASE is operated by Bielefeld University Library.
The Internet Archive offers permanent access for researchers, historians, scholars, people with disabilities, and the general public to historical collections that exist in digital format.
GetCITED is a website database that lists publication and citation information on academic articles whose information is entered by members.
Call for Papers
Impact Factor: 1.032 (calculated by IARCIF)
Dear Readers, IJSRCSE solicits original research papers contributing to the foundations and applications of Contemporary
Computing for the Volume-5, Issue-2 in Mar-Apr 2017.
Authors are cordially invited to submit original or unpublished, experimental, theoretical research work as
papers/articles to the upcoming Edition/Issues. The aim of IJSRCSE is to publish research articles/papers, review
articles, survey articles in rapidly developing field of computer science, engineering, data mining, artificial
intelligence, cloud computing security and cryptography, information security, distributed computing, computational
sciences and more (see coverage areas) without financial restriction.
The manuscript/paper can be submitted through the Online Submission System in the IJSRCSE template. If you face
problems with the online submission system, the paper can be submitted via email to [email protected]. The email
must bear the subject line "IJSRCSE: Paper Submission". If you face problems with paper submission, please feel
free to contact the editor at [email protected].
Paper Template:
Publication of any articles/ manuscript in International Journal of Scientific Research in Computer Science and
Engineering requires strict conformance to the paper template. However, initial submission of an article or
manuscript for review need not be compliant with the template. (Visit Author Guidelines)
Once the paper is selected, the authors will be asked to submit the camera-ready paper. The camera-ready paper
is the final version of the article/ manuscript that will be published in the International Journal of Scientific Research
in Computer Science and Engineering Digital Library. While submitting the camera-ready version, the authors must take
extreme care that the paper strictly conforms to the prescribed IJSRCSE template. The camera-ready paper
template can be downloaded from this link.
Abstracting and Indexing
All registered papers will be published in DPI Digital Library with unique Digital no. Click here
Formal Article/Paper Acceptance Requirements
1. The article is presented in an intelligible fashion, written in the IJSRCSE template and in standard English.
2. The article should be original, plagiarism-free writing that enhances the existing body of knowledge in the given subject area. Original review articles, survey papers, case studies, technical notes, and short communications are acceptable, even if new data/concepts are not presented.
3. Experiments, statistics, and other analyses are performed to a high technical standard and are described in sufficient detail.
4. Conclusions are presented in an appropriate fashion and are supported by the data.
5. Figures/images should be clear and visible. Clearly mention figure names and numbers in increasing order.
6. Equations/formulas should be prepared in equation editor software; please do not submit scanned equations/formulas.
7. Tables and figures should be in MS Word or Excel; please do not submit scanned tables or figures.
Best Paper Award
Our editorial committee will select the best paper for every issue. A best paper award e-certificate will be given to the authors of the selected manuscript. Authors of published papers will be provided with e-certificates.
Topics
International Journal of Scientific Research in Computer Science and Engineering is cross-disciplinary in nature. The topics are not limited to the list that is available at this link.
TABLE OF CONTENTS
Title : Detection of Cross Browser Inconsistency by Comparing Extracted
Attributes
Author's : C. P. Patidar, Meena Sharma, Varsha Sharda
Section : Research Paper Page No : 1-6
Type : Journal Volume-05 , Issue-01
DPI :-> 16.10053.IJSRCSE.2017.V5I1.16.522
Title : Hybrid DWT, FFT and SVD based Watermarking Technique for Different
wavelet Transforms
Author's : Kanchan Thakur
Section : Research Paper Page No : 7-12
Type : Journal Volume-05 , Issue-01
DPI :-> 16.10053.IJSRCSE.2017.V5I1.712.523
Title : Zero day Attacks Defense Technique for Protecting System against
Unknown Vulnerabilities
Author's : Umesh Kumar Singh, Chanchala Joshi, Suyash Kumar Singh
Section : Review Paper Page No : 13-18
Type : Journal Volume-05 , Issue-01
DPI :-> 16.10053.IJSRCSE.2017.V5I1.1318.524
Title : Data Mining: A Comparative Study of its Various Techniques and its
Process
Author's : Marie Fernandes
Section : Review Paper Page No : 19-23
Type : Journal Volume-05 , Issue-01
DPI :-> 16.10053.IJSRCSE.2017.V5I1.1923.525
Title : Information and Communication Technologies in State affairs:
Challenges of E-Governance
Author's : Stephen John Beaumont
Section : Review Paper Page No : 24-26
Type : Journal Volume-05 , Issue-01
DPI :-> 16.10053.IJSRCSE.2017.V5I1.2426.526
Title : Comparative Study and Analysis of Unique Identification Number and
Social Security Number
Author's : Sarita Sharma, Rakesh Gaherwal
Section : Review Paper Page No : 27-30
Type : Journal Volume-05 , Issue-01
DPI :-> 16.10053.IJSRCSE.2015.V5I1.2730.527
Title : A proposed Method for Mining High Utility Itemset with Transactional
Weighted Utility using Genetic Algorithm Technique (MHUI_TWU-GA)
Author's : Pradeep K.Sharma, Vaibhav Sharma and Jagrati Nagdiya
Section : Review Paper Page No : 31-35
Type : Isroset-Journal Volume-05 , Issue-01
DPI :-> 16.10053.IJSRCSE.2017.V5I1.3135.538
Title : Analysis of Security in Cloud-Learning Systems
Author's : Sangeetha Rajesh
Section : Survey Paper Page No : 36-40
Type : Isroset-Journal Volume-05 , Issue-01
DPI :-> 16.10053.IJSRCSE.2017.V5I1.3640.539
Title : Security Issues on Online Transaction of Digital Banking
Author's : Wakil Ghori
Section : Review Paper Page No : 41-44
Type : Isroset-Journal Volume-05 , Issue-01
DPI :-> 16.10053.IJSRCSE.2017.V5I1.4144.540
© 2017, IJSRCSE All Rights Reserved 1
International Journal of Scientific Research in Computer Science and Engineering
Research Paper
Volume-5, Issue-1, pp.1-6, February (2017) E-ISSN: 2320-7639
Detection of Cross Browser Inconsistency by Comparing
Extracted Attributes
C.P. Patidar1, Meena Sharma2, Varsha Sharda3*
1Information Technology, IET, DAVV, Indore, India
2Computer Science, IET, DAVV, Indore, India 3Computer Science, Medicaps, Indore, India
Available online at: www.isroset.org
Received 28th Dec 2016, Revised 12th Jan 2017, Accepted 02nd Feb 2017, Online 28th Feb 2017
Abstract—The advancement of web technology and the popularity of web applications amplify the inconsistencies
between various web browsers. These inconsistencies augment cross-browser incompatibilities, giving a particular
web application a different look on different browsers. In some cases, Cross-Browser Inconsistencies (XBIs)
consist of acceptable differences, whereas in other cases they may entirely prevent users from accessing part of a
web application's functionality. Therefore, the testing of a web application must be performed comprehensively on
multiple browsers to achieve consistency. Available tools and techniques require considerable manual effort to
recognize such issues and provide limited support for fixing their cause. In this paper, we propose a
technique for detecting cross-browser issues without human intervention.
Index Terms: Browser, Cross Browser Inconsistency, Reliability, Web application
I. INTRODUCTION
Presently, web applications have evolved from web systems or
websites based on a client-server model. When a client issues
a request to the server through a web browser, the server-side
components are invoked. These communications
generate requests to the server, and the server responds to
such requests with updates to the current web page,
programmed in HTML (HyperText Markup Language) or
XML (eXtensible Markup Language), and with other related
resources, such as style information in CSS (Cascading Style
Sheets), client-side code (e.g., JavaScript), images, and so
on. Subsequently, these resources are used to calculate and
render an updated web page in the web browser. Web
applications often have variable elements, such as
advertisements and generated content (e.g., time, date, etc.),
which differ across multiple requests. If these
elements are not ignored, the technique might consider them
changes across browsers, resulting in false positives.
Hence, the technique must discover and leave out such
elements during comparison.
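The filtering of variable elements described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `stable_elements` helper and the snapshot representation (element id mapped to rendered text) are our own assumptions. Two snapshots of the same page are taken in the same browser; any element whose content differs between them is treated as variable and excluded from the cross-browser comparison.

```python
def stable_elements(snapshot_a, snapshot_b):
    """Return ids of elements whose text is identical across two
    snapshots of the same page in the same browser; variable
    elements (ads, timestamps) are discarded before comparison."""
    return {
        elem_id
        for elem_id, text in snapshot_a.items()
        if snapshot_b.get(elem_id) == text
    }

# Two requests to the same page in the same browser:
first = {"header": "My Shop", "clock": "10:31:04", "ad": "Buy X!"}
second = {"header": "My Shop", "clock": "10:31:09", "ad": "Buy Y!"}

print(sorted(stable_elements(first, second)))  # only 'header' is stable
```

In a full pipeline, only the stable ids would then be passed on to the cross-browser diff, so that advertisements and timestamps never surface as false positives.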
A web browser is a software application for
retrieving, presenting, and traversing information resources
on the World Wide Web. An information resource, such as a
web page, image, or video, is identified by a Uniform
Resource Identifier (URI). The browser contacts the web
server and requests the information; the server returns it, and
the browser displays it on the computer. The major
problems associated with using a web application through
different web browsers are related to web browser
inconsistency. Web applications are also being used by
many for all activities in every field of work. A variation
in the arrangement of elements or content of a web-based
application on different browsers is known as a Cross-Browser
Inconsistency. When a user executes a web application on
multiple browsers, some web applications exhibit
different behaviours and thus introduce Cross-Browser
Inconsistencies (XBIs) [1]. XBIs are differences between
a web application's appearance, behaviour, or both, when it is
executed in two different environments. If cross-browser
inconsistencies are not correctly tested during the
testing phase, they can negatively affect the experience of
the web application's users. Consequently, identification of
cross-browser inconsistencies is an essential factor and
a serious concern for companies dependent on such
applications for business. Rapidly changing technologies
have accordingly driven up the number of web browser
version releases, and browsers are the main interfaces to
deliver and access information in one click [2].
In practice, a website is often developed using one
browser rather than for multiple browsers. Testing across a
variety of browsers will expose issues the developer may be
unaware of. Accordingly, we performed a systematic study
*Corresponding Author: Varsha Sharda
E-mail: [email protected], Tel.: +91 7354531265
on various real-world web applications. This study enabled
us to establish a categorization of XBIs that aids in defining our
technique. We found three major varieties of XBIs:
structural, content, and behavioural.
(i) Structural XBIs: This kind of XBI affects the
structure, or arrangement, of individual web pages.
The web page structure is basically a particular
arrangement of elements, which in case of structural
XBIs is incorrect in a particular browser. For
instance, the misalignment of one or more elements
on a specified web page, in a particular browser, can
comprise a structural XBI [3].
(ii) Content XBIs: This type of XBI is examined in the
content of individual components on a web page.
Such differences can take place, where the graphical
appearance of a web page element, or the textual
value of an element, is different across two
browsers. We further categorize this type of
inconsistency as visual-content and text-content
XBIs [3].
(iii) Behavioural XBIs: These types of XBIs involve
differences in the behaviour of individual widgets
on a web page. An example of such an XBI would
be a button that performs a particular action within
one browser and a totally different action, or no
action at all, in another browser [3].
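Detection of the first two categories can be illustrated by comparing extracted attributes of matched elements. The sketch below is hypothetical (the element representation, attribute names, and pixel tolerance are our assumptions, not the paper's): geometry differences beyond a small tolerance are reported as structural XBIs, and differing text as content XBIs.

```python
def classify_xbis(elems_b1, elems_b2, tolerance=2):
    """Compare extracted attributes of matched elements from two
    browsers.  Geometry differences beyond `tolerance` pixels are
    structural XBIs; differing text is a (text-)content XBI.
    (Illustrative names and representation, not the paper's code.)"""
    issues = []
    for elem_id, a in elems_b1.items():
        b = elems_b2.get(elem_id)
        if b is None:
            issues.append((elem_id, "structural: missing in browser 2"))
            continue
        for attr in ("x", "y", "width", "height"):
            if abs(a[attr] - b[attr]) > tolerance:
                issues.append((elem_id, f"structural: {attr} differs"))
                break
        if a["text"] != b["text"]:
            issues.append((elem_id, "content: text differs"))
    return issues

# The same button, rendered 40 px lower in the second browser:
browser1 = {"btn": {"x": 10, "y": 20, "width": 80, "height": 24, "text": "Submit"}}
browser2 = {"btn": {"x": 10, "y": 60, "width": 80, "height": 24, "text": "Submit"}}
print(classify_xbis(browser1, browser2))
```

Behavioural XBIs would need dynamic exploration (triggering widget actions in both browsers), which a static attribute diff like this cannot capture.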
In addition, the internet-based web application has quietly
become one of the important mediums of business.
Software faults in web applications can potentially lead to
the failure or underperformance of the business. Most of
the work on web applications has been on making them
more powerful, but quite little has been done to guarantee their
quality. Key quality attributes for web applications include
reliability, availability, interoperability, and security, apart
from ensuring the functional and usability aspects [4].
Web browser compatibility testing is technical and
puzzling, something often left for the web developer to
deal with. The problem is that if a website is not compatible
with the plethora of browsers available, it will impinge
on the business's reputation [5].
Recent work on identifying XBIs has proposed
techniques that focus only on certain aspects of a web
application's execution and are well suited to specific
types of XBIs. For example, the WebDiff tool uses computer
vision to detect XBIs, whereas CrossT uses graph
isomorphism along with text comparison to find XBIs [6].
These tools provide only partial and imprecise solutions to
the XBI detection problem.
To address the drawbacks of existing techniques, we
propose a technique that integrates a rich set of comparison
techniques and orchestrates them by applying each technique to
the category of XBIs it is best matched to detect. Our
technique is an automated, well-defined, and comprehensive
approach to XBI detection.
The key contributions of this work are:
A new technique and tool for detecting both visual
and structural XBIs in web applications.
An innovative, powerful technique to detect visual
XBIs.
An evaluation of this technique on several real-world
web applications that shows its effectiveness in
detecting different kinds of XBIs.
The rest of the paper is organized as follows. Section I
contains an introduction to cross-browser inconsistency along
with web applications. Section II covers related work in the
area of cross-browser inconsistency of web applications.
Section III describes the problem definition of our research
work. Section IV explains our proposed solution to detect
cross-browser inconsistency, with a flow chart. Section V
covers the application areas of our research work, Section VI
presents the expected outcomes of our proposed methodology,
and Section VII concludes the research work with future directions.
II. RELATED STUDY
With the proliferation of several browser versions and release
updates, testing infrastructure requirements are no longer
static. Given the various smart devices flooding the market
each day, cross-browser compatibility has emerged as a
major challenge for software testers.
It has been observed that TCS has put a spotlight on its
Cross Browser Testing Tool. It offers an
automated solution with a preconfigured collection of
devices and test environments that enables quick testing
across multiple operating systems and browsers. It connects
to real devices for the testing of web-based mobile applications
and ensures precise cross-browser and cross-device testing.
It covers three major areas of testing: cross-browser UI
validation; functional testing to ensure the accuracy of
functionality; and RWD testing to address inconsistencies in
pages while ensuring the best possible viewing and user
interaction across a wide range of devices. It also offers a
solution using efficient layout comparison, functional tests,
responsive web design, broken-link validation, and portal-based
management for cross-browser support, gaining up to 60%
savings in test effort through dynamic script creation. It
provides an enhanced UI scanning method and centralized test
management, and almost 50% time savings with automatic PDF
evaluation and parallel test execution across multiple
browser versions [7].
In addition, the details of a comprehensive approach for cross-compatibility
testing of websites have been specified. It
covers the technical complexities of a website and the
differences in browsers, operating systems, and devices
that require detecting cross-browser inconsistency. The
parameters that a website must meet before its worldwide
launch, and some automated cross-browser testing tools
that assist in testing a website on a variety of browsers,
operating systems, and devices and in meeting technical
requirements for assuring website quality, have also been
suggested. Subsequently, a range of tools has been compared
on the basis of speed, pricing model, interfaces, delays,
scroll bars, and additional features. Accordingly, web
performance testing checks website functionality on
different web browsers, operating systems, and hardware
platforms for software and hardware memory-leak errors [8].
Next, a comparative study of cross-browser compatibility as a
design issue in different websites, based on an online tool
using the .NET Framework, has been devised. It examines
development and design issues in various kinds of
websites, such as government, educational, commercial,
social networking, and job portal websites.
The results obtained after testing these five
categories of websites show that educational and
social networking sites display the least compatibility across
multiple browsers, whereas job portals and commercial and
government websites show 100% adherence to the website
design principles suggested by the W3C [9].
Moreover, an approach to make the browser a protected
environment for running programs, by introducing a
separation method that insulates one application from the
performance of another, has been presented [10]. It shows the
use of OS processes within the browser to safely separate
programs in a manner that is both efficient and backwards
compatible with existing websites. It also recognizes
whether content on a web page is active or rich active
content, along with the trouble browsers have with failure
isolation, concurrency, and memory management. Through
measurements of both web content and browser
behaviour, the authors have shown that current web browsers
provide unpredictable surroundings for running applications,
which leads to serious problems with respect to failure
isolation, concurrency, and memory management. They have
shown that browser-based applications can be safely
isolated from each other using OS processes. Processes
prevent unwanted communications between programs in the
browser, and they are efficient relative to other
browser operations, in both time and memory overhead.
Then, a quantitative categorization of browser
vulnerabilities, to project the numbers of vulnerabilities for
mapping test and improvement resources more efficiently,
has been presented [11]. Vulnerability discovery data for
the three key browsers, Internet Explorer, Firefox, and
Mozilla, have been examined and fitted to a vulnerability
discovery model, and the goodness of fit is statistically
examined. Vulnerabilities are also classified based on cause,
severity, impact, and source, into classes such as input
validation errors (including boundary condition errors and
buffer overflows), access validation errors, exceptional
situation errors, environmental errors, configuration errors,
race condition errors, and design errors.
Later, the difficulty of cross-browser compatibility
testing of web applications has been posed as a functional
consistency check of web application behaviour across
different web browsers, with an automated solution [12]. This
approach consists of automatically analyzing the given web
application under different browser environments,
capturing the behaviour as a finite-state machine,
comparing the generated models for equivalence on a
pairwise basis, and exposing any observed discrepancies. The
overall approach consists of a two-step process. The first step
is to automatically crawl the given web application under
multiple browser environments and to capture and store the
observed behaviour, under each browser, as a state-machine
navigation model. The crawling is done in an
identical fashion under each browser, to replicate precisely
the same set of user interaction sequences with the web
application under each environment. The second step
consists of formally comparing the generated models for
similarity on a pairwise basis and revealing any
observed discrepancies.
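The second step of this two-step process can be sketched as follows, assuming each navigation model is stored as a set of (state, action, next_state) transitions; the representation and names are illustrative, not taken from [12].

```python
def compare_models(model_a, model_b):
    """Pairwise comparison of two navigation models, each a set of
    (state, action, next_state) transitions captured while crawling
    under one browser.  The models are equivalent when both returned
    sets are empty; anything else is a reported discrepancy.
    (A sketch of the idea, not the tool's actual code.)"""
    only_in_a = model_a - model_b
    only_in_b = model_b - model_a
    return only_in_a, only_in_b

# Hypothetical crawl results for the same app under two browsers:
firefox = {("home", "click:login", "login"), ("login", "submit", "dashboard")}
chrome = {("home", "click:login", "login"), ("login", "submit", "error")}

only_firefox, only_chrome = compare_models(firefox, chrome)
print(only_firefox)  # transitions observed only under the first browser
```

A discrepancy such as `("login", "submit", ...)` leading to different target states would be flagged as a behavioural XBI for manual inspection.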
Further, a tool for automatically identifying XBIs in web
applications, without requiring any effort from the developer,
has been provided [13]. This tool can work
with any web application that runs on desktop browsers. The
model captures the screen and then compares the graphs
generated by the crawler using a graph-isomorphism checking
method. It identifies different types of inconsistencies
in a web application and generates easy-to-understand,
actionable reports for the developer, allowing them to
deal with XBIs more efficiently.
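As a rough stand-in for the screen-comparison part of such a tool (the function and pixel representation are our own assumptions, not code from [13]), a per-pixel mismatch ratio between two equally sized screenshots can flag candidate visual differences:

```python
def mismatch_ratio(img_a, img_b):
    """Fraction of differing pixels between two equally sized
    screenshots, each given as a 2-D list of pixel values.  A ratio
    above a chosen threshold flags a candidate visual XBI.
    (Illustrative stand-in for the tool's image comparison.)"""
    total = diff = 0
    for row_a, row_b in zip(img_a, img_b):
        for pa, pb in zip(row_a, row_b):
            total += 1
            diff += pa != pb
    return diff / total

# Two tiny 2x3 "screenshots" differing in one pixel:
a = [[0, 0, 1], [1, 1, 1]]
b = [[0, 0, 1], [1, 0, 1]]
print(mismatch_ratio(a, b))  # 1 of 6 pixels differs
```

Real tools work on full captures and typically ignore regions already marked as variable content, so that dynamic ads do not inflate the ratio.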
Accordingly, a thread-level study of the workload
generated by Google's Chrome browser on a heterogeneous
multi-processing (HMP) platform found in numerous
smartphones has been presented [14]. It examines thorough
traces of the thread workload generated by the web browser,
especially its rendering engine, and discusses the power-saving
potential in relation to power-management policies in
Android. It emphasizes power management through web
browser workload characterization on HMP platforms and
seeks potential power savings based on the interpretation. Its
new contribution is a focus on the actual thread workloads and
the function calls invoked by the web browser, and it provides
information that can be used for power management. Then,
various tools enabling the parallel execution
of a range of automated tests, using several remote test
environments with different web browsers, have been
recommended, along with a tool for automated testing of
web applications based on the Selenium RC framework.
III. PROBLEM DEFINITION
A high-quality web design aims to offer an identical appearance for a website viewed from any web browser. Consequently, a good-quality website must be viewable in its
ISROSET- Int. J. Sci. Res. in Computer Science and Engineering Vol-5(1), Feb 2017, E-ISSN: 2320-7639
© 2017, IJSRCSE All Rights Reserved 4
complete functionality on any web browser. Every webpage is made up of a range of components, each with its own characteristics, and each affects the performance of the webpage in different contexts. Like other parameters of performance assessment, the browser-compatibility aspect of a website is affected, directly or indirectly, by the different components of a webpage. In addition, different technologies produce compatibility issues. As a result, during the design stage websites must be tested meticulously for compatibility in different browsing environments. Section I discussed the parameters that can affect the cross-browser inconsistency of a web application.
IV. PROPOSED SOLUTION
Description
To identify cross-browser inconsistencies, we propose a model to detect XBIs. Figure 1 depicts an overview of our proposed XBI detection technique, which takes as input the URL of the home page of the web application under test and the two browsers considered for the testing, Browser1 and Browser2. It produces as output the list of identified inconsistencies. Our proposed model compares attributes extracted from the crawler-generated graph.
(i) Web Crawler
A web crawler is an automated program, or script, that methodically scans, or "crawls", through web pages to generate an index of the data it is set to look for. This process is called web crawling or spidering. We propose to use a web crawler called WebSPHINX (Website-Specific Processors for HTML Information extraction), written in Java. It crawls different websites and produces a graph for each. It is an open-source web crawler, and its source code is available as websphinx.zip.
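To make the crawler's output concrete, here is a minimal illustration (not WebSPHINX itself) of turning pages into the kind of navigation graph the later modules consume. It uses only the standard library; pages are supplied as HTML strings, whereas a real crawler would fetch them over HTTP.

```python
# Illustrative sketch: build a navigation graph (url -> outgoing links)
# from HTML pages, using only the standard library.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values of <a> tags encountered while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def build_graph(pages):
    """pages: dict url -> html string; returns dict url -> list of links."""
    graph = {}
    for url, html in pages.items():
        parser = LinkExtractor()
        parser.feed(html)
        graph[url] = parser.links
    return graph

site = {
    "/": '<a href="/products">Products</a><a href="/contact">Contact</a>',
    "/products": '<a href="/">Home</a>',
}
print(build_graph(site))
```

Running the same crawl under each browser yields one such graph per browser, which the attribute extractor then mines.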
(ii) Attribute Extractor
The attribute extractor is based on the following hypothesis: in the graphs generated by the web crawler, attribute terms repeat across the multiple graphs of a web application, so they are more likely to occur than other terms. We exploit this redundancy to capture the attributes. The simplest way to select attributes would therefore be to take the most frequent terms in the graphs. However, this method has a drawback: it yields only frequent attributes and is likely to neglect rare attributes that appear in only a few graphs. To overcome this problem, we propose a two-stage method. In the first stage, we cluster all the words found in the graphs such that all the words close to an attribute are grouped together in a single cluster, resulting in word clusters of different sizes. In the second stage, we extract one attribute from each cluster.
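The two-stage idea can be sketched as follows. This is a toy illustration under strong assumptions: the paper does not specify the clustering criterion, so words are grouped here by a crude lowercased stem, and one representative per cluster is picked by frequency, so that rare attributes in small clusters still surface.

```python
# Minimal sketch of the two-stage attribute extraction:
# stage 1 clusters words (here naively by lowercased, de-pluralized stem),
# stage 2 extracts the most frequent member of each cluster as its attribute.
from collections import Counter, defaultdict

def cluster_words(words):
    clusters = defaultdict(list)
    for w in words:
        clusters[w.lower().rstrip("s")].append(w)   # crude stemming, an assumption
    return clusters

def extract_attributes(words):
    attributes = []
    for members in cluster_words(words).values():
        most_common, _count = Counter(members).most_common(1)[0]
        attributes.append(most_common)
    return attributes

words = ["width", "Width", "widths", "color", "font", "fonts", "font"]
print(sorted(extract_attributes(words)))  # ['color', 'font', 'width']
```

Note that a pure global-frequency cutoff would have dropped "color", which appears only once; clustering first preserves it.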
(iii) Comparator
This module performs textual analysis of corresponding elements to detect text-content XBIs. To detect image-content XBIs, it compares screen images of the corresponding elements on the web page. The structure of the page extracted by the crawler is analyzed by the Layout Analysis component to create alignment graphs, which represent the relative alignment of web page elements. Comparison can be either pairwise, where two attributes from the graphs are compared, or three-way, where three attributes extracted from the graphs are compared.
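The difference between pairwise and three-way comparison can be illustrated on sets of extracted attributes. This is an assumed representation for exposition (attribute strings per browser), not the paper's data structure: the three-way variant flags every attribute not shared by all three browsers in a single pass.

```python
# Sketch of the comparator on per-browser attribute sets (assumed format).
def pairwise_diff(attrs_a, attrs_b):
    """Attributes present in exactly one of the two browsers."""
    return set(attrs_a) ^ set(attrs_b)

def three_way_diff(attrs_a, attrs_b, attrs_c):
    """Attributes not shared by all three browsers at once."""
    common = set(attrs_a) & set(attrs_b) & set(attrs_c)
    return (set(attrs_a) | set(attrs_b) | set(attrs_c)) - common

chrome  = {"width:100px", "color:red"}
firefox = {"width:100px", "color:blue"}
edge    = {"width:100px", "color:red"}
print(sorted(three_way_diff(chrome, firefox, edge)))  # ['color:blue', 'color:red']
```

Comparing three graphs simultaneously avoids the extra pass that three separate pairwise comparisons would require, which is the speed argument made in Section VI.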
(iv) Classifier
This module classifies each inconsistency found in a web application according to its type: structural, content, or behavioral.
(v) Report Generator
This module generates a report, written in HTML, that tabulates the set of detected XBIs.
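A report generator of this kind can be very small. The field names below (element, type, browsers) are assumptions for illustration, not the paper's schema:

```python
# Illustrative report generator: tabulate detected XBIs as an HTML table.
def generate_report(xbis):
    rows = "".join(
        f"<tr><td>{x['element']}</td><td>{x['type']}</td>"
        f"<td>{x['browsers']}</td></tr>"
        for x in xbis
    )
    return ("<table><tr><th>Element</th><th>Type</th><th>Browsers</th></tr>"
            f"{rows}</table>")

report = generate_report([
    {"element": "#nav", "type": "structural", "browsers": "Chrome vs Firefox"},
])
print(report)
```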
V. APPLICATION
Application areas for detecting cross-browser inconsistency include e-commerce websites, commercial websites, educational websites, government websites, news portals, and social networking websites. For all of these, there is a requirement that a web application behave similarly when executed on multiple different browsers.

Figure 1: Model for cross-browser inconsistency detection. The URL and the two browsers (Browser 1, Browser 2) feed the Web Crawler; the Attribute Extractor and the Comparator (pairwise or three-way comparison) feed the Classifier, which labels structural, content, or behavioural inconsistencies for the Report Generator.
VI. EXPECTED RESULTS
It has been observed that, when a web application is executed on multiple browsers, the expected outcome of our proposed model is to identify the three main types of inconsistency, if they exist. The proposed model also generates a report of the inconsistencies. Because this technique uses a three-way comparator that compares three crawler-generated graphs simultaneously, finding XBIs can be faster than with other available tools. The expected results are illustrated below:
We found that structural XBIs are the most common class of XBIs, occurring in 57% of the subjects with XBIs.
We ascertained that content XBIs occurred in 30% and 22% of the sites with XBIs, respectively.
We determined that behavioural XBIs occurred in 9% of the web applications with XBIs.
Thus, we can conclude that behavioural XBIs affect the functionality of individual components, resulting in broken navigation between different screens. Structural and content XBIs, on the other hand, involve differences in the arrangement or depiction of elements on a particular web page.
VII. CONCLUSION
XBIs are a severe problem for web developers. Existing research tools target only particular aspects of XBIs and can report a significant number of false positives and negatives. To deal with these limitations, we presented our proposed model for the detection of XBIs. To accomplish this task, this paper presented an overview of the proposed technique, its applications, and its expected results. In addition, the model generates easy-to-understand reports for the developer, allowing them to deal with XBIs more effectively. This is a challenging problem, as an application will necessarily look different on two platforms, yet it should offer the same, or at least similar, functionality.
REFERENCES
[1] C.P. Patidar and Meena Sharma, "An Automated Approach for Cross Browser Inconsistency (XBI) Detection", Ninth Annual ACM India Conference, ACM India, Oct 21-23, 2016.
[2] Nepal Barskar, C.P. Patidar and Meena Sharma, "Analysis and Identification of Cross Browser Inconsistency Issues in Web Application using Automation Testing", International Journal of Computer Science and Information Technology & Security (IJCSITS), ISSN: 2249-9555, Vol. 6, No. 3, May-June 2016.
[3] Nepal Barskar and C.P. Patidar, "A Survey on Cross Browser Inconsistencies in Web Application", International Journal of Computer Applications (0975-8887), Volume 137, No. 4, March 2016.
[4] “WebTesting”, mindlance.com, [email protected].
[5] Ochin and Jugnu Gaur, “Cross Browser Incompatibility:
Reasons and Solutions”, International Journal of Software
Engineering & Applications (IJSEA), Vol.2, No.3, July
2011.
[6] Shauvik Roy Choudhary, Husain Versee and Alessandro
Orso, “WEBDIFF: Automated Identification of Cross-
browser Issues in Web Applications”, 26th IEEE
International Conference on Software Maintenance in
Timisoara Romania, 978-1-4244-8628-1/10, 2010.
[7] http://www.tcs.com/assurance, 2016.
[8] Sanjay Dahiya, Ved Parkash and T.R. Mudgal, "Comprehensive Approach for Cross Compatibility Testing of Website", National Workshop-cum-Conference on Recent Trends in Mathematics and Computing (RTMC), Proceedings published in International Journal of Computer Applications (IJCA), 2011.
[9] Jatinder Manhas,“ Comparative Study of Cross Browser
Compatibility as Design Issue in Various Websites” , BIJIT
- BVICAM’s International Journal of Information
Technology Bharati Vidyapeeth’s Institute of Computer
Applications and Management (BVICAM), New Delhi
(INDIA), NOV 2014.
[10] Charles Reis, Brian Bershad, Steven D. Gribble and Henry
M. Levy, “Using Processes to Improve the Reliability of
Browser-based Applications”, University of Washington
Technical Report UW-CSE, DEC 2007.
[11] Sung-Whan Woo, Omar H. Alhazmi and Yashwant K.
Malaiya,“AN ANALYSIS OF THE VULNERABILITY
DISCOVERY PROCESS IN WEB BROWSERS”,
proceeding of the 10th IASTED International Conference
Software engineering and Applications,
Dallas,TX,USA,ISBN: 0-88986-642-2 / CD: 0-88986-599-
X, NOVEMBER 13-15, 2006,
[12] Ali Mesbah and Mukul R. Prasad,“ Automated Cross-
Browser Compatibility Testing”, ICSE ’11,Waikiki,
Honolulu, HI, USA ,ACM 978-1-4503-0445-0/11/05, May
21–28, 2011 .
[13] Shauvik Roy Choudhary, Mukul R. Prasad and Alessandro
Orso,“ X-PERT: A Web Application Testing Tool for
Cross-Browser Inconsistency Detection”, ISSTA’14,San
Jose, CA, USA,Copyright 2014 ACM 978-1-4503-2645-
2/14/07,July 21–25, 2014.
[14] Nadja Peters, Sangyoung Park, Samarjit Chakraborty, Benedikt Meurer, Hannes Payer and Daniel Clifford, "Web Browser Workload Characterization for Power Management on HMP Platforms", CODES/ISSS '16, Pittsburgh, PA, USA, ACM, ISBN: 978-1-4503-4483-8/16/10, October 01-07, 2016.
[15] Shauvik Roy Choudhary, Mukul R. Prasad and Alessandro
Orso, “CROSSCHECK: Combining Crawling and
Differencing to Better Detect Cross-browser
Incompatibilities in Web Applications”, 2012.
[16] Shauvik Roy Choudhary, "Detecting Cross-browser Issues in Web Applications", ICSE '11, Waikiki, Honolulu, HI, USA, ACM 978-1-4503-0445-0/11/05, May 21-28, 2011.

Authors Profile
C.P. Patidar received the B.E. degree in information technology and the M.E. degree in computer engineering. He is an assistant professor of Information Technology at Devi Ahilya University, Indore, India. His research interests are in cross-browser inconsistencies, GPGPU computing, CUDA programming, multithreaded architecture, and the memory architecture of computers.
Meena Sharma received the B.E. degree in computer engineering and the M.Tech. degree in computer science in 1992 and 2004 respectively, and the Ph.D. degree in computer engineering in 2012. She is a professor of Computer Engineering at Devi Ahilya University, Indore, India. Her research interests are in software engineering, software quality metrics, and object-oriented modelling and design.
Varsha Sharda received the B.E. degree in computer engineering. She is an assistant professor of Computer Science and Engineering at Medi-Caps University, Indore, India. Her research interests are in software engineering, database management, and object-oriented analysis and design.
International Journal of Scientific Research in Computer Science and Engineering, Research Paper
Volume-5, Issue-1, pp.7-12, February (2017) E-ISSN: 2320-7639
Hybrid DWT, FFT and SVD based Watermarking Technique for
Different Wavelet Transforms
Kanchan Thakur
Research Scholar, Dept. of Information Technology, SATI, Vidisha, India
Available online at: www.isroset.org
Received 30th Dec 2016, Revised 12th Jan 2017, Accepted 04th Feb 2017, Online 28th Feb 2017
Abstract— The primary goal in developing a digital image watermarking (DIW) procedure is to meet both imperceptibility and robustness requirements. Digital watermarking is an effective process for protecting multimedia content, serving purposes such as copyright safeguarding and authentication. In this paper, we propose an SVD-based digital watermarking procedure for robust watermarking of digital images for copyright protection. In the proposed research, a novel and robust digital watermarking method is introduced that applies a combination of DWT (Discrete Wavelet Transform) and FFT (Fast Fourier Transform) along with SVD (Singular Value Decomposition). The use of this combination of three techniques in our proposed work increases the robustness and imperceptibility of the extracted image. One of the most essential benefits of the proposed idea is the robustness of the system against an extensive set of attacks. Analysis and experimental outcomes show much improved effectiveness of the proposed method in comparison with pure SVD-based watermarking and with the procedure that does not use a wavelet transform. The results are compared with the base work, in which a single-level DWT-SVD combination is used for watermarking for copyright security. It is shown through PSNR (peak signal-to-noise ratio) that the method provides very high imperceptibility. Experimental outcomes verify that the proposed system gives good picture quality for watermarked images.
Index Terms—Watermarking; embedding; extraction; PSNR
I. INTRODUCTION
The web has become indispensable, and consequently security and privacy issues have come to the fore of the computing fraternity. These issues need to be addressed with utmost urgency and the highest level of dedication. Watermarking addresses these privacy and security issues. Watermarking has helped not only in protection but also in resolving numerous copyright and privacy issues, which became some of the most contentious problems during the growth of the internet. Watermarking methods can be categorized by domain, document type, perception, and application. The domain of a watermarking procedure is divided into two parts: the spatial domain and the frequency domain. In spatial-domain watermarking, the watermark is embedded by changing the pixel values of the host image or video directly. The major advantages of pixel-based methods are that they are conceptually simple and have very low computational complexity, and as a consequence they are widely used in video watermarking, where real-time performance is an important concern.

In the frequency domain, the watermark is embedded for the robustness of the watermarking mechanism. There are three primary transform-domain approaches: SVD, FFT, and DWT. The principal strength of transform-domain procedures is that they can take advantage of special properties of the transform domains to overcome the limitations of pixel-based methods or to support further features. In general, transform-domain methods require higher computational time. In a transform-domain procedure, the watermark is embedded distributively over the whole area of the original data. The host video is first converted into the frequency domain using transformation techniques. The transformed domain coefficients are then altered to store the watermark information. The inverse transform is subsequently applied in order to obtain the watermarked video. On the basis of the document type, watermarking may be applied to images, text, audio, and video [1].
A. Types of Watermarking
a) Visible: The watermark is visible and can be a text or a logo. It is used to identify the owner [2].
b) Invisible: The watermark is embedded into the image in such a way that it cannot be seen by the human eye. It is used to protect the image's authentication and also to prevent it from being copied.
B. Watermarking Applications
Watermarking technologies are applied in every digital medium where security and owner identification are needed [3]:
1. Owner Identification
2. Copy Protection
3. Medical Applications
4. Data Authentication
5. Fingerprinting
C. Watermarking Attacks
There are various possible malicious, intentional, or accidental attacks to which a watermarked object is likely to be subject. The availability of a broad range of image-processing software has made it possible to perform attacks on the robustness of watermarking systems. The aim of these attacks is to prevent the watermark from performing its intended purpose [4].
1. Removal Attack
2. Interference attack
3. Geometric attack
4. Low pass filtering attack
5. Forgery attack
6. Security Attack
7. Protocol Attack
8. Cryptographic attacks
Another example of this type of attack is the oracle attack [5]. In the oracle attack, a non-watermarked object is created when a public watermark-detector system is available. These attacks are similar to the attacks used in cryptography.
D. Watermarking Techniques
A number of watermarking methods are available; such methods are also commonly found in audio watermarking.
Discrete Wavelet Transform:
The DWT is essentially a system of filters. Two filters are involved: one is the "wavelet filter" and the other the "scaling filter". The wavelet filter is a high-pass filter, while the scaling filter is a low-pass filter. Figure 1 shows the workflow of the DWT. A benefit of the DWT over other transforms is that it enables good localization in both the time and the spatial-frequency domains. Because of its natural multi-resolution character, wavelet coding schemes are especially suitable for applications where scalability and tolerable degradation are important. The DWT is preferred because it provides both spatial localization and a frequency spread of the watermark within the host image. The hierarchical nature of the DWT offers the possibility of analyzing a signal at various resolutions and orientations.
Figure 1. Workflow of DWT
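The one-level decomposition shown in Figure 1 can be sketched numerically. The following is a minimal illustration for exposition (a plain Haar averaging/differencing scheme, not necessarily the paper's exact filter bank) producing the four sub-bands LL, LH, HL, and HH:

```python
# Minimal one-level 2-D Haar DWT sketch: low-pass = pairwise average,
# high-pass = pairwise difference, applied along rows then columns.
import numpy as np

def haar_dwt2(img):
    a = img.astype(float)
    # transform rows
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # transform columns of each half
    ll = (lo[0::2, :] + lo[1::2, :]) / 2
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh

img = np.array([[1, 1, 2, 2],
                [1, 1, 2, 2],
                [3, 3, 4, 4],
                [3, 3, 4, 4]])
ll, lh, hl, hh = haar_dwt2(img)
print(ll)  # LL is a 2x2 smoothed approximation of the image
```

Applying the same scheme to LL again yields the next decomposition level, which is how the 3-level DWT used later in the paper is built up.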
Fast Fourier Transform:
The FFT algorithm calculates the DFT of a sequence, or its inverse. Fourier analysis converts a signal from its original domain to a representation in the frequency domain, and vice versa. An FFT computes the transformation by factorizing the DFT matrix into a product of sparse (mostly zero) factors. An FFT computes the DFT and produces exactly the same result as evaluating the DFT definition directly; the most important difference is that the FFT is much faster. (In the presence of round-off error, many FFT algorithms are also more accurate than evaluating the DFT definition directly.) [6]
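The factorization idea can be demonstrated with a short radix-2 Cooley-Tukey sketch (an illustration, not the paper's implementation; the input length must be a power of two), checked against the direct DFT definition it accelerates:

```python
# Cooley-Tukey radix-2 FFT sketch, verified against the O(n^2) DFT.
import cmath

def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddle = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + twiddle[k] for k in range(n // 2)] + \
           [even[k] - twiddle[k] for k in range(n // 2)]

def dft(x):
    """Direct evaluation of the DFT definition, for comparison."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

signal = [1, 2, 3, 4]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(signal), dft(signal)))
print([complex(round(z.real), round(z.imag)) for z in fft(signal)])
# [(10+0j), (-2+2j), (-2+0j), (-2-2j)]
```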
SVD TECHNIQUE:
The SVD is a numerical method used for diagonalizable matrices in numerical analysis. In the SVD, a matrix is decomposed into a multiplication of three matrices; it is a linear-algebra scheme that decomposes a given matrix into three component matrices: the left singular vectors, the set of singular values, and the right singular vectors.
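The three-factor decomposition is a one-liner in practice. The sketch below shows the property watermarking schemes rely on: the product of the three factors reconstructs the matrix, so perturbing the singular values perturbs the image in a controlled way.

```python
# SVD: A = U * diag(S) * V^T; reconstructing the product recovers A.
import numpy as np

A = np.array([[4.0, 0.0],
              [3.0, -5.0]])
U, S, Vt = np.linalg.svd(A)          # left vectors, singular values, right vectors
reconstructed = U @ np.diag(S) @ Vt
print(np.allclose(reconstructed, A))  # True
```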
SVD watermarking is designed to work on binary watermarks. For a picture of N x N pixels and a binary watermark of p pixels, the picture is divided into (N/4) x (N/4) non-overlapping blocks of 4x4 pixels. This arrangement is used to decide the positions of the embedded blocks for each watermark bit. The steps used in video watermarking are as follows. Watermark insertion: a watermark insertion unit takes the ordinary video, the watermark, and a private key to produce the watermarked video; the key, the input video, and the watermark are passed through the insertion unit, which outputs a watermarked video. Watermark extraction: this has two phases, locating the watermark and recovering the watermark information. Watermark detection: this includes an extraction unit that first extracts the watermark and compares it with the originally inserted watermark; the output is yes or no, depending on whether the watermark is present [7].
II. LITERATURE SURVEY
This section reviews some previous work in the area of DIW.
[8] This paper proposes video watermarking with 3-level DWT that is perceptually invisible. Perceptually invisible means that the watermark is embedded in the video in such a manner that the change to the pixel values is not noticed. The proposed work uses two different videos and dissimilar logo images and shows how the watermark is detected and when it is not. A secret key is applied to the watermark image during the embedding process, and the same secret key is used when extracting the watermark image.
[9] This paper proposes a potent audio watermarking scheme based on LWT-DCT-SVD and DWT-DCT-SVD, with exploration of DE optimization and DM quantization. The appealing properties of SVD, LWT/DWT-DCT, DE, and the quantization procedure make the scheme very robust to various common signal-processing attacks. The experimental results confirm that the proposed watermarking scheme also has good imperceptibility. Comparison results with other SVD-based and similar algorithms indicate the superiority of the scheme.
[10] In 2013, the authors reviewed some recent algorithms and proposed a classification based on their intrinsic features, insertion methods, and extraction forms. Many watermarking algorithms reviewed in the literature show benefits in systems combining the wavelet transform (WT) with SVD. The paper also presents a review of the significant existing watermarking techniques employed in copyright protection, along with an introduction to digital watermarking, the properties of watermarking, and its applications. In future work, the use of coding and cryptography watermarks will be approached.
[11] "Wavelet Bases and Decomposition Series in the DIW" analyzes and compares the performance of different wavelet bases in the DIW and the effect of different wavelet decomposition series on DIW embedding, based on the application of wavelets in the DIW. The experiments showed that DIW embedding based on biorthogonal wavelets performs better than the others.
[12] "A New Digital Watermarking Algorithm Based on IWT and SVD" proposes a new digital watermarking algorithm combining the Non-Subsampled Contourlet Transform (NSCT) and SVD. The authors first apply the NSCT to the image and extract its low-frequency sub-band, then decompose the low-frequency sub-band by SVD, and finally embed the watermark in the decomposed singular values. The experimental results show that the new algorithm stands up well to geometric attacks, especially rotation attacks.
III. PROPOSED METHODOLOGY
From the literature review, it has been observed that most of the approaches introduced in the past have problems such as low imperceptibility, low data-embedding capacity, poor quality, and high conceptual complexity. These problems are addressed in the proposed work.

A new digital watermarking approach based on hybrid DWT-FFT and SVD is proposed in this work. The proposed algorithm is developed in three stages. First, different types of wavelets are applied to the host image to compute the four sub-bands of the original gray-scale image. The FFT is then applied to the LL sub-band of the host image, after which the SVD of the LL sub-band is calculated. To control as well as to adjust the strength of the watermark, we use a scale factor. In the second stage, the watermarked image is obtained by embedding the singular values of the LL sub-band of the watermark image into those of the original gray-scale image. In the final stage of the algorithm, exactly the reverse procedure is applied to extract the watermark image from the watermarked image. The performance of this scheme was evaluated with respect to imperceptibility. It can be seen from the results that the PSNR value of our proposed algorithm is higher. The proposed system provides good imperceptibility and robustness.
Figure 2. Watermark Embedding Procedure
[Figure 2 diagram: the cover image and the watermark image each pass through different wavelet transforms, FFT on the LL sub-band, and SVD on the LL sub-band; the embedding procedure then produces the watermarked image.]
Figure 3. Watermark Extracting Procedure
A. Proposed Algorithm
The general steps followed in the proposed technique are as follows.
1. Watermark Embedding
Step 1: Take the original image and convert it into a gray-scale image I.
Step 2: Apply the chosen wavelet transform to I and decompose it into the four sub-bands LL3, LH3, HL3 and HH3.
Step 3: Apply the FFT to the LL sub-band of I.
Step 4: Apply the SVD to the LL sub-band.
Step 5: Take the watermark picture and apply a 3-level DWT to decompose it into the four sub-bands LL3, LH3, HL3 and HH3.
Step 6: Apply the FFT to the LL sub-band of the watermark.
Step 7: Apply the SVD to the LL sub-band of the watermark.
Step 8: Modify the singular values (SVs) I_s of the cover with the SVs W_s of the watermark, scaled by α. Here α stands for the scale factor.
Step 9: Obtain the modified DWT coefficients.
Step 10: Finally, the watermarked picture W* is obtained by applying the inverse three-level DWT.
2. Watermark Extraction
Step 1: Take the watermarked image and apply the same process to calculate the SVs of the watermarked image.
Step 2: Subtract the SVs of the original picture (I_s) from the SVs of the watermarked picture (Wm_s) to get the SVs of the watermark image.
Step 3: Obtain the modified DWT coefficients.
Step 4: Get the watermark image by applying the inverse DWT-FFT process.
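The singular-value embedding and extraction pair at the core of the algorithm above can be sketched as follows. This is a simplified illustration under stated assumptions: the DWT and FFT stages are omitted, the embedding rule is taken to be the standard additive form Wm_s = I_s + α·W_s (the paper elides the exact formula), and α is the scale factor from Step 8.

```python
# Sketch of singular-value embedding/extraction (DWT/FFT stages omitted).
# Assumed additive rule: marked SVs = cover SVs + alpha * watermark SVs.
import numpy as np

def embed(cover_sv, watermark_sv, alpha=0.1):
    return cover_sv + alpha * watermark_sv

def extract(marked_sv, cover_sv, alpha=0.1):
    return (marked_sv - cover_sv) / alpha   # exact reverse of embed()

cover_sv = np.array([120.0, 45.0, 12.0])   # hypothetical LL sub-band SVs
wm_sv = np.array([30.0, 10.0, 2.0])
marked = embed(cover_sv, wm_sv)
print(extract(marked, cover_sv))            # recovers the watermark SVs
```

A smaller α favors imperceptibility (the marked SVs stay close to the cover's), while a larger α favors robustness, which is the trade-off discussed in the results section.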
IV. RESULT SIMULATION
This section presents the experimental analysis of the proposed technique. Numerous experiments were conducted using MATLAB.

The proposed technique uses a mixture of hybrid DWT-FFT along with SVD for embedding the watermark in the cover image. The focus of digital watermarking in the transform domain is to insert the maximum possible watermark signal without perceptually affecting image quality, so that the watermark remains imperceptible yet robust. A number of watermarking methods exist in the transform domain. With the help of these techniques, issues such as the visual quality of the image and robustness can be accommodated, but a single-transform-based watermarking scheme is not able to satisfy the diverse criteria desired for watermarking. Specifications such as imperceptibility with respect to payload capacity and robustness of a watermarking approach contradict each other: in order to increase the robustness, the payload should be increased, but this decreases the imperceptibility of the image. Incorporating imperceptibility and robustness simultaneously in a watermarking system design is an issue that needs to be addressed. The DWT reduces the image data, and the watermark is then embedded in the high-frequency sub-bands, which filters out unwanted information from the image. This helps to
keep the robustness and imperceptibility of the watermarked
picture.
Figure 4. (a) Host image, (b) watermark image, (c) embedded image, (d) extracted image
Table 1 shows the PSNR values of the extracted watermark images. PSNR is a ratio commonly used as a quality measurement between the original and the compressed picture. The higher the PSNR, the better the quality of the reconstructed or compressed image. The proposed algorithm gives higher PSNR values and therefore better image quality.
Table 2 shows the MSE values of the base and proposed algorithms. For practical purposes, MSE makes it possible for researchers to compare the "actual" pixel values of the original data with those of the degraded picture. As the name suggests, MSE represents the average of the squares of the "errors" between the actual picture and the noisy image, where the error is the amount by which the values of the original image differ from those of the degraded image. A lower MSE means higher picture quality.

The idea is that the higher the PSNR, the better the degraded picture has been reconstructed to match the original picture, and the better the reconstruction algorithm. This follows because we wish to minimize the MSE between images relative to the maximum signal value of the image.
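The two metrics are related by PSNR = 10·log10(MAX²/MSE), with MAX = 255 for 8-bit images. A short sketch (a generic implementation of the standard definitions, not the paper's MATLAB code):

```python
# MSE averages squared pixel errors; PSNR = 10 * log10(MAX^2 / MSE).
import numpy as np

def mse(original, degraded):
    return np.mean((original.astype(float) - degraded.astype(float)) ** 2)

def psnr(original, degraded, max_val=255.0):
    m = mse(original, degraded)
    return float("inf") if m == 0 else 10 * np.log10(max_val ** 2 / m)

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                              # one pixel off by 10
print(round(mse(a, b), 3), round(psnr(a, b), 2))  # 6.25 40.17
```

This makes the inverse relationship in the tables below concrete: halving the MSE raises the PSNR by about 3 dB.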
TABLE 1. PSNR VALUES OF BASE & PROPOSED ALGORITHMS

Wavelet Function   Noise           Base PSNR   Proposed PSNR
Haar               None            21.4123     50.0913
Haar               Salt & Pepper   21.2431     34.0682
Bior 5.5           None            21.4221     50.4864
Bior 5.5           Salt & Pepper   21.2229     34.0624
Bior 1.1           None            21.4123     50.0913
Bior 1.1           Salt & Pepper   21.2431     34.0682
Sym8               None            21.4120     49.7629
Sym8               Salt & Pepper   21.2430     34.0430
Coif5              None            21.4126     49.4620
Coif5              Salt & Pepper   21.2434     34.0349
TABLE 2. MSE VALUES OF BASE & PROPOSED ALGORITHMS WITHOUT NOISE

Wavelet Function   Base MSE   Proposed MSE
Haar               0.0072     0.0031
Bior 5.5           0.0070     0.0030
Bior 1.1           0.0072     0.0031
Sym8               0.0071     0.0032
Coif5              0.0075     0.0034
Figure 5. PSNR value without noise attack
Figure 6. PSNR value with Noise Attack
Figure 7. MSE value with noise attack
CONCLUSION
The proposed method uses the hybrid DWT-FFT technique along with the SVD technique for embedding the watermark in the cover image and follows the reverse scheme to extract the watermark picture. The proposed work is able to achieve moderate robustness and high imperceptibility with a reduced amount of data to be processed. A number of experiments were run, and analysis of the experimental results shows improved performance of the proposed method when compared with the single-level DWT-SVD based watermarking offered in the base procedure. The PSNR value generated by the proposed algorithm is much higher than that of the base algorithm, which confirms the enhanced quality of the images. The grouping of three methods, hybrid DWT-FFT along with SVD, introduced in the proposed method is the reason behind the better performance, good imperceptibility, and enhanced image quality.
Authors Profile
Kanchan Thakur is a student in the Dept. of
Information Technology, SATI, Vidisha
(M.P.), India. Her research field is image
watermarking.
International Journal of Scientific Research in Computer Science and Engineering (Review Paper)
Volume-5, Issue-1, pp.13-18, February (2017) E-ISSN: 2320-7639
Zero day Attacks Defense Technique for Protecting System
against Unknown Vulnerabilities
Umesh Kumar Singh¹, Chanchala Joshi²*, Suyash Kumar Singh³
¹School of Engineering and Technology, Vikram University, Ujjain, M.P., India
²Institute of Computer Science, Vikram University, Ujjain, M.P., India
³Institute of Engineering and Technology, Devi Ahilya Vishwavidyalaya, Indore, M.P., India
Available online at: www.isroset.org
Received 29th Dec 2016, Revised 15th Jan 2017, Accepted 08th Feb 2017, Online 28th Feb 2017
Abstract— Every organization connected to the internet faces the common threat of zero-day attacks. Zero-day exploits
go unnoticed until a specific vulnerability is actually identified and reported. Zero-day attacks are difficult to defend
against because they are mostly detected only after they have completed their course of action. Protecting networks,
applications and systems from zero-day attacks is a daunting task for an organization's security personnel. This paper
analyzes the research efforts related to the detection of zero-day attacks. The fundamental limitations of existing
approaches are signature generation for unknown activities and the false-alarm rate for anomalous behavior. To
overcome these issues, this paper proposes a new approach for zero-day attack analysis and detection, which senses the
organization's network and monitors the behavioral activity of a zero-day exploit at every stage of its life cycle. The
proposed approach provides a machine-learning based framework that senses network traffic and detects anomalous
network behavior in order to identify the presence of a zero-day exploit. The framework combines supervised
classification for the assessment of known classes with the adaptability of unsupervised classification in order to detect
new, previously unseen classes.
Index Terms— zero day attacks, unknown vulnerabilities, detection system, malware analysis, network security
I. INTRODUCTION
During the past few years, the rapidly growing use of
network services has presented the biggest challenge in
protecting a computing environment in which everything is
digital. Every day the world of digital information security
faces new challenges; an incredible flood of new devices is
challenging traditional methods of securing an organization's
network. Major software releases introduce important new
features very frequently, which results in unexpected
vulnerabilities [1]. Therefore, the overall security level of a
network cannot be measured by simply counting the known
vulnerabilities present in the system. Securing a network
system involves more than patching known vulnerabilities
and deploying firewalls or IDSs. Even a safe network
configuration has little value if it is vulnerable to zero-day
attacks. Zero-day attacks pose a serious threat to the
organization's network, as they can exploit unknown
vulnerabilities. Unknown vulnerabilities can cause harm at
any level of the system's security because no patches are
available. Moreover, the security risk level of unknown
vulnerabilities is difficult to measure due to their less
predictable nature.
According to Symantec's Internet Security Threat Report of
2016 [2], targeted attacks increased by 125% in 2015 over
the year before. Also, a new zero-day vulnerability was found
every week, on average, in 2015. Zero-day vulnerabilities
have trended upward over the last six years, with 8 zero-day
vulnerabilities reported in 2011, 14 reported in 2012 and 23
in 2013, nearly double the year before. In 2014, the number
held relatively steady at 24. However, in 2015, an explosion
in zero-day vulnerabilities reaffirmed the critical role of
zero-day attacks. 82 zero-day vulnerabilities were reported in
2016 up to the month of October. These estimates include
only vulnerabilities that were eventually reported; the true
number of zero-day vulnerabilities available to attackers
could be much higher. Figure 1 shows zero-day
vulnerabilities from 2011 to October 2016.
Zero-day attacks are attacks against system flaws that are
unknown and have no patch or fix [3, 4]. With traditional
defenses it is extremely difficult to detect zero-day attacks
because traditional security approaches focus on malware
signatures, and this information is unavailable in the case of
zero-day attacks. Attackers are extraordinarily skilled, and
their malware can go undetected on systems for months or even
years, which gives them plenty of time to cause irreparable
harm [5, 6]. Dealing with unknown vulnerabilities is
therefore clearly a challenging task. Although there are many
effective solutions, such as IDS/IPS, firewalls, antivirus,
software upgrading and patching, for tackling known attacks
[8], zero-day attacks are known to be difficult to mitigate
due to the lack of information. Discovering unknown
vulnerabilities and figuring out how to exploit them is
likewise challenging. Figure 2 shows the timeline of a
zero-day vulnerability from discovery to patch.
Fig. 1. Zero-day vulnerabilities, 2011 – October 2016
Fig. 2. Zero-day timeline from discovery to patch
Zero-day vulnerabilities are among the most harmful of all
the hazards confronting an organization's computing
environment. They expose system flaws to the attacker
before a patch is available. Zero-day vulnerabilities are
generally unknown, but sometimes the software vendor
knows about the flaw and has not yet issued a fix. According
to a FireEye report [7], vulnerabilities discovered by
cybercriminals remain unknown to the public, including the
vendors of the software, for an average of 310 days.
A. Terminology Used for Defining the Concepts
Vulnerability: A weakness or bug in a software
program that might be used by attackers or
cyber-criminals to execute unauthorized code on a
network system.
Exploit: An exploit triggers a vulnerability and
executes a malicious action inside the vulnerable
application without the knowledge of the attacked user.
Zero-day attack: An exploit for a vulnerability for
which no patch is readily available and of which the
vendor may or may not be aware; it can infect even
the most up-to-date system.
Zero-day vulnerability: An unpatched vulnerability;
the term "zero-day" denotes that developers have had
zero days to fix the vulnerability.
Alarm: An alert which indicates that a system is
being, or has been, attacked.
True positive: The number of correctly identified
instances of malicious code.
False positive: The number of instances of trusted
code incorrectly identified as malicious; an alarm is
generated when there is no actual attack.
False negative: The number of incorrectly rejected
instances of malicious code; the detector fails to detect
an actual attack and no alarm is generated while the
system is under attack.
Noise: Data or interference that can trigger a false
positive.
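These counts are commonly combined into detection-rate and false-alarm metrics. A minimal sketch in Python (the function and variable names are illustrative, not from the paper):

```python
def detection_metrics(tp, fp, fn):
    """Derive standard detector quality measures from raw counts.

    tp: attacks correctly flagged (true positives)
    fp: trusted events incorrectly flagged (false positives)
    fn: attacks the detector missed (false negatives)
    """
    detection_rate = tp / (tp + fn) if (tp + fn) else 0.0  # fraction of real attacks caught
    precision = tp / (tp + fp) if (tp + fp) else 0.0       # fraction of alarms that were real
    return detection_rate, precision

# Example: 90 attacks caught, 10 missed, 30 false alarms
rate, prec = detection_metrics(tp=90, fp=30, fn=10)
print(rate, prec)  # 0.9 0.75
```

A good signature, as Section III notes, needs both numbers high: few missed attacks (high detection rate) and few false alarms (high precision).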
Zero-day exploits require additional security defenses to
protect the network system; traditional defenses are
powerless against them. This paper describes the challenges
of zero-day attacks and existing zero-day exploit
identification and detection techniques, and proposes a new
approach to identify zero-day attacks.
This paper analyzes the dangers of zero-day attacks and
proposes the ZDAR (Zero-Day Attack Remedy) system to
detect and rank unknown vulnerabilities. To detect unknown
vulnerabilities, the proposed ZDAR system involves various
advanced techniques, such as polymorphic worm
recognition, traffic monitoring, signature generation and
attack validation. Finally, the paper recommends some
practical steps to reduce the risks of zero-day attacks.
II. LITERATURE REVIEW
A zero-day attack exploits a zero-day vulnerability without
any signature [9]. It takes advantage of a flaw before a patch
has been created. That means that for a zero-day
vulnerability no patch is readily available, and the vendor
may or may not be aware of it. The name "zero-day"
indicates that the attack occurs before the vulnerability is
known; the term denotes that developers have had zero days
to fix the vulnerability. A zero-day attack exploits a
vulnerability that has not been disclosed publicly, including
to the vendor of the software; therefore, almost no defense
mechanism is available against a zero-day attack. Anti-virus
products cannot detect the attack through signature-based
scanning, and because the vulnerability is unknown, the
affected software cannot be patched [10]. These unpatched
vulnerabilities are a free pass for attackers to any target they
want to attack. These facts put the market value of a new
vulnerability in the range of $5,000 to $250,000 [11].
According to Kaur & Singh [1], the most dangerous attacks,
and the hardest to detect, are polymorphic worms, which
show distinct behaviors; such worms pose a serious threat to
Internet security. These worms propagate rapidly and
increasingly threaten Internet hosts and services by
exploiting unknown vulnerabilities; they can also change
their own representations on each new infection. The same
worm thus has many signatures, which makes generating its
fingerprint a very difficult task.
Rathor et al. [12] analyzed log files using log correlation to
detect zero-day attacks with an attack graph. However, by
the nature of zero-day attacks, they cannot be predicted, and
hence remedial measures cannot be planned in advance. In
the field of vulnerability categorization, Joshi et al. [13]
evaluated some of the prominent taxonomies; this assessment
is helpful for proper categorization of the vulnerabilities
present in a network system environment. They also proposed
a five-dimensional approach to vulnerability categorization
[14] covering the attack vector, defense, methodology used
for vulnerability exploitation, impact of the vulnerability on
the system, and the target of the attack. Many vulnerability
scanners are available for identification and assessment of
vulnerabilities, and the selection of these scanners plays an
important role in network security management [15, 16].
However, these vulnerability scanners cannot identify
zero-day attacks due to their less predictable nature. Zhichun
Li [17] proposed Hamsa, a fast, noise-tolerant and
attack-resilient network-based automated signature
generation system for polymorphic worms, which allows
analytical attack-resilience guarantees to be made for the
signature generation algorithm.
The most dangerous zero-day exploits are drive-by
downloads, in which an exploited Web page results in a
malware attack on the system [18]. These attacks exploit
Web browser vulnerabilities or third-party browser plug-ins.
So far, some of the most hazardous zero-day exploits that
have played a critical role in lucrative targeted attacks are the
Hydraq Trojan [19], Stuxnet [20], Duqu [21] and Flamer
[22]. The Hydraq Trojan was designed to steal information
from several companies. Stuxnet, which sabotaged the
Iranian nuclear program in 2010, contained four
never-before-seen zero-day exploits. It is known as the
malware of the century, and U.S. and Israeli government
agencies are suspected of having created it. Duqu, identified
as one of the most sophisticated pieces of malware ever seen,
appeared in 2012 [23] and was used against a security firm
and many other targets worldwide. An unknown high-level
programming language was used to develop parts of the
Duqu malware, and it exploits zero-day Windows kernel
vulnerabilities. The Flame malware, discovered by Kaspersky
Lab in 2012, exploits zero-day vulnerabilities in Microsoft
Windows. These zero-day attacks are the most difficult to
defend against because data becomes available for analysis
only after the attack [24].
III. TRADITIONAL DEFENSES AGAINST ZERO-DAY
ATTACKS
Any organization connected to the internet faces the common
threat of zero-day attacks. The purposes of these attacks
include capturing confidential information, monitoring the
target's operations, theft of commercial information and
system disruption. This section analyzes the research efforts
made toward defense against zero-day exploits. The primary
goal of defense techniques is to identify the exploit as close
as possible to the time of exploitation, in order to eliminate
or minimize the damage caused by the attack [25]. The
research community has broadly classified defense
techniques against zero-day exploits as statistical-based,
signature-based, behavior-based, and hybrid techniques [1].
A. Statistical-based
Statistical-based attack detection techniques maintain a log
of past exploits that are now known. From this historical log,
an attack profile is created to generate parameters for
detecting new attacks. This technique determines normal
activities and detects the activities which are to be blocked.
As the log is updated with historical activities, the longer a
system utilizes this technique, the more accurate it becomes
at learning or determining normal activities [26].
Statistical-based techniques build attack profiles from
historical data, which are static in nature; therefore they
cannot adapt to the dynamic behavior of the network
environment, and these techniques cannot be used for
real-time malware detection.
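As a sketch of the statistical idea (the feature values below are hypothetical, not the paper's data): model "normal" as the mean and standard deviation of a feature observed in the historical log, and flag activity that deviates too far from it.

```python
import statistics

def build_profile(history):
    """Model normal activity as the mean/stdev of a logged numeric feature."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, profile, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mean, stdev = profile
    return abs(value - mean) > threshold * stdev

# Hypothetical historical log of requests-per-second on a server
history = [100, 104, 98, 101, 99, 103, 97, 102]
profile = build_profile(history)
print(is_anomalous(101, profile))  # typical load -> False
print(is_anomalous(500, profile))  # sudden burst -> True
```

The profile here is static once built, which illustrates the limitation noted above: unless the log is continually refreshed, the detector cannot follow a drifting network baseline.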
B. Signature-based
For detection of polymorphic worms, signature-based
techniques are used to identify their new representations on
each new infection. There are basically three categories of
signature-based detection techniques [1]: content-based,
semantic-based and vulnerability-driven signatures. These
techniques are generally used by antivirus software vendors,
who compile a library of different malware signatures [1].
These libraries are constantly updated with signatures of
newly exploited vulnerabilities. Signature-based techniques
are often used in antivirus software packages to defend
against malicious payloads, from malware to worms.
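A content-based signature, in the style of the token-set signatures generated by systems such as Hamsa [17], can be sketched as a set of invariant byte substrings that must all appear in a flow's payload; the tokens below are made up for illustration:

```python
def matches_signature(payload: bytes, signature: list) -> bool:
    """A flow matches a content-based signature when every
    invariant token of the signature occurs in its payload."""
    return all(token in payload for token in signature)

# Hypothetical signature: byte tokens assumed invariant across
# all mutations of a polymorphic worm
signature = [b"GET /vuln.php", b"\x90\x90\x90", b"cmd.exe"]

benign = b"GET /index.html HTTP/1.1"
attack = b"GET /vuln.php?a=\x90\x90\x90\x90 payload cmd.exe"
print(matches_signature(benign, signature))  # False
print(matches_signature(attack, signature))  # True
```

The weakness against zero-days is visible directly: a payload exploiting an unknown vulnerability shares no tokens with any library entry, so no signature can match it.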
C. Behavior-based
These techniques rely on the ability to predict the flow of
network traffic [1]. Their goal is to predict the future
behavior of the network system in order to resist anomalous
behavior. The prediction of future behavior is done with a
machine-learning approach, using current and past
interactions with the web server, server or victim machine
[27]. Behavior-based techniques determine the essential
characteristics of worms and do not require examination of
payload byte patterns [1].
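A toy version of the behavior-based idea, assuming we log sequences of client actions (the action names are invented for illustration): learn which action-to-action transitions occur in benign sessions, and flag any session that makes a transition never seen in training.

```python
from collections import defaultdict

def learn_transitions(sessions):
    """Record which action-to-action transitions occur in benign sessions."""
    seen = defaultdict(set)
    for session in sessions:
        for a, b in zip(session, session[1:]):
            seen[a].add(b)
    return seen

def is_suspicious(session, seen):
    """A session is suspicious if it contains a transition never observed
    in benign training data, regardless of payload bytes."""
    return any(b not in seen[a] for a, b in zip(session, session[1:]))

benign_sessions = [
    ["login", "browse", "download", "logout"],
    ["login", "browse", "logout"],
]
model = learn_transitions(benign_sessions)
print(is_suspicious(["login", "browse", "logout"], model))          # False
print(is_suspicious(["login", "download", "exec_payload"], model))  # True
```

Note that no payload byte patterns are inspected, matching the description above: only the shape of the behavior is judged.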
Intrusion detection and intrusion prevention signatures
integrate these defense techniques. These signatures need to
have two basic qualities [1]: "First, they should have a high
detection rate; i.e., they should not miss real attacks. Second,
they should generate few false alarms." The goal of any
technique used by an organization should be to detect the
existence of a zero-day exploit in real time and to prevent
damage from, and proliferation of, the exploit.
D. Hybrid-based
Hybrid-based techniques combine heuristics with various
combinations of the three previous techniques:
statistical-based, signature-based, and behavior-based.
Using a hybrid technique overcomes the weaknesses of any
single technique [1].
IV. RECENT ZERO-DAY VULNERABILITIES BY
CATEGORY
Zero-day attacks pose one of the most serious threats to an
organization's network, as they can exploit unknown
vulnerabilities. Unknown vulnerabilities can cause harm at
any level of the system's security, because the security risk of
an unknown vulnerability cannot be measured due to its less
predictable nature [7]. Table 1 presents a sample list of
recent zero-day vulnerabilities by category. Recently
discovered zero-day attacks show that cyber-attacks are
becoming more sophisticated and better at bypassing
organizational defenses, so detecting zero-day attacks has
become crucial.
Table 1: Recent zero-day vulnerabilities list

Adobe/Flash
    Operation Greedy Wonk: CVE-2014-0498
    Remote Code Execution: CVE-2014-0502
    Buffer Overflow: CVE-2014-0515
    Stack Based Buffer Overflow: CVE-2014-9163
    ActionScript 3 ByteArray Use After Free Remote Memory Corruption: CVE-2015-5119
    Remote Code Execution: CVE-2014-0497, CVE-2015-5123, CVE-2015-5122, CVE-2015-5119
    Operation Pawn Storm: CVE-2015-7645

Internet Explorer
    Remote Code Execution: CVE-2014-1776
    Backdoor.Moudoor: CVE-2014-0322
    Memory Corruption: CVE-2014-0324
    Backdoor.Korplub: CVE-2015-2502
Given the value of these vulnerabilities, it is not surprising
that a market has evolved to meet demand. In fact, at the rate
at which zero-day vulnerabilities are being discovered, they
may become a commodity product [25]. Targeted attack
groups exploit the vulnerabilities until they are publicly
exposed, then toss them aside for newly discovered
vulnerabilities. When the Hacking Team was exposed in
2015 as having at least six zero-days in its portfolio [23], it
confirmed the characterization of the hunt for zero-days as
professionalized.
V. PROPOSED ZDAR (ZERO-DAY ATTACK REMEDY)
SYSTEM
Zero-day attacks occur in the period between when a
vulnerability is first exploited and when software vendors
begin to develop a countermeasure. It is difficult to measure
the duration of this period, as it is hard to determine when
the vulnerability was first discovered. Sometimes vendors do
not even know whether the vulnerability is being exploited
when they fix it, so the vulnerability may not be recorded as
a zero-day attack. The vulnerability period, however, can be
several years long. According to FireEye [7], a typical
zero-day attack lasts for 310 days on average.
The proposed framework is visualized as a security system
that monitors the network flow and decides whether it is
malicious. Figure 3 shows the proposed system architecture,
which consists of the following six major components: a data
acquisition module, an intrusion detection system,
information collection, feature extraction and transformation,
a supervised classifier, and a UI (client machine/host/server
machine) portal.
Fig 3. ZDAR (Zero-Day Attack Remedy) Framework
The data capture module is a Traffic Analyzer (TA) device
which parses packets and collates packets belonging to the
same flow. This module is responsible for generating all the
flow-level features associated with each flow. The IDS/IPS
module performs deep packet inspection and tags whether a
flow belongs to some known threat. The information storage
component stores all the flow features and their associated
class labels. The feature extraction module extracts statistical
features on a per-flow basis, while the feature transformation
module converts them into more robust features that are used
to build classifiers for detecting malicious flows. The
classifiers are constructed offline and deployed on incoming
network flows. The UI portal is used for reporting the
emergence of new suspicious flows.
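The extraction and transformation steps can be sketched as follows; the per-flow features and the min-max scaling used here are illustrative assumptions, not the paper's exact feature set:

```python
def extract_features(flow_packets):
    """Per-flow statistical features from a list of packet sizes (bytes)."""
    n = len(flow_packets)
    total = sum(flow_packets)
    return {"pkt_count": n, "total_bytes": total, "mean_pkt": total / n}

def transform(rows):
    """Min-max scale each feature to [0, 1] so that no single
    large-valued feature dominates the downstream classifier."""
    keys = rows[0].keys()
    lo = {k: min(r[k] for r in rows) for k in keys}
    hi = {k: max(r[k] for r in rows) for k in keys}
    return [{k: (r[k] - lo[k]) / (hi[k] - lo[k]) if hi[k] > lo[k] else 0.0
             for k in keys} for r in rows]

# Three hypothetical flows, given as lists of packet sizes
flows = [[60, 60, 1500], [60, 60], [1500, 1500, 1500, 1500]]
rows = [extract_features(f) for f in flows]
scaled = transform(rows)
print(scaled[0]["pkt_count"])  # 0.5
```

In the framework described above, the scaled feature vectors (not the raw packets) are what the offline-built classifiers consume.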
The goal of the proposed framework is to detect and isolate
malicious flows from the network traffic and further classify
them as a specific type of known malware, a variation of
known malware, or new (unknown) malware. To achieve
this, we develop a machine-learning based malware detection
and classification framework that senses the organization's
network traffic features. The proposed framework integrates
the accuracy of supervised classification on known classes
with the adaptability of unsupervised learning for new
malware detection.
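The combination of supervised classification with an unsupervised "unknown" escape hatch can be sketched with a nearest-centroid classifier plus a distance threshold; the class labels, feature vectors and threshold below are invented for illustration:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids, threshold):
    """Assign the nearest known class, or 'unknown' when the flow is too
    far from every known class (a new-malware candidate for analysts)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    label, d = min(((lbl, dist2(x, c)) for lbl, c in centroids.items()),
                   key=lambda t: t[1])
    return label if d <= threshold ** 2 else "unknown"

# Hypothetical training flows per known class (2-D feature vectors)
centroids = {
    "benign": centroid([[0.1, 0.2], [0.2, 0.1]]),
    "worm_A": centroid([[0.9, 0.8], [0.8, 0.9]]),
}
print(classify([0.15, 0.15], centroids, threshold=0.3))  # "benign"
print(classify([0.5, 0.95], centroids, threshold=0.3))   # "unknown"
```

Flows labeled "unknown" are exactly the ones the UI portal would surface as new suspicious flows for signature generation and attack validation.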
VI. CONCLUSION
Vulnerabilities appear in almost every organization, but the
software most attractive to targeted attackers is software that
is widely used [28]. Most vulnerabilities are discovered in
software, such as Internet Explorer and Adobe Flash, that is
used frequently by a large number of consumers and
professionals. After discovery, zero-day exploits are quickly
added to attackers' toolkits and used. This paper presented a
malware detection approach based on features derived from
network flow characteristics. The proposed approach applies
supervised learning techniques and identifies flows of known
and unknown malware with very high precision.
Networks are dynamic in behavior and subject to
uncertainty, so new methods should regularly be sought to
prevent malicious attackers from exploiting unknown
vulnerabilities. This paper proposes an efficient approach to
detect zero-day attacks using feature extraction and
transformation, by sensing at run time suspicious network
connections that do not match known attack signatures. The
feature transformation module discovers suspicious
connections by differentiating between the behavior of
known attacks and anomalous activities. The anomaly
detection technique is used to discover anomalies and thus to
identify zero-day attack types using an assigned anomaly
score. The proposed method is more effective and efficient in
detecting zero-day attacks than typical statistical-based
anomaly detection techniques.
ACKNOWLEDGEMENT
The authors are thankful to MP Council of Science and
Technology, Bhopal, for providing support and financial
grant for the research work.
REFERENCES
[1] R. Kaur, M. Singh, "Efficient hybrid technique for detecting zero-day polymorphic worms", 2014 IEEE International Advance Computing Conference (IACC), pp. 95-100, 21-22 Feb. 2014.
[2] "Internet Security Threat Report", Symantec, Volume 21, April 2016.
[3] R. Kaur, M. Singh, "Automatic Evaluation and Signature Generation Technique for Thwarting Zero-Day Attacks", Second International Conference, SNDS 2014, India, pp. 298-309, March 13-14, 2014.
[4] K. Ren, C. Wang, Q. Wang, "Security challenges for the public cloud", IEEE Internet Computing, vol. 16, no. 1, pp. 69-73, 2012.
[5] Y. Yang, S. Zhu, G. Cao, "Improving sensor network immunity under worm attacks: a software diversity approach", in Proceedings of the 9th ACM International Symposium on Mobile Ad Hoc Networking and Computing, ACM, 2008, pp. 149-158.
[6] J. Caballero, T. Kampouris, D. Song, J. Wang, "Would diversity really increase the robustness of the routing infrastructure against software defects?", in Proceedings of the Network and Distributed System Security Symposium, 2008.
[7] White Paper, "Zero-Day Danger: A Survey of Zero-Day Attacks and What They Say About the Traditional Security Model", FireEye Security Reimagined, 2015.
[8] L. Wang, M. Zhang, S. Jajodia, A. Singhal, M. Albanese, "Modeling network diversity for evaluating the robustness of networks against zero-day attacks", in Proceedings of ESORICS'14, 2014, pp. 494-511.
[9] A. AlEroud, G. Karabatis, "Toward Zero-day Attack Identification Using Linear Data Transformation Techniques", IEEE 7th International Conference on Software Security and Reliability, pp. 161-168, 18-20 Jun. 2013.
[10] T. Leinster, C. Cobbold, "Measuring diversity: the importance of species similarity", Ecology, vol. 93, no. 3, pp. 477-489, 2012.
[11] L. Bilge, T. Dumitras, "Before we knew it: an empirical study of zero-day attacks in the real world", CCS '12: Proceedings of the 2012 ACM Conference on Computer and Communications Security, Raleigh, North Carolina, USA, pp. 833-844, October 16-18, 2012.
[12] M. Rathor, D. M. Dakhane, "Predicting Unknown Vulnerabilities in Network Using K-zero Day Safety Technique", International Journal of Advanced Research
in Computer Science and Software Engineering, 5(4), pp. 221-224, April 2015.
[13] C. Joshi, U.K. Singh, "A Review on Taxonomies of Attacks and Vulnerability in Computer and Network System", International Journal of Advanced Research in Computer Science and Software Engineering (IJARCSSE), Volume 5, Issue 1, pp. 742-747, January 2015.
[14] C. Joshi, U.K. Singh, "ADMIT: A Five Dimensional Approach towards Standardization of Network and Computer Attack Taxonomies", International Journal of Computer Applications (IJCA, 0975-8887), Volume 100, Issue 5, pp. 30-36, August 2014.
[15] C. Joshi, U.K. Singh, "Analysis of Vulnerability Scanners in Quest of Current Information Security Landscape", International Journal of Computer Applications (IJCA, 0975-8887), Volume 145, No. 2, pp. 1-7, July 2016.
[16] C. Joshi, U.K. Singh, "Performance Evaluation of Web Application Security Scanners for More Effective Defense", International Journal of Scientific and Research Publications (IJSRP), Volume 6, Issue 6, pp. 660-667, June 2016, ISSN 2250-3153.
[17] Z. Li, M. Sanghi, Y. Chen, "Hamsa: Fast Signature Generation for Zero-day Polymorphic Worms with Provable Attack Resilience", Proceedings of the 2006 IEEE Symposium on Security and Privacy (S&P'06), 2006.
[18] M. Frigault, L. Wang, A. Singhal, S. Jajodia, "Measuring network security using dynamic Bayesian network", in Proceedings of the 4th ACM QoP, 2008.
[19] A. Lelli, "The Trojan.Hydraq incident: Analysis of the Aurora 0-day exploit", Jan. 2010. Available: http://www.symantec.com/connect/blogs/trojanhydraq-incidentanalysis-aurora-0-day-exploit
[20] N. Falliere, L. O. Murchu, E. Chien, "W32.Stuxnet dossier", Feb. 2011. Available: http://www.h4ckr.us/library/Documents/ICSEvents/Stuxnet%20Dossier%20(Symantec)%20v1.4.pdf
[21] Symantec, "W32.Duqu: the precursor to the next Stuxnet", Nov. 2011. Available: http://www.symantec.com/content/en/us/enterprise/media/security response/whitepapers/w32 duqu the precursor to the next stuxnet.pdf
[22] R. Goyal, P. Watters, "Obfuscation of Stuxnet and Flame malware", in Proc. 3rd Int. Conf. on Applied Informatics and Computing Theory, Barcelona, pp. 150-154, Oct. 2012.
[23] "McAfee Labs 2017 Threats Predictions", Intel Security, November 2016.
[24] P. Ammann, D. Wijesekera, S. Kaushik, "Scalable, graph-based network vulnerability analysis", in Proceedings of ACM CCS'02, 2002.
[25] D. Hammarberg, "The Best Defenses against Zero-day Exploits for Various-sized Organizations", SANS Institute InfoSec Reading Room, September 21, 2014.
[26] M. Albanese, S. Jajodia, S. Noel, "A time-efficient approach to cost-effective network hardening using attack graphs", in Proceedings of DSN'12, 2012, pp. 1-12.
[27] Y. Alosefer, O. F. Rana, "Predicting client-side attacks via behavior analysis using honeypot data", 2011 7th International Conference on Next Generation Web Services Practices (NWeSP), pp. 31-36, 19-21 Oct. 2011.
[28] I. Kim, K. Kim, "A Case Study of Unknown Attack Detection against Zero-day Worm in the HoneyNet Environment", 11th International Conference on Advanced Communication Technology (ICACT), pp. 1715-1720, 15-18 Feb. 2009.
Authors Profile
Umesh Kumar Singh (M'16) received his Doctor of Philosophy (Ph.D.) in Computer Science from Devi Ahilya University, Indore (MP), India. He is currently Associate Professor of Computer Science and Director of the School of Engineering & Technology, Vikram University, Ujjain (MP), India. He has authored 6 books, and about 100 of his research papers have been published in national and international journals of repute. He was awarded the Young Scientist Award by the M.P. Council of Science and Technology, Bhopal, in 1997. He is a reviewer for various international journals and a member of various conference committees. His research interests include computer networks, network security, Internet & Web technology, client-server computing and IT-based education.
Chanchala Joshi received her Master of Science in Computer Science and Master of Philosophy in Computer Science from Vikram University, Ujjain (MP), India. She is currently a Ph.D. student and Junior Research Fellow at the Institute of Computer Science, Vikram University, Ujjain (MP), India. Her research interests include network security, security measurement and risk analysis.
International Journal of Scientific Research in Computer Science and Engineering (Review Paper)
Volume-5, Issue-1, pp.19-23, February (2017) E-ISSN: 2320-7639
Data Mining: A Comparative Study of its Various Techniques and
its Process
Marie Fernandes
Department of Computer Science, Indore Indira School of Career Studies, Indore, India
Available online at: www.isroset.org
Received 24th Dec 2016, Revised 8th Jan 2017, Accepted 3rd Feb 2017, Online 28th Feb 2017
Abstract - Data mining, also called knowledge mining or fact finding, is the term used for extracting or discovering
useful information from the data present in very large databases. It also investigates hidden or predictive patterns of
text in databases. The term appeared in the 1990s. Data mining is a process that analyzes information from different
perspectives and summarizes it into useful information, which can then be used for various business purposes by
different enterprises. Data mining has since become an essential part of Knowledge Discovery in Databases (KDD) and
is also, as appropriately termed, known as data dredging, data fishing and information harvesting. It turns a large
collection of data into knowledge that can meet current global challenges, because computerization has led to an
explosively growing, widely available and gigantic body of data flowing through the WWW. Data mining methods are
expected to transform this data into organized knowledge. In order to do so, powerful and flexible tools are required
that can reveal important information from the huge amounts of data. This need has led to numerous techniques: the
classical techniques, which include statistics, nearest neighbors and clustering, which works through grouping; and the
cutting-edge techniques, which include decision trees, neural networks and rules. The majority of data mining
techniques deal with different data types. The scope, purpose and motivation of this paper is to carry out a
comparative study of the different techniques available in data mining, with their advantages, disadvantages and the
fields where they can be appropriately applied. This paper presents an overview of data mining and its different
strategies.
Keywords - Data mining, Data Dredging, Statistics, Nearest Neighbor, Decision Trees and Neural Networks.
I. INTRODUCTION
Data Mining is becoming an emerging research topic, finding applications in many fields such as engineering, medicine, business, education and science. Data dredging is the use of data mining to uncover patterns in data that can be presented as statistically significant. The abundance of information has led to large databases, and hence to advanced database systems and data warehousing, so data mining is also progressing. These repositories of gigantic volumes of data pose a challenging task when the stored information is to be analysed. The efficient and effective examination of the enormous volumes of data that have been amassed requires effective data mining strategies. The data mining process is concerned with investigating data using software techniques to find hidden and unexpected patterns and relationships in sets of data. Data mining concentrates on discovering information that is hidden and unexpected; it is the extraction of new knowledge from large databases. Data Mining (DM) is an essential part of the process of Knowledge Discovery in Databases. As databases hold diverse data as well as many concealed patterns, it becomes distinctly important to know the different techniques that can be used for data mining.
The rest of the paper is organized as follows: Section II presents the literature review in the form of related work done in data mining. Section III gives details of various data mining techniques. Section IV explains the classical techniques of data mining with their advantages and disadvantages. Section V explains the next-generation techniques of data mining with their advantages and drawbacks. Section VI deals with the methodology used, Section VII presents the results and discussion, the conclusion and future scope are given in Section VIII, and the references are listed in the last section.
II. RELATED WORK
This section gives the summary of the various technical
articles and review work carried out in the field of data
ISROSET- Int. J. Sci. Res. in Computer Science and Engineering Vol-5(1), Feb 2017, E-ISSN: 2320-7639
© 2017, IJSRCSE All Rights Reserved 20
mining and its techniques. In [1] Lee, S. and Siau, K. analysed some techniques for solving data mining tasks and concluded that statistical techniques are used to discover patterns and build predictive models, that neural networks are powerful mathematical models suitable for almost all data mining tasks, and that decision trees can naturally handle all types of variables, even with missing values.
In [2] Berson, A., Smith, S. and Thearling, K. evaluated the data sets that are present and the different tools needed for business data processing and analysis.
In [3] S. Mahajan analysed the concepts of data mining that could be used in various fields as part of data collection and data extraction.
In [4] Jain, A.K., Murty, M.N. and Flynn, P.J. analysed several applications where decision making and exploratory pattern investigation can be performed on large data collections. They conclude that data abstraction, i.e. a simple and compact representation of the data, can be used in decision making instead of the entire data set.
In [5] P. Berkhin analysed how clustering divides data into groups of similar objects. It disregards detail in order to simplify the data, and provides concise summaries of it.
In [6] Jaskaranjit Kaur and Gurpreet Kaur described the processes of selected techniques from the data mining point of view. The research concludes that, for future work, new solutions are needed for the problem of mining categorical data.
In [7] J. Sheela Jasmine found neural networks to be a promising data mining tool, since they have proven their predictive power in comparisons with other statistical techniques on real data sets, but owing to design problems neural systems need further research before they are widely accepted in industry.
In [8] C. Kaur, P. Kapoor and M. Bala analysed the efficiency of neural network algorithms and their effectiveness in producing results thanks to their self-adjusting nature.
In [9] P. Gaur concludes that neural networks are very suitable for solving data mining problems because of their good robustness, self-organising adaptivity, parallel processing, distributed storage and high degree of fault tolerance.
III. DATA MINING TECHNIQUES
Data mining is carried out to prepare data and distinguish the patterns in it so that a decision or judgment can be made. Different data mining methods have appeared because the volume of data is becoming much larger and the data itself more varied and broad in nature and content. Business-driven needs have also changed basic data retrieval mechanisms. As it is impractical for people to process huge amounts of data to discover significant information in good time, machine learning tools and technologies are used. Data mining being a critical part of KDD, knowing the different methods and types of data extraction likewise becomes distinctly important. Knowledge discovery is a process that extracts implicit, potentially useful or previously unknown information from data. The knowledge discovery process is described in Figure 1. The different techniques used for data mining are classification, clustering, artificial intelligence, neural networks, association rules, decision trees, genetic algorithms and the nearest neighbour method. A large number of modeling techniques are labeled "data mining" techniques [1]. The following sections give a short survey of a selection of these techniques.
Figure-1 Steps of KDD
IV. CLASSICAL TECHNIQUES
A. Statistical Techniques
Statistics is the traditional field that deals with the collection, quantification, interpretation and analysis of data, and with drawing conclusions from it. Data mining is an interdisciplinary field that mines data with help from the computer sciences (databases, machine learning, artificial intelligence, visualization and graphical models) and from statistics and engineering (pattern recognition, neural networks). Statistics, then, is a branch of mathematics concerning the collection and the description of data [2]. At present data mining and statistics are characterised independently, yet "mining data" for patterns and predictions is precisely what is done through statistics. Some of the procedures grouped under data mining, for example CHAID and CART, are products of the statistical profession, and probability is the foundation on which both data mining and statistics are built. The techniques are used in the same places for similar sorts of problems
(prediction, classification, discovery). The advantage of the statistical technique is that statistics presents a high-level view of the database that provides useful information without requiring every record to be understood in detail. For instance, a histogram can quickly indicate essential information about the database, such as which value is the most frequent. The disadvantage of this technique is that, for large bodies of data, statistics is mostly concerned with summarising, and counting problems occur because of this summarisation. Statistical techniques also cannot be helpful without specific assumptions about the data. The consequence of using the statistical approach is that statistics is employed in reporting the important information from which people can make useful choices and important decisions. A trivial forecast obtained by such a basic statistical strategy, when refined into a more sophisticated technique of prediction, is what is known as Naïve Bayes prediction.
B. Nearest Neighbor
Clustering and the nearest neighbour prediction technique are among the oldest strategies used in data mining. Many people reason that in clustering, records are simply grouped together. Nearest neighbour is a prediction technique somewhat like clustering; its essence is that, to predict the value of one record, you search the historical database for records with similar predictor values and use the prediction value from the record that is "nearest" to the unclassified record. It is a technique that classifies each record in a dataset based on a combination of the classes of the k record(s) most similar to it in a historical dataset, and is therefore sometimes called the k-nearest neighbour technique [3]. The nearest neighbour prediction algorithm states that objects which are "near" to each other will have similar prediction values. Thus, if the value of one of the objects is known, it can be predicted for its nearest neighbours.
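The k-nearest neighbour idea described above can be sketched in a few lines of plain Python; the historical records, class labels and choice of k below are hypothetical, and Euclidean distance stands in for whatever distance measure a real application would use:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Predict the class of `query` by majority vote among the k
    historical records whose predictor values are nearest to it."""
    neighbours = sorted(train, key=lambda rec: math.dist(rec[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical historical records: (predictor values, known class).
train = [
    ((1, 1), "low"), ((1, 2), "low"), ((2, 1), "low"),
    ((8, 8), "high"), ((8, 9), "high"), ((9, 8), "high"),
]
print(knn_predict(train, (1.5, 1.5)))  # votes among the three nearby records
```

Sorting the whole historical dataset per query is the naive O(n log n) approach; production systems use spatial indexes, but the prediction rule is the same.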
Figure-2 Nearest Neighbor Figure-3 Clustering
C. Clustering
Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters) [4]. Clustering is the strategy in which similar records are grouped together, usually to give the end user a high-level view of what is happening in the database; clustering sometimes means segmentation. The nearest neighbour algorithm is to some degree a refinement of clustering, in that both use distance in some feature space to create structure in the data or in the predictions. The nearest neighbour procedure is a refinement because part of the algorithm is a method for automatically determining the weighting of the importance of the predictors and for measuring distance within the feature space, whereas clustering is one of the special cases in which the importance of every predictor is assumed to be equal or practically comparable. Clustering as applied to data mining applications encounters three additional complications: a) large databases, b) objects with many attributes, and c) attributes of different data types [5].
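The grouping of similar records can be illustrated with k-means, one common clustering algorithm (chosen here as a representative example; the points and starting centroids are hypothetical):

```python
import math

def kmeans(points, centroids, iters=10):
    """Plain k-means: repeatedly assign each point to its nearest centroid,
    then move every centroid to the mean of the points assigned to it."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical 2-D records forming two visually obvious groups.
points = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
centroids, clusters = kmeans(points, centroids=[(0, 0), (10, 10)])
```

Note that every coordinate counts equally in the distance, matching the text's observation that clustering assumes all predictors are of comparable importance.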
V. NEXT GENERATION TECHNIQUES
A. Decision Trees
A decision tree is a predictive model that can be viewed as a tree. Every branch of the tree represents a classification question, and the leaves of the tree represent the partitions of the dataset together with their classification. Decision trees are used for classification as well as for estimation tasks, and can be used to assess, discover or predict the outcome for new sample data. The decision tree technique can likewise be used in exploring the dataset and the business problem, and has been used for preprocessing data for other prediction algorithms.
The benefit of the decision tree method is that decision trees can naturally handle all types of variables, even those with missing values. A particularly favourable feature of the decision tree model is its straightforward nature: the tree explicitly specifies all possible alternatives and traces each alternative to its conclusion in a single view, allowing easy comparison among the various alternatives. It uses separate nodes to denote user-defined
decisions, uncertainties and ends of processes, which lends clarity and transparency to the decision-making process.
The drawbacks of the decision tree technique are that it does not support a large number of analytic tests, and that trees may not match the performance of linear regression when the underlying relationship is truly linear. On the data preparation side, however, decision trees impose no special restrictions or requirements, so they require relatively little effort from users; likewise, non-linear relationships between parameters do not impair tree performance.
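The classification question at a branch can be made concrete with a one-level tree (a "stump"), shown below as an illustrative sketch: it tries every (feature, threshold) question on a hypothetical dataset and keeps the split with the lowest weighted Gini impurity, the criterion used by CART-style trees. A full tree would apply this recursively to each partition.

```python
def gini(labels):
    """Gini impurity of a collection of class labels (0 = pure)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_stump(rows, labels):
    """One-level decision tree: evaluate every (feature, threshold)
    question and keep the split with the lowest weighted impurity."""
    best = None
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [lab for r, lab in zip(rows, labels) if r[f] <= t]
            right = [lab for r, lab in zip(rows, labels) if r[f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best  # (weighted impurity, feature index, threshold)

# Hypothetical one-feature dataset: small values are "a", large ones "b".
rows = [(2.0,), (3.0,), (10.0,), (11.0,)]
labels = ["a", "a", "b", "b"]
impurity, feature, threshold = best_stump(rows, labels)
```

The stump also illustrates the instability noted later in the paper: moving a single training value across the chosen threshold can change the selected split entirely.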
Figure -4 Decision Trees
B. Neural Network Technique
A neural network here means an “Artificial Neural Network”: artificial in the sense that these are computer programs which implement sophisticated pattern detection and machine learning algorithms to construct predictive models from large historical databases. “Artificial neural networks derive their name from their historical development which started off with the previous proposition that machines could be made to think if scientists found ways to mimic the structure and functioning of the human brain on the computer” [6]. “Using neural networks as a tool, data warehousing firms are harvesting information from datasets in the process known as data mining” [7]. There are two main structures of consequence in a neural network: the node, which loosely corresponds to the neuron in the human brain, and the link, which loosely corresponds to the connections between neurons in the human brain. A neural network model is thus formed as a collection of interconnected neurons. The arrangement of the neurons and their interconnections is known as the architecture of the network; the interconnections can form a single layer or multiple layers, and can be unidirectional or bidirectional. A neural network is given a set of inputs and is used to predict one or more outputs. Neural networks in data mining play a vital role in classifying complex data [8]. They are powerful mathematical models suitable for most data mining tasks, with special emphasis on classification and estimation problems, and can be used for outlier analysis, clustering, prediction and feature extraction, even in complex classification situations.
Figure -5 Neural Networks
Neural networks are capable of representing arbitrarily complex relationships between inputs and outputs, and ought to be able to analyse as well as organise data using its inherent features without outside support or direction. Neural networks of various kinds can be used for clustering and prototype creation. On the other hand, neural networks do not work well when there are many hundreds or thousands of input features. Furthermore, “neural computing refers to a pattern recognition methodology for machine learning” [9]. They may not provide acceptable performance for complex problems, and it is difficult to understand the model a neural network has built and how the raw data affects the predicted output. In their favour, neural networks can be released on the data directly without much rearranging or modification of it, and they are automated to a degree where the user does not need to know much about predictive modeling, about how they work, or even about the database, in order to use them.
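The node-and-link structure described above can be illustrated with the smallest possible network, a single neuron (a perceptron) trained with the classic error-correction rule. The target function (logical AND), learning rate and epoch count are hypothetical choices for this sketch; real data mining networks stack many such neurons in layers.

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Single artificial neuron: a weighted sum of the inputs passed
    through a step activation, with weights (the 'links') adjusted by
    the perceptron error-correction rule."""
    w = [0.0, 0.0]  # link weights
    b = 0.0         # bias of the node
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Logical AND: linearly separable, so one neuron suffices to learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the learned weights classify all four inputs correctly; note that nothing in `w` and `b` explains *why*, which is the interpretability drawback the text mentions.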
VI. METHODOLOGY
Due to shortage of time, this research is based on secondary data sources, comprising data collected from journals, text books, articles and online and offline media. As it becomes necessary to extract hidden information, it is necessary to know the data mining techniques that can be applied to various datasets. Hence the methodology of this study of data mining is theoretical, drawing on different literature to reveal the various techniques that can help in extracting information and hidden patterns.
VII. RESULTS AND DISCUSSION
The result of this review of the various data mining techniques and algorithms is that there are many tools for analysing data, and each strategy has some merits and demerits. The decision tree can handle both continuous and discrete data and gives good results with a small tree, but its demerit
is that a small change in the data can change the decision tree completely. The nearest neighbour technique has the benefit of better performance with missing data and is easy to implement and investigate, but it has high computational complexity. The benefit of neural networks is that they can classify patterns on which they have not been trained, but they have poor interpretability.
VIII. CONCLUSION AND FUTURE SCOPE
The general objective of the data mining process is to extract information from a huge data collection and transform it into an understandable form for further use. This paper makes an effort to portray the workings of selected methods from the perspective of data mining and shows the capability of data mining and its different techniques. The review concludes that all data mining strategies attempt to fulfil their objectives as well as possible; however, every strategy has its own attributes and characteristics that determine its accuracy, bias and capability. Data mining is consistently proving itself an important tool in many areas, but some data mining methods are far better suited to certain problem areas than to others; it is therefore recommended that organisations use data mining at least to help administrators make the right choices on the basis of the information it provides. There is no single procedure that is completely successful for data mining when accuracy, constraints, segmentation, design, prediction, classification, application, location and dependence are all considered. It is thus suggested that these methods be used in collaboration with each other.
The current study is a theoretical review. In terms of future scope, a variety of data mining techniques can be used by researchers to evaluate and extract hidden patterns. In this paper we briefly reviewed the various data mining techniques; this review should help researchers focus on the various issues of data mining and its techniques. In future work, we will review the various classification algorithms and tools used in data mining and focus on the hot and promising areas of data mining.
REFERENCES
[1] Lee, S. and Siau, K., "A review of data mining techniques", Industrial Management & Data Systems, Volume-101, Issue-01, pp (41-46), 2001.
[2] Berson, A., Smith, S. and Thearling, K., "Building Data Mining Applications for CRM", McGraw-Hill Professional, 1st edition, 1999.
[3] S. Mahajan, "Convergence of IT and Data Mining with other technologies", International Journal of Scientific Research in Computer Science and Engineering, Volume-01, Issue-04, pp (31-37), Aug 2013.
[4] Jain, A.K., Murty, M.N. and Flynn, P.J., "Data Clustering: A Review", ACM Computing Surveys (CSUR), Volume-31, Issue-03, pp (264-323), 1999.
[5] Jaskaranjit Kaur and Gurpreet Kaur, "Clustering Algorithms in Data Mining: A Comprehensive Study", International Journal of Computer Sciences and Engineering, Volume-03, Issue-07, pp (57-61), Jul 2015.
[6] B. Khalid and N. Abdelwahab, "A Comparative Study of Various Data Mining Techniques: Statistics, Decision Trees and Neural Networks", International Journal of Computer Applications Technology and Research, Volume-5, Issue-03, pp (172-175), 2016.
[7] J. Sheela Jasmine, "Application of Fuzzy Logic in Neural Network Using Data Mining Techniques: A Survey", International Journal of Computer Sciences and Engineering, Volume-04, Issue-04, pp (333-341), Apr 2016.
[8] C. Kaur, P. Kapoor and M. Bala, "Role of Neural network in data mining", International Journal for Science and Emerging Technologies with Latest Trends, Volume-02, Issue-01, pp (20-28), 2012.
[9] P. Gaur, "Neural Networks in Data mining", International Journal of Electronics and Computer Science Engineering, Volume-01, Issue-03, pp (1449-1453), 2012.
AUTHORS PROFILE
Marie Fernandes received her M.Sc. degree in Electronics and Communication from Devi Ahilya University, Indore (M.P.) in 2005. Presently, she is pursuing an MCA degree from IGNOU University, Delhi. Her areas of interest are Operating Systems, Digital Electronics and Computer Networking. She worked as a technical trainer with Jetking Infotrain Pvt. Ltd., Indore (M.P.) until 2010, and as a Quality Auditor Executive with Jetking Infotrain Pvt. Ltd. during 2011 and 2012. Since 2012 she has been working as an Assistant Professor at Indore Indira School of Career Studies, Indore (M.P.). Email id-
International Journal of Scientific Research in __________________________________ Review Paper . Computer Science and Engineering
Volume-5, Issue-1, pp.24-26, February (2017) E-ISSN: 2320-7639
Information and Communication Technologies in State affairs:
challenges of E-Governance
Stephen John Beaumont
Centro de Tecnología para el Desarrollo (CENTED), Buenos Aires, Argentina
Available online at: www.isroset.org
Received 30th Dec 2016, Revised 8th Jan 2017, Accepted 2nd Feb 2017, Online 28th Feb 2017
Abstract: The mainstream conclusion about the purpose of implementing e-governance procedures is that they enhance good governance, which is generally characterised by participation, transparency and accountability. Achieving this has proven to be a major problem in many developing countries, but recent advances in information and communication technologies provide opportunities to transform the relationship between governments and citizens so as to enhance the achievement of good governance goals. In this paper we analyse the benefits that can be achieved by implementing E-Governance programs, as well as the challenges associated with these innovations.
Index Terms—E-Government, Governance, Transparency
I. INTRODUCTION
E-Governance is modifying the way that State affairs affecting individuals are conducted on a daily basis. Although the potential for improvement is not questioned, practical implementations are still quite challenging. This is why a deeper understanding of these issues must be achieved in order to transcend the current limitations.
In Section II we analyze the meaning and scope of the term
E-Governance to set a common definition and understanding
throughout the rest of paper. We also mention some of the
goals of E-Governance as a means to achieve good
governance. In Section III we look at some of the main
challenges in implementing E-Governance programs. In
Section IV we mention some conclusions regarding
implementation of E-Governance programs.
II. WHAT DO WE MEAN BY E-GOVERNANCE?
First of all we must agree on the meaning and scope of the term E-Governance, because it is often used in different senses and in different contexts. There are many definitions of E-Governance; we will mention just a few to put the term in perspective:
“E-Governance is the public sector's use of information and communication technologies with the aim of improving information and service delivery, encouraging citizen participation in the decision-making process and making government more accountable, transparent and effective.” [1]
“E-governance involves the use of information and
communication technologies (ICT) to transact the business of
government. At the level of service, e-governance promises a
full service available 24 hours a day and seven days a
week.”[2]
“E-government commonly refers to the processes and
structures pertinent to the electronic delivery of government
services to the public.”[3]
Additionally, Bannister and Connolly summarize some
characteristics that are present in e-governance
implementations:
-Technology mediated services;
-A commitment to technology;
-Functions that empower citizens;
-Internally focused use of ICT by government;
-Use of ICT to improve the quality services and governance;
-Something that enhances e-democracy;
-A technology mediated relationship between citizen and
state. [4]
Although there are countless other definitions of e-
governance, the idea is basically the same.
Having agreed upon what we mean by E-Governance, we
must ask ourselves, why would it be important or useful to
introduce e-governance procedures? The mainstream
conclusion about the purpose of implementing e-governance
procedures is that these enhance good governance. This good
governance is generally characterised by participation,
transparency and accountability. This has proven to be a
major problem in Latin American Democracies. But, the
recent advances in information and communication
ISROSET- Int. J. Sci. Res. in Computer Science and Engineering Vol-5(1), Feb 2017, E-ISSN: 2320-7639
© 2017, IJSRCSE All Rights Reserved 25
technologies provide opportunities to transform the
relationship between governments and citizens so as to
enhance the achievement of good governance goals. The use
of ICTs can increase the involvement of citizens in all levels
of the process of governance. Advantages for the government
involve that they may provide a better service, making
governance more efficient and more effective. In addition,
the transaction costs can be lowered and government services
can become more accessible for the general population.
As far as the goals of e-governance, according to UNESCO,
they include:
-“Improve the internal organisational processes of
governments.
-Provide better information and service delivery.
-Increase government transparency in order to reduce
corruption.
-Reinforce political credibility and accountability.
-Promote democratic practices through public participation
and consultation.” [5]
Also according to UNESCO, the fields of implementation of
e-governance are:
-“E-administration- refers to improving of government
processes and of the internal workings of the public sector
with new ICT-executed information processes.
-E-services- refers to improved delivery of public services to
citizens. Some examples of interactive services are: requests
for public documents, requests for legal documents and
certificates, issuing permits and licenses.
-E-democracy- implies greater and more active citizen
participation and involvement enabled by ICTs in the
decision-making process.” [6]
For example, in Bangladesh, the “implementation of 'Digital Bangladesh' was an election promise [and] means appropriate use of technology to materialize all the commitments of the government including the ones regarding education, health, employment and poverty mitigation. The key intention behind this idea is to improve the standards of livelihood of the citizens by empowering them, ensuring transparency and accountability in every sector of life, and setting up effective e-governance and, above all, deliver public services to their thresholds through the most effective use of latest technologies.” [7]
In another continent, particularly in Nigeria, Ojo argues that “the use of information technology can increase the broad involvement of citizens in the process of governance at all levels by providing the possibility of on-line discussion groups…” He also states that the benefits for government include that they “may provide better service in terms of time, making governance more efficient and more effective.” [8]
This tendency is occurring world-wide. For example, “the
Government of India is transcending from traditional modus
operandi of governance towards technological involvement in
the process of governance. Currently, the Government of
India is in the transition phase and seamlessly unleashing the
power of ICT in governance.” [9]
III. CHALLENGES IN IMPLEMENTING E-GOVERNANCE
PROGRAMS
Signore et al. refer to these challenges by grouping them into
three categories: Technical, Economic and Social issues.
Some of the most relevant technical issues include the security of the system, all the more so when electronic payment is involved. Privacy is a great concern for citizens as regards the confidentiality of their personal data.
Regarding economic issues, these include aspects such as costs, maintainability, reusability and portability.
The social issues concern aspects like accessibility, usability and, most important of all, acceptance by the general public. [10]
Mittal & Kaur, in the paper “E-Governance - A challenge for India,” refer to the challenges of implementing E-Governance programs in a segmented format. Some of the most interesting obstacles singled out include:
- Different Language spoken by potential users: People
belonging to different states speak different languages. The
diversity of people in context of language is a huge challenge
for implementing e-Governance projects as e-Governance
applications are written in English language.
-Low Literacy and Low IT Literacy: Many Indian people are not literate, and those who are literate often do not have much knowledge about Information Technology (IT).
-Lack of confidence in technologies provided by government.
-Technical issues such as the user-friendliness of government websites.
-Cost: In developing countries like India, cost is one of the most important obstacles in the path of implementation of e-Governance, where a major part of the population is living below the poverty line. Economic poverty is closely related to limited information technology resources. [11]
IV. CONCLUSIONS REGARDING IMPLEMENTATION OF E-
GOVERNANCE PROGRAMS
It is quite clear that E-Governance programs are being implemented worldwide. From the leading nations to developing countries, these initiatives are taken on at different levels of government, and this is not a new occurrence, as this 2002 paper affirms: “Governments worldwide are faced with the challenge of transformation and the need to reinvent government systems in order to deliver efficient and cost effective services, information and knowledge through information and communication technologies.” [12]
As for some cases in India, Mittal & Kaur consider that for E-Governance programs to be successful, some factors may have to be taken into consideration. “Although Indian
government is spending a lot of money on e-Governance
projects but still these projects are not successful in all parts
of India. Unawareness in people, local language of the
people of a particular area, privacy for the personal data of
the people etc. are main challenges which are responsible
for the unsuccessful implementation of e-Governance in
India.” [11]
In the case of the implementation of E-Governance programs in Australia, Freeman argues that “governments often equate improved information access and service delivery with online civic engagement, overlooking the importance of two-way participatory practices.” She also concludes that “to
facilitate participatory e-government practices and online
civic engagement, governments will require policies that
guide the development of ICT infrastructure, enhance
citizens’ ICT adoption and use, support online content and
spaces to which citizens can contribute, and ensure that
citizen involvement influences decision-making.” [13]
All over the world, governments are investing more and more
in information and communication technologies as a means
to communicate and interact with their citizens. E-Governance
programs will reach more individuals and involve more
government agencies in the years to come, but the challenges
of effectiveness and efficiency still remain open to debate.
REFERENCES
[1] UNESCO. 2011. E-Governance. http://portal.unesco.org/ci/en/ev.php-URL_ID=3038&URL_DO=DO_TOPIC&URL_SECTION=201.html
[2] Panda, Bibhu Prasad, & Swain, Dillip K. 2009. Effective communications through e-governance and e-learning. Chinese Librarianship: an International Electronic Journal, 27. URL: http://www.iclc.us/cliej/cl27PS.pdf
[3] Saxena, K. 2005. Towards excellence in e-governance. International Journal of Public Sector Management, 18(6).
[4] Bannister, Frank and Connolly, Regina. 2011. “New Problems for OLD? Defining e-Governance.” Proceedings of the 44th Hawaii International Conference on System Sciences
[5] UNESCO. 2005. http://portal.unesco.org/ci/en/ev.php-URL_ID=2179&URL_DO=DO_TOPIC&URL_SECTION=201.html
[6] UNESCO. 2005 (2). http://portal.unesco.org/ci/en/ev.php-URL_ID=4404&URL_DO=DO_TOPIC&URL_SECTION=201.html
[7] Kashem, Mohammad Abul , Nasim Akhtar and Anisur Rahman. 2014. “An Information System Model for e-Government of Digital Bangladesh.” IJCSNS International Journal of Computer Science and Network Security, VOL.14 No.11, November 2014.
[8] Ojo, John. 2014. “E-governance: An imperative for sustainable grass root development in Nigeria.” School of Politics and International Studies, University of Leeds, United Kingdom.
[9] Kumar, Puneet, Kumar, Dharminder &Kumar, Narendra. 2014. “E-Governance in India: Definitions, Challenges and Solutions.” International Journal of Computer Applications (0975 – 8887) Volume 101– No.16, September 2014
[10] Signore, Oreste; Chesi, Franco and Pallotti, Maurizio. 2005. “E-Government: Challenges and Opportunities.” CMG Italy – XIX Annual Conference. Florence, Italy.
[11] Pardeep Mittal & Amandeep Kaur. 2013. “E-Governance - A challenge for India.” International Journal of Advanced Research in Computer Engineering & Technology (IJARCET). Volume 2, Issue 3, March.
[12] Fang, Zhiyuan. E-Government in Digital Era: Concept, Practice, and Development. International Journal of The Computer, The Internet and Management, Vol. 10, No.2, 2002, p 1-22
[13] Freeman, Julie. 2012. “E-Government Engagement and the Digital Divide.” Conference Paper. CeDEM Asia 2012. Conference for E-Democracy & Open Government: Social & Mobile Media for Governance. Singapore.
Authors Profile
Mr. Stephen Beaumont holds a Ph.D. in Business Administration from CEMA University, a Master's in Business Administration and a Master's in Strategic Studies from the Naval University Institute, and a Bachelor's in Computer Science from Belgrano University. He is currently head of research at CENTED (Centro de Tecnología para el Desarrollo), a nonprofit organization based in Buenos Aires, Argentina, whose mission is to contribute to the development, strengthening and professionalization of civil society organizations, in order to improve their performance, effectiveness and efficiency.
International Journal of Scientific Research in Computer Science and Engineering (Review Paper)
Volume-5, Issue-1, pp.27-30, February (2017) E-ISSN: 2320-7639
Comparative Study and Analysis of Unique Identification Number
and Social Security Number
Sarita Sharma1*, Rakesh Gaherwal2
1Department of Science, Indore Indira School of Career Studies, DAVV, Indore, India
2Department of IT, Idyllic Institute of Management, DAVV, Indore, India
Available online at: www.isroset.org
Received 10th Jan 2017, Revised 18th Jan 2017, Accepted 2nd Feb 2017, Online 28th Feb 2017
Abstract— As we studied the detailed information and use of the Unique Identification number provided by the Indian
Government and the Social Security Number provided by the U.S., an idea came to our minds to merge the concepts of UID
and SSN, so that the two can be used not only to establish the identity of a person but also for inspecting the account
information of the person, which may help to discover the black money held by the person if needed. The use of the
Social Security Number (SSN) has extended significantly beyond tracking the earnings details of U.S. workers for Social
Security entitlement, and it now serves as a universal identifier of workers. We need a Social Security Number to get a
job, collect Social Security benefits and obtain some other government services, but we don't often need to show our
Social Security card. The Unique Identification Aadhaar card is provided to establish the personal identity of a citizen
on the basis of biometric hand and eye impressions along with his or her personal address. Today in India, to tackle the
black money problem, Prime Minister Modi has started a scheme of replacing the currency that had been in circulation
for several years, in order to bring the black money holders to the fore. If we apply the concept of the SSN card
together with the Aadhaar card, this problem can be resolved very easily. In this paper we show the details held through
UID and SSN, so that one can easily see how the merging of UID and SSN may be possible.
Keywords— Demographic Data, Biometric Data, STQC, Numident, IRS
I. INTRODUCTION
The concept of the Unique Identification Number (UID) was
introduced by the Government of India through the UIDAI,
which was set up in January 2009 as an attached office under
the sponsorship of the Planning Commission. The UIDAI is
authorized to assign a 12-digit unique identification (UID)
number (termed Aadhaar) to all the residents of India. The
data collected through the UIDAI is centralized at IMT
(Industrial Model Township), Manesar [1].
The Unique Identification Aadhaar card is an identification
entity used to collect the biometric and demographic data of
residents, which is stored in a centralized database, and each
resident is issued a 12-digit unique identity number, Aadhaar.
To generate an Aadhaar card, a resident needs to provide
his/her personal information: full name, address (current as
well as previous), date of birth and marital status, along with
biometric identification through fingerprints and iris scans.
The first UID was issued in September 2010; the main
purpose of the UIDAI was to eliminate duplicate identities of
residents by providing each of them the 12-digit unique
identification number through the Aadhaar card. Aadhaar
neither confers citizenship nor guarantees rights, benefits, or
entitlements. Aadhaar is a random number which never starts
with 0 or 1 and embeds no profiling or intelligence in the
identity number, which makes it resistant to fraud and theft.
The unique ID is also accepted as a valid ID while availing
various government services, such as subsidies on LPG,
kerosene, and goods like wheat, pulses and sugar from the
Public Distribution System, for account holder identification,
in various government jobs, for getting a SIM card, etc. [6].
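The two structural properties just stated, a 12-digit number whose first digit is never 0 or 1, can be checked with a small sketch. This is only an illustration of the stated format, not an official validator; the real system applies further checks not covered here.

```python
def looks_like_aadhaar(number: str) -> bool:
    """Check only the two structural rules stated above:
    exactly 12 digits, and the first digit is never 0 or 1.
    Illustrative sketch only, not an official validator."""
    digits = number.replace(" ", "")
    return digits.isdigit() and len(digits) == 12 and digits[0] not in "01"

print(looks_like_aadhaar("2345 1234 9876"))  # True: 12 digits, starts with 2
print(looks_like_aadhaar("123456789012"))    # False: starts with 1
```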
Figure 1 Sample image of UID
The concept of the Social Security Number (SSN) was
introduced by the Social Security Administration, an
independent agency of the United States, to provide unique
identification to U.S. citizens, permanent residents and
working (temporary) residents. The basic purpose of this
number is to track individuals for Social Security purposes
[2]. Apart from this purpose, it has become a national
identification number for taxation and other purposes [3].
The SSN is composed of three parts with a total of 9 digits.
The format of the SSN is "AAA-GG-SSSS" [4]:
The first set of three digits is called the Area Number.
The second set of two digits is called the Group Number.
The final set of four digits is the Serial Number.
Area Number: The Area Number is assigned by geographical
region. Before 1972, SSN cards were issued only in local
Social Security offices around the country, and the Area
Number represented the State in which the card was issued,
not necessarily where the applicant lived, since a person
could apply for a card in any Social Security office. Since
1972, the Area Number has been assigned on the basis of the
ZIP code given on the person's application form for an
original SSN card. The person's mailing address need not be
the same as his or her actual residence address. It consists of
three digits.
Group Number: The middle part of the SSN is the 2-digit
Group Number, which lies in the range 01 to 99. The group
numbers are not assigned in consecutive order: for
administrative reasons, within each Area Number the odd
group numbers 01 to 09 are issued first, followed by the even
group numbers 10 to 98. Once group 98 of a particular area
has been issued, the even groups 02 to 08 are used, followed
by the odd groups 11 through 99.
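Assuming the group-number issuance order described above (odd 01-09, then even 10-98, then even 02-08, then odd 11-99), the full sequence of 99 group numbers for an area can be generated with a short sketch; the function name is ours, for illustration only.

```python
def group_number_order():
    """Issuance order of the 2-digit Group Numbers within one Area,
    as described above: odd 01-09 first, then even 10-98, then
    even 02-08, and finally odd 11-99."""
    order = list(range(1, 10, 2))       # odd 01, 03, ..., 09
    order += list(range(10, 99, 2))     # even 10, 12, ..., 98
    order += list(range(2, 9, 2))       # even 02, 04, 06, 08
    order += list(range(11, 100, 2))    # odd 11, 13, ..., 99
    return [f"{g:02d}" for g in order]

order = group_number_order()
print(order[:6])   # ['01', '03', '05', '07', '09', '10']
print(len(order))  # 99 distinct group numbers
```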
Serial Number: The last part of the SSN consists of 4 digits,
from 0001 to 9999 within each group; it is simply a serial
number [5].
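As an illustration of the AAA-GG-SSSS decomposition described above, a minimal parser can be sketched as follows; the SSN value used is a made-up example, not a real number.

```python
def split_ssn(ssn: str) -> dict:
    """Split an SSN written as "AAA-GG-SSSS" into its three parts.
    Illustrative only; it checks the part lengths and the rule that
    serial numbers run from 0001 to 9999 (0000 is never assigned)."""
    area, group, serial = ssn.split("-")
    if not (len(area) == 3 and len(group) == 2 and len(serial) == 4):
        raise ValueError("expected AAA-GG-SSSS")
    if serial == "0000":
        raise ValueError("serial numbers run 0001-9999")
    return {"area": area, "group": group, "serial": serial}

# "219-09-9999" is a made-up example, not a real SSN
print(split_ssn("219-09-9999"))
# {'area': '219', 'group': '09', 'serial': '9999'}
```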
Figure 2 Sample Image of SSN
In this paper we have presented the basic details of UID and
SSN. On the basis of the uses and characteristics of UID and
SSN, we conclude that the concepts of both cards can be
merged, both for the convenience of citizens' identification
and to learn about their credit history.
The rest of the paper is organized as follows. In Section II,
the information provided through UID is given. Section III
includes the information given through SSN. In Section IV,
we mention the areas where UID and SSN may be
applicable. In Section V, we give a short summary of the
related work. Section VI compares UID and SSN on the
basis of some basic features. In Section VII we propose
some suggestions and recommendations. Section VIII
concludes the paper with future scope.
II. DETAILS PROVIDED THROUGH UID [6]
Demographic information:
Full name
Address
Photograph
Criminal record, if any
Marital status
Previous address
Biometric information:
Some biological attributes of the individual.
Collection of information pertaining to race,
religion, caste, language, income or health is
specifically prohibited.
III. DETAILS PROVIDED THROUGH SSN
Name to be shown on the card
Full name at birth, if different
Other names used
Mailing address
Citizenship or alien status
Sex
Race/ethnic description (SSA does not receive this
information under EAB)
Date of birth
Place of birth
Mother's name at birth
Mother's SSN (SSA collects this information for the
Internal Revenue Service (IRS) on an original
application for a child under age 18. SSA does not
retain these data.)
Father's name
Father's SSN (SSA collects this information for the
IRS on an original application for a child under
age 18. SSA does not retain these data.)
Whether the applicant ever filed for an SSN before
Prior SSNs assigned
Name on most recent Social Security card
Different date of birth, if used on an earlier SSN application
Date application completed
Phone number
Signature
Applicant's relationship to the number holder
IV. APPLICATIONS OF UID AND SSN [7]
The UID will be applicable in the following areas:
Banks
Schools and colleges
Real estate
Driver's license
Voter ID
Ration card
Telephone and mobile connections
Employment
The SSN will be applicable where we need records of:
Employees
Patients
Students
Credit
V. RELATED WORK
Carolyn Puckett, in her article "The Story of the Social
Security Number", has described the significant uses of the
Social Security Number (SSN) and how the SSN is used to
keep track of the earnings history of U.S. workers for Social
Security entitlement [2].
Swati Chauhan, Chetanshi Sharma and co-authors have
concluded in their paper "Survey Paper on UID System
Management" that the Unique Identification System is very
beneficial to citizens because it is a unique number which
holds basic information about every person, and after
obtaining it there is no need to carry a driving license, voter
card, PAN card, etc. for any government or private work [8].
James E. Duggan, Robert Gillingham and John S. Greenlees,
in their paper "Distributional Effects of Social Security: The
Notch Issue Revisited", have provided the first empirical
estimates of the effects of the Social Security benefit notch
on lifetime benefits based on actual Social Security records,
the 1988 Continuous Work History Sample [9].
Shraddha Thorat and Vikrant Bhilare have concluded in
their paper "Comparative Study of Indian UID Aadhar and
other Biometric Identification Techniques in Different
Countries" that countries that have not used biometrics for
identification should adopt it, and further that they should
use multiple biometrics combining behavioral and physical
characteristics for robustness [10].
VI. COMPARISON BETWEEN UNIQUE IDENTIFICATION NUMBER AND SOCIAL SECURITY NUMBER
Table 1: Comparison between UID and SSN [11]
Digits in ID: The UID is a 12-digit number; the SSN is a 9-digit number.
Picture: Available on the UID; not available on the SSN.
Marital history: Displayed on the UID; not displayed on the SSN.
Appearance: The UID is a smart card; the SSN is an envelope-sized paper card.
Credit history: Not mentioned for the UID; mentioned for the SSN.
Purpose: Aadhaar was created as a biometric-based authenticator and a single unique proof of identity; the SSN was created as a number record-keeping scheme for government services.
Governing body: Aadhaar was constituted under the Planning Commission; the SSN is governed by Federal legislation.
Applicability: Aadhaar is for residents; the SSN is for citizens and non-citizens authorized to work.
Storage, access, and disclosure: Aadhaar data generated at multiple sources is stored in the CIDR (Central ID Repository) and processed in the data warehouse; SSNs and applications are stored in the Numident (Numerical Identification) file.
Verification: The UID can be verified only in certain circumstances; likewise, the SSN can be verified only in certain circumstances.
VII. SUGGESTIONS AND RECOMMENDATIONS
From Table 1, it is clear that the UID is fully based on
biometric identification and may be applied in every
government sector when needed. Today, in government
examinations, the UID plays a very important role in
identifying fraudulent students taking the exam. But when
there is a need to know a person's financial details, so that
one can find out the black money held by him or her, the
UID is not very advantageous. In such a case, instead of the
UID, the SSN may be used. Hence we recommend that if a
new technique is developed to combine the features of UID
and SSN, one could generate a new unique identification card.
VIII. CONCLUSION AND FUTURE SCOPE
As we analyzed the Unique Identification Number and the
Social Security Number, we came to the conclusion that both
cards are very helpful for figuring out the details of any
individual or company, but when it comes to bringing out
financial transaction details, personal history, etc., we face
some trouble fetching both kinds of information together
with these two identification cards. In that case it may be
very propitious to merge the concepts of UID and SSN into a
single new smart card for the whole world. In future this
concept may create a drastic change in our lives for unique
identification, and we may also be protected from deceitful
money holders.
REFERENCES
[1] Law Resource India, "The National Identification Authority of India Bill, 2010", posted in Constitution, Governance, UID Identity by NNLRJ India on June 19, 2011
[2] Carolyn Puckett, "The Story of the Social Security Number", Social Security Bulletin, vol. 69, No. 2, 2009
[3] Kouri, Jim, "Social Security Cards: De Facto National Identification", American Chronicle, March 9, 2005
[4] https://www.ssa.gov/history/ssn/geocard.html
[5] Social Security Administration, "The SSN Numbering Scheme". Retrieved 12/01/2017
[6] Elisabeth Ilie-Zudor, Zsolt Kemény, Fred van Blommestein, László Monostori, André van der Meulen, "A survey of applications and requirements of unique identification systems and RFID techniques", Computers in Industry, vol. 62, Issue 3, pp. 227-252, April 2011
[7] "Social Security Number Randomization", Socialsecurity.gov. Retrieved 13/01/2017
[8] Swati Chauhan, Chetanshi Sharma, Geetanjali, Akshita Verma, Jaya Gupta, "Survey Paper on UID System Management", International Journal of IT, Engineering and Applied Sciences Research (IJIEASR), ISSN: 2319-4413, vol. 3, No. 2, 2014
[9] James E. Duggan, Robert Gillingham, John S. Greenlees, "Distributional Effects of Social Security: The Notch Issue Revisited", Public Finance Quarterly, pp. 349-370, July 1996
[10] Shraddha Thorat and Vikrant Bhilare, "Comparative Study of Indian UID Aadhar and other Biometric Identification Techniques in Different Countries", International Journal of Current Trends in Engineering & Research (IJCTER), e-ISSN 2455-1392, vol. 2, Issue 6, pp. 62-72, June 2016
[11] "Aadhaar Number vs the Social Security Number", blog. Retrieved 16/12/2017
AUTHORS PROFILE
Sarita Sharma received a B.Sc. degree in Computer Science from Govt. Holkar Science College, DAVV, Indore, M.P. (India) in 2008 and an M.Sc. in Computer Science from the School of Computer Science and IT, DAVV, Indore (India) in 2010. She has been working as an Assistant Professor at Indore Indira School of Career Studies since 2016. She worked as a computer faculty member at Govt. Holkar Science College from 2012-2016 and at Govt. M. L. B. PG Girls College from 2010-2012. Her areas of interest are computer programming languages (C, C++, Java). Email: [email protected]
Rakesh Gaherwal received a B.Sc. degree from Govt. Holkar Science College, DAVV, Indore, M.P. (India) in 2008 and an M.Sc. in Computer Science from the School of Computer Science and IT, DAVV, Indore (India) in 2010. He has been working as an Assistant Professor at Idyllic Institute of Management since 2012. His areas of interest are computer networking and computer programming languages. Email: [email protected]
International Journal of Scientific Research in Computer Science and Engineering (Review Paper)
Volume-5, Issue-1, pp.31-35, February (2017) E-ISSN: 2320-7639
A Proposed Method for Mining High Utility Itemsets with
Transactional Weighted Utility using a Genetic Algorithm
Technique (MHUI_TWU-GA)
Pradeep K. Sharma1*, Vaibhav Sharma2 and Jagrati Nagdiya3
1*Department of Computer Science and Engineering, MIT Group of Institutes, Ujjain, M.P., India
2Department of Information Technology, MIT Group of Institutes, Ujjain, M.P., India
3Department of Computer Science and Engineering, MIT Group of Institutes, Ujjain, M.P., India
Available online at: www.isroset.org
Received 3rd Jan 2017, Revised 12th Jan 2017, Accepted 2nd Feb 2017, Online 28th Feb 2017
Abstract: Utility mining is a technique to mine high utility itemsets from a given transactional database on the basis of a
user-defined minimum utility threshold. Frequent itemset mining focuses only on the itemsets that appear most frequently
in the database, while in utility mining we are concerned with utility, i.e., the importance or profit of an itemset
according to the user's preference. In this paper we propose a two-phase algorithm: in the first phase, we use the
weighted transaction utility concept to calculate and compare the utility of itemsets against a minimum utility threshold,
and in the second phase, we apply a genetic algorithm technique to search for high utility itemsets in the reduced
transactional database obtained after the first phase.
Keywords: Data Mining, Weighted Transaction Utility, Utility Mining, Genetic Algorithm
I. INTRODUCTION
Data mining techniques are used to discover hidden patterns in
data already stored in large databases. Data mining combines
techniques from databases, statistics, artificial intelligence
and machine learning. Data mining helps users to identify
purchased items and their consumers, and market basket
analysis has also been used in data mining for relating items
and consumers. Frequent itemset mining, one of the efficient
techniques of data mining, identifies the items that appear
most frequently in the database, but it does not consider how
profitable or important the items are for the user. In utility
mining, the profitability of items and the interest of the user
in them are taken into consideration. Many utility mining
techniques have been proposed for mining high utility
itemsets, i.e., itemsets which are more profitable than others
under some selection criterion. The genetic algorithm is an
efficient soft computing technique which works on the
concept of the genetic process, with steps such as mutation,
crossover and selection. Charles Darwin's "The Origin of
Species", published in 1859, described how complex,
problem-solving organisms could be created and improved
through an evolutionary process of random trials, sexual
reproduction, and selection, and this idea inspired genetic
algorithms [1]. GAs are used to construct a version of
biological evolution on computers. GAs have been
successfully adopted in a wide range of optimization
problems such as control, design, scheduling, robotics, signal
processing, game playing and combinatorial optimization
[1]. We can use data mining as an application area of the
genetic algorithm.
The main contributions of this paper are summarized as
follows. A new method called MHUI_TWU-GA is proposed
to search for high utility itemsets using the TWU concept and
a genetic algorithm approach. In the first step of the proposed
method, we calculate the weighted transaction utility of each
itemset and compare it with the minimum utility threshold.
In the second step, we apply a genetic algorithm approach to
prune and generate high utility itemsets.
The rest of this paper is organized as follows. Section II
describes the basic concepts and definitions of utility mining
and genetic algorithms. Section III presents the related work.
High utility itemset mining using genetic algorithm concepts
is described in Section IV. The proposed approaches are
discussed in Section V, and conclusions are finally given in
Section VI.
II. BASIC CONCEPTS AND DEFINITIONS
A. Utility Mining
The utility mining concepts and definitions given in UP-Growth
[2] (utility pattern growth) are sufficient to study and
understand utility mining; the essential definitions are as
follows.
Definition 1: A frequent itemset is a set of items that appears
in at least a pre-specified number of transactions. Formally,
let I = {i1, i2, ..., im} be a set of items and DB = {T1, T2, ..., Tn}
a set of transactions, where every transaction is a subset of I
(i.e., an itemset).
Definition 2: The utility of an item is a numerical value
defined by the user. It is transaction-independent and reflects
the importance (usually the profit) of the item. External
utilities are stored in a utility table.
TID  Transaction                        TU
1    (A,1)(C,1)(D,1)                    8
2    (A,2)(C,6)(E,2)(G,5)               27
3    (A,1)(B,2)(C,1)(D,6)(E,1)(F,5)     30
4    (B,4)(C,3)(D,3)(E,1)               20
5    (B,2)(C,2)(E,1)(G,2)               11
Table 1: Example transaction database
Definition 3: The utility of an itemset X in a transaction Ti,
denoted U(X, Ti), is the sum of the utilities of its items in Ti:
U(X, Ti) = Σ_{i ∈ X} u(i, Ti). For example,
U({AC}, T1) = U({A}, T1) + U({C}, T1) = 5 + 1 = 6.
Definition 4: The utility of an itemset X in D, denoted U(X),
is the sum of its utilities in all transactions of D that contain
X: U(X) = Σ_{Ti ∈ D, X ⊆ Ti} U(X, Ti). For example,
U({AD}) = U({AD}, T1) + U({AD}, T3) = 7 + 17 = 24.
Definition 5: An itemset is called a high utility itemset if its
transactional weighted utility is at least the minimum utility
threshold; otherwise it is called a low utility itemset.
Item    A  B  C  D  E  F  G
Profit  5  2  1  2  3  1  1
Table 2: Profit (external utility) table
Definition 6: The transaction utility of a transaction Td,
denoted TU(Td), is the sum of the utilities (internal utility ×
external utility) of all items present in Td, i.e.,
TU(Td) = Σ_{i ∈ Td} u(i, Td). For example,
TU(T1) = U({ACD}, T1) = 8.
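The definitions above can be checked against Tables 1 and 2 with a short Python sketch; the function names are ours, for illustration only.

```python
# Transaction database of Table 1 (item -> purchased quantity)
# and the profit table of Table 2.
db = {
    "T1": {"A": 1, "C": 1, "D": 1},
    "T2": {"A": 2, "C": 6, "E": 2, "G": 5},
    "T3": {"A": 1, "B": 2, "C": 1, "D": 6, "E": 1, "F": 5},
    "T4": {"B": 4, "C": 3, "D": 3, "E": 1},
    "T5": {"B": 2, "C": 2, "E": 1, "G": 2},
}
profit = {"A": 5, "B": 2, "C": 1, "D": 2, "E": 3, "F": 1, "G": 1}

def u_item(i, t):
    """Utility of item i in transaction t: quantity x unit profit."""
    return db[t][i] * profit[i]

def u_itemset(X, t):
    """Definition 3: utility of itemset X in transaction t."""
    return sum(u_item(i, t) for i in X)

def u_total(X):
    """Definition 4: utility of X over all transactions containing X."""
    return sum(u_itemset(X, t) for t in db if set(X) <= set(db[t]))

def tu(t):
    """Definition 6: transaction utility of transaction t."""
    return u_itemset(db[t], t)

print(u_itemset({"A", "C"}, "T1"))  # 6, as in Definition 3
print(u_total({"A", "D"}))          # 24, as in Definition 4
print(tu("T1"))                     # 8, as in Definition 6
```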
Apart from the above definitions, we have already proposed a
new concept for discovering high utility itemsets [3], in
which we introduced the term Weighted Transaction Utility
(WTU) for calculating the utility of each item from the
transaction database and the profit table. For example, the
WTU of item A is 20, because A is present in three
transactions with a total purchased quantity of 4 and its
profit value is 5 (4 × 5 = 20). In this paper we use the WTU
concept to find potential itemsets by comparing the WTU
with the minimum utility threshold, as WTU ≥ min_uty.
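Assuming, as the worked example suggests, that the WTU of an item is its total purchased quantity across all transactions multiplied by its unit profit, the computation can be sketched as follows; min_uty = 15 is an arbitrary example threshold, not a value from the paper.

```python
# Table 1 transactions (item -> quantity) and Table 2 unit profits
transactions = [
    {"A": 1, "C": 1, "D": 1},
    {"A": 2, "C": 6, "E": 2, "G": 5},
    {"A": 1, "B": 2, "C": 1, "D": 6, "E": 1, "F": 5},
    {"B": 4, "C": 3, "D": 3, "E": 1},
    {"B": 2, "C": 2, "E": 1, "G": 2},
]
profit = {"A": 5, "B": 2, "C": 1, "D": 2, "E": 3, "F": 1, "G": 1}

def wtu(item):
    """WTU of an item: its total purchased quantity across all
    transactions multiplied by its unit profit."""
    quantity = sum(t.get(item, 0) for t in transactions)
    return quantity * profit[item]

print(wtu("A"))  # 20: quantity 1+2+1 = 4, profit 5

# Keep only items whose WTU reaches the minimum utility threshold;
# min_uty = 15 is an arbitrary example value, not one from the paper.
min_uty = 15
potential = [i for i in profit if wtu(i) >= min_uty]
print(potential)  # ['A', 'B', 'D', 'E']
```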
B. Genetic Algorithm
A genetic algorithm is a heuristic search technique used to
generate or optimize solutions for a given problem. A genetic
algorithm proceeds under the basic concept of natural
selection: it starts with an initial population, calculates the
fitness of the chromosomes in that population, and repeats
this process for the newly generated offspring. Evolution is
performed through selection, crossover and mutation. In
selection, the process of picking effective chromosomes from
the population is performed. Crossover, also known as
recombination, is the process of taking two individuals from
the population and generating a new individual. The fitness
value of each individual is calculated for survival of the
fittest. Mutation creates a new chromosome by altering one
or more genes of an individual. Table 3 presents the
terminologies used in genetic algorithms.
III. RELATED WORK
Many researchers have published research in the field of
utility mining. The basis of utility mining is association rule
mining and frequent itemset mining. One of the well-known
algorithms is the Apriori algorithm [4], which is fundamental
for association rule mining and selects items that are related
to each other in the form X → Y. Frequent pattern growth
was then proposed for itemsets that occur frequently in the
transactional database, based on a support value higher than
the minimum support count. FP-Growth [5], a tree-based
concept to identify potential itemsets from the first phase,
was proposed afterwards. Comparisons have shown that
FP-Growth provides better results than Apriori-based
approaches, because it scans the database only twice and
does not generate candidate itemsets.
But in frequent itemset mining [4, 5], the importance of an
item to the user is not taken into consideration; that is, the
unit profit of items and the purchased quantities are not
considered. Thus, new algorithms have been proposed for
mining high utility itemsets from databases, such as
UMining [6], Two-Phase [7], IIDS [8] and IHUP [9]. The
UMining algorithm [6] was proposed by Yao et al.; each of
these methods considers space and time in order to improve
the efficiency of mining high utility itemsets. The Two-Phase
algorithm [7], proposed by Liu et al., consists of two phases.
In phase I, a breadth-first search technique is used to generate
candidate high utility itemsets: candidate itemsets are
generated and compared with the minimum utility threshold,
first for length one, then length two, and so on, using the
transaction-weighted downward closure (TWDC) property.
In each pass, to generate candidate itemsets, each itemset is
compared with its TWU value for lengths one to n-1, which
is a very time- and space-consuming process, because in
each pass the potential itemsets must be calculated and
stored. To overcome this problem, Li et al. [8] proposed an
isolated items discarding strategy, abbreviated IIDS, to
reduce the number of candidates; by pruning isolated items
using a depth-wise search, the number of potential itemsets
is reduced significantly. This method is better than the
previous methods
but it still performs multiple scans over the transactional
database, which remains time- and space-consuming. To
avoid scanning the database multiple times, Ahmed et al. [9]
proposed a tree-based algorithm, called IHUP, for mining
high utility itemsets. They use an IHUP-Tree to maintain the
information of high utility itemsets and transactions. Every
node in the IHUP-Tree consists of an item name, a support
count, and a TWU value. The algorithm has three steps: first,
construction of the IHUP-Tree; second, generation of high
transaction-weighted utility candidate itemsets; and third,
identification of high utility itemsets. In the first step, the
items of each transaction are rearranged in lexicographic
order, support-descending order or TWU-descending order,
and the rearranged transactions are inserted into the
IHUP-Tree. In the second step, the high transaction-weighted
utility itemsets generated from the IHUP-Tree are passed to
FP-Growth [5] for final processing. Note that IHUP and
Two-Phase produce the same number of HTWUIs in phase I,
since both use the transaction-weighted utilization mining
model [7]. The FP-Growth algorithm also plays a significant
role in UP-Growth [2] (Utility Pattern Growth), an effective
algorithm with four steps: the two steps of FP-Growth and
two new steps with the UP-Tree, to effectively prune high
utility itemsets from the transactional database. UP-Growth
is a novel algorithm and performs better than all the previous
algorithms in terms of time and space. We have also
previously proposed a new concept to discover high utility
itemsets [3], in which we introduced the term Weighted
Transaction Utility (WTU) for calculating the utility of each
item from the transaction database and profit table.
IV. HIGH UTILITY ITEMSETS USING GENETIC
ALGORITHM CONCEPTS
A. Encoding
Let I = {i1, i2, ..., im} be a set of items and D = {T1, T2, ..., Tn}
be a transaction database, where each transaction is a subset
of I. An itemset X is a high utility itemset if it satisfies the
minUtil threshold, where minUtil is a threshold defined by
the user.
This section describes the genetic algorithm concepts used
in the proposed work. Different types of encoding techniques
are used in genetic algorithms, such as binary encoding,
hexadecimal encoding, octal encoding, real number
encoding, integer or literal permutation encoding, tree
encoding, etc. Here we use the binary encoding technique to
encode the solutions of our problem into chromosomes. In
this encoding, 1 represents the presence of an item in the
itemset and 0 represents its absence. The chromosome length
is equal to the number of distinct items in the transactional
database, and it is fixed. The representation of a chromosome
is shown in Fig. 1.
1 0 1 0 1 1 0 1
Fig. 1. Chromosome representation for the itemset {1, 3, 5, 6, 8}.
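The binary encoding of Fig. 1 can be sketched as follows; the helper names are ours, for illustration only.

```python
def encode(itemset, n_items):
    """Binary encoding: bit i is 1 iff item number (i + 1) is in X."""
    return [1 if i + 1 in itemset else 0 for i in range(n_items)]

def decode(chromosome):
    """Recover the itemset represented by a chromosome."""
    return {i + 1 for i, bit in enumerate(chromosome) if bit == 1}

c = encode({1, 3, 5, 6, 8}, 8)
print(c)                  # [1, 0, 1, 0, 1, 1, 0, 1], matching Fig. 1
print(sorted(decode(c)))  # [1, 3, 5, 6, 8]
```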
B. Population Initialization: Let N be the population size
and M the dimension of the binary string space; the
population at generation t is then a list of N binary strings of
length M. The algorithm for population initialization is
given in Figure 2.
C. Fitness function:
The main goal of this work is to generate the high utility
itemsets from the transaction database. Hence, to mine high
utility itemsets with a minimum threshold value using the
GA, we use Yao et al.'s [14] utility measure u(X) as the
fitness function. The fitness function is essential for
determining which chromosomes (itemsets) satisfy the
minUtil threshold.
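A sketch of this fitness function, reusing the example database of Tables 1 and 2 and the binary encoding of Section IV-A; the item-to-bit order A through G is an assumption for illustration.

```python
# Items in chromosome bit order (an assumption for illustration),
# with the transaction data of Table 1 and the profits of Table 2.
items = ["A", "B", "C", "D", "E", "F", "G"]
transactions = [
    {"A": 1, "C": 1, "D": 1},
    {"A": 2, "C": 6, "E": 2, "G": 5},
    {"A": 1, "B": 2, "C": 1, "D": 6, "E": 1, "F": 5},
    {"B": 4, "C": 3, "D": 3, "E": 1},
    {"B": 2, "C": 2, "E": 1, "G": 2},
]
profit = {"A": 5, "B": 2, "C": 1, "D": 2, "E": 3, "F": 1, "G": 1}

def fitness(chromosome):
    """Fitness of a chromosome: u(X), the utility of the decoded
    itemset X summed over every transaction that contains all of X."""
    X = {items[i] for i, bit in enumerate(chromosome) if bit}
    return sum(
        sum(t[i] * profit[i] for i in X)
        for t in transactions
        if X and X <= set(t)
    )

# Chromosome 1001000 decodes to {A, D}; its fitness is u({A, D}) = 24
print(fitness([1, 0, 0, 1, 0, 0, 0]))  # 24
```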
Biological Term           Genetic Algorithm Term
Chromosome (genotype)     Coded design vector
Gene                      Every bit
Population                A number of coded design vectors
Generation                Population of design vectors obtained after one computation
Locus                     A particular position on the string
Phenotype                 Parameter set
Fitness function          A measure associated with the collective objective functions that indicates the fitness of a particular chromosome
Survival of the fittest   The fittest individuals are preserved and reproduce
Selection                 The process of picking effective chromosomes from the population for later breeding
Crossover                 The breeding (mating) of selected chromosomes to produce new individuals
Mutation                  The process of creating a new chromosome by altering one or more genes of an individual
Table 3: Terminologies used in Genetic Algorithm
D. Genetic Operators
There are mainly three genetic operators in a genetic algorithm: selection, crossover and mutation.
ISROSET- Int. J. Sci. Res. in Computer Science and Engineering Vol-5(1), Feb 2017, E-ISSN: 2320-7639
© 2017, IJSRCSE All Rights Reserved 34
1. Selection: Different selection methods are used in genetic algorithms, such as roulette wheel selection, random selection, rank selection, tournament selection and steady state selection. In this work, roulette wheel selection [10] is used. After decoding, we must decide how to perform selection, that is, which individuals from the population to select to create new individuals for the next generation, and how many new offspring each can create. The selection of individuals favours individuals with higher fitness values.
1. Roulette Wheel Selection:
In roulette wheel selection, the fitness function assigns each candidate solution a slice of the wheel proportional to its fitness, so candidates with lower fitness values are more likely to be eliminated. The advantage of this method is that weaker solutions may also survive the selection process.
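Roulette wheel selection can be sketched as follows; the function and variable names are illustrative assumptions, not the paper's own code:

```python
import random

# Roulette wheel selection: each chromosome gets a slice of the wheel
# proportional to its fitness, so low-fitness (weaker) solutions may
# still be picked occasionally.

def roulette_select(population, fitnesses):
    total = sum(fitnesses)
    pick = random.uniform(0, total)      # spin the wheel
    running = 0.0
    for chrom, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return chrom
    return population[-1]                # guard against round-off
```

Calling this N times yields the mating pool for the next generation.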
(1) At time t = 0 compute the initial population B_0 = (b_(1,0), ..., b_(N,0)).
(2) While the stopping condition is not fulfilled, repeat steps (3) to (6).
(3) Select the population B_(t+1) at time t + 1 from B_t.
(4) For i = 1 to N, with probability pc perform crossover of b_(i,t+1) and b_(i+1,t+1).
(5) For i = 1 to N, with probability pm eventually mutate b_(i,t+1).
(6) Increase time t for the next step.
(7) End
Figure 2: Population Initialization
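One possible reading of the loop in Figure 2 is sketched below. The fitness function, probabilities and an even population size are assumptions made for illustration:

```python
import random

# Evolve a population of binary-string chromosomes for a fixed number
# of generations: fitness-proportional selection, single-point
# crossover with probability pc, bit-flip mutation with probability pm.
# Assumes an even population size so parents pair up cleanly.

def evolve(pop, fitness, pc=0.8, pm=0.01, generations=50):
    for _ in range(generations):
        # selection: fitness-proportional sampling of len(pop) parents
        weights = [max(fitness(c), 1e-9) for c in pop]
        parents = random.choices(pop, weights=weights, k=len(pop))
        nxt = []
        for i in range(0, len(parents) - 1, 2):       # crossover, step (4)
            a, b = parents[i][:], parents[i + 1][:]
            if random.random() < pc:
                cut = random.randrange(1, len(a))
                a[cut:], b[cut:] = b[cut:], a[cut:]
            nxt += [a, b]
        for chrom in nxt:                             # mutation, step (5)
            for k in range(len(chrom)):
                if random.random() < pm:
                    chrom[k] ^= 1
        pop = nxt
    return max(pop, key=fitness)
```

With `fitness=sum` this maximizes the number of 1-bits, a common toy objective for testing GA loops.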
2. Crossover: Crossover also has different variants, such as one-point crossover, two-point crossover, multi-point crossover and random multi-point crossover. Using crossover we produce new individuals that differ from their parents. Crossover mates chromosomes in the mating pool in pairs and generates candidate offspring by crossing over the mated pairs with a given probability.
Single-point crossover: In single-point crossover, the tails of two parent chromosomes are interchanged at a randomly selected point, thus creating two children.
Before crossover:
1 0 1 1 1 | 1 0 0 0 1   P1
0 1 0 1 0 | 1 1 1 0 0   P2
After crossover:
1 0 1 1 1 | 1 1 1 0 0   C1
0 1 0 1 0 | 1 0 0 0 1   C2
Figure 3: Single Point Crossover
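The operation of Fig. 3 can be sketched as follows; 10-bit parents are assumed (the trailing bit of P2 is assumed to be 0), with the cut taken after position 5:

```python
import random

# Single-point crossover: swap the tails of two parents after a cut
# point, producing two children. If no point is given, pick one at random.

def single_point_crossover(p1, p2, point=None):
    if point is None:
        point = random.randrange(1, len(p1))
    c1 = p1[:point] + p2[point:]
    c2 = p2[:point] + p1[point:]
    return c1, c2

# Parents as read from Fig. 3 (last bit of P2 assumed).
p1 = [1, 0, 1, 1, 1, 1, 0, 0, 0, 1]
p2 = [0, 1, 0, 1, 0, 1, 1, 1, 0, 0]
```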
Two-point (multi-point) crossover: In two-point (multi-point) crossover, two crossover points are selected instead of just one, and the segment between them is exchanged.
Before crossover:
1 0 1 1 1 1 | 0 0 | 0 1   P1
0 1 0 1 0 1 | 1 1 | 0 0   P2
After crossover:
1 0 1 1 1 1 | 1 1 | 0 1   C1
0 1 0 1 0 1 | 0 0 | 0 0   C2
Figure 4: Two Point Crossover
3. Mutation
In the mutation process, after selection and crossover, we take some of the individuals for mutation. The most common technique is to alter or flip a bit of the chromosome with some predefined probability. There are mainly two types of mutation: single-point mutation and multi-point mutation. Mutation is also used to produce new individuals from parents, which can improve performance significantly.
Before mutation:
1 0 1 1 1 1 0 0 0 1   C1
After mutation (point of mutation: position 6):
1 0 1 1 1 0 0 0 0 1   C2
Figure 5: Mutation
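Bit-flip mutation, including the single-point flip of Fig. 5 (position 6, with the trailing bit of C2 assumed to be 1), can be sketched as:

```python
import random

# Bit-flip mutation: each gene is flipped independently with a small
# predefined probability pm; mutate_at flips exactly one chosen gene
# (single-point mutation). Names are illustrative.

def mutate(chromosome, pm=0.01):
    return [bit ^ 1 if random.random() < pm else bit for bit in chromosome]

def mutate_at(chromosome, point):
    """Single-point mutation: flip the gene at `point` (0-based)."""
    child = chromosome[:]
    child[point] ^= 1
    return child
```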
4. Evaluation
The evaluation step selects the chromosomes for the next generation. In this work, the elitist selection [10] method is used. This method copies the chromosomes with the highest fitness values to the new population.
5. Termination criteria:
Conditions of the termination criteria are used to decide whether to continue the search or stop, which are as follows:
1) A fixed number of generations is reached.
2) The fitness of the highest-ranking solution has remained unchanged for a fixed number of generations.
3) The run is interrupted.
4) A combination of the above.
-GA:
Phase I:
1) Scan the transaction database to compute the Transaction-Weighted Utility (TWU) of each item and itemset.
2) Compare the TWU with the minimum utility threshold and remove unpromising itemsets from the transaction database to get the reorganized transaction database.
3) Get the number of distinct items from the reorganized transaction database and set the chromosome length (CL).
Phase II (steps continue the numbering of Phase I):
4) Generate a chromosome of length CL.
5) Calculate the fitness value (fv) for each individual; if fv ≥ min_uty then go to step 6, otherwise go to step 4.
6) Check the population size: if p_size ≥ N go to step 7, otherwise go to step 4.
7) If the termination conditions are fulfilled, produce the output; otherwise continue.
8) Select parents using roulette wheel selection for the next generation.
9) Perform crossover and mutation, and recalculate the fitness; if fv ≥ min_uty and p_size ≥ N then go to step 10, otherwise go to step 8.
10) Evaluate new individuals from the new and old populations for the next generation.
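The database-pruning step of Phase I can be sketched as follows; the database layout (transaction id mapped to {item: quantity}) and the external utility (profit) table are illustrative assumptions:

```python
# Phase I sketch: compute the Transaction-Weighted Utility (TWU) of each
# item and drop unpromising items (TWU below min_util) from the database.
# tu(T) is the transaction utility: the total utility of transaction T.

def phase_one(db, profit, min_util):
    tu = {tid: sum(q * profit[i] for i, q in items.items())
          for tid, items in db.items()}
    twu = {}
    for tid, items in db.items():
        for i in items:
            twu[i] = twu.get(i, 0) + tu[tid]
    promising = {i for i, w in twu.items() if w >= min_util}
    pruned = {tid: {i: q for i, q in items.items() if i in promising}
              for tid, items in db.items()}
    return pruned, len(promising)   # reorganized database and CL
```

The count of promising items becomes the chromosome length CL used in Phase II.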
VI. CONCLUSION
In this paper we proposed a novel approach to high utility itemset mining using concepts of the genetic algorithm. The proposed method should be effective especially when the transaction database contains many distinct items, because in every utility mining method the memory requirement and execution time are the main factors for efficient mining. To address this problem, we proposed a method in which itemsets are selected based on Transaction-Weighted Utility (TWU) using a Genetic Algorithm (GA) technique named -GA. We will present the experimental evaluation and results of this proposed method on different transaction databases in a subsequent paper.
REFERENCES
[1]. S. Kannimuthu, K. Premalatha, "Discovery of High Utility Itemsets Using Genetic Algorithm", International Journal of Engineering and Technology (IJET), Vol. 5, No. 6, Dec 2013-Jan 2014.
[2]. Vincent S. Tseng, Cheng-Wei Wu, Bai-En Shie, and Philip S. Yu, "UP-Growth: An Efficient Algorithm for High Utility Itemset Mining", In KDD'10, July 25-28, 2010, Washington, DC, USA, ACM, 2010.
[3]. Pradeep K. Sharma, Abhishek Raghuvanshi, "An Efficient Method for Mining High Utility Data from a DataSet", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 11, November 2013.
[4]. R. Agrawal and R. Srikant, "Fast algorithms for mining association rules", In Proc. of the 20th Int'l Conf. on Very Large Data Bases, pp. 487-499, 1994.
[5]. J. Han, J. Pei, and Y. Yin, "Mining frequent patterns without candidate generation", In Proc. of the ACM-SIGMOD Int'l Conf. on Management of Data, pp. 1-12, 2000.
[6]. H. Yao, H. J. Hamilton, L. Geng, "A unified framework for utility-based measures for mining itemsets", In Proc. of the ACM SIGKDD 2nd Workshop on Utility-Based Data Mining, pp. 28-37, USA, Aug. 2006.
[7]. Y. Liu, W. Liao, and A. Choudhary, "A fast high utility itemsets mining algorithm", In Proc. of the Utility-Based Data Mining Workshop, 2005.
[8]. Y.-C. Li, J.-S. Yeh, and C.-C. Chang, "Isolated items discarding strategy for discovering high utility itemsets", Data & Knowledge Engineering, Vol. 64, Issue 1, pp. 198-217, Jan. 2008.
[9]. C. F. Ahmed, S. K. Tanbeer, B.-S. Jeong, and Y.-K. Lee, "Efficient tree structures for high utility pattern mining in incremental databases", IEEE Transactions on Knowledge and Data Engineering, Vol. 21, Issue 12, pp. 1708-1721, 2009.
[10]. Yu-Chiang Li, Jieh-Shan Yeh and Chin-Chen Chang, "Isolated items discarding strategy for discovering high utility itemsets", Data and Knowledge Engineering, Elsevier, Vol. 64, pp. 198-217, 2008.
International Journal of Scientific Research in Computer Science and Engineering (Survey Paper)
Volume-5, Issue-1, pp.36-40, February (2017) E-ISSN: 2320-7639
Analysis of Security in Cloud-Learning Systems
Sangeetha Rajesh
K.J. Somaiya Institute of Management Studies and Research
Available online at: www.isroset.org
Received 26th Dec 2016, Revised 12th Jan 2017, Accepted 30th Jan 2017, Online 28th Feb 2017
Abstract- Developments in computing are influencing many aspects of education. E-learning introduced a new learning environment, and e-learning systems need to align their processes with progressive technologies like cloud computing. Cloud computing provides highly scalable virtualized resources that users can access on demand, and it has made a significant impact on the educational environment. Throughout this paper, an e-learning system using cloud technology is referred to as a cloud-learning system. This paper mainly emphasizes the impact of cloud computing on e-learning systems with respect to security. An architecture for a cloud-learning system is proposed which includes a security as a service model. The responsibilities of each participant in the system and the services provided by the security as a service model are also studied.
Key Terms: Cloud Computing, cloud-learning, eLearning, IaaS, PaaS, SaaS, SCaaS
I. INTRODUCTION
E-learning is the use of network technology to design, deliver, select, administer and extend learning [1]. E-learning software focuses on providing educational services based on internet services and virtual websites; it is the convergence of learning and the internet. E-learning is a widely accepted learning model that provides new advances in learning systems. Cloud computing introduced a new computing platform where services can be acquired on demand on a pay-per-use basis [2]. E-learning services based on cloud computing can significantly reduce costs and improve efficiency. Security, however, remains an issue in cloud computing with respect to information security and privacy protection. Since cloud computing depends on web-based sources, various threats target e-learners and cloud-based e-learning technology through the internet.
The objective of this paper is to study the cloud based
elearning system, security issues related with this system and
to propose cloud learning system model with security as a
service.
The rest of the paper is organized as follows: Section I discusses
the relevance, motivation and objective of selecting this
topic. Section II covers the summary of literature review on
cloud computing and its features. Section III reveals the
concepts of elearning system. Section IV depicts the
architecture of cloud based elearning systems. Section V
illustrates the security issues related with the cloud learning
systems. Section VI presents the proposed architecture of
cloud learning system with security as a service model.
Finally, section VII concludes the paper with future scope.
II. CLOUD COMPUTING
Cloud computing is a model for providing scalable, on-demand access to a shared pool of configurable computing resources (networks, servers, storage and applications) that can be provisioned and released rapidly with minimal management effort [3-7]. Users can use the computing resources on demand and pay according to usage. It is a model in which the services provided are the computing resources themselves [8]. It shifts the responsibility of configuring, deploying and maintaining computing infrastructure from clients to cloud providers [9], who provide an interface for clients to interact with the resources as if they were their own standalone resources. The user does not necessarily know the details of the location or configuration of these resources; they are provided with virtualized computer resources hosted in the cloud [10].
Figure 1 depicts the different services and deployment
models of cloud architecture. Various cloud services are
presented into three models.
Infrastructure as a Service(IaaS)
Platform as a Service(PaaS)
Software as a Service(SaaS)
A. IaaS
IaaS providers supply a virtual server instance and storage, as well as application program interfaces (APIs) that let users migrate workloads to a virtual machine (VM). The infrastructure is not managed or controlled by the client in this model, but the client has control over the operating system and storage.
B. PaaS
PaaS providers host development tools on their
infrastructures. Users access those tools over the internet
using APIs. It enables the OS and middleware services to be
delivered from a managed source over a network.
C. SaaS
It enables software to be delivered from a host source over a
network as opposed to installations or implementations.
Users can access SaaS applications and services from any
location using a computer or mobile device that has internet
access.
Different cloud deployment models are as follows:
Private cloud
Public cloud
Community cloud
Hybrid cloud
FIGURE 1: CLOUD SERVICE MODEL
Key features of Cloud computing are :
Resource Pooling
Broad network access
Rapid elasticity
On demand self service
Measured service
Quality of service
III. ELEARNING SYSTEM
E-Learning is one of the most popular technologies developed to make the traditional way of education and learning easier with the help of software applications and virtual learning environments. The "E" in E-Learning stands for the electronic way of learning. Various names are used to express the term E-Learning in the technological world, such as Computer Based Training (CBT), Internet Based Training (IBT), and Web Based Training (WBT) [11]. These terms express the way E-Learning teaches the lesson to the e-learner. E-learning is delivered through a network-enabled computer and transfers knowledge from internet sources to the end user's machine [12]. Through an E-Learning environment, students get access to the materials and tools relating to their studies. Two important E-learning environments are:
Virtual learning environment: Students are able to get a face-to-face classroom environment through computer applications with the help of web sources.
Personal learning environment: Allows e-learners to manage and modify their own learning.
FIGURE 2: E-LEARNING SYSTEM
The architecture of a distributed e-learning system includes
software components, like the client application, an
application server and a database server and the necessary
hardware components. Traditional learning in the remote
and rural areas has many difficulties like shortage of
teachers and problems in quality of teaching. Such problems
can be overcome by eLearning. Educated academicians can
give their input for educating rural students also.
IV. CLOUD-LEARNING SYSTEMS
Cloud-learning means using cloud computing technology for e-learning systems. Cloud-based e-learning provides hardware and software resources to enhance the traditional e-learning infrastructure. Once the educational materials for e-learning systems are virtualized on cloud servers, these materials become available to students and other educational businesses on a rental basis from cloud vendors [11]. Benefits of cloud-learning systems are listed below.
1. Virtualization
2. Centralized data storage
3. Lower costs
4. Improved performance
5. Instant software updates
6. Easy monitoring
7. Improved document format compatibility
A cloud-learning system is divided into five layers [12]:
Hardware resource layer
Software resource layer
Resource management layer
Service layer
Business application layer
Figure 3 represents the architecture of a cloud-learning system without security as a service.
FIGURE 3: ARCHITECTURE FOR CLOUD-LEARNING SYSTEMS
V. SECURITY ISSUES IN CLOUD LEARNING SYSTEM
Security issues have significant importance in this technology, as addressing them builds users' trust in its reliability. Since cloud learning depends on web-based sources, numerous threats are waiting to attack e-learners and cloud-based e-learning technology through the internet [11]. Although cloud technology provides plenty of advantages to e-learning systems, its security is still in doubt because of the security issues and challenges of the digital world. Issues related to cloud-learning systems are as follows.
1. Confidentiality violation: An unauthorized party gaining
access of the assets present in the cloud learning system.
2. Integrity violation: An unauthorized party accessing and
tampering with an asset used in cloud learning system.
3. Denial of service: Prevention of legitimate access rights
by disrupting traffic during the transaction among the
users of elearning system.
4. Repudiation: Learner’s denial of participation in any
transaction of documents
VI. PROPOSED ARCHITECTURE
In the proposed architecture, security is provided to the user as a service, Security as a Service (SCaaS), as shown in Figure 4.
FIGURE 4: ARCHITECTURE FOR CLOUD-LEARNING SYSTEMS WITH SCAAS
The cloud learning system participants are:
1. End user: students
2. Cloud Service Provider (CSP): an organization that makes the service available
3. Cloud Service Requester (CSR): educational institute staff
4. Cloud Security Provider (CScP): provides the security services
5. Auditor: independent IT security assessor
The security related responsibilities of each of these users
are as follows:
End user or CSR
o Security awareness to everyone involved in the
system.
o Access agreements
o Malicious code protection
CSP
o Regular audit and monitoring to analyze, repair,
verify, track and capture malicious activity
o Monitoring for unauthorized configuration changes
o Utilizing monitoring tools to maintain a secure
information system environment.
o Backup and recovery
o Environmental controls for the customer and provider
o Physical access for customer and provider.
End user or CSP
o Account management
o Account enforcement
o User identification and authorization
o Device identification and authorization
o Authentication management
o Cryptographic key establishment and management
Auditor
o Security assessment
o Security certification
o Security accreditation
Security enhancing services which can be used to provide
security for cloud learning systems by the SCaaS are
described below.
A. Email Security
The CSP will provide email security as a service, securing inbound and outbound emails. This service can be used to protect against malicious attachments, which may affect the security of the cloud-learning system via email, and it makes email use more secure by enforcing corporate policies on spam and on the acceptable use of email.
B. Web gateway security
This is a real-time service: secure gateways operating via the cloud redirect web traffic to the cloud provider. It is accomplished by policy management, web access control, control of web traffic and backup of data.
C. Identity and access management
Identity management includes identity provisioning as well as de-provisioning: access to resources in cloud learning needs to be managed to allow responsible access and to deny access when it is no longer necessary for a user to have access to cloud resources.
Access management comprises the authentication and access control services. Learners should be authenticated, and their access managed, through developing trusted user profiles and policies to control cloud service access in a responsible and traceable manner.
Access and password management
Administration provisioning
Automated provisioning and de-provisioning
Multifactor authentication and governance
Reporting
Alerting and analytics
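As a toy illustration of provisioning, de-provisioning and authenticated role-based access (all function names, roles and parameters here are assumptions; the paper names no implementation), a stdlib sketch might look like:

```python
import hashlib
import hmac
import secrets

# Toy identity-and-access sketch: provision users with salted password
# hashes, check authentication plus a role-based access rule, and
# revoke access by de-provisioning. Illustrative only.

USERS = {}   # username -> (salt, password hash, role)

def provision(username, password, role):
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    USERS[username] = (salt, digest, role)

def deprovision(username):
    USERS.pop(username, None)    # revoke access when no longer needed

def can_access(username, password, required_role):
    if username not in USERS:
        return False
    salt, digest, role = USERS[username]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest) and role == required_role
```

A real deployment would add multifactor authentication, logging for the reporting and alerting services listed above, and automated provisioning workflows.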
D. Security Information and Event Management (SIEM)
SIEM aims to collect log and event data from both virtual and real networks, applications and systems. This information is then correlated and analyzed in the cloud for real-time reporting and alerting on potential threats and compliance issues.
E. Remote vulnerability and security assessment (RVSA)
The RVSA service covers the challenges of inventory assurance, architecture and configuration security, and logging and monitoring.
F. Intrusion management
The strategies and systems for prevention and detection of intrusion are implemented on cloud servers at entry points to the cloud for broad coverage. The enterprise environment is monitored at key vantage points to locate potential threats, and unauthorized access and traffic are blocked through either event-based detection or network traffic-based detection.
G. Encryption
Data can be secured at rest, in transit and in use with encryption. This should be a prerequisite when computing in the cloud, to ensure all valued information is secure and upholds its integrity.
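The integrity side of this requirement can be illustrated with a small stdlib sketch that attaches an authentication tag to data; encrypting the payload itself would additionally require a cryptography library (e.g. AES-GCM), which the paper does not name, so that part is left out here:

```python
import hashlib
import hmac

# Integrity protection sketch: attach an HMAC-SHA256 tag to data so that
# tampering at rest or in transit can be detected on verification.
# A full deployment would also encrypt the payload before tagging.

def tag(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, mac: bytes) -> bool:
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac)
```

Using `hmac.compare_digest` instead of `==` avoids leaking information through timing differences during verification.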
H. Disaster recovery and business continuity
This includes the procedures deployed to assure resiliency in the event of a disaster or any disruption in service, whether minor or major. Backups can be made at multiple locations, allowing for reliable failover and recovery.
I. Network security
Network security is accomplished through a combination of the services already mentioned as part of the SCaaS offering: identity and access management, web security and intrusion management.
The mentioned security services can be categorized into
different categories as given in table 1.
TABLE 1. SCaaS SECURITY POSTURE
No. | Domain | Posture marks (of Protective / Preventive / Detective / Reactive)
1 | Identity and access management | P P
2 | Data loss prevention | P
3 | Web security | P P P
4 | Email security | P P P
5 | Security assessment | P
6 | Intrusion management | P P P
7 | Security information and event management | P
8 | Encryption | P
9 | Business continuity and disaster recovery planning | P P
10 | Network security | P P P

VII. CONCLUSION
Cloud computing is a recently developed, advanced internet-based computing model. By combining cloud computing and e-learning, building a cloud-learning system opens up new ideas for the further development of e-learning.
The use of new technologies instead of traditional methods always lessens the manpower required, but it results in many security issues. In this paper, we discussed the security concerns of cloud-based e-learning and proposed an architecture to overcome the threats in cloud-learning systems by adding a security as a service model to the cloud-learning system. The responsibilities of each role in the system and the service model are specified. Security related to cloud computing technology can be considered a major research area, and future work can be done on the same.
REFERENCES
[1] Som Naidu, "E-Learning: A Guidebook of Principles, Procedures and Practices", 2nd Revised Edition, CEMCA, 2006.
[2] Xiang Tan, Bo Ai, "The Issues of Cloud Computing Security in High-speed Railway", International Conference on Electronic & Mechanical Engineering and Information Technology, 2011.
[3] H. Takabi, J. B. D. Joshi, G. Ahn, "Security and Privacy Challenges in Cloud Computing Environments", IEEE Security & Privacy Magazine, Vol. 8, IEEE Computer Society, pp. 24-31, 2010.
[4] Shivlal Mewada, Umesh Kumar Singh and Pradeep Sharma, "Security Enhancement in Cloud Computing (CC)", ISROSET-International Journal of Scientific Research in Computer Science and Engineering, Vol. 1, Issue 1, pp. 31-37, Feb 2013.
[5] Surabhi Shukla and Dharamjeet Kumar, "Cloud's Software-Security as a Service (S-SaaS) via Biometrics", International Journal of Computer Sciences and Engineering, Vol. 2, Issue 3, pp. 119-124, Mar 2014.
[6] Shaheen Ayyub and Devshree Roy, "Cloud Computing Characteristics and Security Issues", International Journal of Computer Sciences and Engineering, Vol. 1, Issue 4, pp. 18-22, Dec 2013.
[7] Sajjad Hashemi, Sayyed Yasser Hashemi, "Cloud Computing for E-learning with More Emphasis on Security Issues", International Journal of Computer, Control and Quantum and Information Engineering, Vol. 7, No. 9, pp. 607-612, 2013.
[8] Rajesh Piplode, Pradeep Sharma and Umesh Kumar Singh, "Study of Threats, Risk and Challenges in Cloud Computing", ISROSET-International Journal of Scientific Research in Computer Science and Engineering, Vol. 1, Issue 1, pp. 26-30, Feb 2013.
[9] A. A. Elusoji, L. N. Onyejegbu and O. S. Ayodele, "An Effective Measurement of Data Security in a Cloud Computing Environment", Afr. J. of Comp. & ICTs, Vol. 6, No. 2, pp. 67-76, 2013.
[10] cloud-basedlms-etec522, Weebly (2015, September 16). (Online). Available: http://cloud-basedlms-tec522.weebly.com/security.html
[11] Vishal Kadam, Makhan Kumbhkar, "Security in Cloud Environment", ISROSET-International Journal of Scientific Research in Computer Science and Engineering, Vol. 2, Issue 3, pp. 6-10, Jun 2014.
[12] D. Kasi Viswanath, S. Kusuma and Saroj Kumar Gupta, "Cloud Computing Issues and Benefits Modern Education", Vol. 12, Issue 10, Version 1, pp. 15-19, July 2012.
[13] Vishnu Patidar, Makhan Kumbhkar, "Analysis of Cloud Computing Security Issues in Software as a Service", International Journal of Scientific Research in Computer Science and Engineering, Vol. 2, Issue 3, pp. 1-5, Jun 2014.
[14] Cloudcomputingadmin.com, 2016, May 26. (Online). Available: http://www.cloudcomputingadmin.com/articles-tutorials/security/security-service-cloud-based-rise-part1.html
International Journal of Scientific Research in Computer Science and Engineering (Review Paper)
Volume-5, Issue-1, pp.41-44, February (2017) E-ISSN: 2320-7639
Security Issues on Online Transaction of Digital Banking
Wakil Ghori
Indore Indira School of Career Studies, Indore (MP), India
Available online at: www.isroset.org
Received 21st Dec 2016, Revised 5th Jan 2017, Accepted 1st Feb 2017, Online 28th Feb 2017
Abstract— The digital banking system has a broad range of benefits that add value to customer satisfaction in terms of superior service quality, and at the same time it gives banks a competitive advantage over other financial competitors. Presently, digital banking customers only require a smart device with access to the internet to use digital banking services, and they can access their accounts from anywhere in the world. However, more attention to digital banking security is required against fraudulent behavior, because the lack of control over security policies keeps digital banking untrusted for many customers. This paper presents challenges and security issues related to digital banking. Various types of cyber attacks, fraud strategies, and prevention methods used by digital banks are also presented. This research work studies the security and safety issues of online banking.
Keywords— Digital Banking, Hacking, Rootkits, Phishing, encryption, OTP, QR code
I. INTRODUCTION
At the basic level, internet banking can mean the setting up of a web page by a bank to give information about its products and services. At an advanced level, it involves the provision of facilities such as accessing accounts, transferring funds, and buying financial products or services online, as well as new banking services, such as electronic bill presentment and payment, which allow customers to pay and receive bills on a bank's website [1].
Digital banking is no longer a new phenomenon, as more and more financial institutions and banks worldwide adopt this system [2]. The most outstanding feature of online digital banking is its convenience. People are often too busy to spend their precious time standing in a queue at the bank; online banking gives them the ability to carry out banking transactions digitally, in the comfort of their homes or offices. People can do banking transactions sitting at home, at the office, or lying in bed at midnight, through computers or mobiles. There is no time restriction on banking operations, and no need to visit the bank premises to open a new bank account, check the account balance or transfer funds. Today, banks with digital banking experience provide more sophisticated online financial services, so digital security and privacy issues are of high concern, and banks should provide safer and more secure digital banking services.
The information technology revolution has brought stunning changes to the business environment. Perhaps no other institution has been influenced by advances in technology as much as banking and financial institutions [3]. As a result, the banking sector has taken on a totally new look in today's scenario: electronic funds transfer, electronic clearing systems, Automated Teller Machines (ATM), tele-banking, mobile banking and net banking are widely in use.
Digital banking is one of the gifts of technology to human beings. E-banking is a fast-spreading service that allows customers to use a computer or mobile to access account transactions from a remote location. Digital banking is also extremely beneficial to banks, as they do not have to acquire large office areas or hire additional staff to deal with customer demands. It is also beneficial for the environment, since it reduces paper usage. The popularity of online banking is good news not only for us and for financial institutions, but also for cyber criminals, who keep an eye on online banking customers [4]. Security is the major disadvantage of digital banking: although security features and encryption software protect your account, there will always be hackers smart enough to get into your account, misuse it and take money. Identity theft is one of the main drawbacks of online banking.
Rest of the paper is organized as follows: In section II, the
information related security threats in Online digital banking
is given. Section III includes the information on security tips
for safe online digital banking. In Section IV, we have
mentioned the points to improve the internet security. In
Section V, we have given the short summarization of the
related work. In Section VI we have proposed some
suggestions and recommendations. Section VII concludes the
paper with future scope.
II. ONLINE DIGITAL BANKING SECURITY
THREATS
The commercial banks have been facing a lot of problems due to online banking crimes. Some of them are enumerated below.
Malicious software:
Virus/Worm (programs that self-replicate or are sent over the internet by email and can damage your PC)
Trojans (programs that compromise computer security by intercepting passwords without the user's knowledge)
Man-in-the-Middle (MITM) attacks
Phishing (using a false name, website and address for fraudulent purposes)
SMSishing (SMS phishing)
Vishing (voice phishing)
Keyloggers
Rootkits (malicious software giving unauthorized administrator-level access without the real administrator noticing)
Unauthorized access (hacking)
Credit card fraud
Cross-site scripting
Password guessing
Website spoofing
Pharming (redirecting users to a fraudulent website)
Unencrypted transmission of data
III. IMPORTANT SECURITY TIPS FOR SAFE
ONLINE DIGITAL BANKING
There are a number of steps we can take for an extra
layer of protection to keep us safe online.
Protect your computer and mobile devices with up-
to-date security software and install regular security
and software updates.
Only use official Mobile Banking apps and only
download apps from an official app store.
Never log in to Online Banking through a link in an
email.
Create a password (or PIN) that is hard to guess.
Change your PIN or password immediately if you
think someone may have discovered it.
Don't give anyone your security details and never
write them down or store them on your mobile in a
way that might be recognized by someone else.
Never give your Personal Identification Number (or
password) and full security details to anyone who
call you, and never reveal them in an email or text
message.
Be cautious of opening attachments or links in
emails that you were not expecting or are unsure
about.
Banks or Financial Institutions never call you and
ask you to transfer money, so ignore such calls.
If your phone is lost or stolen, call your bank so
they can disable your Mobile Banking apps as a
precaution. Access your bank website only by typing the URL
in the address bar of your browser.
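The last two tips, never following emailed links and typing the bank URL yourself, amount to verifying which host you are about to talk to. A minimal sketch of that check, assuming a placeholder domain `examplebank.com`:

```python
# Sketch: checking whether a link actually points at your bank's domain.
# "examplebank.com" is a placeholder; a real bank may use several legitimate hosts.
from urllib.parse import urlparse

OFFICIAL_DOMAIN = "examplebank.com"

def is_official_bank_link(url: str) -> bool:
    """True only for HTTPS links whose host is the bank domain or a subdomain of it."""
    parts = urlparse(url)
    host = (parts.hostname or "").lower()
    return (parts.scheme == "https"
            and (host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN)))

print(is_official_bank_link("https://online.examplebank.com/login"))   # True
print(is_official_bank_link("https://examplebank.com.evil.io/login"))  # False
print(is_official_bank_link("http://examplebank.com/login"))           # False (not HTTPS)
```

Note how the second example shows why a naive substring check is dangerous: the fraudulent host merely *contains* the bank's name, so the comparison must anchor on the registered domain, not search for it.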
IV. MEASURES THAT IMPROVE INTERNET SECURITY
Install the latest version of the operating system, with its latest security features.
Keep antivirus definitions updated.
Always use the latest version of your web browser.
Ensure the firewall is enabled.
Keep antivirus signatures applied.
Scan your computer regularly with an antivirus to ensure that the system is free of viruses and Trojans.
Change your internet banking password at periodic intervals.
Always check the last log-in date and time on the post-login page.
Avoid accessing internet banking accounts from public places such as cyber cafes or shared PCs.
Use an OTP (One-Time Password) for sensitive digital transactions.
Figure 1: Sample image of the OTP generation process
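The OTP process pictured in Figure 1 is commonly built on a keyed hash of a moving counter, in the style of RFC 4226 (HOTP). A minimal sketch, with an illustrative shared secret (real deployments provision the secret securely and often derive the counter from the clock, as in TOTP):

```python
# Sketch of one-time password generation in the style of RFC 4226 (HOTP).
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a short numeric OTP from a shared secret and a moving counter."""
    msg = struct.pack(">Q", counter)                        # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Same secret + same counter give the same code on both sides; the counter
# (or time step) advances with each use, so an intercepted code expires.
print(hotp(b"12345678901234567890", 0))  # RFC 4226 test vector -> "755224"
print(hotp(b"12345678901234567890", 1))  # RFC 4226 test vector -> "287082"
```

Because each code is valid for a single counter value (or short time window), a phished or keylogged OTP is useless for the next transaction, which is exactly why banks pair it with the static password.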
Use a QR code (Quick Response Code) for fund transfers.
Figure 2: Sample image of QR code scanning
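A payment QR code, as in Figure 2, simply encodes a text payload that the scanning app interprets. A sketch that composes a UPI-style deep link (the payee details are made up, and an actual QR image would be rendered from this string by a separate library such as the `qrcode` package):

```python
# Sketch: building the text payload that a payment QR code encodes.
# The "upi://pay" scheme is one common format; the payee details are illustrative.
from urllib.parse import urlencode

def upi_payment_uri(payee_vpa: str, payee_name: str,
                    amount: str, currency: str = "INR") -> str:
    """Compose a UPI deep link; a QR library would render it as an image."""
    params = urlencode({"pa": payee_vpa,   # payee virtual payment address
                        "pn": payee_name,  # display name
                        "am": amount,      # transaction amount
                        "cu": currency})   # currency code
    return "upi://pay?" + params

uri = upi_payment_uri("merchant@upi", "Example Store", "499.00")
print(uri)
```

The security benefit is that the customer's app, not the customer, reads and fills in the payee details, which removes a class of typing and shoulder-surfing errors, though a tampered or pasted-over QR sticker remains a real-world attack to watch for.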
V. RELATED WORK
Online banking has become increasingly important to the profitability of financial institutions, as well as adding convenience for their customers. As the number of customers using online banking increases, online banking systems become more desirable targets for criminals to attack. To maintain their customers' trust and confidence in the security of their online bank accounts, financial institutions must identify how attackers compromise accounts and develop methods to protect them. The unique aspect of security in the banking industry is that the security posture of a bank does not depend solely on the safeguards and practices implemented by the bank; it is equally dependent on the awareness of the users of the banking channel and the quality of end-user terminals [5].
Emeka Nwogu and McChester Odoh, in their paper "Security Issues Analysis on Online Banking Implementations in Nigeria", note that with the help of internet banking, many transactions can be executed by the account holder. When small transactions such as balance inquiries or records of recent transactions are to be processed, the internet banking facility proves to be very handy. The concept of internet banking has thus become a revolution in the field of banking and finance [6].
Tejinder Pal Singh Brar mentions in his paper that electronic banking is a new technology with many capabilities but also many potential problems, so users are hesitant to use the system. The use of electronic banking has raised concerns from different perspectives: government, businesses, banks, individuals and technology [7].
Panida Subsorn and Sunsern Limwiriyakul introduce
their paper with the words: "Most industries have deployed internet technologies as an essential part of their business operations. The banking industry is one of the industries that has adopted internet technologies for its business operations and in its plans, policies and strategies to be more accessible, convenient, competitive and economical as an industry. The aim of these strategies was to provide internet banking customers the facilities to access and manage their bank accounts easily and globally. Nevertheless, there are inherent information security threats and risks associated with the use of internet banking systems that can be variously classified as low, medium and high. In particular, the confidentiality, privacy and security of internet banking transactions and personal information are the major concerns for both the banking industry and internet banking customers" [8].
VI. SUGGESTIONS AND RECOMMENDATIONS
The following suggestions are recommended for enhancing the digital banking services that banks offer to their customers.
Banks should take essential steps to build awareness among people about the advantages of the digital banking services available in the banks.
Many bank customers have not availed themselves of internet banking services because they do not trust the internet channel, presuming it to be complicated. Banks should therefore train customers to get acquainted with the system.
Internet banking is convenient and easy to use, but customers are afraid of adopting these services because they think that using them is complicated. So, bank personnel should provide on-site training to bank customers who intend to use online banking services.
Banks should regularly improve their internal security mechanisms to provide privacy and security for customers' transactions.
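One concrete example of such an internal security mechanism, sketched below under illustrative parameters (this is not a tuning recommendation), is storing only salted, deliberately slow password hashes, so that a leaked credential database does not directly expose customers' passwords:

```python
# Sketch: salted password hashing with scrypt (illustrative parameters only).
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = b"") -> tuple[bytes, bytes]:
    """Return (salt, digest); a fresh random salt is drawn if none is given."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess123", salt, stored))                      # False
```

The per-user salt defeats precomputed (rainbow-table) attacks, and the memory-hard scrypt function makes the password-guessing attacks listed in Section II expensive even against a stolen database.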
VII. CONCLUSION AND FUTURE SCOPE
Security is the most significant issue in digital banking. There are many ways to have secure communication via computer and mobile networks today, yet risk arises whenever bank account information is accessed without authorization. Many customers are still not comfortable with the online system, especially from the security point of view. In addition, financial institutions and banks also face domestic problems such as employee fraud. Many customers hesitate to deal with an online banking system because they are not sure of the quality of the products and services they will receive from banks. A banking system may also face problems due to a wrong choice of technology, insufficient controls and inappropriate systems. A wrong selection of technology may
cause financial loss, so it is always recommended to follow the security measures as suggested.
With the expansion of security technology and mechanisms for internet banking, as well as the continuous improvement of the security solutions of internet banking systems, internet banking is becoming more and more secure, and there will be a broad market for digital banking with secure services.
REFERENCES
[1] Rajpreet Kaur Jassal and Ravinder Kumar Sehgal, “Online
Banking Security Flaws: A Study”, International Journal of
Advanced Research in Computer Science and Software
Engineering, Volume-03, Issue-08, ISSN-2277 128X , August
2013.
[2] Elbek Musaev and Muhammed Yousoof, “A Review on Internet
Banking Security and Privacy Issues in Oman”, ICIT 2015- The
7th International Conference on Information Technology,
January 2015.
[3] Mr. Shakir Shaik and Dr. S.A. Sameera, “Security Issues in E-
Banking Services in Indian Scenario”, Asian Journal of
Management Sciences, Volume-02, Issue-03, pp.(28-30). ISSN:
2348-0351, March 29, 2014.
[4] "Security features in Internet Banking", newagebanking.com/finsec/modernizing-digital-security-to-protect-banks-from-fraud/, Jan 16, 2017.
[5] Kenneth Edge, "The Use of Attack and Protection Trees to Analyze Security for an Online Banking System", HICSS 2007 - 40th Annual Hawaii International Conference on System Sciences, Online ISSN: 1530-1605.
[6] Emeka Nwogu and McChester Odoh, "Security Issues Analysis on Online Banking Implementations in Nigeria", International Journal of Computer Science and Telecommunications, Volume-06, Issue-01, ISSN 2047-3338, January 2015.
[7] Tejinder Pal Singh Brar, Dr. Dhiraj Sharma, Dr. Sawtantar
Singh Khurmi, “Vulnerabilities in e-banking: A study of various
security aspects in e-banking”, International Journal of
Computing & Business Research, ISSN (Online): 2229-6166
[8] Panida Subsorn and Sunsern Limwiriyakul, “An Analysis of
Internet Banking Security of Foreign Subsidiary Banks in
Australia: A Customer Perspective”, IJCSI International Journal
of Computer Science Issues, Vol. 09, Issue 02, ISSN (Online):
1694-0814, March 2012.
AUTHOR'S PROFILE
Wakil Ghori received his Bachelor of Computer Science (Honours) degree in 2000 and Master of Computer Management (MCM) in 2003 from Devi Ahilya University, Indore (MP). He pursued an MCA from the Institute of Advanced Studies in Education (Deemed University), Rajasthan, in 2003. He worked as an Assistant Professor in Govt. Holkar Science College from 2004 to 2006 and in Renaissance College of Commerce and Management from 2006 to 2012. He has been working as an Assistant Professor at Indore Indira School of Career Studies, Indore (MP) since 2012. His areas of interest are Computer Programming, Digital Electronics, DBMS and Computer Networking.
Email: [email protected]