ISBN: 978-93-5291-969-7

Proceedings of the Second International Conference on Computing, Communication and Control Technology (IC4T-2018)

AIT, Thailand | ASSOCHAM

www.srmcem.ac.in


Contents

A Brief Overview of Advances in Metaheuristics and Connection with Unconventional Approaches (Invited Paper) ... 1
Roman Senkerik

Web based portal for database & analysis of physiological variability ... 6
Rajesh Kumar Jain, Sushma N. Bhat, Dr. Amithabh Dube, Abhishek Patil, Vidhi Jain and Dr. G. D. Jindal

Modelling and Simulation of Two Level Interleaved Boost Converter ... 10
Gaurav Kumar Srivastava, Anshuman Aditya, Ayush Mishra and Mr. Himanshu Bhushan

Application of Blockchain Technology in Cybersecurity ... 15
Faraz Zafar and Namrata Nagpal

Performance Analysis of Different Stages CS-VCO in 0.18µm CMOS Technology ... 18
Ashish Mishra and Indu Prabha Singh

Mobile Robot based Anti Personnel Landmine Detection and Diffusion ... 23
Prof. S. C. Tewari, Shivam Kaushlendra, Shivam Singh and Shraddha Saraf

Resource Optimization in C-RAN using D2D for 5G Wireless Communications ... 29
Divya Pandey, Alkesh Agrawal and Manju Bhardwaj

Metamaterial Perfect Absorbers ... 33
Alkesh Agrawal and Mukul Misra

Despeckling and Enhancement Techniques for Synthetic Aperture Radar (SAR) Images: A Technical Review ... 37
Ankita Bishnu, Ankita Rai and Vikrant Bhateja

Pre-Processing of Cough Signals using Discrete Wavelet Transform ... 42
Ahmad Taquee, Vikrant Bhateja, Adya Shankar and Agam Srivastava

Crime Prediction Using Data Analytics Tools: A Review ... 46
Shivam Maurya, Shivani Mohan Agarwal, Shivani Prasad and Dr. Jayant Mishra

Configurable User Interface: A Perspective from Interoperable Set Top Box ... 50
Bhoomi Kalavadiya, Pallab Dutta, Vipin Tyagi and Sridharan Balakrishnan

Morphological Filtering based Enhancement of MRI ... 54
Anuj Singh Bhadauria, Mansi Nigam, Anu Arya and Vikrant Bhateja

Car Health Monitoring Device Using Raspberry Pi ... 57
Suyash Gautam and Swarnima Shukla

A Review on Relay Selection Schemes in Cognitive Radio Networks ... 62
Deepmala Trivedi and Gopal Singh Phartiyal

Modelling and Performance Analysis of Distributed Generation Systems integrated with Distribution Grid ... 68
Nikita Gupta

A Peek into Renewable Energy Systems of India ... 72
Dr. Nandita Kaushal

Cognitive Trading Framework ... 75
Yuan Huang, Kamal Kumar Rathinasamy and Neelabh Srivastava

An overview on Millimeter Wave MIMO for 5G Communication systems ... 78
Gaurav Sharma, Nidhi Agarwal and Vivek Mishra

LabVIEW Implementation of Hamming (N, K) Channel Coding Scheme ... 83
Alkesh Agrawal, Ashish Anand and Mukul Misra

The Application of AI: A Clustering Approach using K-Means Algorithm ... 87
Saurabh Dixit and Arun Kumar Singh

The Internet of Things (IoT) Revolution in India ... 91
Himani Kamboj

MQTT Protocol Simulation Using MIMIC Simulator ... 96
MV Ramachandra Praveen, Prakhar Srivastava, Dr. Ragini Tripathi and Razia Sultana

Cloud Computing In Indian Aerospace and Defense Sector: Relevance and Associated Challenges ... 101
Saurabh Kumar Gupta

Artificial Intelligence based Health Analysis System ... 108
Apoorva Chauhan, Arohi Srivastava and Atrey Tripathi

Fusion of Remote Sensing Images using Anisotropic Diffusion Technique ... 115
Anil Singh, Manish Kumar, Aman Gautam and Ashutosh Singhal

Development of Small Size Window Cleaning Robot using Wall Climbing Mechanism ... 118
Prof. S. C. Tewari, Sakshi Pandey, Saumya Singh and Saiyed Nabeel

WDF: Comparative study and applications ... 122
Akanksha Sondhi and Nidhi Agarwal

Page Ranking Framework: Personalization with Privacy Issues and Enhancement ... 127
Nidhi Saxena and Bineet Gupta

Massive MIMO – The Runway To 5G ... 132
N. K. Srivastava, Ram Krishna and S. Chandran

Comparison between ANN and ANN in offline signature verification ... 136
Utkarsh Shukla and Vikrant Bhateja


MESSAGE

Dr. A.P.J. ABDUL KALAM TECHNICAL UNIVERSITY, Uttar Pradesh, Lucknow

Prof. Vinay Kumar Pathak, Vice-Chancellor

It is a matter of great pleasure that Shri Ramswaroop Memorial Group of Professional Colleges, Lucknow is organizing the "Second International Conference on Computing, Communication and Control Technology (IC4T)" from 25-27 October 2018 in association with AIT Thailand, IEEE U.P. Section, ASSOCHAM Lucknow, and SRM University, Lucknow.

In recent times, communication has become an important tool in bringing the world closer.

Advanced Computing and Communication Technologies are the current areas of interest for

experts and specialists from Industry, universities and research institutes.

I hope the conference will provide an excellent forum for the exchange of information on the latest developments in the fast-emerging areas of advanced computing and communication, and provide a stimulus for interdisciplinary research.

I congratulate the organizers of the Institute and wish the International Conference a great success.

Sector-11, Jankipuram Extension Yojna, Lucknow-226 031 (U.P.) India, Website : www.aktu.ac.in

Office : +91 522 2772194 Fax : +91 522 2772189 E-mail : [email protected], [email protected]

Date : 03.10.2018


Established by UP State Govt. ACT 1 of 2012

Lucknow-Deva Road, Uttar Pradesh

MESSAGE

(Pankaj Agarwal) Chancellor,

Shri Ramswaroop Memorial University

It gives me immense pleasure that Shri Ramswaroop Memorial Group of Professional Colleges, Lucknow, in association with Shri Ramswaroop Memorial University, is organizing its second International Conference on Computing, Communication and Control Technology (IC4T) from 25 to 27 October 2018.

India is taking quantum leaps in almost every sphere of technology. The utilization of

global collaboration models on advancements in various disciplines of technology is a

trend which is fast becoming a reality. This has opened new vistas of research in numerous

fields of technology prominent among which are computing and communication.

Exchange of ideas amongst educators, scholars and students is inevitable today for the

furtherance of technology. I am sure the conference will provide an excellent forum for

sharing of knowledge and exchange of ideas in the field.

I extend my warm greetings and felicitations to the organizers and the participants of

the conference. I wish the conference a great success.


Pooja Agarwal Pro-chancellor

Shri Ramswaroop Memorial University

Established by UP State Govt. ACT 1 of 2012

Lucknow-Deva Road, Uttar Pradesh

MESSAGE

I am delighted that Shri Ramswaroop Memorial Group of Professional Colleges,

Lucknow in association with Shri Ramswaroop Memorial University is organizing its second

edition of International Conference on Computing, Communication and Control Technology

(IC4T) during 25-27 October, 2018.

The present is truly a time of technological resurgence. Technology has grown exponentially and has become part and parcel of everyday life in a manner never seen before.

Industry and government can definitely team up with universities to create meaningful

graduate research initiatives at this apt time. However, no such activity can see the light of the

day without exchange of ideas, objective questionings, deliberations, and shift in standpoints

occasioned by an international conference.

I am sure this conference will provide a wonderful opportunity to all the participants for

intersecting issues related to the current and upcoming developments in computing, control

and communication technologies. I am sure the deliberations at this conference will percolate

down to the work places and classrooms and give a new impetus to the understanding of

technology and framework of research.

I wish this international conference a scintillating success.


Address for correspondence: Shri Ramswaroop Memorial University, VIII-Hadauri, Post-Tindola, Lucknow-Deva Road, Uttar Pradesh-225003. Tel.: 9554953791, 9554953792, 9838904666, 9838785666, 9839771666.

Website: www.srmu.ac.in, E-mail: [email protected], [email protected]

Prof. (Dr.) A.K. Singh, Vice Chancellor, SRMU

Established by UP State Govt. ACT 1 of 2012

Lucknow-Deva Road, Uttar Pradesh

MESSAGE

It is expected that India will enter the league of developed nations in the next 10 to 15 years. This expectation is based upon the prediction of the International Monetary Fund and other reputed world financial institutions that the economic growth of India will continue at 7% and above. Hence it is essential that our technical talent keep pace with fast-changing science and technology knowledge and solution-based research.

I am sure this International Conference, the second in the series, will generate ideas through quality research papers and discussions, especially in the areas of 'Emerging Disruption Technologies', and give a push to solution research with the active participation of faculty, research scholars and industries.


Established by UP State Govt. ACT 1 of 2012

Tiwariganj, Faizabad Road, Lucknow, Uttar Pradesh

MESSAGE

Prof (Col) RK Jaiswal, Director General

SRMGPC, Lucknow

I am pleased to learn that Shri Ramswaroop Memorial Group of Professional Colleges and Shri Ramswaroop Memorial University are organizing their second 'International Conference on Computing, Communication, and Control Technology' (IC4T-2018) at Lucknow from 25-27 October 2018.

I am sure this platform will provide an excellent opportunity to academia, industry and

research fraternity to come together for interchange of technical knowledge and

collaborations. I hope this congregation of scholars and experts will help in advancement of

knowledge, and give a fillip to research in the ever evolving field of computing and

communication.

I wish the conference a grand success.



A Brief Overview of Advances in Metaheuristics and Connection with Unconventional Approaches (Invited Paper)

Roman Senkerik

Faculty of Applied Informatics, Tomas Bata University in Zlin, Nam T.G. Masaryka 5555, 760 01 Zlin, Czech Republic
[email protected]

Abstract - This brief invited overview paper focuses on recent advances in original hybridizations and unconventional approaches for metaheuristic optimization algorithms. It discusses the modern swarm algorithms, focusing mainly on the highly potential Self-Organizing Migrating Algorithm (SOMA). Further, the concept of chaos-based optimization in general, i.e., the influence of chaotic sequences on population diversity and overall metaheuristics performance, is briefly introduced. Also, the non-random processes used in evolutionary algorithms, as well as examples of evolving complex network dynamics as an unconventional tool for the visualization and analysis of the population in popular optimization metaheuristics, are given here. Finally, this survey outlines Analytic Programming (AP) as a novel framework for symbolic regression. This review should inspire researchers to apply such methods and take advantage of possible performance improvements in optimization tasks.

Keywords – Optimization; Metaheuristics; Evolutionary algorithms; Complex networks; Chaotic systems; SOMA; Analytic Programming

I. INTRODUCTION

This brief survey explores the recently published unconventional synergy of several different research fields belonging to the computational intelligence paradigm: stochastic processes, complex chaotic dynamics, complex networks (CN), and metaheuristic algorithms, specifically evolutionary computation techniques (ECTs). The algorithms of interest in the referred works are Differential Evolution (DE) [1], Particle Swarm Optimization (PSO) [2], the Self-Organizing Migrating Algorithm (SOMA) [3], and others.

The motivation behind this short survey is quite simple. In recent decades, metaheuristic techniques have become well-established and frequently used tools for solving a wide portfolio of engineering and research optimization tasks with various levels of complexity in both real and discrete domains. Despite the fact that ongoing research has brought many powerful and robust metaheuristic algorithms, researchers have to deal with the well-known phenomenon of the so-called no free lunch theorem [5], forcing them to test various methods, techniques, adaptations and parameter settings to reach acceptable results. The importance of finding a well-performing algorithm grows together with the increase in dimensionality and the number of complex objectives in current optimization tasks. Moreover, it is necessary to emphasize that most of these methods are inspired by natural evolution, and their development can itself be considered a form of evolution. It is mentioned in [6] that even incremental steps in algorithm development, including failures, may be the inspiration for the development of robust and powerful ECTs. Also, it is always advisable to focus on simplifying algorithms, as stated in [7].

This short survey introduces several simple, yet successful, modifications and unconventional approaches applied to metaheuristic algorithms, and it represents a brief summarization of the author's own research [8] - [11], supported by many other references to (not only) similar recent studies.

II. MODERN SWARM ALGORITHMS

The swarm intelligence phenomenon has been attracting researchers over the recent decade. Without an actual leader, swarms can make intelligent decisions. Simulating and mimicking many behavior patterns have been successfully used in the optimization field. The most well-known algorithm is ant colony optimization (ACO) [12], followed by PSO [2], SOMA [3], and the Firefly Algorithm (FA) [4]. Later, a broad set of algorithms mimicking various biological swarm systems was introduced, like Cuckoo Search [13], the Grey Wolf Optimizer [14], the artificial bee colony (ABC) [15], and the bat algorithm [16].

Nevertheless, the number of algorithms produced by the research community is huge, and the contribution to the research field is minor in most cases. Thus, many critical reviews have been published, noting that there may exist more bio-inspired metaheuristic algorithms than problems that need to be solved [17].



One of the perspective algorithms, with a clever mix of exploration and exploitation abilities and high potential for further updates and performance upgrades, is SOMA [3], which is described in detail below.

A. SOMA

SOMA [3] works with groups of individuals (a population) whose behavior can be described as a competitive-cooperative strategy. The construction of a new population of individuals is not based on evolutionary principles (two parents produce offspring) but on the behavior of a social group solving a task (looking for a food source, etc.). In the case of SOMA, there is no velocity vector as in PSO; only the positions of individuals in the search space are changed during one generation, here called the migration loop. The principle is simple. In every migration loop the best individual is chosen, called the Leader. An active individual from the population moves in the direction of the Leader in the search space. The movement consists of jumps determined by the Step parameter, up to the final position given by the PathLength parameter. For each step (jump), the cost function for the actual position is evaluated and the best value is saved. At the end of the process, the position of the individual with the best cost value is chosen. The main principle is depicted in Fig. 1.
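The migration loop just described can be sketched in code. The following minimal AllToOne-style implementation is only an illustrative sketch: the PRT perturbation vector of full SOMA is omitted, and the sphere cost function, parameter values, and function names are assumptions rather than the reference implementation of [3].

```python
import numpy as np

def soma_all_to_one(cost, bounds, pop_size=20, dim=2,
                    path_length=3.0, step=0.11, migrations=50, seed=1):
    """Minimal SOMA sketch: in each migration loop every active individual
    jumps towards the Leader (best individual) and keeps its best position."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fitness = np.array([cost(x) for x in pop])
    for _ in range(migrations):                  # migration loops
        leader = pop[np.argmin(fitness)].copy()  # the Leader
        for i in range(pop_size):
            if np.allclose(pop[i], leader):
                continue                         # the Leader does not move
            best_x, best_f = pop[i].copy(), fitness[i]
            t = step
            while t <= path_length:              # jumps given by Step/PathLength
                trial = np.clip(pop[i] + (leader - pop[i]) * t, lo, hi)
                f = cost(trial)
                if f < best_f:                   # save the best visited position
                    best_x, best_f = trial, f
                t += step
            pop[i], fitness[i] = best_x, best_f
    best = np.argmin(fitness)
    return pop[best], float(fitness[best])

sphere = lambda x: float(np.sum(x ** 2))         # assumed toy cost function
best_x, best_f = soma_all_to_one(sphere, (-5.0, 5.0))
```

On the toy sphere function the population quickly contracts around the Leader; the sketch illustrates the competitive-cooperative behavior, not state-of-the-art performance.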

III. CHAOS AND NON-RANDOM DRIVEN METAHEURISTICS

The key operation in metaheuristic algorithms is randomness. Together with the persistent development of metaheuristic algorithms, the popularity of hybridizing them with deterministic chaos grows every year, due to properties like ergodicity, stochasticity, self-similarity, and density of periodic orbits. Also, vice versa, the metaheuristic approach to chaos control/synchronization has become more popular in recent years. Recent research on the chaotic approach for metaheuristics mostly uses various chaotic maps straightforwardly in place of pseudo-random number generators (PRNG).

Figure 1. Main SOMA principle

The original concept of embedding chaotic dynamics into ECTs as a chaotic pseudo-random number generator (CPRNG) is given in [18], followed by numerous successful implementations in the PSO and DE algorithms [19] - [24]. Recently, the chaos-driven heuristic concept has also been utilized in many other modern swarm-based algorithms [25] - [27].

A. Chaos as the CPRNG

The general idea of a CPRNG is to replace the default PRNG with a chaotic system. The following nine well-known and frequently studied discrete dissipative chaotic maps were used as CPRNGs for various metaheuristics: Arnold Cat Map, Burgers Map, Delayed Logistic Map, Dissipative Standard Map, Henon Map, Ikeda Map, Lozi Map, Sinai Map, and Tinkerbell Map. With the typical settings and definitions as in [28], these systems exhibit typical chaotic behavior. Also, chaotic flows and oscillators have been widely studied, as well as other physical or chemical phenomena showing chaos [8].
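As a minimal sketch of the CPRNG idea, a chaotic map can be iterated and its state returned in place of samples from the default PRNG. The logistic map is used below purely because its r = 4 regime stays in [0, 1] without rescaling; it is an illustrative choice, not one of the nine maps listed above (those use the settings from [28]).

```python
class ChaoticPRNG:
    """CPRNG sketch: iterates the logistic map x_{n+1} = r * x_n * (1 - x_n);
    for r = 4 the trajectory is chaotic and stays inside [0, 1]."""

    def __init__(self, x0=0.7, r=4.0):
        self.x, self.r = x0, r

    def random(self):
        # one map iteration per requested sample, as a drop-in
        # replacement for random.random()
        self.x = self.r * self.x * (1.0 - self.x)
        return self.x

    def uniform(self, lo, hi):
        # scaled sample, e.g. for population initialization or mutation
        return lo + (hi - lo) * self.random()

gen = ChaoticPRNG(x0=0.7)
samples = [gen.random() for _ in range(1000)]
```

Note that the resulting sequence is not uniformly distributed (the r = 4 logistic map has an arcsine-shaped invariant density), which is exactly the kind of property that influences population diversity when such a generator drives a metaheuristic.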

B. Non-Random processes and ECTs

As stated above, random processes are an important part of ECTs. An interesting study [29] discusses whether random processes are actually needed in ECTs. Simple experiments revealed that random number generators can be replaced by deterministic processes with short periodicity (like the sin function). The authors claim that the advantages are the possibility to repeat an experiment, the analysis of the algorithm's full path on the searched fitness landscape, and the possibility of easier construction of proofs and related conclusions.
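A sketch of that idea: the generator below replays a deterministic sine-based sequence, so every run of an algorithm driven by it is exactly repeatable. The class name and the constant k are illustrative assumptions, not the exact process used in [29].

```python
import math

class SinGenerator:
    """Deterministic stand-in for a PRNG: samples |sin(k * n)| for
    n = 1, 2, 3, ...  Rerunning from scratch replays the same sequence,
    which makes experiments repeatable and search paths analyzable."""

    def __init__(self, k=1.0):
        self.k, self.n = k, 0

    def random(self):
        self.n += 1
        # modulo keeps the value strictly below 1.0 in the boundary case
        return abs(math.sin(self.k * self.n)) % 1.0

gen_a = SinGenerator(k=1.0)
gen_b = SinGenerator(k=1.0)
run_a = [gen_a.random() for _ in range(100)]
run_b = [gen_b.random() for _ in range(100)]
```

Two generators with the same k produce identical runs, which is precisely the repeatability argument made in [29].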

IV. POPULATION AS THE COMPLEX NETWORK

In this chapter, which represents a follow-up to more detailed studies [10], [30], [31], a fusion of two different attractive areas of research, (complex) networks and ECTs, is described. Interactions in ECTs during the optimization process can be considered like user interactions in social networks, or simply people in society. It has been observed that networks generated by evolutionary dynamics show properties of CNs in certain time frames and conditions [32]. The CN approach is utilized to show the linkage between different individuals in the population. Each individual in the population can be taken as a node in the CN graph, where its links specify the successful exchange of information in the population.

The population is visualized as an evolving CN that exhibits non-trivial features, e.g., degree distribution, clustering, and various centralities. These features are important and can be utilized for adaptive population control as well as parameter control during the ECT's run. The initial studies [33], [34] describing the possibilities of transforming population dynamics into a CN were followed by the successful adaptation and control of the metaheuristic algorithm during the run


through the given CN frameworks [35] - [38].

Since the internal principles differ between "classical" evolutionary-based (DE) and swarm-based algorithms (PSO, FA, and SOMA), several different approaches for capturing the population dynamics have been developed and tested [39] - [44]. In the case of "classical" evolutionary algorithms, there is a direct link between parent solutions and offspring; thus we capture the spreading of positive information within the population. In the case of swarm algorithms, it depends on the inner swarm mechanisms, but mostly it is possible to capture the communications within the swarm based on the information updating driven by points of attraction. An illustrative example is given in Fig. 2.
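The linkage capture described in this chapter can be sketched with a plain edge list: each accepted trial solution adds directed edges from the individuals that donated information to the one that was replaced, and degree counts then act as a simple centrality feature. All names and the example event list below are illustrative assumptions, not the exact frameworks of the cited studies.

```python
from collections import Counter

def build_interaction_network(events):
    """Build the population interaction network from successful information
    exchanges. `events` holds (donor, receiver) index pairs, one edge per
    accepted trial; nodes are individuals of the population."""
    edges = Counter()
    in_degree, out_degree = Counter(), Counter()
    for donor, receiver in events:
        edges[(donor, receiver)] += 1
        out_degree[donor] += 1      # how often the donor spread information
        in_degree[receiver] += 1    # how often the receiver was improved
    return edges, in_degree, out_degree

# DE-style example: individuals 0 and 3 donated difference-vector information
# to a successful trial replacing individual 5, giving edges (0,5) and (3,5)
events = [(0, 5), (3, 5), (0, 2), (1, 2), (0, 4)]
edges, in_deg, out_deg = build_interaction_network(events)
hub = max(out_deg, key=out_deg.get)   # most influential individual
```

Because the node count equals the population size, features computed on such a network stay independent of the search-space dimensionality.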

V. ANALYTIC PROGRAMMING

AP [45] is a novel approach to symbolic structure synthesis which uses ECTs for its computation. Since it can utilize any metaheuristic (i.e., evolutionary/swarm-based algorithm) and can easily be applied in any programming language, it can be considered an open symbolic regression framework. It has been proven on numerous problems to be as suitable for symbolic structure synthesis as Genetic Programming [46] or Grammatical Evolution (GE) [47].

The basic functionality of AP is formed by three parts: the General Functional Set (GFS), Discrete Set Handling (DSH) and Security Procedures (SPs). The GFS contains all elementary objects which can be used to form a program, DSH carries out the mapping of individuals to programs, and SPs are implemented in the mapping process to avoid mapping to pathological programs, and in the cost function to avoid critical situations. Details of the mapping procedure can be found in [45].
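Since the figure with the original mapping example is not reproduced here, the DSH idea can be sketched in code: each integer of the individual selects an element of the GFS, and the arguments of a selected function are filled from the following positions. The tiny function set, the wrap-around indexing, and the terminal substitution standing in for the security procedures are illustrative assumptions, not the exact GFS or SPs of [45].

```python
# Illustrative GFS: (symbol, arity) pairs; terminals have arity 0
GFS = [("+", 2), ("*", 2), ("sin", 1), ("x", 0), ("1", 0)]

def ap_map(individual):
    """Map an integer individual to a symbolic expression string.
    SP stand-ins: out-of-range indices wrap around via modulo, and an
    exhausted individual is closed with the terminal 'x', so the mapping
    never produces a pathological (unfinished) program."""
    pos = 0

    def build():
        nonlocal pos
        if pos >= len(individual):          # individual exhausted
            return "x"
        symbol, arity = GFS[individual[pos] % len(GFS)]
        pos += 1
        if arity == 0:
            return symbol
        args = [build() for _ in range(arity)]
        return f"{symbol}({', '.join(args)})"

    return build()

expr = ap_map([0, 2, 3, 4])   # -> "+(sin(x), 1)"
```

Here index 0 selects "+", index 2 selects "sin", and the remaining positions supply the terminals, yielding the prefix expression "+(sin(x), 1)".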

Figure 2. Adjacency graph for a short time snapshot (10 iterations) of the DE algorithm (left), with the Degree Centrality value highlighted by the size of the node (red color); and the corresponding community plot (right).

The last part is constant handling. The constant values in synthesized programs are usually estimated by a second (slave) ECT or by non-linear fitting. Alternatively, it is possible to use an extended part of the individual in the EA for the evolution of constant values.

VI. CONCLUSIONS

The primary aim of this work is to provide more insight into the recent advances and unconventional approaches for ECTs. The modern swarm algorithm SOMA, the influence of chaotic and non-random sequences on metaheuristics performance, the original concept of evolving CN analysis for a better understanding of the population dynamics in popular optimization metaheuristics, and finally the open framework AP for symbolic regression, are briefly reviewed here. Important conclusions and findings can be summarized as:

The presented SOMA represents a clever balance between exploration and exploitation abilities and can be considered to lie somewhere between the classical evolutionary algorithms and swarm algorithms (due to the "presence" of mutation and absence of "crossover").

Differential Evolution (DE) can be considered one of the best-performing algorithms, especially the modern state-of-the-art versions of the SHADE variant [1], as proved by recent IEEE CEC competition results.

The research of randomization issues and insights into the inner dynamics of metaheuristic algorithms has many times been addressed as essential and beneficial. Overall, chaotic versions seem to be very effective in finding the minimum values of the objective function. Although intensive benchmarking (CEC benchmark suites) showed mixed statistical results, in real-life optimization problems the chaos-driven heuristics perform very well, especially for some instances in the discrete domain. Nevertheless, there are many unexplored aspects and theories, like auto-parameter adaptation for attractor-driven metaheuristics, synchronization with optimized (dynamical) systems, and many more. The chaotic approach is a simple enhancement, keeping the used metaheuristics simple and fast, yet with possibly improved performance.

The CN approach for population dynamics analysis has several advantages. Firstly, dimensional independence: since the size of the network is given only by the number of nodes (individuals in the population), the resulting feature analyses are not directly connected to the dimensionality of the search space. Another advantage is that the CN framework can be used on almost any metaheuristic as a simple plug-in.

AP represents a modern symbolic regression framework that can effectively utilize the current best algorithms available.


Challenges in ECTs are:

Focusing on deeper analysis of population dynamics and the parameter adaptation mechanisms of the algorithms rather than developing many new variants and hybrids of algorithms (mainly swarm-based), thus keeping the metaheuristics fast, easy to understand and easy to implement. Researchers should focus on preserving population diversity, a clever mix between exploration and exploitation, and on trying to simplify the metaheuristic algorithms.

Bridging the gap between theory and practice. Developing better benchmark suites based on real problems can improve the reputation and applicability of metaheuristic algorithms in real life. In many cases, algorithms that excel in benchmark tests fail in real optimization engineering tasks [48].

Dealing with large-scale, many-objective and "expensive" simulation-based problems. Together with the development of SW/HW, computers and simulators, engineers are solving more and more complex problems with higher precision. Thus, there is a need for constant development of powerful, fast, preferably parallel algorithms, further surrogate-assisted approaches, and frameworks for highly complex, costly and many-objective problems.

Developing new theories and complexity/runtime analyses.

ACKNOWLEDGMENT

This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic within the National Sustainability Programme Project no. LO1303 (MSMT-7778/2014), and further by the European Regional Development Fund under the Project CEBIA-Tech no. CZ.1.05/2.1.00/03.0089. This work is also based upon support by COST (European Cooperation in Science & Technology) under Action CA15140 (ImAppNIO) and Action IC1406 (cHiPSet). The work was further supported by resources of A.I.Lab at the Faculty of Applied Informatics, Tomas Bata University in Zlin (ailab.fai.utb.cz).

REFERENCES

[1] Das, S., Mullick, S. S., & Suganthan, P. N. (2016). Recent advances in differential evolution–an updated survey. Swarm and Evolutionary Computation, 27, 1-30.

[2] Engelbrecht, A. P. (2010, September). Heterogeneous particle swarm optimization. In International Conference on Swarm Intelligence (pp. 191-202). Springer, Berlin, Heidelberg.

[3] Zelinka, I. (2016). SOMA—Self-organizing Migrating Algorithm. In Self-Organizing Migrating Algorithm (pp. 3-49). Springer, Cham.

[4] Fister, I., Fister Jr, I., Yang, X. S., & Brest, J. (2013). A comprehensive review of firefly algorithms. Swarm and Evolutionary Computation, 13, 34-46.

[5] Droste, S., Jansen, T., & Wegener, I. (1999, July). Perhaps not a free lunch but at least a free appetizer. In Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation-Volume 1 (pp. 833-839). Morgan Kaufmann Publishers Inc.

[6] Piotrowski, A. P., & Napiorkowski, J. J. (2018). Step-by-step improvement of JADE and SHADE-based algorithms: Success or failure?. Swarm and Evolutionary Computation.

[7] Piotrowski, A. P., & Napiorkowski, J. J. (2018). Some metaheuristics should be simplified. Information Sciences, 427, 32-62.

[8] Senkerik, R., Zelinka, I., & Pluhacek, M. (2017). Chaos-Based Optimization-A Review. Journal of Advanced Engineering and Computation, 1(1), 68-79.

[9] Zelinka, I., Lampinen, J., Senkerik, R., & Pluhacek, M. (2018). Investigation on evolutionary algorithms powered by nonrandom processes. Soft Computing, 22(6), 1791-1801.

[10] Senkerik, R., Zelinka, I., Pluhacek, M., & Viktorin, A. (2016, October). Study on the development of complex network for evolutionary and swarm based algorithms. In Mexican International Conference on Artificial Intelligence (pp. 151-161). Springer, Cham.

[11] Senkerik, R., Viktorin, A., Pluhacek, M., & Kadavy, T. (2018, May). Population Diversity Analysis for the Chaotic Based Selection of Individuals in Differential Evolution. In International Conference on Bioinspired Methods and Their Applications (pp. 283-294). Springer.

[12] Dorigo, M., & Birattari, M. (2011). Ant colony optimization. In Encyclopedia of machine learning (pp. 36-39). Springer, Boston, MA.

[13] Yang, X. S., & Deb, S. (2009, December). Cuckoo search via Lévy flights. In Nature & Biologically Inspired Computing, 2009. NaBIC 2009. World Congress on (pp. 210-214). IEEE.

[14] Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in engineering software, 69, 46-61.

[15] Karaboga, D., & Basturk, B. (2007). A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. Journal of global optimization, 39(3), 459-471.

[16] Yang, X. S. (2010). A new metaheuristic bat-inspired algorithm. In Nature inspired cooperative strategies for optimization (NICSO 2010) (pp. 65-74). Springer, Berlin, Heidelberg.

[17] Fister Jr, I., Yang, X. S., Fister, I., Brest, J., & Fister, D. (2013). A brief review of nature-inspired algorithms for optimization. arXiv preprint arXiv:1307.4186.

[18] Caponetto R, Fortuna L, Fazzino S, Xibilia MG (2003) Chaotic sequences to improve the performance of evolutionary algorithms. IEEE Transactions on Evolutionary Computation 7 (3):289-304.

[19] Coelho LdS, Mariani VC (2009) A novel chaotic particle swarm optimization approach using Hénon map and implicit filtering local search for economic load dispatch. Chaos, Solitons & Fractals 39(2):510-518.

A Brief Overview of Advances in Metaheuristics and Connection with Unconventional Approaches (Invited Paper)

Proceedings of IC4T, 2018 5

[20] Davendra, D., Zelinka, I., Senkerik, R.: Chaos driven evolutionary algorithms for the task of PID control. Computers & Mathematics with Applications 60(4), 1088-1104 (2010).

[21] Zhenyu, G., Bo, C., Min, Y., & Binggang, C. (2006, September). Self-adaptive chaos differential evolution. In International Conference on Natural Computation (pp. 972-975). Springer, Berlin, Heidelberg.

[22] Pluhacek, M., Senkerik, R., Davendra, D.: Chaos particle swarm optimization with ensemble of chaotic systems. Swarm and Evolutionary Computation 25, 29-35 (2015).

[23] Pluhacek, M., Senkerik, R., Davendra, D., Oplatkova, Z. K., & Zelinka, I. (2013). On the behavior and performance of chaos driven PSO algorithm with inertia weight. Computers & Mathematics with Applications, 66(2), 122-134.

[24] Pluhacek, M., Senkerik, R., Viktorin, A., & Kadavy, T. (2018). Chaos-enhanced multiple-choice strategy for particle swarm optimisation. International Journal of Parallel, Emergent and Distributed Systems, 1-14.

[25] Fister Jr, I., Perc, M., Kamal, S. M., & Fister, I. (2015). A review of chaos-based firefly algorithms: perspectives and research challenges. Applied Mathematics and Computation, 252, 155-165.

[26] Metlicka, M., Davendra, D.: Chaos driven discrete artificial bee algorithm for location and assignment optimisation problems. Swarm and Evolutionary Computation 25, 15-28 (2015).

[27] Wang, G. G., Deb, S., Gandomi, A. H., Zhang, Z., & Alavi, A. H. (2016). Chaotic cuckoo search. Soft Computing, 20(9), 3349-3362.

[28] Sprott JC (2003) Chaos and Time-Series Analysis. Oxford University Press.

[29] Zelinka, I., Lampinen, J., Senkerik, R., & Pluhacek, M. (2018). Investigation on evolutionary algorithms powered by nonrandom processes. Soft Computing, 22(6), 1791-1801.

[30] Chen, G., & Zelinka, I. (2018). Evolutionary Algorithms, Swarm Dynamics and Complex Networks.

[31] Senkerik, R., Pluhacek, M., Viktorin, A., Kadavy, T., Janostik, J., & Oplatková, Z. K. (2018). A Review On The Simulation of Social Networks Inside Heuristic Algorithms. In ECMS (pp. 176-182).

[32] Skanderova, L., Fabian, T., & Zelinka, I. (2016, July). Small-world hidden in differential evolution. In Evolutionary Computation (CEC), 2016 IEEE Congress on (pp. 3354-3361). IEEE.

[33] Zelinka, I., Davendra, D., Lampinen, J., Senkerik, R., & Pluhacek, M. (2014, July). Evolutionary algorithms dynamics and its hidden complex network structures. In Evolutionary Computation (CEC), 2014 IEEE Congress on (pp. 3246-3251). IEEE.

[34] Davendra, D., Zelinka, I., Metlicka, M., Senkerik, R., & Pluhacek, M. (2014, December). Complex network analysis of differential evolution algorithm applied to flowshop with no-wait problem. In Differential Evolution (SDE), 2014 IEEE Symposium on (pp. 1-8). IEEE.

[35] Skanderova, L., & Fabian, T. (2017). Differential evolution dynamics analysis by complex networks. Soft Computing, 21(7), 1817-1831.

[36] Metlicka, M., & Davendra, D. (2015, May). Ensemble centralities based adaptive Artificial Bee algorithm. In Evolutionary Computation (CEC), 2015 IEEE Congress on (pp. 3370-3376). IEEE.

[37] Gajdos, P., Kromer, P., & Zelinka, I. (2015, December). Network visualization of population dynamics in the differential evolution. In Computational Intelligence, 2015 IEEE Symposium Series on (pp. 1522-1528). IEEE.

[38] Janostik, J., Pluhacek, M., Senkerik, R., Zelinka, I., & Spacek, F. (2016). Capturing inner dynamics of firefly algorithm in complex network—initial study. In Proceedings of the Second International Afro-European Conference for Industrial Advancement AECIA 2015 (pp. 571-577). Springer, Cham.

[39] Pluhacek, M., Janostik, J., Senkerik, R., Zelinka, I., & Davendra, D. (2016). PSO as complex network—capturing the inner dynamics—initial study. In Proceedings of the Second International Afro-European Conference for Industrial Advancement AECIA 2015 (pp. 551-559). Springer, Cham.

[40] Skanderova, L., Fabian, T., & Zelinka, I. (2017). Differential evolution dynamics modeled by longitudinal social network. Journal of Intelligent Systems, 26(3), 523-529.

[41] Viktorin, A., Senkerik, R., Pluhacek, M., & Kadavy, T. (2017, July). Towards better population sizing for differential evolution through active population analysis with complex network. In Conference on Complex, Intelligent, and Software Intensive Systems (pp. 225-235). Springer.

[42] Viktorin, A., Pluhacek, M., & Senkerik, R. (2016, September). Network based linear population size reduction in SHADE. In Intelligent Networking and Collaborative Systems (INCoS), 2016 International Conference on (pp. 86-93). IEEE.

[43] Senkerik, R., Viktorin, A., Pluhacek, M., Janostik, J., & Davendra, D. (2016, July). On the influence of different randomization and complex network analysis for differential evolution. In Evolutionary Computation (CEC), 2016 IEEE Congress on (pp. 3346-3353). IEEE.

[44] Skanderova, L., Fabian, T., & Zelinka, I. (2018). Analysis of causality-driven changes of diffusion speed in non-Markovian temporal networks generated on the basis of differential evolution dynamics. Swarm and Evolutionary Computation.


Web based portal for database & analysis of physiological variability

1Rajesh Kumar Jain, 2Sushma N. Bhat, 3Dr. Amithabh Dube, 4Abhishek Patil, 2Vidhi Jain and 5Dr. G. D. Jindal

1SO-G, Electronics Division, BARC, Trombay, Mumbai, India.
2Senior Research Fellow, BRNS sponsored project, BARC, Trombay, Mumbai, India.
3Professor, Department of Physiology, SMS Medical College, Jaipur, India.
4Junior Research Fellow, BRNS sponsored project, SMS Medical College, Jaipur, India.
5Professor, BME Department, MGM College of Engineering and Technology, Kamothe, Navi Mumbai, India.

[email protected], [email protected], [email protected], [email protected], [email protected] and [email protected]

Abstract—In view of the importance of variability analysis in the field of medicine, and the absence of a consolidated portal for storing raw data in a database and processing it using a web based application, a web based pilot database and variability analyzer has been developed. The portal allows a user to register and upload his data, which becomes part of the database after the administrator’s approval. Data can be segregated from the database based on age, sex, geographical location and clinical conditions. The user can download the application software as well as his data in revised format for processing on a local computer. In addition, the user can also download data uploaded by other portal users as per his requirement and use it for his research work. This not only brings uniformity to variability studies but also minimizes the data collection effort of users. The development of this web based portal is described in this paper.

Keywords—Physiological variability, web-portal, heart rate variability, fast Fourier transform.

I. INTRODUCTION

Variability is the sign of life; therefore higher variability in physiological parameters is generally an indicator of better health. The use of variability for the diagnosis of several diseases is age old in the Indian system of medicine known as Ayurveda. Modern medicine took cognizance of this concept during the second half of the last century and applied it using modern tools. The history of variability analysis dates back to 1965, when Hon and Lee [1] observed that diminution of fetal heart rate variations preceded fetal death and necessitated action for rapid delivery. Since then Heart Rate Variability (HRV) has gained popularity as a noninvasive research tool for physiological interpretation and exploration of clinical applications [2, 3, 4, 5, 6, 7, 8].

Heart rate variability analysis is usually carried out in the time domain or the frequency domain. In time domain analysis, the beat-to-beat time differences are calculated and the results are summarized by the standard deviation of the data: the smaller the deviation, the lower the variability. In frequency domain analysis, the Fast Fourier Transform (FFT) and autoregressive modelling are generally used. The power spectral density (PSD) thus obtained characterizes the variability for 24-hour data (long term variability) and 5-minute data (short term variability). Over the decades, three major components of the spectrum have been considered for variability analysis: the Very Low Frequency (VLF, 0.0 to 0.04 Hz), Low Frequency (LF, 0.04 to 0.15 Hz) and High Frequency (HF, 0.15 to 0.5 Hz) peaks. It is observed that sympathetic activity correlates with LF and parasympathetic activity with HF. Nonlinear analysis is also used, in the form of wavelet transforms, triangular indices and Poincaré plots.
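The band-power computation described above can be sketched as follows. This is an illustration, not the portal's actual implementation: the 2 Hz resampling rate and the synthetic RR series are assumptions made for the example, while the VLF/LF/HF band edges are those given in the text.

```python
import math

# Frequency bands (Hz) as defined in the text.
BANDS = {"VLF": (0.0, 0.04), "LF": (0.04, 0.15), "HF": (0.15, 0.5)}

def band_powers(x, fs):
    """Integrate a naive one-sided periodogram of x (sampled at fs Hz)
    over the VLF/LF/HF bands; returns absolute power per band."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]            # remove DC so VLF is not swamped
    powers = {name: 0.0 for name in BANDS}
    for k in range(1, n // 2 + 1):
        f = k * fs / n
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = (re * re + im * im) / n      # periodogram ordinate at bin k
        for name, (lo, hi) in BANDS.items():
            if lo < f <= hi:
                powers[name] += p
    return powers

# Synthetic, evenly resampled tachogram: 0.1 Hz (LF) and 0.25 Hz (HF) tones.
fs = 2.0                                 # assumed resampling rate
rr = [0.8 + 0.03 * math.sin(2 * math.pi * 0.1 * (i / fs))
          + 0.02 * math.sin(2 * math.pi * 0.25 * (i / fs)) for i in range(400)]
p = band_powers(rr, fs)
lf_hf_ratio = p["LF"] / p["HF"]          # sympatho-vagal balance indicator
```

With the tone amplitudes chosen above, the LF/HF ratio comes out near (0.03/0.02)² = 2.25, consistent with the text's pairing of LF with sympathetic and HF with parasympathetic activity.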

Physiological parameters investigated for variability analysis are heart rate (RR-interval), blood pressure (systolic, diastolic and mean), peripheral blood flow and peripheral pulse morphology. Among them, HRV has been investigated most, owing to the ease of recording raw data and the abundant availability of electrocardiographic (ECG) databases. Numerous methodologies have evolved for the assessment of HRV, and many web portals and mobile applications are available for the purpose [9, 10, 11, 12]. A significant amount of this research uses the MIT-BIH ECG database. During the last decade, the study of peripheral pulse variability has eased the application of variability analysis, as a single data acquisition yields HRV, peripheral blood flow variability and morphology index variability.

Physionet provides a toolkit for variability analysis of the electrocardiographic data available in its database. It neither permits analysis of data collected from other instruments nor permits users to upload their data into the database. Other portals available for the purpose permit variability analysis on any data collected by the user but lack a database facility. Bhabha Atomic Research Centre has been actively working in this area for the last two decades and has collected a large volume of data on different categories of subjects [13, 14, 15, 16, 17]. To facilitate research in this area, sharing this data with the community appeared an appropriate action. This will help researchers use the data already available in the database and acquire only the specific data needed for their investigations. With this in view, a web based portal for the database and the application software used for variability analysis has been developed. The following sections describe the web portal for the database and the processing software.

II. MATERIALS AND METHODS

To begin with, the portal has been envisaged to be useful to researchers in two respects:

• Free access to the data available in the database, as per the researcher's requirement;

• The ability to analyze any data acquired specifically for his research work, using the application software available in the portal.

In the process, the researcher's raw data automatically becomes part of the database, expanding it. This will not only give the research community access to the variability analyzer software but also reduce the data collection burden. For instance, a researcher who wants to study the manifestations of chronic renal failure on HRV need not collect data on control subjects, as these are already available in the database; he can also use data on chronic renal failure patients, if available in the database. Thus he has to collect data from only a small number of patients to complete his research investigation.

To accomplish these objectives, the portal has been designed such that the user first registers and then uploads his acquired data to the portal. His data files are then converted into an analyzable format within the portal. This step is essential, as the user's data compulsorily enters the portal and becomes part of the database after the administrator's approval. Pending the administrator's approval, the user can download all the data uploaded by him in the revised format, and he can also download required data from the database. He can then perform all analysis offline and complete his task. The web portal stores peripheral pulse data or ECG data, which can be used for the study of various physiological variabilities, viz. heart rate, peripheral blood flow and morphology index. Deviations in these variabilities can help in disease characterization.

The web portal is developed in PHP using the CodeIgniter framework. Currently, for the pilot study, the application has been deployed on a free domain server with hyperlink www.ePV-repo.website.tk. Apache is used as the web server and MySQL as the database for storing records. Client side requests are handled using JavaScript. Different user preferences control the data access. The web page also enables the user to download and process files locally; this flexibility facilitates localized data analysis and private data mining.

The processing software is developed on the Windows platform using LabWindows/CVI for the front end and data processing. Microsoft Access is used for storing the data in the database after analysis and for producing various reports. These files are stored locally with the user, enabling a quantitative analysis of the data.

III. DETAILS OF THE PORTAL

The portal has a log-in page and a dashboard, as shown in Figure 1. Different menu items are available for data search and display of records. The dashboard menu items differ depending on the logged-in user. Only a user with the role ‘Admin’ can see other users’ details, edit their passwords or delete user accounts.

‘User’ and ‘Technician’ can upload data to the database, whereas a ‘Researcher’ can only download data from the database. Data uploaded by a ‘User’ or ‘Technician’ becomes available for download only after approval from ‘Admin’.

Figure 1 shows the dashboard of the portal.

Figure 1. Dashboard of the portal

Supported data formats are ‘.dat’ files collected from the Peripheral Pulse Analyser. Data collected by any other instrument can also be uploaded as external files in ‘.csv’ format. These files need to contain only raw data, without any header information.

The data listing shows all the records available in the database. These records can be searched by ‘Patient name’, ‘Gender’, ‘Age’, ‘Disease’, ‘Collection center’ and the date the data was ‘Uploaded on’. Figure 2 shows the page where data entry and search can be initiated.

Figure 2. Data managing screen


IV. ANALYSIS SOFTWARE

ePV Analyser software can be downloaded from the portal for data analysis.

The software workflow is given below:

• Set parameters to search for peaks in the input file

• Load the file.

For frequency domain analysis:

• Identify the first peak in the signal and locate all peaks through the software

• Adjust any false peak marking manually

• Morphology index gets displayed

• Short Fast Fourier Transform is calculated and displayed

• Display panel consolidates all the calculated values: heart rate variability and morphology index graphs and the frequency spectrum. Figure 3 shows a screenshot of the same.

• Other parameters calculated are Average, Total power, Center frequency, Amplitude, Area in percentage and total area for each of these components. One has the flexibility to change the range of frequencies.

Figure 3. First graph (dZ) is the data with peaks marked; second graph (HR_dZ) is the RR-intervals; third graph (MI_dZ) shows the morphology index.

sFFT displays the short Fast Fourier Transform for the signal selected.

Figure 4 shows the calculated parameters.

For graphical analysis:

• Poincare plot for HRV and MI are calculated and plotted.

For statistical analysis:

• For each of the variability parameters, the Mean, SDNN, Variance and RMSSD are displayed.
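The time-domain statistics listed above can be sketched as follows. This is an illustration, not the portal's LabWindows/CVI code; whether SDNN uses the population or sample deviation is an assumed convention here.

```python
import math

def hrv_time_domain(rr_ms):
    """Mean, SDNN, Variance and RMSSD of an RR-interval series in ms.
    SDNN is taken here as the population standard deviation (an assumed
    convention; tools differ on dividing by n versus n-1)."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    var = sum((x - mean) ** 2 for x in rr_ms) / n
    sdnn = math.sqrt(var)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]   # successive differences
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"Mean": mean, "SDNN": sdnn, "Variance": var, "RMSSD": rmssd}

stats = hrv_time_domain([800, 810, 790, 805, 795])      # toy RR series (ms)
```

SDNN reflects overall variability while RMSSD weights beat-to-beat changes, which is why the two can differ markedly on the same recording.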

Figure 5. Poincare Plot and statistical results

RESULTS

The unique salient features of the web portal are as follows:

• Lakhs of peripheral pulse records, collected over thousands of man-days, are available for research. To ease uniform processing, the data processing software is also available

• User privilege management through ‘Admin’ login

• All uploaded data are arranged alphabetically and can be filtered by age, gender or disease of the subject; other parameters on which the data can be arranged are sampling rate, collection center, collection date and the uploading user, for easy access

• Processing software helps to display acquired data in time and frequency domain

• Different variabilities are derived from acquired data

• Calculated parameters are saved in Access database for further analysis and reporting

• Report generation includes all the variability data

• Display, storage and printing of data in standard format

• Easy adaptability by clinicians/ researchers


• Facility to compare data from the same disease group and the control group

• Analysis of the data in the validation phase, with user-defined reports/queries for comparing different variabilities across disease groups

• Flexibility to set the frequency ranges for VLF, LF and HF, so that peak can be shifted automatically

Acknowledgment

Authors are grateful to Board of Research in Nuclear Sciences, Department of Atomic Energy, Government of India, for sponsoring this research project. Authors are thankful to Principal, SMS Medical College Jaipur, Principal, M. G. M. College of Engineering and Technology Navi-Mumbai and Head, Electronics division, BARC, Mumbai for their continuous encouragement and support. Authors are thankful to Ms. Jharna Agarwal, Mr. Pranav Patil and Ms. Sneha Bansod for data collection at various centers.

REFERENCES

[1] Hon EH, Lee ST. Electronic evaluations of the fetal heart rate patterns preceding fetal death, further observations. Am J Obstet Gynec 1965; 87: 814–26

[2] Camm AJ, Malik M, Bigger JT Jr, Breithardt G, Cerutti S, Cohen RJ, Coumel P, Fallen EL, Kennedy HL, Kleiger RE, et al. Heart rate variability: Standards of measurement, physiological interpretation and clinical use. Task force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Circulation 1996 Mar; 93(5): 1043-1065.

[3] Bianchi A, Bontempi B, Cerutti S, Gianoglio P, Comi G and Natali Sora MG, Spectral analysis of heart rate variability signal and respiration in diabetic subjects, Med. and Biol. Eng. and Comput., Vol. 28, pp 205-211, 1990

[4] Pagani M, Lombardi F, Guzzetti S, Rimoldi O, Furlan R, Pizzinelli P, Sandrone G, Malfatto G, Dell’Orto S, Piccaluga E, et al. Power spectral analysis of heart rate and arterial pressure variabilities as a marker of sympatho-vagal interaction in man and conscious dog. Circ Res 1986 Aug; 59(2):178-193.

[5] Jindal GD, Sawant MS, Pande JA, Rohini A, Jadhwar P, Naik BB, Deshpande AK. Heart rate variability: Objective assessment of autonomic nervous system. MGM J Med Sci 2016;3(4): 198-205.

[6] Malliani, A., Lombardi, F., & Pagani, M. (1994). Power spectrum analysis of heart rate variability: a tool to explore neural regulatory mechanisms. British Heart Journal, 71(1), 1–2.

[7] Guidelines; Heart rate variability; European Heart Journal (1996) 17, 354–381.

[8] Acharya, U Rajendra & Joseph, Paul & Kannathal, N & Lim, Choo & Suri, Jasjit. (2007). Heart rate variability: A review. Medical & biological engineering & computing. 44. 1031-51. 10.1007/s11517-006-0119-0.

[9] Rhenan Bartels, Leonardo Neumamm, Tiago Peçanha, Alysson Roncally Silva Carvalho; BioMedical Engineering OnLine (2017) 16:110.

[10] Pichot V, Roche F, Celle S, Barthélémy J-C, Chouchou F. HRVanalysis: A Free Software for Analyzing Cardiac Autonomic Activity. Frontiers in Physiology. 2016;7:557. doi:10.3389/fphys.2016.00557.

[11] Mika P. Tarvainen, Juha-Pekka Niskanen, Jukka A. Lipponen, Perttu O. Ranta-aho, Pasi A. Karjalainen; Kubios HRV – Heart rate variability analysis software; Computer Methods and Programs in Biomedicine; Volume 113, Issue 1, January 2014, Pages 210-220

[12] Perrotta, Andrew & T. Jeklin, Andrew & Hives, Benjamin & E. Meanwell, Leah & Warburton, Darren. (2017). Validity of the Elite HRV Smart Phone Application for Examining Heart Rate Variability in a Field Based Setting. Journal of Strength and Conditioning Research. 31. 1. 10.1519/JSC.0000000000001841.

[13] Jindal GD, Nerurkar SN, Pednekar SA, Babu JP, Kelkar MD, Deshpande AK and Parulkar GB, Diagnosis of peripheral arterial occlusive diseases using impedance plethysmography, J. Postgrad. Med. Vol. 36, p.147, 1990

[14] Jindal GD, Ananthakrishnan TS, Mandlik SA, Sinha V, Jain RK, Kini AR, Nair MA, Kataria SK, Mahajan UA and Deshpande AK, Medical Analyzer for the study of physiological variability and disease characterization, External Report, BARC/2003/E/012

[15] Jindal GD, Jain RK, Sinha V, Mandalik SA, Sarade B, Tanawade P, Pithawa CK, Kelkar PM, Deshpande AK. Early detection of coronary heart disease using peripheral pulse analyzer. BARC Newslett 2012 May-Jun;(326):15:21.

[16] G. D. Jindal, Rajesh Kumar Jain, Sushma N. Bhat, Jyoti A. Pande, Manasi S. Sawant, Sameer K. Jindal & Alaka K. Deshpande (2017) Harmonic analysis of peripheral pulse for screening subjects at high risk of diabetes, Journal of Medical Engineering & Technology, 41:6, 437-443, DOI: 10.1080/03091902.2017.1323968

[17] Jain RK, Goyal S, Bhat SN, Rao S, Sakthidharan V, Kumar P, Rappayi SK, Jindal SK & Jindal GD. (2018). Development of Software for Automatic Analysis of Intervention in the Field of Homeopathy. The Journal of Alternative and Complementary Medicine. 24. 10.1089/acm.2017.0157.


Modelling and Simulation of Two Level Interleaved Boost Converter

Gaurav Kumar Srivastava, Anshuman Aditya, Ayush Mishra and Mr. Himanshu Bhushan
Electrical Engineering, SRMGPC, Lucknow

[email protected]

Abstract—The Multi Level Interleaved Boost Converter has an edge over the conventional Boost Converter in diminishing the ripple in input current and output voltage, and in increased efficiency and stability. It employs two or more Boost Converters connected in parallel. This paper presents the modelling and simulation of a Two Level Interleaved Boost Converter in the MATLAB environment. The State Space Averaging technique is used to determine the steady state values of the parameters, and Small Signal Modelling is used to derive the transfer function of the converter. A feedback control loop for the converter is realized with a PID controller's assistance.

Index Terms— Interleaved Boost Converter, Small Signal Modelling, State Space Averaging, PID

I. INTRODUCTION

The conventional step-up converter is employed to increase the voltage level, but it has some associated problems, e.g., stability issues at high duty ratio and high ripple content. Several topologies have been proposed to mitigate these problems. Paralleling of Boost Converters is one of them; it reduces the size of the filter components [5].

The performance improves as the number of parallel units/phases increases, but this makes the converter analysis quite complex. In this paper, a Two Level IBC is modelled and analyzed to provide a basic understanding of the operation of this converter.

The converter is realized by state space averaging approach [3-4] and analyzed by small signal analysis [2].

II. INTERLEAVED BOOST CONVERTER

The IBC is a combination of several similar boost converters connected in parallel. Each unit is controlled by switching signals of the same frequency but with a fixed phase shift between units.

Division of the input current among the paralleled converters makes the topology reliable. It also smooths both the current at the input side and the voltage at the output side, as a result of ripple cancellation, so that the size of the filter circuit elements can be reduced remarkably.

The two level IBC is shown in Figure 1. The two converters are connected in parallel but operate in an interleaved mode, i.e., the pulses given to the cells have a phase shift of 180˚.

Figure 1. 2 Level Interleaved Boost Converter

III. CONVERTER DESIGN

The design process of this converter [7] is similar to that of the normal step-up converter, but here the inductor value chosen is just greater than the critical value (say 1.25 times), because the cancellation of ripple content [6] means that inductor current ripple is not an issue.

Thus L′ = 1.25 × Lc and L1 = L2 = L = L′/2,

where Lc = D(1 − D)²R / (2f).

The capacitor value can be designed by considering the ripple voltage similar to the boost converter. The ripple content gets reduced due to interleaving. Therefore

C = D·Vo / (ΔVo × R × f).

Table I. Design Parameters of Two Level IBC

Parameter         Value
Power             120 W
Vin               12 V
L1 = L2 = L       8 µH
C                 24 µF
Frequency, f      100 kHz
Duty Ratio, D     0.4
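As a quick sanity check of the design relations above, one can plug the Table I values into the formulas for the critical inductance and the output ripple. This is a sketch: the ideal lossless boost relations Vo = Vin/(1 − D) and R = Vo²/P are assumptions, not values stated in the paper.

```python
# Design values from Table I (ideal, lossless converter assumed).
P, Vin, D, f = 120.0, 12.0, 0.4, 100e3
L_phase, C = 8e-6, 24e-6

Vo = Vin / (1 - D)                      # ideal boost conversion ratio
R = Vo ** 2 / P                         # equivalent load resistance
Lc = D * (1 - D) ** 2 * R / (2 * f)     # critical inductance Lc = D(1-D)^2 R / (2f)
ripple = D * Vo / (R * f * C)           # output ripple from C = D*Vo/(R*f*dVo)
```

The chosen 8 µH per phase sits comfortably above the computed critical value, consistent with the "just greater than critical" guideline in the text.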


IV. MODELLING AND ANALYSIS

The two converters must operate at the same duty ratio and carry equal currents [1]. With this assumption we have

d1 = d2 = d   and   iL1 = iL2 = i/N      (1)

For two level IBC, the number of parallel converters, N = 2.

The operation of IBC can be summarized in 4 modes of operation as explained below:

MODE 1:

For this mode of operation, switch S1 is closed and the switch S2 is open, due to which inductor current IL1 is increasing and IL2 is decreasing.

Examining Figure 2(a) and applying Kirchhoff’s laws, we get the following equations:

Figure 2(a). IBC in Mode 1

L·d(iL1)/dt = vin
L·d(iL2)/dt = vin − vc
C·d(vc)/dt = iL2 − vc/R

In state-space form, with state vector x = [iL1  iL2  vc]^T:

A1 = [ 0      0       0
       0      0      −1/L
       0     1/C    −1/(RC) ],   B1 = [ 1/L   1/L   0 ]^T

and C1 = [0  0  1].

For this mode of operation, both switches are open, in this period inductor current IL1 and IL2 are decreasing.

Figure 2(b). IBC in Mode 2

Examining Figure 2(b) and applying Kirchhoff’s laws, we get the following equations:

L·d(iL1)/dt = vin − vc
L·d(iL2)/dt = vin − vc
C·d(vc)/dt = iL1 + iL2 − vc/R

In state-space form:

A2 = [ 0      0      −1/L
       0      0      −1/L
      1/C    1/C    −1/(RC) ],   B2 = [ 1/L   1/L   0 ]^T

and C2 = [0  0  1].

MODE 3:

In this case, switch S2 is closed and switch S1 is open, due to which inductor current IL2 is increasing and IL1 is decreasing.

Figure 2(c). IBC in Mode 3

Examining Figure 2(c) and applying Kirchhoff’s laws, we get the following equations:

L·d(iL1)/dt = vin − vc
L·d(iL2)/dt = vin
C·d(vc)/dt = iL1 − vc/R

In state-space form:

A3 = [ 0      0      −1/L
       0      0       0
      1/C     0     −1/(RC) ],   B3 = [ 1/L   1/L   0 ]^T

and C3 = [0  0  1].

MODE 4:

For this mode of operation, both switches are open, in this period inductor current IL1 and IL2 are decreasing.

Figure 2(d). IBC in Mode 4


Examining Figure 2(d) and applying Kirchhoff’s laws, we get the following equations:

L·d(iL1)/dt = vin − vc
L·d(iL2)/dt = vin − vc
C·d(vc)/dt = iL1 + iL2 − vc/R

In state-space form, identical to Mode 2:

A4 = A2,   B4 = B2

and C4 = [0  0  1].

From (1), the duty-weighted averages over one switching period are

$$A = D(A_1 + A_3) + (0.5 - D)(2A_2),$$

$$B = D(B_1 + B_3) + (0.5 - D)(2B_2) \quad\text{and}\quad C = D(C_1 + C_3) + (0.5 - D)(2C_2)$$

Thus, matrices A, B and C can be calculated as:

$$A = \begin{bmatrix} 0 & 0 & -\frac{1-D}{L}\\ 0 & 0 & -\frac{1-D}{L}\\ \frac{1-D}{C} & \frac{1-D}{C} & -\frac{1}{RC} \end{bmatrix}, \qquad B = \begin{bmatrix} \frac{1}{L}\\ \frac{1}{L}\\ 0 \end{bmatrix} \qquad\text{and}\qquad C = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \qquad (2)$$
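As a quick numerical check of (2), the averaged model can be solved for its steady state. The component values below are illustrative only, not taken from the paper. Because the two inductor rows of A are identical, A is singular, and the minimum-norm (pseudoinverse) solution corresponds exactly to the equal-current-sharing assumption of (1):

```python
import numpy as np

# Illustrative component values (not from the paper)
L, C, R = 1e-3, 100e-6, 10.0      # H, F, ohm
D, Vin = 0.4, 12.0                # duty ratio and input voltage
Dp = 1.0 - D                      # D' = 1 - D

# Averaged state-space matrices from (2); state x = [iL1, iL2, vC]
A = np.array([[0.0,    0.0,    -Dp / L],
              [0.0,    0.0,    -Dp / L],
              [Dp / C, Dp / C, -1.0 / (R * C)]])
B = np.array([1.0 / L, 1.0 / L, 0.0])

# Steady state: 0 = A X + B Vin. A is singular (iL1 and iL2 appear only
# as a sum), so take the minimum-norm solution, which shares current equally.
X = -np.linalg.pinv(A) @ B * Vin
iL1, iL2, Vc = X
print(Vc)   # equals Vin / D' = 20 V, the boost conversion ratio
print(iL1)  # equals iL2 = Vin / (2 R D'^2), the per-phase current
```

The recovered operating point (Vc = Vin/D', equal phase currents) is exactly the steady state used in the small-signal step below.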

The transfer function can be obtained by small-signal analysis as

$$\frac{\hat x(s)}{\hat d(s)} = \left(sI - A\right)^{-1}\left[(A_1 + A_3 - 2A_2)X + (B_1 + B_3 - 2B_2)V_g\right]$$

Since $B_1 = B_2 = B_3 = B$, this reduces to

$$\hat x(s) = \left(sI - A\right)^{-1}(A_1 + A_3 - 2A_2)X\,\hat d(s)$$

Substituting the matrices from (2) into the above equation gives, with $D' = 1 - D$,

$$\begin{bmatrix} \hat i_{L1}(s)\\ \hat i_{L2}(s)\\ \hat v_c(s) \end{bmatrix} = \begin{bmatrix} s & 0 & \frac{D'}{L}\\ 0 & s & \frac{D'}{L}\\ -\frac{D'}{C} & -\frac{D'}{C} & s + \frac{1}{RC} \end{bmatrix}^{-1} \begin{bmatrix} 0 & 0 & \frac{1}{L}\\ 0 & 0 & \frac{1}{L}\\ -\frac{1}{C} & -\frac{1}{C} & 0 \end{bmatrix} \begin{bmatrix} I_{L1}\\ I_{L2}\\ V_c \end{bmatrix} \hat d(s)$$

On evaluating the matrix inverse, we obtain the following result:

$$\begin{bmatrix} \hat i_{L1}(s)\\ \hat i_{L2}(s)\\ \hat v_c(s) \end{bmatrix} = \frac{1}{s\left(s^2 + \frac{s}{RC} + \frac{2D'^2}{LC}\right)} \begin{bmatrix} s^2 + \frac{s}{RC} + \frac{D'^2}{LC} & -\frac{D'^2}{LC} & -\frac{sD'}{L}\\ -\frac{D'^2}{LC} & s^2 + \frac{s}{RC} + \frac{D'^2}{LC} & -\frac{sD'}{L}\\ \frac{sD'}{C} & \frac{sD'}{C} & s^2 \end{bmatrix} \begin{bmatrix} 0 & 0 & \frac{1}{L}\\ 0 & 0 & \frac{1}{L}\\ -\frac{1}{C} & -\frac{1}{C} & 0 \end{bmatrix} \begin{bmatrix} I_{L1}\\ I_{L2}\\ V_c \end{bmatrix} \hat d(s)$$

Thus the required control-to-output transfer function is

$$\frac{\hat v_c(s)}{\hat d(s)} = \frac{\frac{2V_{in}}{LC}\left(1 - \frac{sL}{2RD'^2}\right)}{s^2 + \frac{s}{RC} + \frac{2D'^2}{LC}}$$

with $D' = 1 - D$. Its DC gain is $V_{in}/D'^2$, and the numerator contributes a right-half-plane zero at $s = 2RD'^2/L$.
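The closed-form result can be cross-checked against the matrix expression it was derived from. The sketch below uses illustrative component values, not the paper's, and also confirms the DC gain and the right-half-plane zero:

```python
import numpy as np

# Illustrative values (not from the paper)
L, C, R = 1e-3, 100e-6, 10.0
D, Vin = 0.4, 12.0
Dp = 1.0 - D                       # D' = 1 - D

# Averaged A and the perturbation matrix (A1 + A3 - 2 A2)
A  = np.array([[0, 0, -Dp/L], [0, 0, -Dp/L], [Dp/C, Dp/C, -1/(R*C)]])
Ad = np.array([[0, 0, 1/L],   [0, 0, 1/L],   [-1/C, -1/C, 0]])

# Steady-state operating point (boost relations, equal current sharing)
Vc = Vin / Dp
I  = Vin / (2 * R * Dp**2)
X  = np.array([I, I, Vc])

def Gvd_matrix(s):
    """v_c(s)/d(s) evaluated from x(s) = (sI - A)^{-1} (A1+A3-2A2) X d(s)."""
    return np.linalg.solve(s * np.eye(3) - A, Ad @ X)[2]

def Gvd_closed(s):
    """The closed-form transfer function derived above."""
    num = (2 * Vin / (L * C)) * (1 - s * L / (2 * R * Dp**2))
    den = s**2 + s / (R * C) + 2 * Dp**2 / (L * C)
    return num / den

s = 2j * np.pi * 100.0                      # evaluate at 100 Hz
print(abs(Gvd_matrix(s) - Gvd_closed(s)))   # ~0: the two forms agree
print(Gvd_closed(0))                        # DC gain = Vin / D'^2
print(abs(Gvd_closed(2 * R * Dp**2 / L)))   # ~0: right-half-plane zero
```

The vanishing numerator at a positive real frequency is the non-minimum-phase behavior noted in the conclusion.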

V. SIMULATION RESULTS

Figure 3 shows the closed-loop model of the IBC with a PID controller. The error signal, produced by comparing the output voltage with a reference voltage, is used to set the desired duty ratio.

Figure 4 shows the pulses given to the two MOSFET switches by the PWM technique, which compares the error signal with a sawtooth wave. The two pulse trains are identical but phase-shifted by 180˚.

The input current is ripple-free at a duty ratio of 50% because the two inductor-current ripples cancel completely.
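The cancellation at 50% duty can be illustrated with two ideal triangular ripple waveforms half a period apart (a sketch; the average current and ripple amplitude are arbitrary):

```python
import numpy as np

def tri(x):
    """Unit-amplitude, zero-mean triangle wave with period 1."""
    return 2.0 * np.abs(2.0 * (x % 1.0) - 1.0) - 1.0

t = np.linspace(0.0, 1.0, 1000, endpoint=False)  # one switching period

# Two inductor currents with identical ripple, phase-shifted by half a period
iL1 = 2.0 + 0.5 * tri(t)         # 2 A average, 0.5 A ripple (illustrative)
iL2 = 2.0 + 0.5 * tri(t + 0.5)   # the 180-degree-shifted phase

i_in = iL1 + iL2                 # input current is the sum of the phase currents
print(i_in.max() - i_in.min())   # ~0: the two ripples cancel completely
```

At duty ratios other than 50% the rising and falling slopes no longer mirror each other, so the cancellation is only partial.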

Figure 3. Closed Loop Model of IBC


Modelling and Simulation of Two Level Interleaved Boost Converter


Figure 4. Generation of pulses for the two switches

The output voltage is nearly constant with 1 % ripple content.

Figure 5. Inductor current waveform at 50% duty ratio

Figure 6. Output Voltage waveform

Figure 7 shows the closed-loop response of the converter with automatic PID tuning; the tuned parameter values are Kp = 0.008883, Ki = 9.87 and Kd = 0.

VI. CONCLUSION

The converter consists of two interleaved, inter-coupled boost converter cells. A hardware implementation offers several advantages: the duty cycles of the two units can differ considerably while the inter-coupled units show only a small unbalance in shared current; the input current ripple can be nulled at a particular duty ratio depending on the number of phases; and cost is reduced because the two cells can share the same magnetic core.

The transfer function has a right-half-plane zero, so the converter tends to become unstable if not compensated properly.

Cascaded lead compensators may provide better current control as well and reduce the peak overshoot in the inductor current.

It is also shown that the averaged models and the small-signal approach are very useful in designing active feedback controllers for multiphase converter cells.

REFERENCES

[1]. N. Jantharamin and L. Zhang, "Analysis of multiphase interleaved converter by using state-space averaging technique," in Proc. ECTI-CON 2009, May 2009, pp. 288-291.

[2]. R. Middlebrook, "Small-signal modeling of pulse-width modulated switched-mode power converters," Proceedings of the IEEE, vol. 76, no. 4, pp. 343-354, 1988.

[3]. J. S. Anu Rahavi, T. Kanagapriya and R. Seyezhai, "Design and analysis of interleaved boost converter for renewable energy source," in Proc. Int. Conf. on Computing, Electronics and Electrical Technologies (ICCEET), 2012.

Figure 7. Closed loop frequency response


[4]. M. Veerachary, T. Senjyu and K. Uezato, "Modeling and analysis of interleaved dual boost converter," in Proc. IEEE Int. Symp. Ind. Electron., 2001, vol. 2, pp. 718-722.

[5]. C. N. M. Ho, H. Breuninger, S. Pettersson, G. Escobar, L. Serpa and A. Coccia, "Practical implementation of an interleaved boost converter using SiC diodes for PV application," in Proc. IEEE 8th Int. Conf. on Power Electronics (ECCE Asia), June 2011, pp. 372-379.

[6]. M. S. Elmore, "Input current ripple cancellation in synchronized, parallel connected critically continuous boost converters," in Proc. IEEE APEC'96, vol. 1, San Jose, CA, Mar. 1996, pp. 152-158.

[7]. H. M. Swamy, K. P. Guruswamy and S. P. Singh, "Design, modeling and analysis of two level interleaved boost converter," IEEE Int. Conf. on Machine Intelligence and Research Advancement (ICMIRA), Dec. 2013, pp. 509-514.


Application of Blockchain Technology in Cybersecurity

Faraz Zafar and Namrata Nagpal

Amity University, Uttar Pradesh, Lucknow - 226028
Email: [email protected] and [email protected]

Abstract— The rapid increase in the reported incidents of cybercrime and cybersecurity breaches poses a major threat to the privacy of users globally. It questions the current model employed to combat such threats and prevent user data and privacy from being compromised. The advent of Blockchain technology introduced a whole new model of user privacy in the financial area by letting users perform secure transactions via cryptocurrencies such as Bitcoin and Ethereum. This paper explores the applications of this novel technology in cybersecurity and how it can be used to enhance user privacy and prevent security breaches.

Keywords- Blockchain, Privacy, Cybersecurity

I. INTRODUCTION

The amount of digital data generated globally has reached staggering levels. A recent report claims that 90% of the world's data has been generated over the past two years [1]. With Big Data and AI now major players, this data is constantly being collected, analyzed and interpreted, bringing about a revolution in personalized goods and services. This revolution, however, has come at the cost of users' privacy. Centralized organizations, in both the public and private sectors, collect huge amounts of sensitive personal data and information. Data has been an inseparable part of global headlines lately, the most famous incident being the Facebook-Cambridge Analytica scandal [2]. The scandal caused a global outcry in its wake, with people questioning how much control they have over their sensitive information. The WannaCry ransomware, another famous incident covered extensively by the public media, infected millions of computers globally [3]. Cybercriminals are getting better and more daring by the day, and for people and organizations the stakes have never been higher. Today, data has become one of the most valuable assets of an economy [4] and its protection is a necessity.

II. BLOCKCHAIN

Blockchain serves as an ultra-modern digital ledger which can be adapted to record not only monetary transactions but virtually anything of value [5]. The digital data stored on a Blockchain exists as a mutually shared, and perpetually logged, database. Utilizing the system along decentralized lines has clear advantages. The Blockchain database is not stored at any centralized location, implying that its records are truly public and easily verifiable, and there is no central repository for a hacker to exploit. It is hosted by millions of PCs at any particular instant, each holding the same copy, which is visible to everyone on that network.

III. SECURITY IN BLOCKCHAIN

Blockchain represents a revolutionary new technology that facilitates secure storage of data over the internet. The decentralization aspect of the technology does away with the need to have a central administrator, enabling true peer to peer networking and eliminating the risks associated with centralization.

Since the inception of Bitcoin in 2008, the Bitcoin Blockchain has operated without critical interruption. Building on this nearly infallible track record, the framework aims to revolutionize the way data is stored and shared over the internet; a boon for the technology as it makes its way out of the developmental stage [6].

IV. BLOCKCHAIN SECURITY ADVANTAGES

Blockchain is the underlying technology that helps users maintain a collective, reliable and decentralized database. It is a perpetually expanding list of records, immune to tampering and modification thanks to modern cryptographic techniques [7]. A typical Blockchain system consists of a series of blocks connected in a specific order; this chained representation is where Blockchain derives its name. The users in the system, also known as 'nodes', perform the task of validating and storing the data in blocks. The consensus algorithm used to confirm transactions and add new blocks to the chain is called the Proof of Work (PoW) algorithm. Each block in the chain mainly comprises a cryptographic hash of the previous block, transaction-related information and a timestamp [8]. This recurrent process validates the legitimacy of each preceding block, all the way back to the Genesis block [9]. The various security advantages of Blockchain are listed below.
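The block structure just described (hash of the previous block, transaction payload, timestamp, and a PoW nonce) can be sketched in a few lines of Python. The JSON layout and leading-zeros difficulty rule here are illustrative toys, not any real protocol's format:

```python
import hashlib, json, time

def block_hash(block):
    """SHA-256 over the block's canonical JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash, data, difficulty=2):
    """Search a nonce until the hash has `difficulty` leading zero hex digits (toy PoW)."""
    block = {"prev": prev_hash, "data": data, "time": time.time(), "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

genesis = mine("0" * 64, "genesis")           # the Genesis block anchors the chain
b1 = mine(block_hash(genesis), "tx: A -> B")  # each block commits to its predecessor
print(block_hash(b1).startswith("00"))        # True: b1 satisfies the toy difficulty
```

Real networks differ in the hash construction, difficulty adjustment and serialization, but the commit-to-predecessor pattern is the same.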


i. Privacy Protection:

Blockchain adopts a peer-to-peer networking system which eliminates the need for a centralized database for storing confidential information. This in turn eliminates the centralized points that a hacker might target to steal valuable information. Similarly, a Blockchain does not have a central point of failure, making it more robust than centralized networking systems. Blockchains employ asymmetric cryptography, wherein each user has two keys: a public key that is visible to everyone on the network and is used to encrypt messages/transactions for that user, and a private key which alone can decrypt messages encrypted with the corresponding public key. A user's public address is derived from the public key only through one-way hashing, and computing a user's private key from his/her public key is computationally infeasible. Thus, Blockchain maintains user anonymity and privacy.

ii. Crash Recovery:

Data on a Blockchain is distributed amongst peer nodes, unlike a traditional database where all the data is stored at a central location. Each user on the Blockchain has the right to generate and maintain a full copy of the data. Although this causes data redundancy, it vastly improves the reliability and fault tolerance of the network. If some of the nodes are attacked or compromised, the rest of the network is not damaged.

iii. Preventing Data Manipulation:

Blockchain has a unique data-writing mechanism which prevents the data in a block from being modified once written. This mechanism involves the generation of a timestamp the instant a new record is created [10]; modification of the data thereafter is prohibited. In addition, the recording of a new transaction is decided by a consensus mechanism which generally requires the mutual agreement of over 50% of the users of the network.
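The tamper evidence this mechanism provides can be demonstrated with a toy three-block ledger (a sketch; the field names are illustrative):

```python
import hashlib, json

def h(block):
    """SHA-256 of the block's canonical JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# A three-block toy ledger: each block stores the previous block's hash
chain = []
prev = "0" * 64
for payload in ["genesis", "tx1", "tx2"]:
    block = {"prev": prev, "data": payload}
    chain.append(block)
    prev = h(block)

def valid(chain):
    """A chain is valid when every block's 'prev' matches its predecessor's hash."""
    return all(chain[i]["prev"] == h(chain[i - 1]) for i in range(1, len(chain)))

print(valid(chain))              # True: the untouched ledger checks out
chain[1]["data"] = "tx1-forged"  # a retroactive modification...
print(valid(chain))              # False: the tampering is immediately detectable
```

Any change to an earlier block alters its hash, breaking every link after it; an attacker would have to rewrite all subsequent blocks and win the consensus vote as well.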

V. APPLICATIONS OF BLOCKCHAIN IN CYBER SECURITY

i. Preventing Distributed Denial of Service (DDoS) Attacks:

DDoS attacks still pose a major cybersecurity threat globally [11], costing businesses up to $2.5 million on average [12]. The worst of these have crossed the 100 Gb/s threshold. These attacks are carried out by botnets, or 'zombie armies' [13]. Many of today's 'bots' combine previous threats, giving them the ability to propagate like worms, hide like viruses and include attack methods from toolkits [13]. Hackers use these bots to attack a website essentially by sending it myriads of junk requests, increasing the traffic until the website is overwhelmed and crashes. The legitimate users of the website or cloud-based software platform are thereby denied access to its resources. The source of the problem resides in the Domain Name System (DNS) servers used today. The DNS is a hierarchical naming scheme for the devices connected to the web: a registry that links complicated IP addresses to simple, memorable domain names. Its core services are centralized in DNS servers, which makes them susceptible to DDoS attacks. Blockchain counters this problem by enabling the decentralization of DNS servers. With decentralized DNS, consensus protocols can be used to establish trust between a user and a server while avoiding a central point of control. Domain names ending with .bit, .coin, .emc, .lib, .bazar, etc. are already in use. Blockchain also lets users rent the unused bandwidth of the collective community to withstand DDoS attacks.

ii. Identity Theft Protection:

Identity theft has cost users roughly $100 billion over the past half-decade, according to a 2017 study [14]. These incidents mostly pertain to political benefits, credit card scams, employment scams and tax-related matters. Decentralized ID (DID) is a Blockchain-based identity-management platform that stores verifiable user identity credentials on a Blockchain to combat identity fraud. It records all its users' activities and transactions on a distributed ledger for transparency as well as security. It allows for the transfer, processing and verification of all ID-related information, such as passports, driving licenses and bank documents, via a secure Blockchain. Microsoft states that "After examining decentralized storage systems, consensus protocols, Blockchains, and a variety of emerging standards we believe Blockchain technology and protocols are well suited for enabling Decentralized IDs". Public Blockchains such as Bitcoin, Ethereum and Litecoin may soon provide a solid foundation for rooting DIDs, recording operations and anchoring attestations [15].

iii. Fraud Protection:

The e-commerce industry is rapidly shifting towards a customer-centric business model. Customer satisfaction is a priority in general, with a special emphasis on hassle-free returns. This has arguably placed consumers in a position to exploit a company's services through fraudulent means. Frauds such as chargeback fraud, wherein a customer fraudulently claims never to have received an order, have been rising by as much as 20% per annum [16]. Although online retailers can take this particular type of fraud to court, the time and expense associated with arbitration and other formalities often exceed the value of the actual fraud. Online retailers therefore frequently overlook these incidents, which collectively cost the e-retail industry roughly $7 billion in 2016 [17]. The Blockchain solution to this issue is the utilization of Smart Contracts. The computer protocols behind Smart Contracts digitally aid, validate or enforce the transaction or execution of an agreement. Smart Contracts are trackable and irreversible [18]. The terms and conditions agreed upon by a purchaser and a dealer are written plainly in lines of code and stored on a Blockchain. Smart Contracts are self-executing, thereby eliminating the need for central authorities and third parties such as payment gateways. These contracts currently work with major cryptocurrency platforms such as Ethereum and Ripple. This lets e-retailers automatically and indisputably receive payments once the confirmation of order delivery is recorded.
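The delivery-conditioned payment described above can be sketched as a small state machine. This is conceptual Python illustrating the contract logic only, not code for Ethereum, Ripple or any real smart-contract platform; all names here are hypothetical:

```python
class EscrowContract:
    """Toy escrow: the buyer's payment is locked in the contract at purchase
    and released to the seller only when delivery is confirmed on the ledger."""

    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.state = "LOCKED"          # funds held by the contract, not a payment gateway

    def confirm_delivery(self, delivery_recorded):
        # On a real chain this trigger would be an on-chain attestation of
        # delivery; here it is a plain boolean for illustration.
        if self.state == "LOCKED" and delivery_recorded:
            self.state = "RELEASED"    # irreversible payout to the seller
        return self.state

contract = EscrowContract("customer", "e-retailer", 49.99)
print(contract.confirm_delivery(delivery_recorded=True))  # RELEASED
```

Because the release rule is code executed by the network rather than a clerk's judgment, a "never received it" claim cannot reverse a payout once delivery is on the ledger.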

VI. CONCLUSION

This paper explored the aspects of Blockchain that can be implemented to enhance cybersecurity and drastically reduce the risks posed by current cybersecurity practices and models. The effective utilization of Blockchain technology is key to preventing data breaches and enhancing the privacy of users globally. Sensitive data in the hands of central organizations and third parties is prone to being manipulated and misused. Blockchain gives businesses and individuals the ability to have full control over their assets while remaining transparent. The decentralized platform offered by Blockchain is a stepping stone to a safer, decentralized web.

REFERENCES

[1] P. Brandtzæg, "Big Data, for Better or Worse: 90% of World's Data Generated Over Last Two Years - SINTEF", SINTEF, 2013.

[2] O. Solon, "Facebook says Cambridge Analytica may have gained 37m more users' data", The Guardian, 2018.

[3] S. Jones and T. Bradshaw, "Global alert to prepare for fresh cyber attacks | Financial Times", Ft.com, 2017.

[4] "The world's most valuable resource is no longer oil, but data", The Economist, 2017.

[5] D. Tapscott and A. Tapscott, Blockchain Revolution. London: Portfolio Penguin, 2016.

[6] "What is Blockchain Technology? A Step-by-Step Guide For Beginners", Blockgeeks, 2018.

[7] "Blockchain Explained - Intro - Beginners Guide to Blockchain", BlockchainHub, 2017.

[8] "How Blockchain Technology Works. Guide for Beginners", Cointelegraph, 2018.

[9] D. Lee, Handbook of Digital Currency. Amsterdam: Elsevier, 2015.

[10] B. Gipp, N. Meuschke and A. Gernandt, "Decentralized Trusted Timestamping using the Crypto Currency Bitcoin", Arxiv.org, 2015.

[11] A. Networks, Pages.arbornetworks.com, 2010.

[12] "May 2017 Worldwide DDoS Attacks & Cyber Insights Research Report | Neustar", Discover.neustar, 2017.

[13] E. Cooke, F. Jahanian and D. McPherson, Usenix.org, 2005.

[14] A. Pascual, K. Marchini and S. Miller, "2017 Identity Fraud: Securing the Connected Life | Javelin", Javelinstrategy.com, 2017.

[15] A. Patel, "Decentralized digital identities and blockchain: The future as we see it - Microsoft 365 Blog", Microsoft 365 Blog, 2018. [Online].

[16] A. Zaczkiewicz, "As Chargebacks Soar 20%, Retailers Stung by Financial Pain of 'Friendly Fraud'", 2016.

[17] S. Stone, "Ecommerce Can Expect Nearly $7 Billion in Chargebacks in 2016 | Chargeback", Chargeback, 2016.

[18] A. Tar, "Smart Contracts, Explained", Cointelegraph, 2017.


Performance Analysis of Different Stages CS-VCO in 0.18µm CMOS Technology

Ashish Mishra and Indu Prabha Singh E.C.E. Department, S.R.M.G.P.C., Lucknow, India

[email protected] and [email protected]

Abstract—This work studies two different CS-VCOs (current-starved voltage-controlled oscillators). Two inverter-stage topologies (3-stage and 5-stage) are compared in terms of phase noise, dissipated power and oscillation frequency. The analysis shows that the dissipated power of the designed circuit (the 3-inverter-stage CS-VCO) can be reduced by up to 28.5% compared to the 5-inverter-stage CS-VCO topology. In this way, a design approach for a power-optimal and robust CS-VCO is formed. The performance of the circuit is analyzed in a 0.18µm CMOS technology.

Keywords—CMOS; PLL; CS-VCO; PSS; P-Noise; LPF

I. INTRODUCTION

The voltage-controlled oscillator plays a key role in modern communication technology. VCO design involves several trade-offs among oscillation frequency, dissipated power, phase noise and speed of operation, so a PLL design must target these parameters. In integrated-circuit design there is a need for VCOs with low power dissipation; an appropriate method to scale down the dissipated power is to decrease the supply voltage.

A PLL in a communication system comprises a PFD (phase-frequency detector), CP (charge pump), LPF (low-pass filter), VCO (voltage-controlled oscillator) and a divide-by-N circuit, as shown in Figure 1 [4], [15].

One of the major building blocks of the PLL is the VCO, which works at a high oscillation frequency, and an oscillator design with reduced power is important for the complete PLL system. In general, two major types of oscillator are preferred for system design: the LC-tank oscillator and the current-starved (ring) VCO. A tuned oscillator gives better phase-noise performance but occupies more chip area because of its spiral inductor, which is undesirable from a design standpoint, and the oscillation frequency of an LC-tank VCO has certain limitations. An inverter-stage VCO does not match the phase-noise performance of an LC-VCO, but its oscillation frequency range is good and the area occupied by the system is reduced [3], [4]. The wide frequency range achieved by such a design is also suitable for handling process variations quickly. The aim of designing different-stage current-starved VCOs is to decrease the power dissipation along with the occupied chip area in the PLL design. Noise in the system appears mainly in two forms: long-term jitter, caused by the drift over time of the output clock edges relative to an ideal clock, and periodic noise, caused by cycle-to-cycle variation of the output clock period [5]. The voltage-controlled oscillator is an important part of several RF transceivers for frequency selection and signal generation, and PLLs in such transceivers are in great demand for programmable frequency synthesis. A PLL can work with a less accurate oscillator that is steered by the reference input voltage. In an ideal VCO there is a linear relation between oscillation frequency and control voltage. In general, most systems are based on tuned oscillators [12].

In this research work, three-stage and five-stage CS-VCOs are designed. The analysis shows that the 3-inverter-stage VCO draws less power for its circuit operation than the five-stage CS-VCO. The oscillation frequency is inversely related to the inverter-stage count: as the number of inverter stages increases, the oscillation frequency falls. It can therefore be predicted that the 3-inverter-stage VCO is comparatively better than the 5-inverter-stage CS-VCO, and a rich frequency response together with improved phase-noise results can be obtained simply by changing the number of inverter stages [15].

The paper is organized as follows. Section I has covered the introduction; Section II details the schematics of the 3-stage and 5-stage designs; Section III gives the design equations used for transistor sizing; Section IV briefs the simulation results; and Section V concludes the paper.

Fig.1. General block diagram for PLL


II. SCHEMATIC DESCRIPTION

A. 3-Inverter Stages VCO (CS-VCO)

The circuit of the 3-inverter-stage CS-VCO is shown in Fig. 2. Its working is almost the same as that of a ring oscillator. In this configuration the MOSFETs arranged in the middle form the inverter stages, while the top-most and bottom-most transistors (M1) and (M4) act as constant current sources that limit the current available to each inverter; in other words, the inverter is starved of current. The drain current available to transistors (M0) and (M3) is identical and is controlled by the input reference signal, and the currents in (M0) and (M3) are mirrored to all the other inverter/current-source stages [4], [5], [15].

Fig. 2 depicts the three inverter circuits arranged in cascade. A PWL (piecewise-linear) input is applied to its test circuit.

B. 5-Inverter-Stage VCO (CS-VCO)

Fig. 3 shows the schematic of the 5-inverter-stage CS-VCO. In this design, five inverter stages are connected in cascade. The applied input is again a PWL source [4].

III. DESIGN EQUATIONS

Transistor sizing is important in a CMOS circuit, so the W/L ratio of each transistor is obtained from the design equations that follow. For a simple inverter stage, the total capacitance at the drain node can be found as [1], [6]:

$$C_{tot} = C_{in} + C_{out} = \frac{5}{2}\,C_{ox}\,(W_nL_n + W_pL_p) \qquad (1)$$

where $C_{ox}$ is the oxide capacitance per unit area,

$$C_{ox} = \frac{\epsilon_o\,\epsilon_r}{t_{ox}} \qquad (2)$$

The time required for the capacitance $C_{tot}$ to charge from 0 to $V_{sp}$ at a constant current $I_{D4}$ is

$$t_1 = C_{tot}\,\frac{V_{sp}}{I_{D4}} \qquad (3)$$

The time to discharge $C_{tot}$ from $V_{DD}$ to $V_{sp}$ at another constant current $I_{D1}$ is

$$t_2 = C_{tot}\,\frac{V_{DD}-V_{sp}}{I_{D1}} \qquad (4)$$

With $I_{D1} = I_{D4} = I_D$, the total delay $T_d$ is found as

$$t_1 + t_2 = C_{tot}\,\frac{V_{DD}}{I_D} = T_d \qquad (5)$$

The oscillation frequency is then given by [8]:

$$f_{osc} = \frac{1}{N\,T_d} = \frac{I_D}{N\,V_{DD}\,C_{tot}} \qquad (6)$$

where $T_d$ is the delay time and N is the number of inverter stages. Also $f_{osc} = f_{center}$ when $V_{inVCO} = \frac{V_{DD}}{2}$ and $I_D = I_{Dcenter}$. The drain current can be calculated as

$$I_{Dcenter} = N\,f_{center}\,V_{DD}\,C_{tot} \qquad (7)$$

Fig.2. Designed 3-stage CS-VCO

Fig.3. Designed 5-stage CS-VCO


Also,

$$I_{Dcenter} = \frac{\beta\,(V_{gs}-V_{thn})^2}{2} \qquad (8)$$

where $\beta$ is the transconductance parameter of the device,

$$\beta = \mu_n C_{ox}\,\frac{W}{L} \qquad (9)$$

The average consumed power is calculated as

$$P_{avg} = V_{DD}\,I_{avg} = V_{DD}\,I_D \qquad (10)$$
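Equations (1)-(7) and (10) can be exercised numerically. The sketch below uses illustrative 0.18 µm-class numbers (oxide thickness, minimum device sizes, drain current); these are assumptions for the example, not the sizing used in this work:

```python
# Illustrative 0.18 um-class numbers (assumptions, not the paper's sizing)
eps0, eps_r, tox = 8.85e-12, 3.9, 4.1e-9    # F/m, SiO2 permittivity, oxide thickness (m)
Cox = eps0 * eps_r / tox                    # (2): oxide capacitance per area, F/m^2

Wn = Ln = Wp = Lp = 0.18e-6                 # minimum-size devices (assumed)
Ctot = 2.5 * Cox * (Wn * Ln + Wp * Lp)      # (1): total drain capacitance per stage

N, VDD, ID = 3, 1.8, 50e-6                  # 3 stages, 1.8 V supply, assumed drain current
Td = Ctot * VDD / ID                        # (5): per-stage charge + discharge time
fosc = ID / (N * VDD * Ctot)                # (6): oscillation frequency
Pavg = VDD * ID                             # (10): average power drawn from the supply

print(f"Ctot = {Ctot:.3e} F")               # a few femtofarads per stage
print(f"fosc = {fosc / 1e9:.2f} GHz")       # GHz-range oscillation, as in Section IV
print(f"Pavg = {Pavg * 1e6:.0f} uW")
```

The inverse relation between N and fosc in (6) is the reason the 3-stage design oscillates faster than the 5-stage one at the same bias.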

IV. RESULTS OF SIMULATION

To obtain optimal results for the circuit, different VCO characteristics and parameters must be observed under different conditions. To achieve this, periodic steady-state (PSS), phase-noise and transient analyses are essential.

A. Transient Behaviour

This analysis observes the response of the circuit with respect to time.

B. PSS Observation

PSS analysis observes the frequency, output power, tuning range, dissipated power, etc. For this, the voltage at a particular node is measured periodically so that the time period can be calculated.

C. Phase-Noise Response

Phase noise is largely flicker-type noise and is measured in dBc/Hz. For oscillators, phase noise is the characteristic noise metric [11].

Fig. 4 plots the oscillation frequency against the control input voltage for the 3-stage current-starved VCO. It shows the frequency range over which the plot is almost linear with respect to the control voltage; linearity is achieved over the span (1.0229-3.1071) GHz.

Fig. 5 is the phase-noise response, in dBc/Hz, of the 5-stage current-starved oscillator. The measurement shows a phase noise of -88.16 dBc/Hz at 1 MHz and -113.01 dBc/Hz at 10 MHz offset frequency.

Fig. 6 is the phase-noise plot of the three-stage current-starved oscillator. From the analysis, the phase noise is -80.12 dBc/Hz at 1 MHz and -105.3 dBc/Hz at 10 MHz offset frequency.

Fig. 7 shows the transient behavior of the five-stage current-starved VCO, observed for 100 ns with a piecewise-linear (PWL) input. From the plot, oscillations start at 40.06 ns.

Fig. 8 shows the transient analysis of the 3-stage current-starved VCO, observed for 100 ns with a PWL input. From the plot, oscillations start at 72.213 ns.

Fig.4. Plot between oscillation frequency and control voltage for 3-stage CS-VCO

Fig.5. P-Noise Vs Offset Frequency for 5-Stage CS-VCO

Fig.6. Plot between P-Noise and offset frequency for 3-Stage CS-VCO


Fig.7. Plot between control voltage and stop time for 5-stage CS-VCO

Fig.8. Plot between control voltage and stop time for 3-stage CS-VCO

Fig.9. Plot between dissipated power and frequency for 5-Stage CS-VCO

Fig. 9 shows the power dissipation (mW) of the 5-inverter-stage CS-VCO. The observed dissipated power is around 10.48 mW at a frequency of 2.19 GHz.

Fig. 10 shows the dissipated power (mW) of the 3-stage current-starved oscillator. The circuit dissipates 7.49 mW at a frequency of 3.995 GHz.

Fig.10. Plot between dissipated power and frequency for 3-Stage CS-VCO

TABLE I. Comparison of different VCO designs in different CMOS technologies.

Topology | Centre Frequency (fcent) / Operating Range (GHz) | Supply Voltage VDD (V) | Dissipated Power (mW) | Phase Noise (dBc/Hz)
Ring VCO [2] | 1.0 | 2.5 | 15.5 | -106 @ 600 kHz, 900 MHz
Ring VCO [8] | 866 MHz | 3.3 | 24.5 | -106 @ 500 kHz
Ring VCO [3] | 3.125 | 1.8 | 12.6 | -91 @ 1 MHz
VCO [10] | 5.79 | 1.8 | --- | -99.5 @ 1 MHz
Differential Ring VCO [14] | 1.41 | 1.8 | 12.5 | -89.8 @ 600 kHz
Three-Stage CS-VCO | 1.02-3.99 | 1.8 | 7.49 | -80.17 @ 1 MHz and -105.31 @ 10 MHz
Five-Stage CS-VCO | 437.47 MHz-2.187 GHz | 1.8 | 10.48 | -88.16 @ 1 MHz and -113.01 @ 10 MHz


V. CONCLUSION

In this work, two different CS-VCOs are compared. The analysis shows that the five-stage CS-VCO responds better in phase-noise behavior than the 3-stage CS-VCO, but in terms of power dissipation the 3-inverter-stage current-starved VCO is better: it dissipates about 7.4 mW, which is considerably less than the roughly 10.4 mW of the 5-stage CS-VCO. The analysis thus indicates that the 3-inverter-stage VCO is preferable where low power dissipation is the main design concern, while the 5-stage CS-VCO suits applications where better phase noise is the main concern. A further advantage of the 3-stage current-starved VCO is its suitability for high-frequency devices such as Bluetooth and GSM equipment.

REFERENCES

[1] T.H. Lee and A. Hajimiri, “Oscillator phase noise: A tutorial,” IEEE J. Solid-State Circuits, vol. 35, pp. 326-336, March 2000.

[2] W. Shing, T. Yan and H.C. Luong, “A 900-MHz CMOS low-phase-noise voltage-controlled ring oscillator,” IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 48, pp. 216-221, February 2001.

[3] A. Maxim et al., “A low-jitter 125-1250 MHz process-independent and ripple-poleless 0.18 µm CMOS PLL based on a sample-reset loop filter,” IEEE J. Solid-State Circuits, vol. 41, pp. 1673-1683, Nov. 2001.

[4] R. J. Baker and D. E. Boyce, “CMOS Circuit Design, Layout, and Simulation,” IEEE Press Series on Microelectronic Systems, 2002.

[5] B. Razavi and T.C. Lee, “A stabilization technique for phase-locked frequency synthesizers,” IEEE J. Solid-State Circuits, vol. 38, pp. 888-894, June 2003.

[6] S. Kang and Y. Leblebici, “CMOS Digital Integrated Circuits: Analysis and Design,” Tata McGraw-Hill Edition, 2003.

[7] D. P. Bautista and M. L. Arnada, “A low power and high speed VCO ring oscillator,” in Circuits and Systems, ISCAS '04, vol. 4, pp. IV-752, May 2004.

[8] W. Xin, Y. Dunshan and S. Sheng, “A full swing and low power voltage-controlled ring oscillator,” in Electron Devices and Solid-State Circuits, pp. 141-143, December 2005.

[9] H. Janardhan and M.F. Wagdy, “Design of a 1.0 GHz digital PLL using 0.18 µm CMOS technology,” pp. 599-600, April 2006.

[10] A. Bansal, Y. Zheng and C.H. Heng, “2.0 GHz CMOS noise-cancellation VCO,” IEEE Asian Solid-State Circuits Conference, pp. 461-464, Nov. 2008.

[11] F. Aznar, S. Celma and B. Calvo, “Inductorless AGC amplifier for 10GBase-LX4 Ethernet in 0.18 µm CMOS,” Electronics Letters, vol. 44, pp. 409-410, March 2008.

[12] L. S. Paula, S. Bampi, E. Fabris and A. A. Susin, “A wide band CMOS differential voltage-controlled ring oscillator,” Proc. of 21st Annual Symposium on Integrated Circuits and System Design, pp. 85-89, Sep. 2008.

[13] F. Aznar, S. Celma and B. Calvo, “A 0.18-μm CMOS 1.25 Gbps front-end receiver for low-cost short reach optical communications,” Proceedings of the 36th European Solid-State Circuits Conference, pp. 554-557, Sep. 2010.

[14] C.S. Azqueta, S. Celma and F. Aznar, “A 3.125-GHz four-stage ring VCO in 0.18 µm CMOS,” in Circuits and Systems (ISCAS), vol. 44, pp. 1137-1140, May 2011.

[15] A. Mishra, G.K. Sharma and D. Boolchandani, “Performance analysis of power optimal PLL design using five-stage CS-VCO in 180nm,” in Signal Propagation and Computer Technology (ICSPCT), pp. 764-768, July 2014.


Mobile Robot based Anti Personnel Landmine Detection and Diffusion

Prof. S.C. Tewari, Shivam Kaushlendra, Shivam Singh and Shraddha Saraf
ECE Department, SRMGPC, Lucknow

[email protected], [email protected], [email protected] and [email protected]

Abstract-This paper deals with the design and development of a track mounted demining mobile robot using a 16-bit microcontroller along with several sensors and detectors. The movements of the designed system are controlled using a personal computer or laptop with built-in Bluetooth. The entire mechanical hardware is assembled from various sub-units, which are described briefly. The system comprises a 360-degree surveillance system for monitoring the movement and position of the demining mobile robot. Three Passive Infrared (PIR) sensors are used for detecting any living object within a range of 8 meters. Finally, the results of the experimental work are demonstrated after fabrication and integration.

Keywords―Track Mounted Demining Mobile Robot, Crane Unit, Camera Unit, Metal Detector, PIR sensors, Battery, Microcontroller.

I. INTRODUCTION

Landmines are weapons or explosives buried under the soil that are activated by pressure; they may kill or injure when stepped upon and can also cause long-term physiological effects. Landmines are usually buried 10 mm to 40 mm below the soil and require a minimum pressure of about 9 kg to detonate. The face diameter of these Anti-Personnel (AP) mines ranges from 5.6 to 13.3 cm. The military priority is to breach a minefield quickly in order to create a safe path for troops or ships. Speed is vital, both for tactical reasons and because units attempting to breach the minefield may be under enemy fire [1]. In times of relative peace, the process of mine removal is referred to as demining, and it is vital that this process be exhaustive. Even if only a small handful of mines remain undiscovered, incomplete demining can actually lead to an increase in civilian mine casualties as local people re-occupy an area they previously avoided in the belief that it has been made safe. In this context demining is one of the tools of mine action [2]. The main aim of this paper is to design an automatic robot capable of detecting buried landmines and tracking their locations, while enabling the operator to control the robot wirelessly from a distance. The detection of buried landmines is done using metal detectors, since most landmines are made up of metal components. The robot is capable of moving in all four directions and can move on almost any surface. The

communication system used allows the operator to stay at a safe distance, enabling him to control the robot wirelessly over Bluetooth. The robot travels at a speed of 28.6 metres/hour. When metal is detected, the robot stops, drops the detonating material, and then moves 300 mm away. After this, the detonator unit is detonated by the operator: the energised circuit ignites the explosive material, which explodes and destroys the landmine [4].
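As a quick sanity check on the figures above (28.6 m/h cruise speed, 300 mm stand-off move after a detection), the timing is easy to work out; the 10 m lane length in the sketch below is a hypothetical example, not a figure from the paper:

```python
# Timing implied by the robot's stated motion figures.
speed_m_per_h = 28.6   # cruise speed from the text
standoff_m = 0.300     # 300 mm retreat after a detection

standoff_time_s = standoff_m / speed_m_per_h * 3600  # ~37.8 s to back away
sweep_time_h = 10.0 / speed_m_per_h                  # hypothetical 10 m lane
print(round(standoff_time_s, 1), round(sweep_time_h, 2))
```

The slow speed is deliberate for a demining platform: the 300 mm retreat alone takes over half a minute, giving the operator ample time to observe the camera feed before commanding detonation.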

II. PROPOSED DESIGN

It is proposed to design a demining mobile robot for detection and diffusion of only metallic AP mines. For diffusing the detected landmine, a proximity landmine detonator is designed using a DC booster circuit. The proposed system consists of the following seven important subsystems:

i. Command and control unit of the track mounted demining mobile robot unit and proximity detonator unit
ii. Track mounted mobile robot unit
iii. The crane unit
iv. The metal detector unit
v. The surveillance unit
vi. The proximity landmine detonator unit
vii. The robotic arm and jaw unit

The block diagram of the proposed design is shown in figure 1. Each of the seven units is explained in the following sections, along with its working. The detailed hardware and software design of the proposed structure is also explained there, with suitable supporting diagrams of the related work.

Figure 1: Block diagram of Control Unit of Track Mounted Mobile Robot and Proximity Mine Detonator


Figure 2: Block diagram of Main Locomotive Unit of Track Mounted Demining Mobile Robot

III. MECHANICAL HARDWARE

The hardware design of the proposed demining mobile robot consists of six separate units: the track mounted mobile robot, the crane unit, the metal detector unit, the robotic arm and jaw unit, the camera unit and the proximity detonator unit. These units are interfaced to the microcontroller through wired connections. The track mounted mobile robot unit is made from a ply board of 4 mm thickness, cut to a size of 480 mm X 300 mm with the corners rounded using an electrical saw. Six wheels of diameter 70 mm were mounted on the lower part of the ply board using a 112 mm vertical aluminium support to raise the assembly from the ground. DC motors of 60 RPM were then connected to the wheels, one at each corner, and the motors on each side were connected together for ease of connection to the motor driver IC.

Figure 3: Block Diagram of Proximity Landmine Detonator Unit

Figure 4: Track Mounted Demining Mobile Robot with lower half of the Crane Unit

After this the crane unit was designed using a circular tin of 110 mm diameter. A hole was then made through the ply board so that a Johnson motor could be mounted beneath it for rotating the crane. A wheel was connected to this motor and stuck to the circular tin using glue-gun resin. A rack and pinion was then mounted on the surface of the tin, with the pinion driven by another Johnson motor used for lifting the crane vertically and descending it later. The assembled system up to this step is shown in figure 4. For the remaining part of the crane unit, a PVC pipe of length 610 mm and diameter 25.4 mm was selected. A hole of 15 mm was made in its middle for connecting the rack of the assembly, and the rack was then fitted to the pipe using resin. On one side the metal detector unit was fixed, and on the other side the robotic arm and jaw unit.

Figure 5: Crane Unit

Figure 6: The Metal Detector Unit

Figure 7: Robotic Arm and Jaw Unit

Hereafter, the metal detector unit was designed using a 10 mH inductor, which serves both for metal detection and as part of the multivibrator segment of the circuit. In the normal state the 555 IC oscillates at a particular frequency, which in turn drives the output buzzer for indication. The 555 timer circuit produces precise clock pulses that can be used to control the application as required. In the circuit an RLC network is formed by a 47 kΩ resistor, a 2.2 µF capacitor, and a 150-turn inductor; this RLC circuit is the metal-detection part. The coil wound here is air-cored, so when a metal piece is brought near the coil, the piece acts as a core for the inductor. With the metal acting as a core, the inductance of the coil increases considerably, and this sudden increase changes the overall reactance (impedance) of the RLC circuit by a considerable amount compared with the case without the metal piece.
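The inductance shift can be illustrated numerically. Assuming the detection coil and the 2.2 µF capacitor form the resonant tank, and taking an illustrative 20% inductance rise when metal is near (the actual shift depends on the metal and its distance, and the 47 kΩ resistor additionally sets the 555 timing), a short sketch:

```python
import math

def resonant_freq(l_h, c_f):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_h * c_f))

L_AIR = 10e-3  # 10 mH air-cored search coil (from the circuit)
C = 2.2e-6     # 2.2 uF capacitor (from the circuit)

f_no_metal = resonant_freq(L_AIR, C)          # ~1.07 kHz
f_with_metal = resonant_freq(L_AIR * 1.2, C)  # assumed 20% inductance rise

print(round(f_no_metal), round(f_with_metal))
```

The detectable effect is the downward shift of the oscillation frequency (and the corresponding impedance change), which is what ultimately alters the buzzer drive.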

Figure 8: Camera Unit

Figure 9: Proximity Landmine Detonator Unit

Figure 10: Track Mounted Demining Mobile Robot Assembly


The next unit is the robotic arm and jaw, which was made by assembling the various subparts that came with the purchased system. The wireless camera unit was then fixed over the center of the crane unit on a 60 RPM motor and secured with resin. In the next module, the proximity detonator unit was designed using a high-voltage DC booster circuit with an output of approximately 2000 V. The switching was done mechanically using a wire mesh fixed on the PCB itself with resin glue. After this the negative end was connected to the black wire, while the positive end was connected to the rotating axle of the DC motor, which was fixed

Figure 11: Circuit Designing of Control Unit of Track Mounted Demining Mobile Robot Unit

Figure 12: Circuit Diagram of Control Unit of Proximity Landmine Detonator Unit

on the PCB surface using resin. The entire assembly was then fitted into a box of dimensions 125 mm X 85 mm X 55 mm; the PCB, shown in figure 10, was cut in between and the connections redone using jumper wires. After this an explosive storing container was designed and connected near the output terminals of the DC booster circuit with the help of resin.

The microcontroller unit, which is interfaced with all the other subunits, is referred to as the heart of the system.


IV. WORKING OPERATION OF THE PROPOSED DESIGN

The working of the entire system is controlled with the help of a microcontroller. The pin wise connection of the system with the microcontroller is shown in figure 11. The circuit diagram of the proximity detonator unit is also shown in figure 12.

Referring to figure 11, the connections of the microcontroller to the various subsystems are shown. PORT A, PORT C and half of PORT D are used for establishing connections to the L293 motor driver ICs. PORT B is connected to the buzzer and also receives the output responses from the respective Passive Infrared sensors. The remaining four pins of PORT D are connected to the Bluetooth Tx/Rx module, which enables transmission and reception of data. The entire system is powered by a rechargeable 12 V DC battery rated at 1.3 Ah; a charging circuit is also present for recharging the battery once it is depleted. The microcontroller is fed a regulated 5 V supply from a 7805 voltage regulator.

Figure 2 shows the block diagram of the tele-remote circuit, which enables switching the track mounted mobile vehicle forward, backward, right, left, and up and down through the remote, so the robot can be moved from a safe distance. Metal detectors are considered the most reliable sensors for mine detection work. However, their landmine detection performance is highly dependent on the distance between the sensor heads and the buried landmines, so it could be substantially improved if the gap and attitude of the sensor heads can be controlled. In robot-assisted landmine detection this can be done conveniently: the sensor heads accurately follow the ground surface, maintaining an almost uniform gap, by controlling the gap and attitude of the sensor head relative to the ground surface [4]. The system also carries PIR sensors facing right, left and backward for detection of intruders, i.e. human beings or animals. A continuous surveillance feed of the area is sent through the camera unit mounted on the robot. The robotic arm and jaw mounted on the other end of the crane unit carries a proximity landmine detonator, which is dropped over the detected area. The alphabetical codes to be sent, and the corresponding functions encoded in the microcontroller, are as follows:

i. ‘a’ and ‘b’ are used for rotating the entire crane unit clockwise and anti clockwise respectively.

ii. ‘c’ and ‘d’ are used for decreasing and increasing the height of the Crane Unit respectively.

iii. ‘e’ and ‘f’ are used for rotating the Camera Unit clockwise and anti clockwise respectively.

iv. ‘i’ is used for opening the Jaw of the robotic arm.

v. ‘U’, ‘D’, ‘R’ and ‘L’ are used for Forward, Backward, Right and Left motion of the Track Mounted Mobile Robot respectively.
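The single-character protocol above amounts to a lookup table on the receiving side. A minimal sketch in Python; the action labels are hypothetical descriptions for illustration, not the firmware's actual routine names:

```python
# Dispatch table for the single-character Bluetooth commands listed above.
COMMANDS = {
    'a': 'crane rotate clockwise',
    'b': 'crane rotate anticlockwise',
    'c': 'crane lower',
    'd': 'crane raise',
    'e': 'camera rotate clockwise',
    'f': 'camera rotate anticlockwise',
    'i': 'open jaw',
    'U': 'drive forward',
    'D': 'drive backward',
    'R': 'turn right',
    'L': 'turn left',
}

def dispatch(ch):
    """Return the action for a received command byte, or None if unknown."""
    return COMMANDS.get(ch)

print(dispatch('U'))
```

A table-driven dispatcher like this keeps the command set easy to extend (e.g. a jaw-close code) without touching the receive loop.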

Figure 12 shows the circuit diagram of the proximity detonator unit, with the connections of the microcontroller to the L293 motor driver IC [4]. PORT C is used for establishing connections to the L293 motor driver ICs. Half of PORT D is used for connecting the Bluetooth Tx/Rx module, which enables transmission and reception of data. The entire system is powered by a 9 V DC battery, and the microcontroller is fed a regulated 5 V supply from a 7805 voltage regulator IC.

As shown in figure 15, the proximity landmine detonator unit comprises a Bluetooth-controlled motor that performs the mechanical switching for turning ON the detonator and igniting the explosive [3]. The entire unit is powered by two 9 V batteries, which supply the circuits of the microcontroller and the DC booster respectively.

Figure 13: Track Mounted Demining Mobile Robot moving and detecting the prototype of an AP mine, with the area monitored through the camera feed.

Figure 14: Crane Unit rotated and then the Proximity Detonator being dropped over the marked area

Figure 15: Proximity Detonator being detonated remotely thereby igniting the explosive and destroying the landmine prototype


Whenever the device is paired with the detonator switch, it gets connected. When the user then sends ‘a’ using the ‘HyperTerminal’ software, the motor rotates and the switch is turned ON. Finally, the booster circuit sparks and the explosive is ignited, destroying the landmine [4].

V. SOFTWARE INTEGRATION AND RESULT ANALYSIS

We have used the AVR Studio 4.0 software, which provides a platform for writing the source code in embedded ‘C’; the hex file of the code is generated after compilation. This hex file is burnt onto the microcontroller using AVR Burner 1.0, sparing the user any difficult and redundant low-level coding. The software used for designing the circuit and the PCB is DipTrace 4.0.

A prototype of the proposed design was constructed and tested under different conditions. To test the capability of the prototype to detect metallic AP mines, together with the camera unit, it was tested against a prototype AP mine, as shown in figure 13. To check the functionality of the other units, viz. the crane unit, the track mounted demining mobile robot and the proximity detonator unit, a complete test was conducted as shown in figures 14 and 15. All motions followed the commands given. Thus the designed demining mobile robot is capable of detecting and diffusing anti-personnel landmines [3].

VI. CONCLUSIONS

A mobile robot based anti-personnel landmine detection and diffusion system has been designed, fabricated and integrated. The system design is based on AP mine detection through a locomotive, all-terrain vehicle (ATV) and, thereafter, mine diffusion using a proximity detonator. The setup is designed to be inexpensive. PIR sensors are used for detecting any living body near the remote vehicle. The mobile robot's detection and demining capability was demonstrated. The control algorithm used by the proposed design is extremely simple. The designed system can detect only metallic mines, which is a limitation; nevertheless, it can be further developed to detect plastic explosives using suitable sensors.

REFERENCES

[1]. S.-Y. Tseng, “High voltage generator using boost/flyback hybrid converter for stun gun applications,” in Proc. IEEE Applied Power Electronics Conference and Exposition (APEC), Volume 15, Issue 4, pp. 851-856, February 2010.

[2]. D. Sudac, “Improved system for inspecting minefields and residual explosives,” IEEE Transactions on Nuclear Instrumentation, Volume 15, Issue 4, pp. 108-120, June 2013.

[3]. Lorena Cardona, Jovani Jiménez and Nelson Vanegas, “Landmine Detection Technologies to face the demining problem in Antioquia,” Volume 81, Issue 183, pp. 115-125, Medellin, February 2014.

[4]. Tania Alauddin, Md. Tamzid Islam and Hasan U. Zaman, “Efficient Design of A Metal Detector Equipped Remote-Controlled Robotic Vehicle,” in Proc. IEEE International Conference on Microelectronics, Computing and Communications (MicroCom), Volume 61, Issue 4, pp. 1-5, January 2016.


Resource Optimization in C-RAN using D2D for 5G Wireless Communications

Divya Pandey, Alkesh Agrawal and Manju Bhardwaj
Faculty of Electronics Communication Engineering, Institute of Technology, Shri Ramswaroop Memorial University, Lucknow

[email protected] [email protected] [email protected]

Abstract - In the proposed algorithm for resource optimization in C-RAN using D2D for 5G wireless networks, device-to-device (D2D) communication refers to a radio technology that reduces the burden of fronthaul delay. The service quality of an MCS system depends on the platform, which brings the users under a common cloud with very low delay. The proposed algorithm uses a recent architecture for MCS that combines two of the latest technologies, C-RAN and D2D. Resource allocation in C-RAN with D2D is formulated as a stochastic optimization problem designed to maximize the overall throughput and reduce the delay; D2D is used as an effective method for achieving very low delay between links. Simulation results validate that D2D is able to improve throughput and reduce the delay of the constrained fronthaul in C-RAN. Furthermore, a common throughput-delay trade-off is achieved in the proposed solution.

Keywords: 5G, Cloud radio access networks (C-RAN), Device-to-device communications (D2D).

I. INTRODUCTION

The explosive global expansion of mobile communication predicts enormous growth of mobile data traffic over 2010-2020, with millions of devices interconnected with each other, which enhances the technology. Mobile wireless traffic has driven tremendous growth in the interconnection of smart devices. The C-RAN has been proposed as an evolution of the ultra-dense heterogeneous wireless network for the fifth generation wireless network [1]. Fig. 1 shows the successive generations of wireless technologies along with their data rate, mobility and spectral efficiency: as the technologies evolve, data rate and mobility increase [2]. All the RRHs (Remote Radio Heads) are linked to the centralized processing BBU (Base Band Unit) pool through the fronthaul [3]. In C-RAN, large-scale cooperative processing gains are managed by the BBU pool, which jointly precodes and decodes the user equipment symbols with CoMP (Coordinated Multi-Point) transmission techniques and also improves the SINR (Signal-to-Interference-plus-Noise Ratio) [4]. The C-RAN has been established to deliver high spectral efficiency and energy efficiency. However, the capacity and time delay of the fronthaul are constrained [5], which governs the overall performance of C-RAN. To reduce the heavy traffic load on the constrained fronthaul and decrease transmit latency, device-to-device (D2D) communications are introduced into C-RAN; D2D communications permit direct communication between pairs of D2D UEs without using an RRH. D2D is one of the key technologies that reduce the delay in current MCS.

II. RELATED WORKS

A power-control method has been proposed [6] for D2D communication. It limits the interference between the cellular and D2D links by bounding the SINR degradation of the cellular link. Sub-channel sharing for D2D communication has also been considered, to keep the joint interference among D2D pairs sharing the same channels in check. The energy efficiency [7] of D2D communication can be maximized in two ways: resource allocation using an iterative method, and an energy-efficient power-control method. The assumption in the above scheme is that all D2D pairs operate only in D2D mode. System performance is improved by mode selection, so it plays a very important role; thus mode selection must be taken into account when analyzing resource-allocation solutions

Fig. 1 The Evolution of All Wireless Technologies


designed for D2D communication. A D2D pair establishes its transmission mode from the Channel State Information (CSI) of the D2D links and cellular links [8]. The base station acts as the head and all D2D UEs participate as group members. In a cellular network, three different types of resource-sharing modes are used for D2D communications, and the optimization problem is to maximize the sum rate of the cellular network subject to inter-tier interference limits. In [9], joint mode selection, channel assignment and power control are proposed to maximize the system throughput. The spectral-efficiency performance of D2D communications is enabled in a CoMP downlink C-RAN.
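The mode-selection idea discussed above can be sketched as a rate comparison: pick D2D mode when the direct link achieves the higher Shannon rate. The function names and SINR values below are illustrative assumptions, not the cited papers' algorithms:

```python
import math

def shannon_rate(sinr):
    """Achievable spectral efficiency in bit/s/Hz for a given linear SINR."""
    return math.log2(1.0 + sinr)

def select_mode(sinr_d2d, sinr_cran):
    """Pick the mode with the higher achievable rate (returns 1 for D2D)."""
    return 1 if shannon_rate(sinr_d2d) >= shannon_rate(sinr_cran) else 0

# Illustrative SINR values (linear scale), not measured data.
print(select_mode(sinr_d2d=31.0, sinr_cran=15.0))  # nearby pair favours D2D
print(select_mode(sinr_d2d=0.5, sinr_cran=15.0))   # distant pair favours C-RAN
```

Since the rate is monotone in SINR, the comparison reduces to comparing SINRs directly; a real scheme would add the power and interference constraints discussed above.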

III. SYSTEM MODEL

A. End-to-End Delay: The network delay performance is measured as the time, in milliseconds, between a packet becoming available at the transmitter and its availability at the receiver. In this section we first introduce the system model [3], after which network stability and average delay are defined. Consider the C-RAN uplink of a D2D-enabled network implemented through a BBU pool, R remote radio heads and K D2D pairs, as shown in fig. 2. Each D2D pair consists of a transmitter user equipment and a potential receiver, named the receiving user equipment. Each remote radio head is provided with M antennas, whereas each D2D user equipment has only one antenna. For a D2D pair, two transmission modes are available: C-RAN mode and D2D mode. In C-RAN mode, the transmitter and receiver equipment of a D2D pair correspond with each other through the remote radio heads, and every remote radio head collects data symbols from the transmitter user by means of joint beamforming. D2D pairs used in D2D mode set up direct D2D links, reusing the same spectrum resources as the C-RAN uplinks.

Fig. 2 System model of C-RAN

Let us suppose that the network works in time-slotted mode, where slot t refers to the time interval [t, t+1), t ∈ {0, 1, 2, …}. Suppose the BBU pool obtains the CSI of all C-RAN uplinks and D2D links, and that the channels follow quasi-static block fading: they stay stable during the time of a slot and are distributed independently and identically across different slots. Let Ṝ = {1, 2, 3, …, R} and ₭ = {1, 2, 3, …, K} indicate the sets of RRHs and D2D pairs. In a particular slot t, the signal received for the transmitter equipment of D2D pair k in C-RAN mode can be written as the equation below:

where u_{n,k} is the uplink receive beamforming vector at RRH n for the transmitter equipment of D2D pair k, h_{n,k} is the CSI vector from RRH n to the transmitter equipment of D2D pair k, and the last term is the transmit vector at the RRH.

We assume that the transmission rate of the downlinks in C-RAN mode is no less than that of the uplinks; this can be justified because the RRHs have higher transmit power [15]. Thus, by Shannon's formula [16], the achievable transmission rate (in bit/s/Hz) of D2D pair k in C-RAN mode is specified as

where u_k is the network-wide beamforming vector for the transmitter equipment of D2D pair k.

In a particular time slot t, the signal received by the receiver equipment of a D2D pair operating in D2D mode is written as the equation below:

where g_k is the gain of the channel from the transmitter equipment of D2D pair k to its receiver equipment, and the remaining term denotes the AWGN at the receiver equipment


of D2D pairs k.

The binary mode-selection indicator b_k(t) of D2D pair k at time slot t is defined as:

Thus, a general expression for the achievable transmission rate of D2D pair k is given below:
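The display equations referenced in this section do not survive in this text. A sketch of the standard forms they correspond to, consistent with the notation of Table I (the symbols s_k, z_k, n_k and the C/D superscripts are reconstructions, not the paper's own):

```latex
% Reconstruction sketch; s_k (data symbol), z_k, n_k (noise) are assumed names.
\begin{align}
y_k^{C}(t) &= \sum_{n=1}^{R} \mathbf{u}_{n,k}^{H}\,\mathbf{h}_{n,k}(t)\, s_k(t) + z_k(t)
  && \text{(received signal, C-RAN mode)} \\
R_k^{C}(t) &= \log_2\!\bigl(1 + \mathrm{SINR}_k^{C}(t)\bigr)
  && \text{(Shannon rate, C-RAN mode)} \\
y_k^{D}(t) &= g_k(t)\, s_k(t) + n_k(t)
  && \text{(received signal, D2D mode)} \\
R_k(t) &= b_k(t)\,R_k^{D}(t) + \bigl(1-b_k(t)\bigr)\,R_k^{C}(t),
  \quad b_k(t)\in\{0,1\}
  && \text{(combined rate)}
\end{align}
```

The binary indicator b_k(t) simply switches the pair's rate between the two modes slot by slot.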

Table I: Summary of all important notations

Symbol   | Definition
Ṝ        | Set of RRHs, defined as {1, 2, 3, …, R}
₭        | Set of D2D pairs, defined as {1, 2, 3, …, K}
u_{n,k}  | Uplink receive beamforming vector at RRH n for D2D pair k
u_k      | Network-wide beamforming vector for the transmitter user equipment of D2D pair k
h_{n,k}  | CSI vector from RRH n to the transmitter user equipment of D2D pair k
h_k      | CSI vector from all RRHs to the transmitter user equipment of D2D pair k
g_k      | Channel gain from the transmitter equipment to the receiver equipment of D2D pair k
r_k      | Transmit power of D2D pair k
b_k      | Binary mode-selection indicator of D2D pair k
M_k      | Average throughput of D2D pair k
L_k      | Data queue length of D2D pair k
P_Dmax   | Tolerable interference threshold of D2D pair k
P_max    | Peak transmit power of a D2D pair
C_n      | Fronthaul capacity of RRH n

B. Average Throughput

The average throughput per cell is defined as the total number of bits successfully received by all users in the system, divided by the total number of cells being simulated and the total simulated time. The queuing-theory framework establishes the connection between queuing delay, transmission rate and arrival rate [14]. In the proposed algorithm we characterize the queuing delay [15].
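The per-cell average throughput definition above amounts to a one-line computation; the numbers below are made-up placeholders, not simulation output:

```python
# Average throughput per cell: total bits successfully received by all
# users, divided by the number of simulated cells and the simulated time.
total_bits = 4.8e9   # bits received by all users (placeholder value)
n_cells = 20         # number of simulated cells (placeholder value)
sim_time_s = 10.0    # simulated time in seconds (placeholder value)

avg_throughput_bps = total_bits / (n_cells * sim_time_s)
print(avg_throughput_bps)
```
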

Let us assume that each D2D pair maintains a buffering queue for its individual traffic. Let L_k(t) denote the data queue length of D2D pair k, and let H_k(t) be the stochastic process representing the traffic arrivals in slot t. The data queue length then evolves accordingly, and a queue is discrete-time mean-rate stable if its time-averaged length remains finite.

Fig. 3 Average Delay vs. offered load

If all the individual queues are stable, then the network is also stable.
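The queue evolution referred to above follows the standard Lyapunov-style recursion L_k(t+1) = max[L_k(t) − μ_k(t), 0] + H_k(t), a reconstruction consistent with the definitions here rather than the paper's own lost equation. A small simulation, with assumed uniform arrival and service processes, illustrates mean-rate stability when average service exceeds average arrivals:

```python
import random

def simulate_queue(arrival_rate, service_rate, slots=20000, seed=1):
    """Evolve L(t+1) = max(L(t) - mu(t), 0) + H(t) and return the
    time-averaged queue length (a finite average indicates stability)."""
    rng = random.Random(seed)
    q, total = 0.0, 0.0
    for _ in range(slots):
        mu = rng.uniform(0, 2 * service_rate)  # service offered this slot
        h = rng.uniform(0, 2 * arrival_rate)   # traffic arriving this slot
        q = max(q - mu, 0.0) + h
        total += q
    return total / slots

stable = simulate_queue(arrival_rate=1.0, service_rate=1.5)
unstable = simulate_queue(arrival_rate=1.0, service_rate=0.5)
print(round(stable, 2), round(unstable, 2))
```

With service averaging 1.5 against arrivals of 1.0 the time-averaged queue stays small, while the reversed case grows roughly linearly with time, which is the delay blow-up the resource-allocation problem is designed to avoid.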

IV. SIMULATION RESULTS

The simulation in this paper is simplified by considering one radio resource block for all RRHs; the resource-allocation schemes proposed in [10]-[15] can be used directly if a large number of RRHs is evaluated. We simulate in MATLAB a network consisting of R = 20 RRHs with M = 4 antennas each. In the proposed algorithm, a D2D pair works in C-RAN mode in every time slot. The optimization problem is handled by a beamforming design scheme for downlink C-RAN performance. The simulation results of the proposed algorithm give the performance of the C-RAN scenario in terms of delay.

Fig. 3 shows the simulated results for the average delay, calculated over the offered load. For all algorithms the average delay, given by the average queue length, increases with the offered load for a traffic arrival rate of λ = 1 bit per slot per hertz.

Fig. 4 shows the simulated results for the average throughput. The overall system throughput increases towards its optimum value: as the offered load increases, the average throughput also increases. When the distance within a D2D pair is large, C-RAN mode is more beneficial, and the BBU pool is more powerful at interference management when inter-tier interference among D2D pairs is present.


Fig. 4 Overall Throughput vs. offered load

V. CONCLUSION

In heterogeneous ultra-dense networks, the cloud radio access network faces several constraints, such as the capacity-limited fronthaul and long end-to-end delay, and device-to-device communication is a strong technique for improving C-RAN. In the proposed algorithm, we formulated the resource allocation of C-RAN as a stochastic optimization problem that considers dynamic traffic arrivals and time-varying channel conditions. The stochastic optimization problem is transformed into throughput maximization, which is a nonlinear function, and is divided into three sub-problems for analysis purposes: mode selection, beamforming and throughput analysis.

The simulation results show that the proposed algorithms provide better performance in terms of delay and throughput. In addition, the proposed algorithms offer flexibility in trading off delay reduction against throughput.

REFERENCES

[1] Y. Mo, M. Peng, H. Xiang, and Y. Sun, “Resource allocation in cloud radio access networks with device-to-device communications,” IEEE Access, vol. 5, pp. 1250–1261, Mar. 2017.

[2] A. Gupta and R. Jha, “A survey of 5G network: Architecture and emerging technologies,” IEEE Access, 2015.

[3] K. M. S. Haq, S. Mumtaz, and J. Rodriguez, “Enhanced C-RAN using D2D network,” IEEE Communications Magazine, pp. 100–107, Mar. 2017.

[4] M. Peng, Y. Li, Z. Zhao, and C. Wang, “System architecture and key technologies for 5G heterogeneous cloud radio access networks,” IEEE Netw., vol. 29, no. 2, pp. 6–14, Mar./Apr. 2015.

[5] M. Peng, Y. Sun, X. Li, Z. Mao, and C. Wang, “Recent advances in cloud radio access networks: System architectures, key techniques, and open issues,” IEEE Commun. Surveys Tuts., vol. 18, no. 3, pp. 2282–2308, 3rd Quart., 2016.

[6] M. Peng, H. Xiang, Y. Cheng, S. Yan, and H. V. Poor, “Inter-tier interference suppression in heterogeneous cloud radio access networks,” IEEE Access, vol. 3, pp. 2441–2455, Dec. 2015.

[7] M. Peng, S. Yan, K. Zhang, and C. Wang, “Fog-computing-based radio access networks: Issues and challenges,” IEEE Netw., vol. 30, no. 4, pp. 46–53, Jul./Aug. 2016.

[8] M. Peng, Y. Li, T. Q. S. Quek, and C. Wang, “Device-to-device underlaid cellular networks under Rician fading channels,” IEEE Trans. Wireless Commun., vol. 13, no. 8, pp. 4247–4259, Aug. 2014.

[9] C.-H. Yu, O. Tirkkonen, K. Doppler, and C. Ribeiro, “On the performance of device-to-device underlay communication with simple power control,” in Proc. IEEE Veh. Technol. Conf. (VTC Spring), Barcelona, Spain, Apr. 2009, pp. 1–5.

[10] W. Zhao and S. Wang, “Resource sharing scheme for device-to-device communication underlaying cellular networks,” IEEE Trans. Commun., vol. 63, no. 12, pp. 4838–4848, Dec. 2015.

[11] Y. Jiang, Q. Liu, F. Zheng, X. Gao, and X. You, “Energy-efficient joint resource allocation and power control for D2D communications,” IEEE Trans. Veh. Technol., vol. 65, no. 8, pp. 6119–6127, Aug. 2016.

[12] H. El Sawy, E. Hossain, and M. S. Alouini, “Analytical modeling of mode selection and power control for underlay D2D communication in cellular networks,” IEEE Trans. Commun., vol. 62, no. 11, pp. 4147–4161, Nov. 2014.

[13] K. Zhu and E. Hossain, “Joint mode selection and spectrum partitioning for device-to-device communication: A dynamic Stackelberg game,” IEEE Trans. Wireless Commun., vol. 14, no. 3, pp. 1406–1420, Mar. 2015.

[14] C.-H. Yu, K. Doppler, C. B. Ribeiro, and O. Tirkkonen, “Resource sharing optimization for device-to-device communication underlaying cellular networks,” IEEE Trans. Wireless Commun., vol. 10, no. 8, pp. 2752–2763, Aug. 2011.

[15] G. Yu, L. Xu, D. Feng, R. Yin, G. Y. Li, and Y. Jiang, “Joint mode selection and resource allocation for device-to-device communications,” IEEE Trans. Commun., vol. 62, no. 11, pp. 3814–3824, Nov. 2014.

[16] J. Liu, M. Sheng, T. Q. S. Quek, and J. Li, “D2D enhanced co-ordinated multipoint in cloud radio access networks,” IEEE Trans. Wireless Commun., vol. 15, no. 6, pp. 4248–4262, Jun. 2016.




Metamaterial Perfect Absorbers

Alkesh Agrawal and Mukul Misra

Faculty of Electronics & Communication Engineering, Shri Ramswaroop Memorial University, Lucknow, [email protected], [email protected]

Abstract— This manuscript reviews the study and design of Metamaterial Perfect Absorbers (MPAs). Over the past few years, a great deal of research has been witnessed in this field, progressing from the bulky Salisbury and Jaumann screens to narrow-band, multi-band, and broad-band MPAs, MPAs insensitive to the polarization of incident electromagnetic waves, and MPAs insensitive to wide angles of incidence. Recent MPA designs combine all of these desirable characteristics to improve absorptance over a wide frequency band, insensitive to polarization and wide incidence angles, with ultra-thin dimensions.

Keywords—Absorptance, broad-band, metamaterial perfect absorbers, permittivity, permeability

I. INTRODUCTION

An Electromagnetic (EM) wave absorber is a device capable of absorbing almost all incident radiation, thereby inhibiting transmission, reflection, and scattering. The first EM wave absorbers were independently developed by two well-known scientists, W. W. Salisbury and J. Jaumann [1], whose main aim was to improve radar performance and to provide protection against enemy radar systems. EM wave absorbers are classified as resonant absorbers and broad-band absorbers [2]. In resonant absorbers the material properties depend on the frequency of the incident EM waves, whereas in broad-band absorbers the material properties are frequency independent, so these absorbers can absorb incident EM waves over a large range of frequencies. The resonant EM absorbers developed were the Salisbury screen and the Jaumann absorber. The basic structure of an EM wave absorber comprises a resistive sheet, a lossless dielectric spacer of thickness (λo/4), and a metal ground plane. According to transmission-line theory, a metallic plate usually acts as a short circuit, but the same metallic plate placed at a distance (λo/4) behind a load appears as an open circuit at the resistive sheet, making its conductance contribution zero; the incident EM wave therefore experiences only the admittance of the resistive sheet. When the load impedance matches that of free space, the reflectivity becomes zero, and with the addition of losses a high magnitude of absorption is achieved.
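
The quarter-wave step in this argument can be written out with the standard transmission-line result (a worked supplement, not drawn from [1]-[2]):

```latex
% Impedance looking into a short-circuited line of length l (characteristic
% impedance Z_c); at l = \lambda_0/4 the tangent diverges, i.e. an open circuit:
Z_{\mathrm{in}} = j\,Z_c \tan\!\left(\frac{2\pi l}{\lambda_0}\right),
\qquad
Z_{\mathrm{in}}\big|_{\,l=\lambda_0/4} = j\,Z_c \tan\frac{\pi}{2} \to \infty .
```

With the backing section acting as an open circuit, the incident wave sees only the resistive sheet; choosing its sheet resistance equal to the free-space impedance η0 ≈ 377 Ω makes the reflection coefficient Γ = (Rs − η0)/(Rs + η0) vanish.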

The geometric transition absorber is an example of a broad-band EM wave absorber, commonly used for test setups in anechoic chambers. The wedge-shaped or pyramid-shaped lossy materials are designed so that EM waves impinging from free space enter these structures with minimal reflection, the absorption increasing gradually over the length of the wedge or pyramid geometry. The performance of these EM wave absorbers is limited, first, by their large thickness, which stems from the minimum (λo/4) spacing between the resistive sheet and the metal ground plane and increases further when lossy dielectric layers are cascaded for broad-band absorption. Second, their performance is confined to microwave frequencies because of the difficulty of matching the material impedance to that of free space. The demand for a material whose impedance matches that of free space, from which the EM waves are incident, while attenuating the incident EM waves rapidly led to a new field of research identified as Metamaterial Perfect Absorbers (MPAs).

Fig. 1. (a) Salisbury Screen. (b) Jaumann screen [1].

Fig. 2. Broad-band EM Wave absorber.


II. METAMATERIAL PERFECT ABSORBERS

A. Design and Absorption Mechanism

A conventional MPA consists of three layers: a top layer formed by a periodic array of Split Ring Resonators (SRRs) and a bottom ground plane of copper lamination, separated by a lossy or lossless dielectric material [3]. An incident electromagnetic wave produces surface currents, which in turn induce an electric field (E-field) and a magnetic field (H-field). The EM waves excite the resonators as they propagate through the MPA and are absorbed, with the bottom copper ground plane blocking transmission.

High absorptance is achieved by matching the normalized surface impedance of the MPA to the free-space impedance, which fixes the size, shape, and periodicity of the Split Ring Resonators (SRRs) or Electric Ring Resonators (ERRs).
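
In practice, absorptance is evaluated from the scattering parameters of the simulated or measured structure; the bookkeeping can be sketched as follows (the function name and the sample |S11| value are illustrative assumptions):

```python
def absorptance(s11, s21=0.0):
    """A = 1 - |S11|^2 - |S21|^2; with a continuous metal ground plane
    the transmission S21 is ~0, so absorption is set by reflection alone."""
    return 1.0 - abs(s11) ** 2 - abs(s21) ** 2

# A nearly matched surface reflecting only 1% of the incident power
# absorbs ~99% of it.
print(round(absorptance(0.1), 2))  # → 0.99
```

Impedance matching drives |S11| toward zero, which is why the matched designs reviewed below report absorptance close to unity.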

III. TRENDS IN DESIGNING MPAS

Since the demonstration of the first MPA in 2008, significant research has been carried out to improve the design and performance of MPAs while maintaining the conventional three-layered geometry [3]. Current research focuses on MPAs with narrow-band absorption [3-4], polarization-independent MPAs [5-8], MPAs with high absorption at wide angles of incidence [6-8], and MPAs with broad-band absorption [9-11].

A. Narrow-band MPAs

The first narrow-band MPA was experimentally demonstrated in 2008 by Landy et al. [3]. Its unit cell was a three-layered structure with an electric ring resonator as the top layer, a 0.2 mm thick FR-4 substrate as the middle layer, and a cut-wire section as the bottom layer. The unit cells were arranged in a 2D array with 0.72 mm separation. The MPA showed an absorptance of 88% at 11.48 GHz.

The first experimental work on MPAs in the microwave region was soon extended to the THz region by H. Tao et al. [12] in 2008, with 70% absorptance at 1.3 THz; to the mid-IR region by Xianliang Liu et al. [13] in 2010, with 97% absorptance at 6 µm; and to the NIR regime in 2010 by J. Hao et al. [14], with 88% wide-angle absorptance at 1.58 µm, and separately by N. Liu et al. [15], with 99% polarization-insensitive, wide-angle absorptance at 1.6 µm.

B. Dual-band and Multi-band MPAs

After the narrow-band MPAs, research focused on dual-band and multi-band resonance MPA designs. Two different multi-resonance MPA designs in the THz regime were reported separately at the same time by Hu Tao et al. [16] and Qi-Ye Wen et al. [17]. The uniqueness of these designs was that, instead of using two resonating structures in a unit cell, the same structure resonates at two different resonant frequencies.

Since then, much research has focused on polarization-insensitive MPAs, MPAs for wide angles of incidence, and broad-band MPAs in the microwave and THz regions.

C. Polarization Insensitive MPAs

Ayop et al. [18] in 2014 proposed a polarization-insensitive circular ring-shaped MPA with three frequency bands lying in the X-band. The highly symmetric design consists of three concentric copper ring resonators of different radii, fabricated on a 0.8 mm FR-4 substrate, with resonance peaks at 8 GHz, 10 GHz, and 12 GHz and absorptances of 97.33%, 91.84%, and 90.08%, respectively.

Dincer et al. [19] in 2014 proposed a polarization-angle-independent MPA for solar-cell applications operating in the microwave, infrared, and visible regions at 4.25 GHz, 268.82 THz, and 542.97 THz, respectively, with absorptance greater than 99%. The design consists of square-shaped resonators with gaps in the unit cell. For the infrared and visible regimes, the top and bottom layers were silver with thicknesses of 35 nm and 30 nm, respectively, whereas for the microwave range 0.036 mm copper layers separated by a 1.6 mm FR-4 substrate were used.

Chaurasiya et al. [20] in 2016 proposed a compact multi-band polarization-insensitive MPA with four absorptance peaks at 4.11 GHz, 7.91 GHz, 10.13 GHz, and 11.51 GHz, with absorptances of 98.81%, 99.68%, 99.98%, and 99.34%, respectively. The unique three-layered design combines an outer circular ring with a concentric inner circular split ring, the splits of the inner ring being placed in the four quadrants of the cross-connected outer ring. The MPA was fabricated on a double-sided FR-4 substrate of 1 mm thickness.

D. Wide Incidence angle and Polarization angle Insensitive MPAs

Ghosh et al. [21] in 2015 proposed an ultra-wideband, ultra-thin MPA consisting of concentric split rings with above 90% absorptance over the 7.85-12.25 GHz range, insensitive to the polarization angle and to wide incidence angles of the incident EM waves. The highly symmetric structure was fabricated on a 2 mm thick FR-4 substrate.

Sood et al. [22] in 2015 proposed a wide-band, wide-angle, polarization-insensitive, ultra-thin MPA. The MPA showed more than 90% absorptance from 5.27 GHz to 6.57 GHz, occupying 33% of the C-band, with two absorptance peaks at 5.34 GHz and 6.33 GHz of 99.20% and 99.62%, respectively. In this design, four circular slots were etched symmetrically with respect to the centre of the unit cell on a top hexagonal copper layer of 0.035 mm thickness, fabricated on a double-sided FR-4 substrate of 1.6 mm thickness.

Agrawal et al. [7] in 2017 proposed a wide-incidence-angle and polarization-angle-insensitive, ultra-thin, multi-band MPA. The measured results showed 99.5%, 99.8%, 99.5%, and 99.9% absorptance at 7.20 GHz (C-band), 9.3 GHz (X-band), 12.61 GHz (X-band), and 13.07 GHz (Ku-band), respectively. The unit cell consisted of eight continuous rings, with one set of concentric continuous rings placed in each of the four quadrants and the same set fabricated on the diagonally opposite side. The MPA was a three-layered structure on a 1 mm FR-4 substrate, with the copper ring structures on top and a copper-laminated sheet at the bottom, both 0.035 mm thick.

IV. APPLICATIONS OF METAMATERIAL PERFECT ABSORBERS

MPAs find applications in suppressing antenna side-lobes and narrowing main lobes, thereby improving the antenna radiation pattern [1-2]; in suppressing electromagnetic interference arising from microwave exposure in mobile communication [1-2]; in improving radar performance by reducing the radar cross section [23]; in thermal emitters [24] and wavelength-tunable sensors [6, 25]; in solar-cell applications in the microwave, infrared, and visible regions [19]; and in electronically configurable devices for different polarizations and modulation [26].

V. CONCLUSION

Metamaterial Perfect Absorbers are devices capable of absorbing incident electromagnetic radiation to an appreciably high level of absorptance. Conventional MPAs are three-layered structures developed to overcome the shortcomings of the Salisbury and Jaumann screens. Substantial improvement in the design and performance of MPAs has been reported: from single-band to multi-band absorptance, from narrow-band to broad-band absorptance, from polarization-sensitive to polarization-insensitive absorptance, and from normal-incidence to wide-incidence-angle absorptance.

REFERENCES

[1] W.W. Salisbury, “Absorbent body for electromagnetic waves,” United States Patent 2599944, 1954.

[2] W. H. Emerson, “Electromagnetic wave absorbers and anechoic chambers through the years,” IEEE Transactions on Antennas and Propagation, vol. AP-21, no. 4, pp. 484–490, 1973.

[3] N. I. Landy, S. Sajuyigbe, J. J. Mock, D. R. Smith, and W. J. Padilla, “Perfect metamaterial absorber,” Physical Review Letters, Vol. 100, pp. 207402, 2008.

[4] X. Liu, T. Starr, A. F. Starr, and W. J. Padilla, “Infrared spatial and frequency selective metamaterial with near-unity absorbance,” Physical Review Letters, Vol. 104, pp. 207403, 2010.

[5] S. Bhattacharyya, S. Ghosh, and K. V. Srivastava, “Triple band polarization-independent metamaterial absorber with bandwidth enhancement at X-band,” Journal of Applied Physics, Vol. 114, pp. 094514, 2013.

[6] F. Dincer, M. Karaaslan, S. Colak, E. Tetik, O. Akgol, O. Altıntas, and C. Sabah, “Multiband polarization independent cylindrical metamaterial absorber and sensor application,” Modern Physics Letters B, Vol. 30, no. 8, 1650095, 2016.

[7] A. Agrawal, M. Misra, and A. Singh “Oblique Incidence and Polarization Insensitive Multiband Metamaterial Absorber with Quad Paired Concentric Continuous Ring Resonators,” Progress In Electromagnetics Research M, Vol. 60, pp. 33–46, 2017.

[8] M. Agarwal, A. K. Behera and M. K. Meshram, “Wide-angle quad-band polarization insensitive metamaterial absorber,” Electronics Letters, vol. 52, no. 5, pp. 340-342, 2016.

[9] O. Ayop, M. K. A. Rahim, N. A. Samsuri, “Dual band polarization insensitive and wide angle circular ring metamaterial absorber,” 8th European Conference on Antennas and Propagation (EuCAP), pp. 955-957, 2014.

[10] W. Pan, X. Yu, J. Zhang, and W. Zeng, “A Broadband Terahertz Metamaterial Absorber based on Two Circular Split Rings,” IEEE Journal of Quantum Electronics, Vol. 53, no. 1, 2017.

[11] S. Ghosh, S. Bhattacharyya, D. Chaurasiya, and K. V. Srivastava, “An ultra-wideband ultra-thin metamaterial absorber based on circular split rings,” IEEE Antennas and Wireless Propagation Letters, Vol. 14, pp. 1172–1175, 2015.

[12] H. Tao, N. I. Landy, C. M. Bingham, X. Zhang, R. D. Averitt, and W. J. Padilla, “A metamaterial absorber for the terahertz regime: Design, fabrication and characterization,” Optics Express, Vol. 16, pp. 7181–7188, 2008.

[13] X. Liu, T. Starr, A. F. Starr, and W. J. Padilla, “Infrared spatial and frequency selective metamaterial with near-unity absorbance,” Physical Review Letters, Vol. 104, pp. 207403, 2010.

[14] J. Hao, J. Wang, X. Liu, W. J. Padilla, L. Zhou, and M. Qiu, “High performance optical absorber based on a plasmonic metamaterial,” Applied Physics Letters, Vol. 96, No. 25, pp. 251104, 2010.

[15] N. Liu, M. Mesch, T. Weiss, M. Hentschel, and H. Giessen, “Infrared perfect absorber and its application as plasmonic sensor,” Nano Letters, Vol. 10, No. 7, pp. 2342–2348, 2010.

[16] H. Tao, C. M. Bingham, D. Pilon, K. Fan, A. C. Strikwerda, D. Shrekenhamer, W. J. Padilla, X. Zhang, and R. D. Averitt, “A dual band terahertz metamaterial absorber,” Journal of Physics D: Applied Physics, Vol. 43, No. 22, pp. 225102, 2010.

[17] Q. Y. Wen, H. W. Zhang, Y. S. Xie, Q. H. Yang, and Y. L. Liu, “Dual band terahertz metamaterial absorber: design, fabrication and characterization,” Applied Physics Letters, Vol. 95, No. 24, pp. 241111, 2009.

[18] O. Ayop, M. K. A. Rahim, N. A. Murad, N. A. Samsuri, and R. Dewan, “Triple Band Circular Ring-Shaped Metamaterial Absorber for X-Band Applications”, Progress In Electromagnetics Research M, Vol. 39, 65–75, 2014.

[19] F. Dincer, O. Akgol, M. Karaaslan, E. Unal, and C. Sabah, “Polarization Angle Independent Perfect Metamaterial Absorbers for Solar Cell Applications in the Microwave, Infrared, and Visible Regime”, Progress In Electromagnetics Research, Vol. 144, pp. 93–101, 2014.

[20] D. Chaurasiya, S. Ghosh, S. Bhattacharyya, A. Bhattacharya, and K. V. Srivastava, “Compact multi-band polarisation-insensitive metamaterial absorber”, IET Microw. Antennas Propag., Vol. 10, pp. 94–101, 2016.

[21] S. Ghosh, S. Bhattacharyya, D. Chaurasiya, and K. V. Srivastava, “An Ultra-wideband Ultra-thin Metamaterial Absorber Based on Circular Split Rings”, IEEE Antennas and Wireless Propagation Letters, Vol. 14, pp. 1172-1175, 2015.

[22] D. Sood and C. C. Tripathi, “A Wideband Wide-Angle Ultra-Thin Metamaterial Microwave Absorber”, Progress In Electromagnetics Research M, Vol. 44, 39–46, 2015.

[23] A. Motevasselian, and B. L. G. Jonsson, “Radar cross section reduction of aircraft wing front end,” Proceedings IEEE International Conference on Electromagnetics in Advanced Applications (ICEAA ’09), 237–240, Turin, Italy, 2009.

[24] X. Liu, T. Tyler, T. Starr, A. F. Starr, N. M. Jokerst, and W. J. Padilla, “Taming the blackbody with infrared metamaterials as selective thermal emitters,” Physical Review Letters, Vol. 107, 045901, 2011.

[25] T. Maier and H. Brueckl, “Wavelength-tunable micro-bolometers with metamaterial absorbers,” Optics Letters, Vol. 34, pp. 3012–3014, 2009.

[26] B. Zhu, Y. Feng, J. Zhao, C. Huang, and T. Jiang, “Switchable metamaterial reflector/absorber for different polarized electromagnetic waves,” Applied Physics Letters, Vol. 97, 051906, 2010.


Despeckling and Enhancement Techniques for Synthetic Aperture Radar (SAR) Images: A Technical Review

Ankita Bishnu, Ankita Rai and Vikrant Bhateja

Department of Electronics and Communication Engineering, Shri Ramswaroop Memorial Group of Professional Colleges (SRMGPC), Lucknow, India

[email protected], [email protected] and [email protected]

Abstract— Synthetic Aperture Radar (SAR) images, which are formed from complex signals, are corrupted by speckle noise, also called multiplicative noise. Speckle noise degrades image classification and interpretation, which makes despeckling an important step in the processing of SAR images. Moreover, despeckling alone is not sufficient to extract the important information in an image: it produces blurred images with unclear edges, which motivates a second important step, enhancement, to increase image contrast. This paper discusses fourteen different despeckling and enhancement techniques used for speckled images, and important recommendations are drawn at the end.

Keywords— SAR images, Multiplicative noise, Speckle, Despeckling, Image enhancement.

I. INTRODUCTION

Synthetic Aperture Radar (SAR) is a form of radar that captures two-dimensional or three-dimensional images of targets such as landscapes and other geographical areas. SAR exploits the movement of the radar antenna over a particular target: subsequent processing combines the echoes recorded from the multiple antenna positions, forming higher-resolution SAR images. In other words, SAR actively emits signals and acquires the distributed signals reflected from the target region [1]. The received signal is composite, consisting of in-phase and quadrature channels summed inconsistently over several reflected waves [2]. SAR imagery is widely used in the monitoring of land surfaces, water bodies, the environment, and disaster management because of its all-weather, round-the-clock acquisition ability and its repeatability; SAR images are also used in remote sensing and in mapping the surfaces and subsurfaces of other planets [3]. However, during SAR image acquisition a system-inherent granular noise, called speckle noise, occurs. Speckle corrupts the entire image, weakening its appearance and degrading its radiometric quality. Because of the presence of speckle, the SAR image is blemished, so acquiring useful information from it, and its analysis and interpretation, become quite difficult [4]. Denoising while keeping appearances and corners unblemished is an important pre-processing step for SAR image interpretation [5]. To increase the visual quality of the image it becomes necessary to despeckle, or denoise, the SAR image for its successful exploitation [6]. However, the main challenges that come with despeckling are residual noise, spatial-resolution preservation, weakening of edges and contours, and blurring of the image as a whole. Therefore, after despeckling, it becomes imperative that the image is enhanced and sharpened. The enhanced image highlights the useful information, extending the deviation between distinct characteristics so as to refine the ability to clarify and analyse the image as required [7]. The remainder of this paper is organised as follows. Section II introduces speckle noise modelling. Section III presents a technical review of the despeckling techniques used for SAR images so far. Section IV reviews enhancement techniques for SAR images. Finally, some inferences and perspectives are drawn in Section V.

II. SPECKLE NOISE MODELLING

A. Speckle Noise

Speckle is a grainy noise that implicitly occurs in radar-captured images and diminishes their appearance. It is called granular noise because of the random distribution of light when it is scattered by rough surfaces. Speckle can be defined as the fluctuation of pixel values around a mean, the mean being the required backscatter coefficient of the region. It occurs when the backscattered signals interfere at the transducer aperture: the uneven black and white dots in a SAR image result from the constructive and destructive interference of the signals backscattered by the targets [8].

B. Multiplicative Model of Speckle Noise

Speckle is an inherent characteristic of SAR images that originates from the coherent pulses reflected from the earth's surface. Speckle can be modelled as multiplicative noise: an undesired signal that multiplies the desired signal during acquisition. Modelling speckle as multiplicative noise reflects the fact that the desired image data, acquired as the terrain backscatter, is multiplied by a stationary random process representing the coherent fading in the images; radar-captured images are thus said to be corrupted by multiplicative (speckle) noise [9]. As discussed before, the pixel values in the image of a relatively uniform object vary with position due to constructive and destructive interference. Mathematically, speckle noise can be expressed as in Eq. 1.

g(n, m) = f(n, m) · u(n, m) (1)

where g(n,m) is the observed image, f(n,m) is the originally captured image, and u(n,m) is the multiplicative noise component.
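
Eq. 1 can be illustrated by synthesizing speckle on a toy scene; the unit-mean Gamma fading model and the look count used below are common assumptions for multi-look SAR intensity speckle, not values specified in the text.

```python
import numpy as np

def add_speckle(image, looks=4, seed=0):
    """Multiplicative model of Eq. 1: g = f * u, with the fading u drawn
    from a unit-mean Gamma distribution (shape L, scale 1/L), a common
    model for L-look SAR intensity speckle."""
    rng = np.random.default_rng(seed)
    u = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
    return image * u

f = np.full((64, 64), 100.0)   # uniform "terrain" backscatter
g = add_speckle(f, looks=4)
print(g.mean())                # stays close to 100, since E[u] = 1
print(g.std() > 0.0)           # → True: the uniform scene is now grainy
```

The grain in `g` over a perfectly uniform scene is exactly the pixel-value fluctuation around the mean backscatter described above.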

III. DESPECKLING TECHNIQUES FOR SAR IMAGES

Since the advent of SAR images, a multitude of studies have addressed the issue of despeckling. A wide variety of filters have been developed using frequency-domain, spatial-domain, and adaptive techniques to clear speckle noise from SAR images. Some of these filters are discussed in the following subsections.

A) Scalar Filters

Scalar filters operate on the ratio of local statistics of the image, which helps in denoising and smoothing homogeneous regions, where speckle is fully concentrated. Fundamental scalar filters are the Mean Filter, Median Filter, Adaptive Mean Filter, Anisotropic Diffusion Filter, and Homomorphic Filter [10].

1. Mean, Median and Adaptive Mean filters

The mean filter works on the principle that the centre pixel is substituted by the average of all pixels in the chosen window. It removes speckle noise to a certain extent, but gives the image a hazy appearance, since some important details are lost [11]. The median filter is a non-linear filter that performs better than the mean filter: the centre pixel is replaced by the median of all the pixels in the chosen window. Because of this median characteristic it is used to lessen the impulsive speckle noise present in the image. The main advantage of the median filter is that it preserves the edges of the image; its drawback is a large computation time [10]-[11]. The adaptive mean filter is an extension of the mean filter that mainly aims at removing pixels with too unusual a value from the filtering window [9].
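
The centre-pixel replacement rules above can be tried directly with `scipy.ndimage` (the toy image and the 3×3 window size are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

img = np.zeros((9, 9))
img[4, 4] = 90.0                          # a single impulsive speckle spike

mean_out = uniform_filter(img, size=3)    # centre pixel -> 3x3 window average
med_out = median_filter(img, size=3)      # centre pixel -> 3x3 window median

print(round(mean_out[4, 4], 6))  # → 10.0 (the spike is smeared over the window)
print(med_out[4, 4])             # → 0.0 (the median rejects the impulse)
```

The contrast between the two outputs shows why the median filter handles impulsive speckle better, at the cost of more computation per window.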

2. Anisotropic Diffusion (AD) Filter

The AD filter relies on a numerical diffusion process based on partial differential equations. The diffusion is isotropic in homogeneous regions but becomes anisotropic where the image intensity changes. The filter is iterated 10 to 100 times [12] for the diffusion process to take place. A single iteration is expressed mathematically by Eq. 2.

∂I/∂t = div(c(x, y, t) ∇I) = c(x, y, t) ΔI + ∇c · ∇I (2)

where div is the divergence operator, ∇ and Δ are the gradient and Laplacian operators with respect to the space variables, and c(x, y, t) is the diffusion coefficient.
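
A minimal numpy sketch of the discrete update behind Eq. 2, in the common Perona-Malik form; the exponential diffusion coefficient, the parameter values, and the periodic border handling are illustrative assumptions, not taken from [12].

```python
import numpy as np

def anisotropic_diffusion(img, iterations=20, kappa=30.0, step=0.2):
    """One common discretisation of Eq. 2 (Perona-Malik): I_t = div(c grad I),
    with c = exp(-(|grad I|/kappa)^2) so that strong edges diffuse less.
    Periodic borders (np.roll) are used purely for brevity."""
    out = img.astype(float).copy()
    for _ in range(iterations):
        # Differences toward the four nearest neighbours.
        dn = np.roll(out, 1, axis=0) - out
        ds = np.roll(out, -1, axis=0) - out
        dw = np.roll(out, 1, axis=1) - out
        de = np.roll(out, -1, axis=1) - out
        # Edge-stopping diffusion coefficient, one per direction.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        cw, ce = np.exp(-(dw / kappa) ** 2), np.exp(-(de / kappa) ** 2)
        out += step * (cn * dn + cs * ds + cw * dw + ce * de)
    return out
```

In flat (homogeneous) regions the coefficients are near 1 and the update behaves isotropically, averaging the neighbours; across a sharp step with |∇I| ≫ κ, c ≈ 0 and the edge is left almost untouched.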

3. Homomorphic Filter

The homomorphic filter is mainly used to despeckle multitemporal SAR images. The principal idea is to apply the logarithmic transformation to the image prior to filtering and then take the exponential of the outcome; the required filtered image is obtained after performing the summation in the log domain [9]. The smoothing effect obtained is visually and numerically almost similar to that of the mean filter; however, when the homomorphic filter is compared with the median filter, less satisfactory results are obtained both visually and numerically [13].
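
A minimal sketch of the log-filter-exp pipeline described above, using a mean filter in the log domain (the function name, window size, and the small offset guarding log(0) are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def homomorphic_despeckle(img, size=3, eps=1e-6):
    """Eq. 1 gives log(g) = log(f) + log(u): multiplicative speckle becomes
    additive in the log domain, where a linear (mean) filter is applied
    before exponentiating back to intensities."""
    log_img = np.log(img + eps)                   # forward log transform
    smoothed = uniform_filter(log_img, size=size) # linear smoothing
    return np.exp(smoothed) - eps                 # back to intensity domain
```

A constant image passes through essentially unchanged, while multiplicative fluctuations are averaged out in the log domain, which is why the result resembles mean filtering.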

B) Adaptive Filters

Several adaptive filtering techniques have been introduced to obtain better results with different window sizes while also preserving the image contours.

1. Frost, Lee and Kuan Filters

The Frost filter is a linear, convolutional filter used to denoise SAR images. It is an exponentially weighted averaging filter that operates on the coefficient of variation of the degraded image: within an n×n window, the centre pixel value is substituted by a weighted sum of the neighbourhood pixel values, with the weighting factor decreasing away from the centre pixel [14]. The Lee filter performs better than other filters with respect to texture and edge conservation. It is based on the Local Statistics Mean Variance (LSMV) to keep finer details: in low-variance regions of the image it performs stronger denoising, while for high variance it can protect information in both low- and high-contrast regions. The drawback of the Lee filter is that it does not discard the speckle adjacent to boundaries, which consequently appear slightly blemished [15]. The Kuan filter is a local linear Minimum Mean Square Error (MMSE) filter. It improves on the Lee filter, giving more precise results, by converting the multiplicative speckle model into an additive linear form; its only drawback is that the ENL parameter is needed for computation [16].
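
The local-statistics update underlying the Lee filter can be sketched in its simplified additive-noise form, f̂ = m + k·(g − m); the window size and the crude global noise-variance estimate below are illustrative assumptions, not the filter's exact multiplicative formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, noise_var=None):
    """Simplified additive-form Lee update: f_hat = m + k (g - m).
    k -> 0 where local variance ~ noise variance (flat areas: smooth),
    k -> 1 where local variance dominates (edges: mostly preserved)."""
    img = img.astype(float)
    mean = uniform_filter(img, size=size)
    sq_mean = uniform_filter(img ** 2, size=size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)   # local variance
    if noise_var is None:
        noise_var = var.mean()                   # crude global noise estimate
    k = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + k * (img - mean)
```

The gain `k` is exactly the LSMV trade-off described above: flat regions collapse toward the local mean, while high-variance (edge) regions keep the observed pixel, which is also why speckle hugging a boundary survives.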

2. Enhanced Frost and Enhanced Lee Filter

The Lee and Frost filters are improved by introducing two thresholds on the coefficient of variation, making them more efficient than other filters. Averaging is performed when the local coefficient of variation is below the lower threshold; the filter functions as an all-pass filter when the local coefficient of variation is greater than the upper threshold; and when the local variation lies between the two thresholds, a scaling between averaging and the identity action is executed [17].

3. Gamma Map Filter

This filter uses the coefficient of variation and the contrast. Its performance is higher than that of the Frost and Lee filters, as it reduces the loss of texture information. The Gamma MAP filter behaves like the Enhanced Frost filter, except that when the local coefficient of variation lies between the two thresholds, the pixel value is computed from a Gamma a-posteriori estimate [18].

4. Wiener Filter

The Wiener filter is also known as the Least Mean Squared Error (LMSE) filter. It can restore blurred images, and it despeckles an image by matching it to a desired denoised image. Because the filter adapts to the local image variance (a larger local variance leads to less smoothing, while a smaller variance leads to more smoothing), it produces better results than plain linear filtering. Its disadvantage is that it needs considerable processing time [19].
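SciPy ships an adaptive Wiener filter with exactly this local-variance behaviour; the sketch below applies it to a synthetic noisy square (the image, noise level and window size are illustrative, not from the paper).

```python
import numpy as np
from scipy.signal import wiener

# Synthetic test scene: a bright square on a dark background.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

# wiener() adapts to local variance: flat areas are smoothed strongly,
# high-variance areas (edges) are smoothed little.
denoised = wiener(noisy, mysize=5)
```

The flat interior of the square comes out much smoother than in the noisy input, while the square's edges remain sharp.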

5. Rayleigh Likelihood Filtering Technique

The Rayleigh distribution is used for despeckling together with a maximum-likelihood estimation approach. The Rayleigh Maximum-Likelihood (R-ML) estimator of the underlying signal is examined statistically by calculating its first and second central moments. However, these estimators only handle smoothing close to the edges [20]. Most adaptive filters perform well under low noise, but additional filtering in homogeneous areas is a major limitation of these methods.

C) Transformed Domain Filters

Another method to solve the problem of speckle reduction is the application of transform domain filters, using a noise-suppression principle for SAR images. A multiscale linear processing scheme has been put forward for denoising radar images. These filters assume that, after a logarithmic transformation, the speckle noise follows a Gaussian distribution [21].

1. Pyramid Based Despeckling Filters

The pyramid transform is also used for speckle reduction. A ratio Laplacian pyramid was introduced to acknowledge the multiplicative nature of speckle. This method extends the classical Kuan filter to the multiscale domain, where the interscale layers of the ratio Laplacian pyramid are processed. However, the method requires an estimate of the noise variance in each interscale layer [22].

2. Wavelet Based Despeckling Filter

The wavelet filter uses a Bayesian framework to implement wavelet thresholding. The wavelet coefficients of the logarithmic noise and of the logarithmic signal are modelled by Maxwell and conditional Gaussian distributions, respectively. The major drawback of this filtering method is the need for an initial estimate in the transform domain, a constraint it shares with other transform domain methods [23].

IV. ENHANCEMENT TECHNIQUES FOR SAR IMAGES

Despite the many despeckling techniques now available, several challenges remain, including weak texture, blemished images, unclear edges and residual noise. For these reasons, a host of enhancement techniques has been developed over the years to increase the visual clarity and ease the interpretation of images required for further research [11]. Enhancement techniques are predominantly organised into two categories: image enhancement in the spatial domain and image enhancement in the transform domain. An overview of these techniques is given below.

A) Contrast Limited Adaptive Histogram Equalization (CLAHE)

CLAHE is a block-based technique able to avoid the unnecessary enhancement of noise in homogeneous regions that occurs with classical Histogram Equalization (HE). CLAHE differs from standard HE in that it operates on small patches of the image, computing a histogram for each subdivision and using it, after clipping, to redistribute the grey-level values of that region [24]. Its main disadvantage is that it is more time-consuming than the standard algorithm.
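A simplified per-tile version of CLAHE can be sketched as below (assuming a grayscale image scaled to [0, 1]). Real CLAHE additionally interpolates bilinearly between neighbouring tile mappings to hide tile seams; that step is omitted here for brevity, and the tile count and clip limit are illustrative values.

```python
import numpy as np

def clahe_simplified(img, tiles=4, clip=0.02, nbins=256):
    """Simplified CLAHE sketch: per-tile clipped histogram equalization.
    (Full CLAHE also interpolates between tile mappings; omitted here,
    so tile seams may be visible.)"""
    img = img.astype(float)
    out = np.empty_like(img)
    h, w = img.shape
    th, tw = h // tiles, w // tiles
    for ti in range(tiles):
        for tj in range(tiles):
            tile = img[ti*th:(ti+1)*th, tj*tw:(tj+1)*tw]
            hist, _ = np.histogram(tile, bins=nbins, range=(0.0, 1.0))
            hist = hist.astype(float)
            limit = clip * tile.size
            # Clip the histogram and redistribute the excess uniformly,
            # which limits the contrast amplification in flat regions.
            excess = np.clip(hist - limit, 0.0, None).sum()
            hist = np.minimum(hist, limit) + excess / nbins
            cdf = hist.cumsum() / hist.sum()   # normalized grey-level mapping
            idx = np.clip((tile * (nbins - 1)).astype(int), 0, nbins - 1)
            out[ti*th:(ti+1)*th, tj*tw:(tj+1)*tw] = cdf[idx]
    return out
```

In practice one would typically use a library routine instead, e.g. OpenCV's `cv2.createCLAHE(clipLimit=..., tileGridSize=...)`.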

B) Range Limited Bi-histogram Equalization (RLBHE)

Since HE changes the brightness of the image, RLBHE is used to preserve the background brightness. This method splits the input histogram into two individual sub-histograms by


Ankita Bishnu, Ankita Rai and Vikrant Bhateja


means of a threshold value that minimises the intra-class variance, so as to separate objects from the background. The equalised image is then adjusted to attain the minimum absolute mean brightness error between the original image and the equalised one. This method preserves the brightness and provides a natural enhancement of the image [25].

C) Unsharp Masking

Unsharp masking is used to increase contrast, especially at the edges of the image. It sharpens element edges without markedly increasing noise or blemish. A blurred (unsharp) negative copy of the image serves as a mask: the high-pass filtered, scaled version of the image obtained from this mask is added back to the original image. Due to its low computational complexity, unsharp masking is well suited for the enhancement process. However, because of the high-pass filter (HPF), the result may contain overshoot distortion around sharp edges. Although this method is helpful for enhancing images of low brightness, it does not distinguish between physical image information and noise, which can lead to poor visual clarity. Mathematically, it is represented by Eq. 3 [26].

s(x, y) = f(x, y) + G × (f(x, y) − f′(x, y)) (3)

where f(x, y) and s(x, y) denote the raw image and the enhanced image respectively, G is a constant and f′(x, y) denotes the blurred image.
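Eq. 3 translates directly into a few lines of Python; here a Gaussian low-pass filter plays the role of f′(x, y), and the gain `G` and blur width `sigma` are illustrative values, not parameters from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(f, G=1.5, sigma=2.0):
    """Eq. 3 sketch: s = f + G * (f - f_blur), with assumed G and sigma."""
    f = f.astype(float)
    f_blur = gaussian_filter(f, sigma)   # f'(x, y): blurred (unsharp) image
    return f + G * (f - f_blur)
```

On a step edge the output overshoots on either side of the edge, which is exactly the sharpening (and the potential overshoot distortion) described above.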

D) Image Enhancement in Transform Domain

Among transform domain methods, the main difference between the Fourier transform and the wavelet transform is that the latter can analyse non-stationary signals. The wavelet transform provides good localization in both the spatial and the frequency domain. Transform domain methods are therefore now widely used in image processing tasks such as image enlargement, enhancement, compression and denoising [27].

V. INFERENCES AND CONCLUSION

•	Scalar filters are based on the variance of the image and filter three separate parts of the input image. These filters give despeckling results in terms of variance reduction: for each part, the variance of the filtered image is estimated, followed by computation of the ratio variance(initial image)/variance(filtered image). Results show that the performance of scalar filters in speckle reduction is favourable, but not competent enough to preserve finer details [10].

•	Adaptive filters [14]-[18] are filters whose behaviour changes based on the statistical traits of the image inside the filter region described

by an m×n window. Almost all adaptive filters perform reasonably well under low noise. However, these filters are limited by additional filtering in regions of homogeneity.

•	Transform domain filters can be categorised according to their basis functions. The non-adaptive wavelet transform domain method gives better outcomes in speckle reduction while keeping image details such as edges intact [21].

•	Image enhancement techniques in the transform domain [27] are based on various transforms such as Fourier and wavelet methods. These transforms are widely used in image denoising, image merging and sharpening.

Many despeckling methods have emerged by combining different domains. Significant improvements can be observed in image analysis and interpretation, and in the enhancement of spatial resolution, edges, texture and point-like targets. A multitude of studies of both despeckling and enhancement methods shows that this field gives very promising results for the denoising and contrast enhancement of SAR images.

REFERENCES

[1] V. Bhateja, A. Gupta and A. Tripathi, “Despeckling of SAR Images in Contourlet Domain Using a New Adaptive Thresholding,” Proc. of IEEE 3rd International on Advance Computing Conference (IACC), pp. 1257-1261, 2013.

[2] V. Bhateja, A. Tripathi and A. Gupta, “Recent Advances in Intelligent Informatics,” Springer International Publishing, Switzerland, pp. 23, 2014.

[3] A. Gupta, A. Tripathi and V. Bhateja, “Despeckling of SAR Images in Contourlet Domain Using a New Adaptive Thresholding,” Proc. of 3rd IEEE International Advance Computing Conference (IACC), pp. 22-23, February 2013.

[4] V. Bhateja, K. Rastogi, A. Verma and C. Malhotra, “A Non-Iterative Adaptive Median Filter for Image Denoising,” Proc. of International Conference on Signal Processing and Integrated Networks (SPIN), pp. 113, 2014.

[5] A. Jain, S. Singh and V. Bhateja, “A Robust Approach for Denoising and Enhancement of Mammographic Images Contaminated with High Density Impulse Noise,” International Journal for Convergence Computing, Vol. 1, No. 1, pp. 38, 2013

[6] A. Jain and V. Bhateja, “A Novel Image Denoising Algorithm For Suppressing Mixture Of Speckle and Impulse Noise in Spatial Domain,” Proc. of 3rd International Conference, pp. 207, March 2011.

[7] L. Li and Y. Si, “A Novel Remote Sensing Image Enhancement Method Using Unsharp Masking in NSST Domain,” Journal of the Indian Society of Remote Sensing, Vol. 18, No. 6, pp. 2-3, June 2018.




[8] M. Forouzanfar and H. Abrishami-Moghaddam (2010), “Ultrasound Speckle Reduction in the Complex Wavelet Domain, in Principles of Waveform Diversity and Design,” SciTech Publishing, USA, Section B - Part V: Remote Sensing, pp. 558-77.

[9] V. S. Frost, J. A. Stiles, K. S. Shanmugan and J. C. Holtzman, “A Model for Radar Images and Its Application to Adaptive Digital Filtering of Multiplicative Noise,” IEEE Transactions on Pattern Analysis And Machine Intelligence, Vol. 4, No. 2, March 1982.

[10] J. Chanussot, F. Maussang and A. Hetet, “Scalar Image Processing Filters for Speckle Reduction on Synthetic Aperture Sonar Images,” Proc. of OCEANS ‘02 MTS/IEEE, Biloxi, MS, USA, pp. 2294-2299, October 2002.

[11] R. C. Gonzalez and R. E. Woods (1992), “Digital Image Processing, 2/E,” Prentice Hall, New Jersey, Chapter 5, pp. 231-232.

[12] P. Perona and J. Malik, “Scale-Space and Edge Detection using Anisotropic Diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 7, pp. 629-639, July 1990.

[13] D. Coltuc and R. Radescu, “The Homomorphic Filtering by Channel’s Summation,” Proc. of IEEE International Geoscience and Remote Sensing Symposium (IGARSS’02), Toronto, Canada, June 2002.

[14] Y. Yu and S. T. Acton, “Speckle Reducing Anisotropic Diffusion,” IEEE Transactions on Image Processing, Vol. 11, No. 11, pp. 1260-1270, November 2002.

[15] J.S Lee, “Speckle Analysis and Smoothing of Synthetic Aperture Radar Images,” Journal on Computer Graphics and Image Processing, Vol. 17, pp. 24-32, 1980.

[16] D. Kuan, A. Sawchuck, T. Strand and P. Chavel, “Adaptive Noise Smoothing Filter For Images With Signal Dependent Noise,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 7, No. 2, pp. 165–177, February 1985.

[17] A. Lopes, R. Touzi and E. Nezry, “Adaptive Speckle Filters and Scene Heterogeneity,” IEEE Transactions on Geoscience and Remote Sensing, Vol. 28, No. 11, pp. 992–1000, November 1990.

[18] N. D. A. Mascarenhas, “An Overview of Speckle Noise Filtering in SAR Images,” European Space Agency; provided by the NASA Astrophysics Data System.

[19] A. Garg, J. Goal, S. Malik, K. Choudhary and Deepika, “De-speckling of Medical Ultrasound Images using Wiener Filter and Wavelet Transform,” International Journal of Electronics and Communication Technology, Vol. 2, No. 3, March 2011.

[20] T. C. Aysal and K. E. Barner, “Rayleigh-Maximum-Likelihood Filtering for Speckle Reduction of Ultrasound Images,” IEEE Transactions On Medical Imaging, Vol. 26, No. 5, pp. 712, May 2007

[21] X. Zong, A. F. Laine and E. A. Geiser, “Speckle Reduction and Contrast Enhancement of Echocardiograms via Multiscale Nonlinear Processing,” IEEE Transactions on Medical Imaging, Vol. 17, No. 8, pp. 532–540, August 1998.

[22] B. Aiazzi, L. Alparone, S. Baronti and F. Lotti, “Multiresolution Local Statistics Speckle Filtering Based on a Ratio Laplacian Pyramid,” IEEE Transactions on Geoscience and Remote Sensing, Vol. 36, No. 5, pp. 1466-1476, September 1998.

[23] M.I.H. Bhuiyan, M. Omair Ahmad, M.N.S. Swamy, “New Spatially Adaptive Wavelet-Based Method for the Despeckling of Medical Ultrasound Images,” IEEE International Symposium on Circuits Systems, New Orleans, LA, USA, pp. 2347–2350, May 2007.

[24] J. Ma, X. Fan, S. Yang, X. Zhang and X. Zhu, “Contrast Limited Adaptive Histogram Equalization-Based Fusion in YIQ and HSI Color Spaces for Underwater Image Enhancement,” International Journal of Pattern Recognition and Artificial Intelligence, Vol. 32, No. 7, p. 1854018, July 2018.

[25] C. Zuo, Q. Chen, and X. Sui, “Range Limited Bi-Histogram Equalization for Image Contrast Enhancement,” Optik—International Journal for Light and Electron Optics, Vol. 124, No. 5, pp. 425–431, May 2013.

[26] R. Gupta and V. Bhateja, “A New Unsharp Masking Algorithm for Mammography using Non-Linear Enhancement Function,” Proc. of International Conference on Information System Design and Intelligent Applications, Vishakhapatnam, India, pp. 113-114, January 2012.

[27] M. T. Alonso, C. L. Martínez, J. J. Mallorquí and P. Salembier, “Edge Enhancement Algorithm Based on the Wavelet Transform for Automatic Edge Detection in SAR Images,” IEEE Transactions On Geoscience And Remote Sensing, Vol. 49, No. 1, pp. 222, January 2011.




Pre-Processing of Cough Signals using Discrete Wavelet Transform

Ahmad Taquee, Vikrant Bhateja, Adya Shankar and Agam Srivastava
Department of Electronics and Communication Engineering,

Shri Ramswaroop Memorial Group of Professional Colleges (SRMGPC), Lucknow, India
[email protected], [email protected], [email protected] and [email protected]

Abstract—During acquisition, the cough sound signal gets contaminated by noise from the surroundings or from other sources, and removing that noise from the contaminated cough sound signal is a difficult task. Over the previous two decades, various types of filters have been suggested for filtering noise from cough sound signals; however, cough signal denoising still requires improvement. This paper therefore presents a thresholding-based technique using the discrete wavelet transform (DWT) for eliminating the noise present in acquired cough sound signals, which also improves their signal-to-noise ratio (SNR).

Keywords—Cough, Denoising, DWT, SNR, Thresholding.

1. INTRODUCTION

Cough has been one of the most recurrent indications of almost all types of respiratory disease in childhood. It is a common symptom in the initial and intermediate stages of respiratory diseases such as asthma, bronchiolitis, pneumonia, bronchitis, croup and cyanosis [1]. Cough signals are a fundamental symptom of pneumonia, yet the mathematical analysis of cough signals has rarely been used in diagnosing the disease. Further, it has been reported that cough signals carry essential information on the lower respiratory tract of the lung, which allows diseases to be identified [2]. When cough signals are acquired for the analysis of different diseases, they become contaminated by various types of noise, so noise filtering of the acquired cough signals plays a major role in signal analysis. Over the previous two decades, several techniques have been presented to recover the original cough signals from noisy signals [3]. Conventional filtering approaches for cough signal denoising include low-pass filters [4] and different types of filter banks, such as linear-phase and orthogonal filters [5, 6]; modelling techniques such as the Mean Shift Algorithm [7], Kalman filters [8], Empirical Mode Decomposition [9, 10] and state vectors with time delay [11] have also been proposed. However, these techniques are of limited use when the cough signals are contaminated with more than one type of noise. Earlier, wavelet-based denoising of such signals using thresholding was introduced by Donoho and Johnstone [12]. With the development of different types of transforms, various approaches have been

suggested for denoising cough signals, such as the Fourier Transform (FT), the Short-Time Fourier Transform (STFT) and the Gabor transform. Since the FT represents cough signals in the frequency domain only, it is not possible to analyse them in the time domain; hence the FT is not suitable for time-varying signals and is also unable to preserve the sharpness of the signal [13]. The STFT analyses the cough signal through a small window, and its resolution is limited: a wide analysis window provides good frequency resolution but poor time resolution, while a narrow window provides good time resolution but poor frequency resolution [13]. Among transform methods for noise removal, the most widely used is the wavelet transform [14]. In this paper, a threshold-based technique using the Wavelet Transform (WT) is presented for noise reduction. The wavelet transform is a good tool for multiresolution analysis of cough signals and can easily be implemented with a convolution-based algorithm. A WT-based technique using soft thresholding was proposed in [15], but soft thresholding may not be effective in filtering cough sound signals. A hard thresholding technique using the DWT of cough signals is therefore proposed here. The rest of this paper is organised as follows: Section II describes the proposed filtering methodology, Section III presents the results and discussion, and Section IV concludes the proposed work.

II. PROPOSED FILTERING METHODOLOGY

A. Wavelet Transform

For the filtering of signals such as cough sound signals, the WT has been applied to the cough signals. The decomposition of the cough sound signal takes place in the time-frequency scale plane [16]. Residual noise and white Gaussian noise present in cough sound signals are not easily eliminated by a simple combination of filters; therefore the WT is used for noise filtering of the cough sound signals [17]. The WT represents signals in both the frequency and the time domain [16]. In the WT, variants of wavelets are generated from a single fundamental wavelet




ψ(t), called the mother wavelet. The scale (or dilation) factor s and the translation (or shift) factor τ are the two important parameters of the WT. The dilated and shifted versions of the mother wavelet are given by:

ψ_{s,τ}(t) = (1/√s) ψ((t − τ)/s) (1)

The wavelet transform of a signal x(t) can be represented as a function of the mother wavelet ψ(t) as:

T(s, τ) = ∫_{−∞}^{+∞} x(t) ψ*((t − τ)/s) dt (2)

where * denotes the complex conjugate of ψ [18]. Using the DWT, the cough signals are analysed into sub-bands at different frequencies, decomposing the signal into approximation and detail coefficients.

B. Methods for Pre-Processing of Cough Signals

The acquired cough sound signals are contaminated with various types of noise, which is a major issue in cough signal analysis. A pre-processing approach is used to remove the white Gaussian noise present in the cough sound signals [19]. In the presented work, a threshold-based noise filtering technique using the discrete wavelet transform of cough signals is proposed. The detail sub-bands obtained from the decomposition of the cough sound signal carry the residual noise, which is difficult to eliminate through simple filtering processes [16]. The discrete noisy cough signal is modelled as follows [20]:

y[n] = x[n] + e[n] (3)

where x[n] is the original acquired cough signal, e[n] is the added noise and y[n] is the noisy cough signal. A threshold value λ is selected using the universal threshold given in [21]:

λ = σ √(2 ln N) (4)

where N denotes the length of the cough signal and σ represents the standard deviation. The value of σ can be calculated as [22]:

σ = MAD(d[n]) / 0.6745 (5)

where MAD is the Median Absolute Deviation of the detail coefficients d[n]. The value of λ calculated from Eq. 4 is applied to all the coefficients of the cough signal. The hard thresholding rule is as given in [23]: coefficients whose magnitude is below λ are set to zero, while the remaining coefficients are kept unchanged. The following SNR formula is used to assess the performance of the overall analysis of the cough sound signals [22]:

SNR(dB) = 10 log10 [ Σ_{n=1}^{N} x_dn[n]² / Σ_{n=1}^{N} (x[n] − x_dn[n])² ] (6)

where x_dn[n] denotes the denoised cough signal and x[n] is the original cough signal.
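The pipeline of Eqs. 3–6 can be sketched in Python. For brevity this sketch uses a hand-written single-level Haar DWT rather than the level-8 sym4 decomposition used in the paper (with PyWavelets one would call `pywt.wavedec(y, 'sym4', level=8)` instead); the threshold-selection and hard-thresholding steps follow the equations above.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: split a signal into approximation and detail."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar DWT (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise_hard(y):
    """Hard-threshold denoising of a noisy signal y[n] = x[n] + e[n]."""
    a, d = haar_dwt(y)
    sigma = np.median(np.abs(d)) / 0.6745        # Eq. 5: MAD noise estimate
    lam = sigma * np.sqrt(2 * np.log(len(y)))    # Eq. 4: universal threshold
    d = np.where(np.abs(d) > lam, d, 0.0)        # hard thresholding
    return haar_idwt(a, d)                       # IDWT reconstruction

def snr_db(x, x_dn):
    """Eq. 6: SNR in dB of the denoised signal x_dn against the original x."""
    x, x_dn = np.asarray(x, dtype=float), np.asarray(x_dn, dtype=float)
    return 10 * np.log10(np.sum(x_dn ** 2) / np.sum((x - x_dn) ** 2))
```

Because only the detail coefficients are thresholded, the smooth trend of the signal (carried by the approximation coefficients) is left intact while much of the additive white Gaussian noise is removed.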

III. RESULTS AND DISCUSSIONS

The cough signals are acquired with the help of a microphone. White Gaussian noise is added to the original cough sound signal, making it noisy. Fig. 5(a) shows a noisy cough sound signal sample whose SNR is calculated with Eq. 6. The DWT is applied at different levels and with different wavelet families to the noisy cough signal, and a threshold value λ is calculated with Eq. 4. Hard thresholding is performed by applying λ to all the coefficients of the decomposed noisy cough signal. Decomposition level 8 and the Symlets (sym4) wavelet family are chosen as the best combination, because at this setting the SNR of the filtered cough signal is closest to that of the original cough signal. The sym4 wavelet family yields high SNR values, representing better noise elimination than the various other wavelet families [16]. Fig. 5(b) shows the filtered cough signal at level 8 using the sym4 wavelet family. Thus, an improved noise filtering result using hard thresholding and the DWT has been obtained. The cough signal is reconstructed from the thresholded coefficients using the inverse discrete wavelet transform (IDWT). The essential data present in the reconstructed cough sound signal are preserved without disturbing the characteristic information in the cough signal [16].




Fig. 5. (a) Noisy cough signal sample, SNR = −13.3582 dB; (b) filtered cough signal sample, SNR = −13.3590 dB.

IV. CONCLUSION

In the presented work, the analysis of the acquired cough

sound signals has been carried out by applying a filtering technique using hard thresholding and the DWT. A threshold value is selected for thresholding the cough sound signals to remove the noise while keeping the cough signal smooth and distortion-free. The overall analysis shows that the hard thresholding technique improves the SNR of the acquired cough signals.

REFERENCES

[1] Y. A. Amrulloh, U. R. Abeyratne, V. Swarnkar, R. Triasih and A. Setyati, “Automatic Cough Segmentation from Non-Contact Sound Recordings in Pediatric Wards,” Biomedical Signal Processing and Control, Vol. 21, pp. 126-136, August 2015.

[2] U. R. Abeyratne, V. Swarnkar, A. Setyati and R. Triasih, “Cough Sound Analysis Can Rapidly Diagnose Childhood Pneumonia,” Annals of Biomedical Engineering, Vol. 41, No. 11, pp. 2448–2462, November 2013.

[3] R. Aggarwal, J. K. Singh, V. K. Gupta, S. Rathore, M. Tiwari and A. Khare, “Noise Reduction of Speech Signal using Wavelet Transform with Modified Universal Threshold,” International Journal of Computer Applications, Vol. 20, Issue 5, pp. 14-19, April 2011.

[4] T. Slonim, MA Slonim and EA Ovsyscher, “The Use of Simple FIR Filters for Filtering of ECG Signals and a New Method for Post-Filter Signal Reconstruction,” Proc. of Computers in Cardiology Conference, London, UK, pp. 871-873, September, 1993.

[5] V. X. Afonso, W. J. Tompkins, T. Q. Nguyen, S. Trautmann and S. Luo, “Filter bank-based Processing of the Stress ECG,”

Proc. 17th International Conference of the Engineering in Medicine and Biology Society, Montreal, Canada, pp. 887-888, September 1995.

[6] Y. Wu, R. M. Rangyaan, Y. Zhou and S. Ng, “Filtering Electrocardiographic Signals using An Unbiased and Normalized Adaptive Noise Reduction System,” Medical Engineering & Physics, Vol. 31, Issue 1, pp. 17-26, March 2008.

[7] J. Yan, Y. Lu, J. Liu, X. Wu and Y. Xu, “Self-Adaptive Model-based ECG Denoising using Features Extracted by Mean Shift Algorithm,” Biomedical Signal Processing and Control, Vol. 5, Issue 2, pp. 103-113, April 2010.

[8] O. Sayadi and M. B. Shamsollahi, “ECG Denoising and Compression Using a Modified Extended Kalman Filter Structure,” IEEE Transactions on Biomedical Engineering, Vol. 55, Issue 9, pp. 2240-2248, September 2008.

[9] H. Liang, Q. Lin and J. D. Z. Chen, “Application of the Empirical Mode Decomposition to the Analysis of Esophageal Manometric Data in Gastroesophageal Reflux Disease,” IEEE Transactions on Biomedical Engineering, Vol. 52, Issue 10, October 2005.

[10] H. Liang, Z. Lin and F. Yin, “Removal of ECG Contamination from Diaphragmatic EMG by Nonlinear Filtering,” Nonlinear Analysis, Vol. 63, Issue 5-7, pp. 745-753, December 2015.

[11] M. B. Velasco, B. Weng and K. E. Barner, “ECG Signal Denoising and Baseline Wander Correction Based on the Empirical Mode Decomposition,” Computers in Biology and Medicine, Vol. 38, Issue 1, pp. 1-13, January 2008.

[12] D. L. Donoho and I. M. Johnstone, “Adapting to Unknown Smoothness via Wavelet Shrinkage,” Journal of the American Statistical Association, Vol. 90, Issue 432, pp. 1200-1224, 1995.




[13] P. Arora and M. Bansal, “Comparative Analysis of Advanced Thresholding Methods for Speech-Signal Denoising,” Vol. 59, Issue 16, pp. 28-32, December 2012.

[14] M. S. Chavan, M. N. Chavan and M. S. Gaikwad, “Studies on Implementation of Wavelet for Denoising Speech Signal,” Vol. 3, Issue 2, pp. 1-7, June 2010.

[15] M. Alfaouri and K. Daqrouq, “ECG Signal Denoising By Wavelet Transform Thresholding,” American Journal of Applied Sciences, Vol. 5, Issue 3, pp. 276-281, 2008.

[16] V. Bhateja, S. Urooj, R. Verma and R. Mehrotra, “A Novel Approach for Suppression of Powerline Interference and Impulse Noise in ECG Signals,” Proc. of IMPACT-2013, Aligarh, India, pp.103-107, November, 2013.

[17] V. Bhateja, S. Urooj, R. Mehrotra, R. Verma, A. Lay-Ekuakille and V. D. Verma (2013), “A Composite Wavelets and Morphology Approach for ECG Noise Filtering,” International Conference on Pattern Recognition and Machine Intelligence, pp. 361-366.

[18] S. Poungponsri and X. Yu, “An Adaptive Filtering Approach for Electrocardiogram (ECG) Signal Noise Reduction using

Neural Networks,” Neurocomputing, Vol. 117, pp. 206-213, February 2013.

[19] V. Bhateja, A. Srivastava and D. K. Tiwari (2017), “An Approach for the Preprocessing of EMG Signals using Canonical Correlation Analysis,” Smart Computing and Informatics, pp. 201-208.

[20] B. N. Singh and A. K. Tiwari, “Optimal Selection of Wavelet Basis Function Applied to ECG Signal Denoising,” Digital Signal Processing, Vol. 16, Issue 13, pp. 275-287, May 2006.

[21] D. L. Donoho, “De-noising by Soft-Thresholding,” IEEE Transactions on Information Theory, Vol. 41, Issue 3, pp. 613-627, May 1995.

[22] N. H. Narona, S. Mukherjee and V. Kumar, “Wavelet Based Non-Linear Thresholding Techniques for Pre-Processing ECG Signals,” International Journal of Biomedical and Advance Research, Vol. 4, Issue 8, pp. 534-544, August 2013.

[23] H. A. R. Akkar, W. A. H. Hadi, I. H. Al-Dosari, “A Squared-Chebyshev Wavelet Thresholding Based 1D Signal Compression,” Defence Technology, pp. 1-6, August 2018.




Crime Prediction Using Data Analytics Tools: A Review

Shivam Maurya, Shivani Mohan Agarwal, Shivani Prasad and Dr. Jayant Mishra
Department of Computer Science and Engineering, SRMGPC, Lucknow, India

[email protected], [email protected], [email protected] and [email protected]

Abstract— This paper reviews a number of research papers on crime prediction. With the increasing crime rate, there is a need to control it so that people feel safer in their localities; hence, various research efforts aim to reduce this increasing rate. According to the information gathered, researchers have usually conducted studies to predict the crime rate by focusing on various factors that can contribute to crime. From the patterns and trends observed, they have either designed a model or used scientific and mathematical techniques to analyse the crime rate, and have further worked on establishing methods to control it. Obtaining accurate results is a challenging task because of the increasing number of crimes nowadays; it is therefore better to adopt crime prediction methods that can forecast future crime and help reduce its incidence. Here the objective is to provide the approach or methodology that best fits predictive analysis by comparatively analysing current data analytics tools.

Keywords -- Crime, Crime prediction, predictive analysis, analytic tools.

I. INTRODUCTION

The word ‘crime’ can be defined as any illegal act or activity against the rules and regulations defined by government authorities, for which someone can be punished by law. Crime can be divided into two categories: crimes of aggression (assault, homicide and rape) and crimes against property (burglary, robbery and theft) [1].

Nowadays, crime has become a severe threat to nations across the globe. With the development and urbanization of big towns and cities, a significant increase in crime can be seen. This phenomenal increase is a matter of concern and cannot be ignored: it is a major issue growing in intensity as well as in complexity. Earlier it used to be handled only by the criminal justice system and a few law-enforcement members, but because of the sharp rise in the crime rate, more people need to be involved in this activity for better and faster prediction. Modern technologies and inventions are also helping criminals fulfil their aims. According to the Crime Records Bureau, crimes like burglary,

arson etc have been decreased while crimes like murder, sex abuse, gang rape etc have been increased [1]. Crime cannot be predicted accurately since there is no fixed pattern but with the random pattern obtained various crime hotspots can be predicted. Crime Hotspots are the areas where the crime intensity is higher as compared to other areas or it can also be defined as a place where people have a higher risk of victimization.

Crime analysis is the systematic analysis for identifying and analyzing patterns and trends in crime. Due to the significant increase in the crime rate, there is a need to predict the pattern in order to lower that rate. Although 100% accuracy cannot be achieved from the information available, analysis can help decrease the crime rate and determine crime hotspots. Building a powerful tool that yields better results requires two main activities: data mining and data analysis.

Crime analysis can be done through both quantitative and qualitative methods [2]. Qualitative approaches to forecasting crime, such as environmental scanning, scenario writing, or Delphi groups, are particularly useful in identifying the future nature of criminal activity [3]. In contrast, quantitative methods are used to predict the future scope of the crime, and more specifically, crime rates [3].

Building a powerful tool requires collecting data and then performing specific data analysis procedures on it. The data can be collected from various sites such as Kaggle, Open Government Data, the National Crime Records Bureau (NCRB), and many more.

Since the data sets available on these sites also contain some unnecessary data, the data must be filtered according to our needs, for example with data mining techniques such as data preprocessing. Data mining can be defined as a process used to discover the hidden patterns present in large datasets. Data preprocessing is a data mining technique used to convert raw data into an understandable format; the data available on these sites contains errors that can be removed with its help.

Proceedings of Second International Conference on Computing, Communication and Control Technology (IC4T-2018), ISBN: 978-93-5291-969-7

Crime Prediction Using Data Analytics Tools; A Review

Proceedings of IC4T, 2018 47

The preprocessing of data consumes a lot of time, since identifying the correct pattern is a challenging task. Data preprocessing runs in parallel with the other processes until the pattern obtained is cross-validated. The data is divided into training data and test data: as the names suggest, the training data is used to fit the pattern being learned, and the test data is then used to check whether the identified pattern is correct by fitting the test data against it. Once a pattern is detected, it is used to predict the crime rate, and various measures can be established to prevent crime.
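As a rough illustration, the cleaning and hold-out split described above can be sketched in Python with pandas; the column names and records below are hypothetical stand-ins for a real crime dataset:

```python
import pandas as pd

# Hypothetical raw crime records; real data (e.g. from data.police.uk) has
# the same kinds of gaps: missing IDs, missing coordinates.
raw = pd.DataFrame({
    "crime_id":  ["a1", None, "c3", "d4", "e5", "f6"],
    "longitude": [-0.12, -0.13, None, -0.11, -0.12, -0.14],
    "latitude":  [51.50, 51.51, 51.52, None, 51.50, 51.49],
    "crime_type": ["burglary", "theft", "theft", "robbery", "burglary", "theft"],
})

# Preprocessing: drop records that cannot be used for spatial analysis.
clean = raw.dropna(subset=["longitude", "latitude"]).reset_index(drop=True)

# Hold out ~25% of the rows as test data; the rest becomes training data.
test = clean.sample(frac=0.25, random_state=0)
train = clean.drop(test.index)

print(len(clean), len(train), len(test))
```

The same dropna/sample pattern applies unchanged to a full downloaded file.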

The primary challenge is to design a powerful tool which can be used to predict the crime patterns and trends efficiently and effectively. The problems that can be faced during the work are:

• Increase in crime data and information that needs to be analyzed.

• Limitation on getting the exact crime data records from Law Enforcement Department.

• Incomplete and inconsistent data will make the analysis process difficult.

• 100% accuracy can’t be achieved.

II. LITERATURE REVIEW

Data mining, a process of examining large databases that already exist to find patterns and relationships to generate some new information, can be used to study and analyse crime databases. The primary objective of data mining is to obtain relevant information from the databases and convert it into understandable form so that it can be used in the future. Technically, data mining is the process of finding correlations or patterns among dozens of fields in large relational databases [4].

In 1998, D. E. Brown developed a software framework called ReCAP (Regional Crime Analysis Program) that uses data mining and data fusion techniques to catch professional criminals. In 2006, De Bruin et al. introduced a framework for analysing crime trends using a new distance measure that compares individuals on the basis of their profiles and then clusters them accordingly; this method also provided visual clustering of criminal careers and identification of classes of criminals. In 2009, Li Ding et al. [5] proposed an integrated crime detection system called PerpSearch that takes four integrated components as inputs: geographic profiling, social network analysis, crime profiling, and physical matching. Geographic profiling determines where the suspects are, while the other components determine who the suspects are.

From the literature review, it can be deduced that crime data is continuously growing to enormous quantities. Because of this, more advanced and efficient techniques and methods are required for crime analysis and accurate prediction.

III. DATASET COLLECTION

Crime data has been systematically recorded and stored by the police over the years, and in recent years there has been a surge of open crime data [6] and of applications, including web-based ones, that show crime statistics on maps or give a clear view of the crime rate. Crime data can be found on official sources such as the data.gov.in and data.police.uk websites, or on unofficial sites such as Kaggle and the UCI Machine Learning Repository, which host the same data sets as the official sources. Some sample data taken from the data.police.uk website is shown in Table 1 [7]. In this sample data set, the given fields are as follows:

Crime ID: the identifier assigned to a particular instance of crime

Month: the month and year in which the crime was committed

Falls within: the police force area within which the location of the crime falls

Longitude: the longitude of the area where the crime was committed

Latitude: the latitude of the area where the crime was committed

Location: the location of the crime committed

Crime type: the type of crime committed

Table 1: Crime Dataset Collected from data.police.uk with selected columns
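A minimal sketch of reading records in this format with pandas follows; the two rows below are illustrative values laid out in the columns described above, not real incidents:

```python
import io
import pandas as pd

# Two sample rows in the street-level layout described above; the values
# are made up for illustration only.
csv = io.StringIO(
    "Crime ID,Month,Falls within,Longitude,Latitude,Location,Crime type\n"
    "3a7c,2018-01,Metropolitan Police Service,-0.118,51.509,On or near High Street,Burglary\n"
    "9f2e,2018-01,Metropolitan Police Service,-0.120,51.503,On or near Park Road,Vehicle crime\n"
)

# Month arrives as "YYYY-MM"; parse it to a proper datetime on load.
df = pd.read_csv(csv, parse_dates=["Month"])

print(df.shape)
print(df["Crime type"].tolist())
```

For a downloaded file, the `io.StringIO` wrapper is simply replaced by the file path.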


Shivam Maurya, Shivani Mohan Agarwal, Shivani Prasad and Dr. Jayant Mishra


IV. METHODOLOGY

With the advancements in digitalization, the size of data is increasing at an exponential rate, giving birth to Big Data analytics.

The automation and digitization of FIRs and investigation reports now generate data that can be used in data analytics to predict, and hence reduce, crime rates. Today, crime agencies can foresee and prevent a large number of criminal activities. The tools trending today use algorithms that rely on the observation that crime tends to cluster in time and space; by observing recent crime data, they can predict the target variable.
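The space-time clustering observation can be illustrated with a deliberately simple grid-based hotspot count on synthetic coordinates; real predictive-policing tools use far more sophisticated models, so this is only a sketch of the idea:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic incident coordinates: a dense cluster around one block,
# plus uniform background noise across the city.
cluster = rng.normal(loc=[51.51, -0.12], scale=0.002, size=(60, 2))
noise = rng.uniform(low=[51.40, -0.25], high=[51.60, 0.05], size=(40, 2))
points = np.vstack([cluster, noise])

# Bin incidents into a coarse lat/lon grid; cells with many incidents
# are flagged as hotspots.
lat_bins = np.floor((points[:, 0] - 51.40) / 0.02).astype(int)
lon_bins = np.floor((points[:, 1] + 0.25) / 0.02).astype(int)
counts = {}
for cell in zip(lat_bins, lon_bins):
    counts[cell] = counts.get(cell, 0) + 1

hotspots = [cell for cell, n in counts.items() if n >= 20]
print(hotspots)
```

The dense synthetic cluster lands in a single grid cell, which is the only one exceeding the threshold.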

To make informed decisions by finding insights in data, such massive data must be handled intelligently. For this purpose, a brief comparative analysis of some prevalent tools is done.

In this paper, the emphasis is on predictive analysis and the methodology for it that is followed globally [8].

Objective Definition: This step includes defining the problem statement and objective clearly. This stage categorizes the goal into the type of analysis to be done, such as predictive, exploratory, descriptive or inferential analysis; the goal of this paper is predictive analysis. This step also deals with the logic to be implemented later.

Data Collection: This step determines what data are needed and where they can be found. The collected data will probably contain errors or null values. The most common problem here is finding data in the exact required format, because the requirement itself is not explicit at the initial stage.

Visualizing the Data: To understand the data better, it helps to visualize it before diving into preprocessing. Processed data is more intuitive to read, but for wrangling, an initial idea of the data's behaviour is necessary, and visualization provides it.

Preprocessing the Data for Modeling: Preparing the data takes more than half of the average project time. This stage begins with studying the data and how its attributes can be adapted for predictive analysis. Further study makes clear what format is to be followed, which data are to be corrected, and how to deal with missing values and outliers.

Selecting and Recoding the Features: The accuracy of a predictive model depends on which features are evaluated against the target variable. To increase their predictive power with respect to the target variable, attributes are recoded or transformed, computing new attributes from existing ones.
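A small pandas sketch of such recoding, assuming the Month and Crime type fields from the dataset described earlier; the derived column names are our own:

```python
import pandas as pd

df = pd.DataFrame({
    "Month": ["2018-01", "2018-06", "2018-11"],
    "Crime type": ["Burglary", "Theft", "Burglary"],
})

# Recode the raw Month string into numeric attributes a model can use.
parsed = pd.to_datetime(df["Month"], format="%Y-%m")
df["year"] = parsed.dt.year
df["month_num"] = parsed.dt.month

# Transform the categorical crime type into indicator (one-hot) columns.
df = pd.get_dummies(df, columns=["Crime type"])

print(sorted(df.columns))
```

Each new column is computed from an existing attribute, which is exactly the recoding step described above.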

Training and Validation of the Model: The ease of this stage is directly related to how well the previous stages were performed, since properly formatted data adapts readily to the model algorithm. The basic idea of this stage is to pass the data to an algorithm and evaluate it. For validation, the model should give similar accuracy when passed a hold-out sample, that is, the test data. This validates how well the model fits the data and how it will perform when predicting on data from a different time.
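As a self-contained illustration of hold-out validation, the sketch below trains a deliberately simple 1-nearest-neighbour classifier (standing in for whatever model the analyst chooses) on synthetic two-class data and scores it on the held-out 20%:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "features vs crime category" data: two well-separated groups.
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Hold-out split: 80% training data, 20% test data.
idx = rng.permutation(len(X))
train_idx, test_idx = idx[:80], idx[80:]

def predict(x, X_tr, y_tr):
    """1-nearest-neighbour prediction: label of the closest training point."""
    return y_tr[np.argmin(np.linalg.norm(X_tr - x, axis=1))]

preds = np.array([predict(x, X[train_idx], y[train_idx]) for x in X[test_idx]])
accuracy = (preds == y[test_idx]).mean()
print(accuracy)
```

Comparable accuracy on the hold-out sample is the validation signal the text describes; a large drop would indicate overfitting to the training data.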

Implementing and Maintaining the Model: The model created can be deployed in a GUI or in the application software of the city police so that they can make the best of it. Further, accuracy and insight-finding capabilities can be added to the model by creating more structured data.

Any data analytics tool can support the methodology discussed above, but every tool specializes in some category, so to fulfil this paper's objective a comparative analysis is done. Various proprietary and free data analytics tools are available, such as Orange, R Project, WEKA, Anaconda, Rapidminer, KNIME, SAP, SAS, MATLAB, and many more [9].

A tool is worth using, and considered among the best, if it provides facilities such as automatic data preparation, predictive modelling, insight analysis, ease of management, and visualization capabilities.

Orange: An open-source data analysis and visualization tool. Orange provides Python scripting and visual methods for data mining. It features bioinformatics, data mining and text mining add-ons and various machine learning algorithms along with data analytics features.

R Project: The R project, being free software, provides advanced statistical computation and graphics. R is freely available on a variety of platforms including UNIX/Linux, Windows and macOS, and is a complete suite of software features for advanced mathematical computation, data manipulation and visualization.

WEKA: Weka (Waikato Environment for Knowledge Analysis) can handle all data mining tasks efficiently, as it comprises many machine learning algorithms. Its features include machine learning algorithms, data mining capabilities, attribute selection, experiments, workflow, visualization, data preprocessing, and analytical algorithms for classification, regression, clustering and association rules.

Anaconda: Powered by Python, Anaconda provides a high-performance distribution of Python and R. It includes almost every necessary package of R, Python and Scala for statistical analysis, deep learning, machine learning, image processing, natural language processing and data mining.

Rapidminer Studio: It renders complete predictive analytics workflows along with a visual design, where workflows consist of different operators that process data in a pipeline fashion. It also features extensive libraries for machine learning algorithms, exploratory analysis, data preparation and model evaluation. Rapidminer, being open source, is best known for its unified platform, broad connectivity and visual workflow design.

KNIME: The Konstanz Information Miner is a data analytics, integration and reporting platform. It comprises various components for data science and machine learning via its modular data pipelining concept, and allows the user to perform data preprocessing, data analysis, visualization and modelling through an intuitive graphical user interface.

A brief comparison of the tools, on the basis of their category, initial release, license, implementation language and developer, is given in Table 2 [10]. The tools compared are: Anaconda, KNIME, Orange, R Project, Rapidminer and WEKA.

Table II: Various tools of data analytics and data mining

Sr. No. | Tool       | Category                                      | Initial Release | License                              | Written in             | Developed by
--------|------------|-----------------------------------------------|-----------------|--------------------------------------|------------------------|-----------------------------------------------------------
1       | Orange     | Data mining software                          | 2009            | GNU General Public License           | Python, C, C++, Cython | University of Ljubljana
2       | Anaconda   | Predictive analytics software                 | July 2012       | New BSD License                      | Python                 | Continuum Analytics
3       | Rapidminer | Predictive analytics software                 | 2006            | AGPL / Proprietary                   | Java                   | Klinkenberg, Mierswa and Fischer at University of Dortmund
4       | WEKA       | Data mining software                          | 1993            | GNU General Public License           | Java                   | University of Waikato
5       | R Project  | Predictive analytics software                 | February 2000   | GNU General Public License           | C, Fortran and R       | Ross Ihaka and Robert Gentleman, University of Auckland
6       | KNIME      | Data mining and predictive analytics software | July 2006       | Eclipse Public License / Proprietary | Java                   | University of Konstanz

V. RESULTS

In this paper, some well-known predictive analytics tools have been studied, and some of them are found to be well suited to the crime dataset discussed above. A comparative analysis of the tools is presented along with a discussion of a generalised approach to follow. They can be applied over processed or raw crime datasets to find spatial and temporal results.

VI. CONCLUSION

The research work focuses on crime analysis tools that can aid in building a model enabling law enforcement bodies to predict and find patterns in crime. In the absence of any standard method of crime analysis, the work can be further advanced with artificial neural networks, fuzzy logic or support vector machines. Data mining techniques can be used to reduce crime significantly and to make the world a better place to live in.

VII. REFERENCES

[1] Sathyadevan, M. S. Devan, G. S. Surya, “Crime analysis and prediction using data mining”, International Conference on Networks & Soft Computing, pp. 406-412, 2014.

[2] G. B. Vold, “Prediction Methods Applied to Problems of Classification Within Institutions”, Journal of Criminal Law and Criminology, vol. 26, pp. 202-209, 1951.

[3] A. Babakura, M. N. Sulaiman, M. A. Yusuf, “Improved method of classification algorithms for crime prediction”, International Symposium on Biometrics and Security Technologies (ISBAST), pp. 250-255, 2014.

[4] Jiawei Han, Micheline Kamber, Jian Pei, “Data Mining: Concepts and Techniques”, Third Edition.

[5] Li Ding et al., “PerpSearch: An integrated crime detection system”, 2009.

[6] L. Tompson, S. Johnson, M. Ashby, C. Perkins, and P. Edwards, “UK open source crime data: accuracy and possibilities for research”, Cartography and Geographic Information Science, 2015.

[7] https://data.police.uk/data/fetch/7e4d2610-6df8-47b3-9fc6-de3dbf4bb25c/

[8] William M. Bannon Jr., “The 7 Steps of Data Analysis: A Manual for Conducting a Quantitative Research Study”, First Edition, StatsWhisperer Press, New York, 2013.

[9] https://www.predictiveanalyticstoday.com/predictive-analytics-tools/

[10] Kalpana Rangra, Dr. K. L. Bansal, “Comparative Study of Data Mining Tools”, 2014.


Bhoomi Kalavadiya, Pallab Dutta, Vipin Tyagi and Sridharan Balakrishnan


Configurable User Interface: A Perspective from Interoperable Set Top Box

1Bhoomi Kalavadiya, 1Pallab Dutta, 2Vipin Tyagi and 1Sridharan Balakrishnan
1Centre for Development of Telematics, C-DOT, Bangalore, India
2Centre for Development of Telematics, C-DOT, New Delhi, India

[email protected], [email protected], [email protected] and [email protected]

Abstract—With the digitization of cable TV broadcasting, there has been tremendous growth in the number of Set Top Boxes (STBs) installed in households across the country. There are around a thousand MSOs (Multi Service Operators) of varied subscriber-base sizes in India, and there is a substantial subscriber base for DTH (Direct To Home) services as well. With crores of STBs being seeded in the country, there is a compelling need for interoperability of STBs across service providers, to reduce e-waste among other benefits. Since the STB is customer premise equipment with very frequent end-user interaction, the User Interface (UI/GUI) plays a major role in deciding the success of the product or service in terms of market acceptability and penetration. The comfort and satisfaction of the end user depend heavily on the STB UI, which has thus become a major differentiator across operators today. The success of STB interoperability also depends on how the UI adapts to different operators' requirements. In this paper, we describe the concept and approach for configurability of the UI by the operator in an interoperable STB. We analyze the various design and implementation approaches and their implications, and we also look at further enhancements as future possibilities.

Keywords— Set Top Box; User Interface; Interoperability; Over The Air.

I. INTRODUCTION

In a digital broadcast system, the receiver in the customer premises consists of an antenna (in the case of DTH), an STB and, in most cases, a detachable Smart Card. Due to various technical, implementation and market-driven reasons, the STB is presently tied to the service operator; the same STB cannot be used interchangeably across service providers in either the cable or the DTH segment. There are initiatives towards evolving technology solutions for achieving technical interoperability of STBs. The objective of STB interoperability is that a standard STB (from any manufacturer) shall be capable of receiving services from different service providers by attaching the corresponding service provider's Smart Card/CAM in the standard STB, or by any other means. That is, STBs are not to be tied to any specific service provider or conditional access system (CAS) per se. This also implies that STBs can be manufactured independent of any specific operator or CAS.

Generic modules of an STB [1], [3] are the following, as shown in Figure 1.

i. Tuner, Demodulator, Channel Decoder

ii. Demultiplexer

iii. Descrambler

iv. Audio/Video Decoders

v. Smart Card interface, HDMI/RCA interface, IR interface

vi. Host Processor and Memory Unit.

Figure 1: Generic STB Architecture

The User Interface application extracts the required information from the MPEG-2 Transport Stream and renders it to the display interfaces. The UI application runs on the host processor and makes use of the memory modules for execution.

Different aspects and challenges pertaining to interoperable STBs are discussed and analyzed in this paper, with specific focus on User Interface design. In Section II, we discuss the aspects of the interoperable STB; in Sections III and IV, we discuss the approaches to UI interoperability and analyze the design and implementation challenges. Results of the UI interoperability approaches are discussed and analyzed in Section V. In Section VI, the future trends for UI interoperability are discussed, with concluding remarks captured in Section VII of the paper.

II. ASPECTS OF INTEROPERABLE STB

The objective of the interoperable STB is that the same STB shall be able to receive and decode signals from multiple operators. To meet this broad objective, there are two main aspects that need to be analyzed [2]. These two aspects are:

i. STBs are at present tightly coupled with a specific CAS. For interoperability, the STB shall be independent of the CAS.

ii. The UIs of STBs are presently tied to the specifics of the service provider. To achieve effective interoperability, it is imperative to make the UI configurable by the service provider.

CAS is an integral part of any PayTV system. CAS essentially performs the functionality of authorizing channels for legitimate subscribers as per their subscription details. CAS has two functional counterparts: one part resides in the headend equipment, performing encryption and multiplexing, while the other resides in the STB, performing decryption and demultiplexing. CAS implementation is at present a closed ecosystem, and this gives rise to issues for achieving interoperability. It is proposed to overcome this by using smart cards and a scheme of open-standard cryptographic algorithms.

The UI of an STB is essentially an application that typically runs on top of the middleware. As traditional STBs are resource constrained, the UI is tightly coupled to the middleware implementation and in most cases implemented as part of the middleware itself. This tight coupling of the UI with the middleware, along with operator-specific variability in middleware, is a major impediment to UI configurability and in turn hinders the design and implementation of an interoperable framework. In this paper we discuss the design and implementation of a modular STB UI that can be upgraded or modified Over-The-Air (OTA) as per the operator's requirements using a novel technique; this addresses the major requirements with respect to interoperability. We also analyse the various memory footprint requirements and other important aspects pertaining to this configurable UI approach.

III. APPROACHES TO UI INTEROPERABILITY

For UI interoperability of the STB, the system architecture is divided into two parts: modular and non-modular.

• Modular

It can be modified by the operator and is reserved only for UI-related changes.

• Non-modular

It is the main section of the architecture. It contains the core system of the STB: the kernel and device drivers, middleware, the different modules that keep the STB running smoothly, the default UI, and the interface containing the functions required to build a UI, as shown in Fig. 2.

Figure 2. System architecture

The modular section of the architecture contains all the UI-related changes the operator can perform, which the non-modular section adapts to at run time.

The following section explains the steps to be followed to achieve interoperability in the UI. UI interoperability can be achieved at two levels, the basic level and the design level, as described below.

A. Basic level of interoperability

Choosing the right colour scheme, meaningful lines of text, and effective on-screen images is important in layout design. A proper combination of colours helps the user read text effortlessly, a line of text should convey its message properly, and images help the user understand better.

Using this level of interoperability, the operator can modify the basic features of the UI remotely, over the air. In this phase, basic aspects such as the colour of the different screen elements and the images and text that appear on the screen can be modified. Apart from modifying the existing elements, new ones can easily be added and attached to the modifications on the screen.

Figure 3. Colour, Text and Image modularity

In Fig. 3 above, it can easily be seen that the colours of the background as well as of the panels are different. Through this, the operator can set its own default theme for all the consumers it serves, change which image appears on the screen for any particular element, and decide with what text a specific element is shown on the screen.

B. Design level of interoperability

Objects appearing on the screen must be informative, and each must have an appropriate place as per the semantics. This level of interoperability provides options to customise the design of the UI to the operator's requirements to a larger extent. It is achieved by giving the operator control over the placement of the various elements on the screen: the operator can select the position, alignment and spacing of the elements and rearrange them in a unique way. This helps the operator serve the customer better.
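A toy sketch of how such operator configuration might be applied, covering both basic-level keys (colour, text) and a design-level key (panel placement); the key names, payload format and merge policy here are our own assumptions for illustration, not the implemented STB format:

```python
import json

# Built-in defaults shipped in the non-modular section of the STB image.
DEFAULT_UI = {
    "background_colour": "#1a1a2e",
    "menu_text": "Guide",
    "menu_panel": {"x": 0, "y": 540, "width": 1920, "height": 180},
}

# Hypothetical operator payload delivered over the air: a basic-level
# change (background colour) and a design-level change (panel placement).
ota_payload = json.dumps({
    "background_colour": "#003366",
    "menu_panel": {"x": 0, "y": 0, "width": 480, "height": 1080},
})

def apply_operator_config(defaults, payload):
    """Overlay operator-supplied keys on the default UI, keeping the rest."""
    ui = dict(defaults)
    for key, value in json.loads(payload).items():
        if key in ui:  # unknown keys are ignored, never interpreted as code
            ui[key] = value
    return ui

ui = apply_operator_config(DEFAULT_UI, ota_payload)
print(ui["background_colour"], ui["menu_text"], ui["menu_panel"]["y"])
```

Keys the operator does not send (here, the menu text) fall back to the defaults, so a partial payload is always safe to apply.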

Figure 4. Design modularity

In Fig. 4 above, the design level of interoperability is depicted together with the basic level. Compared with the first screen, the position, width and height of the panels in the second screen are different, and a few more panels have been inserted. The UI plays a big role in an embedded system [4], and designing screens so that the user finds them easy to use and understand is even more challenging. We discuss below the various challenges faced in achieving this interoperability in the UI.

IV. DESIGN AND IMPLEMENTATION CHALLENGES

Achieving UI interoperability on an embedded system such as a set top box is quite a new approach, and it poses many challenges. The main challenge lies in implementing this modularity at every level of UI granularity. To achieve it, the required data has to be provided by the other modules of the system (middleware etc.) through a standard interface. This data includes information related to the display environment, so that the user screen can auto-adjust to the customised configuration; system status, to control the different processes running together; and the other data details to be displayed on the screen.

The second, very crucial challenge is creating a thin layer between the modular UI and the non-modular system. The non-modular system has to communicate frequently with the modular UI to exchange data, but this communication must not expose any internal information about the controls and states of the machine. The operation of the state machine is highly confidential to any entity other than the STB manufacturer, for various reasons including secure module implementations, so data must be passed without leaking information or giving up control over the machine's internal states.
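The thin layer can be pictured as a facade that forwards only copies of display data and nothing about internal state; the class and method names below are illustrative, not the actual C-DOT implementation:

```python
class StbCore:
    """Non-modular core: the internal state machine stays vendor-private."""
    def __init__(self):
        self._state = "DESCRAMBLING"  # confidential internals, never exposed
        self._channel = {"name": "News 24", "number": 101}

    def channel_info(self):
        # Hand out a copy, so callers cannot mutate core data.
        return dict(self._channel)


class UiBridge:
    """Thin layer: exposes only the display data the modular UI needs."""
    def __init__(self, core):
        self._core = core

    def current_channel(self):
        return self._core.channel_info()


bridge = UiBridge(StbCore())
info = bridge.current_channel()
info["number"] = 999                       # a UI-side edit...
print(bridge.current_channel()["number"])  # ...does not leak back to the core
```

The bridge offers no method that reads or writes the state machine, so the modular UI can render data without ever touching the confidential internals.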

V. RESULTS AND ANALYSIS

The implementation of UI interoperability was carried out on an STB, and the results obtained are depicted in Fig. 5(I) and Fig. 5(II).

Figure 5(I). Basic and design level of interoperability

In these figures, both levels of interoperability are shown. The first screen shows the design of the programme guide screen, in which the different panels (channel preview, channel categories, channel list, day-wise channel guide) are designed with specific positions and alignment. In the second screen the same components are present, but with different positions and alignment.

Figure 5(II). Basic and design level of interoperability


Also, the theme, text and images chosen to present the various elements on screen are not the same. In this way, the STB system can easily adapt to any UI changes designed by the operator; without any internal configuration change, the operator has the flexibility to modify the appearance of the screen.

The memory required for the changes, excluding images, is a few KB. The memory required for adding a new image to a screen depends on the image size, which will be a few MB.

VI. FUTURE SCOPE

So far we have achieved the basic and design levels of UI interoperability, but the operator should have the flexibility of complete UI interoperability, in which new screens can be designed while the user still navigates the adapted UI smoothly. To obtain complete interoperability of the UI, the operator can be given control to modify the screen sequence as well as the facility to add new screens, so that navigation between the screens can be handled by the operator according to their own design.

VII. CONCLUSION

There is a huge requirement for STBs in the country, and with the forward march of technology the Set Top Box is moving towards a true smart box providing ample configurability, rich features, and so on. Given this huge potential, it is the need of the hour to solve some of the technology issues pertaining to interoperability and move towards a totally interoperable framework for the STB. Modular design of the UI is an important aspect of achieving STB interoperability. In this paper, we have discussed various approaches to achieving a modular, configurable UI, and we have found it very much feasible to implement a modular UI on an STB. In future, we plan to implement a configurable UI that provides more flexibility and is further delinked from the default UI. The results obtained so far are indeed very positive and form a solid foundation towards the realization of a practical, network-adaptable interoperable STB.

REFERENCES

[1] Sabina Shamim, “The state of Set Top Box industry in India: Issues and road ahead. A tech-business ecosystem perspective”, IOSR Journal of Humanities and Social Science (IOSR-JHSS), vol. 21, issue 1, ver. I, pp. 07-17, Jan. 2016.

[2] TRAI Consultation Paper on solution architecture for Technical Interoperable Set Top Box [Online]. Available: https://www.trai.gov.in/sites/default/files/Consultation_note_on_STB_interoperability_110817.pdf Accessed: Aug 31, 2018.

[3] Laisa C. P. Costa et al., “Universal Set-top Box: A Simple Design to Provide Accessible Services”, IEEE International Conference on Consumer Electronics (ICCE), 2011.

[4] Dutta, Pallab, G. Upendra, E. Giribabu, B. Sridharan, and Vipin Tyagi, “A Comprehensive Review of Embedded System Design Aspects for Rural Application Platform”, International Journal of Computer Applications, vol. 106, no. 11, Jan. 2014.


Anuj Singh Bhadauria, Mansi Nigam, Anu Arya and Vikrant Bhateja


Morphological Filtering based Enhancement of MRI

Anuj Singh Bhadauria, Mansi Nigam, Anu Arya and Vikrant Bhateja
Department of Electronics and Communication Engineering,
Shri Ramswaroop Memorial Group of Professional Colleges (SRMGPC), Lucknow
[email protected], [email protected], [email protected] and [email protected]

Abstract—Brain tumor is a life-threatening disease consisting of an abnormal mass of tissue with unrestricted growth inside the skull; it may be cancerous or non-cancerous. Inaccurate detection of brain tumors occurs due to the non-homogeneous nature of brain tissue. Therefore, an enhancement algorithm is used to improve the visual quality of the image without intensifying noise. This paper presents an improved enhancement technique for brain MR images employing morphological operations. The simulated results portray improved contrast and enhancement of brain MR images.

Keywords—Bottom-Hat Transform, CII, Enhancement, Morphological Filtering.

I. INTRODUCTION

Brain is the most complicated, yet very important part of the human body. It suffers from a number of abnormalities, amongst which brain tumor has been the most ominous and intractable disease, one that can even cause death. It is the growth of aberrant cells in the brain and can be classified into Benign (Grade I and Grade II) or Malignant (Grade III and Grade IV) tumors [1, 2, 3]. Due to the complex brain structure and varied impact on different individuals, screening of tumors becomes a challenging task with simple imaging techniques. Hence, Magnetic Resonance Imaging (MRI), currently the most versatile and widely preferred technique, is used; it offers many imaging modalities, each with its own contrast characteristics [4, 5]. The complete process of identification and detection of a tumor is a time-consuming and laborious task [6]. Various challenges such as low intensity contrast, unclear boundaries and background noise are always associated with MR images and might significantly affect the detection process. Therefore, it is very important to enhance the images before they are processed further. A number of enhancement techniques are used, amongst which Adaptive Histogram Equalization (AHE) is very common [7]; however, it has the limitation of over-amplifying noise. So, a combination of Contrast Limited Adaptive Histogram Equalization (CLAHE) and Discrete Wavelet Transform (DWT) was deployed by H. Lidong et al. [8]. In their method, only low-frequency components were enhanced to avoid noise intensification. The method preserved image details with noise suppression but did not provide a solution for the alteration of image brightness. Histogram techniques tend to change the brightness of an image and saturate it. To overcome this problem, a Fuzzy Adaptive Histogram Equalization technique was proposed by V. Magudeeswaran et al. [9]. The method was performed in three stages, resulting in an enhanced image that retains the original brightness values to the maximum extent. A. Kharrat et al. [10] utilized mathematical morphology to increase MR image contrast with a disk-shaped structuring element. Benson et al. [11] proposed a method for MR image contrast enhancement and skull stripping using morphological filters; the algorithm was simple and, as reported, could be applied in various applications such as tumor detection, volume analysis and classification. H. Hassanpour et al. [12] used morphological operations to devise a filter using the Top-Hat transform with a disk structuring element; the method had high potential to enhance poor-quality MR images and was simple, with low processing time. To combat the challenges observed for the Histogram and Morphological filters towards contrast enhancement, an improved method based on a combination of Top-Hat and Bottom-Hat operations using a disk structuring element is presented in this paper. A disk-shaped structuring element is used for MR images because it is independent of rotational changes [12]. The rest of the paper is organized as follows: Section II gives a general overview of morphological filtering along with the proposed methodology; Section III discusses Image Quality Assessment, summarizes the results and performance metrics, and discusses the outcomes of this work; Section IV presents the conclusion of the work.

II. PROPOSED ENHANCEMENT METHODOLOGY

A. Overview of Morphological Filtering

Morphological filtering [13, 14] is a set of nonlinear filtering techniques based on the structural properties of objects. They are implemented through a combination of morphological operators related to the shape of entities in the image. These operators are applied on two inputs: the input image and the structuring element. The structuring element is a template that designates the pixel neighborhood, and the choice of its shape and size depends upon the application and the information being fetched [15]. The common morphological operators are Erosion, Dilation, Opening and Closing [13, 16]. Erosion shrinks the image, scaling down its area, whereas Dilation expands the image, leaving it with an increased area [17]. Opening is Erosion followed by Dilation, and Closing is Dilation followed by Erosion. In the same way, these operators can be used in a number of combinations for a variety of filtering operations. Morphological filtering finds a wide range of applications in medical imaging, including thermographic, magnetic resonance and ultrasound images. In MRI, noise reduction and contrast enhancement are the primary requisites in any application for medical diagnosis. Hence, morphological filtering techniques, which are simple to implement, are very useful for MR images. However, morphological filtering poses certain limitations which make its usage a challenging task. Firstly, the results of filtering may not be very good for highly intricate images; secondly, the type and size of the structuring element [18] need to be chosen with utmost care, lack of which may lead to distortion of image features.

B. Proposed Morphological Filter for Enhancement

This paper proposes an improved and computationally simple enhancement technique for brain MRI using morphological filtering. The implemented filter is a mathematical combination of the Top-Hat and Bottom-Hat operators [12], which in turn are obtained by combining image subtraction with Opening and Closing [19, 20]. In the first stage, the input image (f) is taken in grayscale format and the structuring element (x) is defined. The element must be large enough to trap the tumor region without blurring the image; a disk-shaped flat structuring element [21] of size 35 is suitable for this filter, as smaller sizes trapped other small, irrelevant features. The image is then subjected to the Top-Hat (That(f)) and Bottom-Hat (Bhat(f)) transformations defined by Eq. (1) and (2) respectively.

That(f) = f − (f ∘ x)    (1)

Bhat(f) = (f • x) − f    (2)

where ∘ and • denote Opening and Closing by the structuring element x. The Top-Hat transform acts as a high-pass filter and highlights bright objects on a dark background. This transform leads to distinct visibility of the tumor region, which originally was merged with the background. The most important utility of Top-Hat is to correct the non-uniformity in intensity, which is a pervasive problem in MR images. The Bottom-Hat transform is just the converse of the Top-Hat transform and is used to highlight the darker regions of the image. Since it is desirable to minimize the background features that are not relevant for diagnostic purposes, the Bottom-Hat result is subtracted from the Top-Hat result in the final step, as denoted by Eq. (3).

I(f) = That(f) − Bhat(f)    (3)

where: I(f) is the output image.
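The pipeline of Eqs. (1)-(3) can be sketched directly from the definitions of Opening and Closing. The following is a minimal NumPy sketch, not the authors' implementation; the paper uses a disk of size 35 for brain MR images, which is the default below.

```python
# Minimal NumPy sketch of the proposed filter (Eqs. 1-3): top-hat minus
# bottom-hat with a flat, disk-shaped structuring element.
import numpy as np

def disk(radius):
    """Boolean disk-shaped flat structuring element."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def _filter(img, fp, reduce_fn, pad_value):
    """Flat grayscale erosion (min) or dilation (max) over footprint fp."""
    r = fp.shape[0] // 2
    p = np.pad(img.astype(float), r, mode='constant', constant_values=pad_value)
    shifted = [p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(fp.shape[0])
               for j in range(fp.shape[1]) if fp[i, j]]
    return reduce_fn(np.stack(shifted), axis=0)

def enhance(f, radius=35):
    """I(f) = That(f) - Bhat(f), per Eqs. (1)-(3)."""
    x = disk(radius)
    erode = lambda g: _filter(g, x, np.min, np.inf)
    dilate = lambda g: _filter(g, x, np.max, -np.inf)
    opening = dilate(erode(f))   # f opened by x
    closing = erode(dilate(f))   # f closed by x
    t_hat = f - opening          # Eq. (1): bright details
    b_hat = closing - f          # Eq. (2): dark details
    return t_hat - b_hat         # Eq. (3)
```

Applied to a grayscale MR slice, `enhance(img)` reproduces the described filter; the output may contain negative values and is typically rescaled for display.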

III. SIMULATED RESULTS AND DISCUSSIONS

Enhancement refers to the adjustment of an image by modifying the values of its pixels. However, it may also result in amplification of the noise level. Therefore, contrast improvement alone cannot be considered for quality assessment of the enhanced image. Thus, the evaluation is done on the basis of two parameters: Contrast Improvement Index (CII) and Peak Signal to Noise Ratio (PSNR) [22, 23]. CII is used for calculating the contrast improvement, whereas the noise level can be measured using PSNR. CII is the ratio of the contrast of the enhanced image to the contrast of the original image; the higher the value of CII, the greater the contrast improvement of the enhanced image. PSNR is an absolute quantity used for measuring the noise level in an image; a higher value of PSNR denotes better noise filtering in the enhanced image. The quality assessment of images has been conducted using CII and PSNR, and their values are tabulated in Table I.

Table I. Values of CII and PSNR for Proposed Enhancement Method

Images          CII      PSNR (Enhanced)
Test_Image#A    1.4969   5.7589
Test_Image#B    1.2164   5.6138
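Both metrics can be computed directly from an original/enhanced image pair. A sketch follows; PSNR uses the standard definition, while the exact contrast measure behind CII is not spelled out in the text, so the standard deviation of pixel intensities is used here as an assumed stand-in.

```python
# Sketch of the two assessment metrics used in Table I.
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak * peak / mse)

def cii(original, enhanced):
    """Contrast Improvement Index: contrast(enhanced) / contrast(original).
    Standard deviation is used as the contrast measure (an assumption)."""
    return float(np.std(enhanced)) / float(np.std(original))
```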

Fig. 1 shows two MR images, Test_Image#A and Test_Image#B, with their respective enhanced output images. The original Test_Image#A consists of two main structures: the central structure and the lower oval-shaped structure on the left half of the image. These structures are very distinctly visible in the enhanced image; their background is largely suppressed, and the large patch on the right-hand side, which is not very significant, is omitted to some extent. In Test_Image#B, the original image has a high-intensity region on the central left and a medium-intensity region on the right, both merged with the moderate-intensity background. After enhancement, these regions are accentuated by curtailing the background features. Table I shows good performance metrics with improved values of CII and PSNR.

Fig. 1. (a) Original MRI image Test_Image#A (b) Enhanced Test_Image#A (c) Original Test_Image#B (d) Enhanced Test_Image#B.

IV. CONCLUSION

A precise study of brain MR images is essential for the early detection of brain tumors. The proposed enhancement algorithm is based on morphological filtering that uses a disk-shaped structuring element along with the top-hat and bottom-hat operators. The values of the quality assessment parameters (CII and PSNR) indicate improved image contrast and adequate noise filtering. The proposed algorithm simplifies the later stages of tumor analysis by retaining only the required features.

REFERENCES

[1] Types of Brain Tumors, https://www.abta.org

[2] A. R. Alankrita, A. Shrivastava, and V. Bhateja, “Contrast improvement of cerebral mri features using combination of non-linear enhancement operator and morphological filter,” Proc. of IEEE International Conference on Network and Computational Intelligence (ICNCI), vol. 4, pp. 182-187, 2011

[3] N. B. Bahadure, A. K. Ray and H. P. Thethi, “Comparative Approach of MRI-Based Brain Tumor Segmentation and Classification Using Genetic Algorithm,” International Journal of Digital Imaging, pp. 1-13, 2018

[4] K. Somasundaram, and T. Kalaiselvi, “Automatic Brain Extraction Methods for T1 Magnetic Resonance Images using Region Labeling and Morphological Operations,” Computers in Biology and Medicine, vol. 41, no. 8, pp. 716-725, 2011

[5] MRI sequences (overview), https://radiopaedia.org/articles/mri-sequences-overview

[6] M. U. Akram, and A. Usman, “Computer aided system for brain tumor detection and segmentation,” IEEE International Conference on Computer Networks and Information Technology (ICCNIT), pp. 299-302, 2011

[7] J. A. Stark, “Adaptive image contrast enhancement using generalizations of histogram equalization,” IEEE Transactions on image processing, vol. 9, no. 5, pp. 889-896, 2000

[8] H. Lidong, Z. Wei, W. Jun, and S. Zebin, “Combination of contrast limited adaptive histogram equalisation and discrete wavelet transform for image enhancement,” IET Image Processing, vol. 9, no.10, pp. 908-915, 2015

[9] V. Magudeeswaran and J. F. Singh, “Contrast limited fuzzy adaptive histogram equalization for enhancement of brain images,” International Journal of Imaging Systems and Technology, vol. 27, no. 1, pp. 98-103, 2017

[10] A. Kharrat, N. Benamrane, M. B. Messaoud and M. Abid, “Detection of brain tumor in medical images,” 3rd IEEE International Conference on Signals, Circuits and Systems (SCS), pp. 1-6, 2009

[11] C. C. Benson and V. L. Lajish, “Morphology based enhancement and skull stripping of MRI brain images,” IEEE International Conference on Intelligent Computing Applications (ICICA), pp. 254-257, 2014

[12] H. Hassanpour, N. Samadiani and S. M. Salehi, “Using morphological transforms to enhance the contrast of medical images,” The Egyptian Journal of Radiology and Nuclear Medicine, vol. 46, no. 2, pp. 481-489, 2015

[13] R. Verma, R. Mehrotra and V. Bhateja, “A new morphological filtering algorithm for pre-processing of electrocardiographic signals,” Proc. of the Fourth International Conference on Signal and Image Processing (ICSIP), pp. 193-201, 2013

[14] D. K. Tiwari, V. Bhateja, D. Anand, A. Srivastava and Z. Omar, “Combination of EEMD and morphological filtering for baseline wander correction in EMG signals,” Proc. of 2nd International Conference on Micro-Electronics, Electromagnetics and Telecommunications, pp. 365-373, 2018

[15] A. Raj, A. Srivastava and V. Bhateja, “Computer aided detection of brain tumor in magnetic resonance images,” International Journal of Engineering and Technology, vol. 3, no.5, pp. 523-532, 2011

[16] V. Bhateja, S. Urooj, R. Mehrotra, R. Verma, A. Lay-Ekuakille and V. D. Verma, “A composite wavelets and morphology approach for ECG noise filtering,” International Conference on Pattern Recognition and Machine Intelligence, pp. 361-366, 2013

[17] V. Bhateja and S. Devi, “A novel framework for edge detection of microcalcifications using a non-linear enhancement operator and morphological filter,” IEEE 3rd International Conference on Electronics Computer Technology (ICECT), vol. 5, pp. 419-424, 2011

[18] V. Bhateja, S. Urooj, R. Verma and R. Mehrotra, “A novel approach for suppression of powerline interference and impulse noise in ECG signals,” IEEE International Conference on Multimedia, Signal Processing and Communication Technologies, pp. 103-107, 2013

[19] A. Chaddad and C. Tanougast, “Quantitative Evaluation of Robust Skull Stripping and Tumor Detection applied to Axial MR Images,” Brain Informatics, vol. 3, no. 1, pp. 53-61, 2016

[20] R. Verma, R. Mehrotra and V. Bhateja, “An integration of improved median and morphological filtering techniques for electrocardiogram signal processing,” IEEE 3rd International Conference Advance Computing, pp. 1223-1228, 2013

[21] R. C. Gonzalez and R. E. Woods, “Digital Image Processing,” Pearson Education, ch. 10, pp. 689-794, 2009

[22] A. Srivastava, A. Raj and V. Bhateja, “Combination of wavelet transform and morphological filtering for enhancement of magnetic resonance images,” Digital Information Processing and Communications, pp. 460-474, 2011

[23] A. Jain, S. Singh and V. Bhateja, “A robust approach for denoising and enhancement of mammographic images contaminated with high density impulse noise,” International Journal of Convergence Computing, vol. 1, no.1, pp. 38-49, 2013


Car Health Monitoring Device Using Raspberry Pi
Suyash Gautam and Swarnima Shukla

Department of Computer Science and Engineering, Shri Ramswaroop Memorial Group of Professional Colleges, Lucknow, India

[email protected] and [email protected]

Abstract - The car health monitoring device uses the OBD port of a car to monitor the health of the car, such as mileage, fuel and oils. The idea behind this device is that a typical user pays very little attention to the well-being of the car; through this device, the user can easily learn of problems with the car in time. All of this data will be available to the user through an application on both Android and iOS. Some new functionalities such as SOS alerts, towing and break-in attempt alerts, accident alerts and GPS tracking are implemented. A SIM will be used to provide constant internet connectivity so that data can be sent to the server, where it will be parsed and cleaned and then sent to the user's app. The device will use this data for analysis and predict when the car will next require servicing, so that the user can schedule car servicing. Since the data used for analysis is real-time data, the accuracy of the device will be high. The device is charged through the OBD port or the USB port of the car. The devices used are an accelerometer, GPS, SIM and Bluetooth, along with a Raspberry Pi.

Keywords - Raspberry Pi, GPS, Accelerometer, OBD, Data analysis.

I. INTRODUCTION

The Internet of Things (IoT) is the network of physical objects, devices, vehicles, structures and other items embedded with electronics, software, sensors and network connectivity that enable these objects to collect and exchange data. When the IoT is augmented with sensors and actuators, the technology becomes an instance of the more general class of cyber-physical systems, which also encompasses technologies such as smart grids, smart homes, intelligent transportation and smart cities. Each thing is uniquely identifiable through its embedded computing system and can interoperate within the existing Internet infrastructure. Experts estimate that the IoT will consist of nearly 50 billion objects by 2020.

British entrepreneur Kevin Ashton first coined the term in 1999 while working at Auto-ID Labs (originally called Auto-ID Centres, referring to a global network of objects connected through radio-frequency identification, or RFID). The interconnection of these embedded devices (including smart objects) is expected to bring automation to almost all fields, while also enabling advanced applications such as the smart grid and extending to areas such as smart cities. "Things," in the IoT sense, can refer to a wide variety of devices, such as health-monitoring implants, biochip transponders on farm animals, electric clams in coastal waters, cars with built-in sensors, DNA-analysis devices for environmental/food/pathogen monitoring, or field-operation devices that assist firefighters in search-and-rescue operations. Legal scholars suggest looking at "Things" as an "inseparable mixture of hardware, software, data and service". Current market examples include smart thermostat systems and washer/dryers that use Wi-Fi for remote monitoring. With the expansion of Internet-connected automation into a plethora of new application areas, the IoT is also expected to generate large amounts of data from diverse locations, with the consequent need for quick aggregation of the data and an increased need to record, store and process such data more effectively. IoT is one of the platforms of today's Smart City and Smart Energy Management Systems.

IOT in Automobile Industry

The innovations in the automotive industry are endless. Modern cars are slowly moving towards the integration of microprocessors within the vehicular system, which makes them not only smarter but also more fuel efficient. The microprocessors in the vehicular system are used to take complex and critical decisions to secure and improve overall stability. These microprocessors are called Engine Control Units (ECUs). The ECU gets data feeds from sensors to take vital decisions. In cars, the ECU processes large amounts of data that are useful for advancements in vehicle technology. The data consists not only of sensor feed information but also of a significant portion of diagnostic information. The presence of the diagnostic system is necessary for the health of the vehicle and the safety of drivers and passengers. The diagnostic subsystem is responsible for monitoring the status of the vehicle's numerous systems and providing the information needed to deal with these complex systems [2]. Diagnostic systems are a major component in the performance of any vehicle; since it often takes more time to diagnose an issue than to fix it, these systems help save a lot of time. The components diagnosed and observed by OBDII are used by the vehicle's ECU in controlling the engine's major operations, and the fusion of such systems ensures efficient vehicle operation and vehicle safety. On-board diagnostics (OBD), for a non-technical individual, is a system on the vehicle which monitors and ensures the health of the vehicle. This system used to be opaque to the end user, with its functionality limited to the manufacturer's end. Embedding of on-board diagnostic systems in vehicles started back in the 1980s, when the regulatory body, the California Air Resources Board (CARB), started making regulations requiring vehicles sold after 1988 to carry an on-board diagnostic system for detecting emissions failures. These systems focused on simply monitoring certain essential modules such as engine control, exhaust gas recirculation (EGR), fuel delivery and the oxygen sensing systems. But this effort by CARB failed to bring about standardization between different manufacturers or makes, or even between different models of a given make. Another limitation of this OBD-I era was the inability to detect certain faults in a vehicle, such as a missing or failed catalytic converter, misfiring of the ignition module, or evaporative emission faults. With further advancement, CARB developed the new-generation standard OBD-II in 1996; to support it, a 16-pin connector port was made compulsory in vehicles sold after that period in the USA [1].

For a long period of time, on-board diagnostics has been ensuring the health of passenger cars and light trucks and has become an essential part of every vehicle. But making the most of this advanced system can be achieved by bringing transparency between the OBD system and the end user. Inputs from end users and their communication with the OBD can be exploited to attain greater accuracy and throughput. OBDII is an enhanced diagnostic monitor built right into the vehicle's PCM. It is designed to alert the driver when emission levels are greater than 1.5 times the levels for which the car was originally certified by the EPA. The name "on-board diagnostics" is actually a very accurate description of the system; "II" denotes the second generation of the system, the successor to OBD-I. OBDII is designed to detect electrical, chemical and mechanical failures in the vehicle's emission control systems that might affect the vehicle's emission levels [2].

An OBD port is provided in all vehicles at present. The OBD port provides data from the on-board system to the user on being polled; this data can be diagnosed using the OBD port and scanning tools. Scanners vary greatly in their complexity. A scanner's ability to pull codes includes both pulling DTCs (Diagnostic Trouble Codes) from the ECU's memory and causing the computer to perform a self-test of the system and display the resulting DTCs. A system connecting to a laptop or desktop computer provides a graphical user interface and memory; such systems are used by service stations and most repair shops. Because of their investment in the required equipment, most repair shops charge a fee that may be inflated, so there is a chance of the customer being cheated with fake issues or excess pricing. There might also be no service station available nearby, and this overall process of diagnosis is long and time-consuming. Fig. 1 shows the conventional diagnosis device, called a scan tool. A scan tool or a PC with an OBDII interface is the major component. The interface not only lets you retrieve and erase codes, it stores vital information that can help diagnose faults and verify repairs [2].

METHODS IN DIAGNOSIS [1]:

Fig.1 Conventional diagnosis system using scan tool [2]

Code Pulling:

A scanner's ability to pull codes includes both pulling DTCs (Diagnostic Trouble Codes) from the ECU's memory and causing the computer to perform a self-test of the system and display the resulting DTCs.

Functional Tests: These help the technician quickly narrow down a problem to the input or output side of the computer. The best-known functional test is the active command mode, which the vehicle manufacturer usually programs into the on-board computer.

II. APPROACH OF THE DIAGNOSIS SYSTEM

You're speeding along and enjoying your drive, when all of a sudden that most mysterious of indicators turns on: 'Check Engine'. What does it mean? The engine is a pretty vast and intricate machine, so 'checking the engine' isn't going to produce many answers. That's where the OBDII reader comes in handy: this little device allows us to pinpoint where the faults are occurring [8]. A tool is thus required to access OBD-II data and present it through a user interface, so that one can easily access the diagnostic information of the car. The main aim of this project is therefore to develop a user-friendly Android app which diagnoses the data extracted from the OBD port via Bluetooth, provides the diagnosed information and any faults to the user, and advises on critical repairs or scheduled servicing of the vehicle by locating the nearest service station using the GPS interface. An Android app is the interface targeted for this purpose, achieving better diagnostics and prevention of hazards to the health of the vehicle and the safety of the user, with reduced emissions and maximized fuel efficiency [8][9].

Fig.2 Functional block diagram of vehicle health monitoring system [8]


The system shown in Fig. 2 gives an overview of the various components used to identify faults and perform the diagnosis. The OBDII system, as discussed earlier, collects the feeds from the engine and various other sensors and stores them in the ECU, which in turn is accessed through the OBDII port. The data from the OBDII port can be accessed over a wired or wireless link; to give car owners more portability, the wireless Bluetooth protocol was implemented. For ease of accessibility, an Android phone was chosen, running the latest API (Application Program Interface) on the latest operating system. The processing of the diagnosis algorithm utilizes the powerful hardware of the Android phone. To establish the communication link between the OBDII port and the Android application, the ELM327 driver IC was used, which is a major component in scan-tool devices. The vision of the project is to bring transparency to car owners about what is happening in the car and its health aspects. Since Google's Android Auto is becoming popular across major car OEMs, there is major potential to use the Android Auto platform for developers to improve the in-vehicle entertainment system. This project will help provide car owners with portable diagnosis solutions, with Android integration, for any OEM-specific car.

III. DIAGNOSTIC TROUBLE CODE ANALYSIS

The OBDII port is the only port through which the vehicular system is programmed and diagnosed. The OBDII standard provides an extensible list of DTCs; in a vehicle, hundreds of DTCs are available.

Diagnostic trouble codes, or fault codes, are stored by the on-board diagnostic system. Codes are stored in response to a fault occurring in the vehicular system, i.e. when a sensor feed in the car reports a reading outside the normal range. A DTC identifies the faulty area and provides the technician with information on where a fault has occurred within the car [3][6]. There are 4 types of fault codes for diagnosis, called OBD-II DTCs, shown in Fig. 3.

Fig.3 Types of diagnostic trouble code [6]

All these 4 types are categorized into two main code groups. Generic or Global Codes (e.g. normally P0xxx) are defined in the OBDII standard and are the same for all OEMs; a "0" as the second digit indicates codes common to all vehicle manufacturers, required for basic emissions fault diagnosis. Manufacturer-Specific or Enhanced Codes (e.g. normally P1xxx) are defined by the manufacturer: if OEMs feel that certain codes are necessary to define certain faults, they have the flexibility to form their own codes beyond the generic code list, covering more than emission-specific faults. These carry a "1" as their second digit to indicate that they are unique to a vehicle model. DTCs are easy to read once we understand how they are set up; Fig. 4 explains DTCs and how to read a DTC code [6]. For example, if the Check Engine light is on, the vehicle has one or more OBDII DTCs, indicating a fault affecting engine performance.
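As an illustration of this layout, the two raw bytes stored for each code can be decoded into the familiar five-character DTC string. The sketch below follows the standard SAE J2012 bit layout; the function name is ours.

```python
# Decode a raw 2-byte OBD-II trouble code into its 5-character DTC string.
# Bit layout (SAE J2012): the top two bits of the first byte select the
# system letter, the next two bits give the second character, and the
# remaining three nibbles give the last three hex digits.
def decode_dtc(byte_a: int, byte_b: int) -> str:
    system = "PCBU"[(byte_a >> 6) & 0x03]   # P=powertrain, C=chassis, B=body, U=network
    d1 = (byte_a >> 4) & 0x03               # '0' generic, '1' manufacturer-specific
    d2 = byte_a & 0x0F
    d3 = (byte_b >> 4) & 0x0F
    d4 = byte_b & 0x0F
    return f"{system}{d1}{d2:X}{d3:X}{d4:X}"

# e.g. decode_dtc(0x01, 0x71) -> "P0171", the lean-mixture code of Table II
```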

The OBDII diagnostic system keeps track of emission-related information, which includes the fuel, ignition and catalytic converter systems and engine misfires, among others, during driving. OBDII also performs frequent self-checks, for example for fuel vapor leaks. If a self-check fails beyond a certain limit, or abnormal operation is detected, a fault code is generated and indicated on the Check Engine light. The Check Engine light signals that an emissions-related fault has occurred; it does not signal serious engine issues such as overheating, which are displayed using other warning lights [4]. Table I shows the basic functions of the Check Engine light, and Table II gives basic emission-related codes.

Fig.4 Diagnostic trouble code in detail [6]

Table I. Check engine light functions

Check Engine Light behaviour      Fault or functionality
Remains ON forever                Codes not cleared or issue not fixed
Flashes ON and OFF                Engine misfire
OFF after a period of time        Fault is no longer present
ON, with trouble codes in PC      Vehicle will not pass OBDII emission test


Table II. Emission based codes

Codes                                     Faults
P0130-P0140, P0142-P0147, P0150-P0167     Vacuum leaks, O2 sensor information
P0171 or P0174                            Engine is running lean; air-fuel ratio is improper
P0172 & P0175                             Poor fuel economy; engine may idle rough or surge
P0105, P0106, P0107, P0108 & P0109        Changes in intake vacuum, which determines engine load

There are various such critical DTCs, along with non-critical codes, used in this project to diagnose issues [7].

IV. HARDWARE INTERFACE

A. OBDII Port

OBD-II signals are most often sought in response to the 'Check Engine Light / Malfunction Indicator Lamp (MIL)' appearing on the dashboard, or to drivability problems experienced with the vehicle. The data provided by OBDII can often pinpoint the specific component that has malfunctioned, saving substantial time and cost compared to guess-and-replace repairs. Scanning OBD-II signals can provide valuable information on the condition of the car. OBD-II is basically a 16-pin port; not all 16 pins are used in every communication. The OBD-II connector is also called the SAE J1962 Diagnostic Connector, and is also known as the DLC, OBD-II Connector or OBD-II Port. In India, nearly all passenger cars and light trucks manufactured after 2009 are equipped with OBD-II, as it has been made standard. Fig. 5 shows the pins used in the OBD-II port [7][10].

Fig.5 Description of OBD II port [10]

B. ELM327 Device

As mentioned earlier, a driver IC was used to establish a link between the OBDII port and the Android application. The most popular among them is the ELM series of integrated circuits; another popular and advanced competitor to the ELM327 interpreter is the STN1100 interpreter. To extract the codes from the OBD-II port, we used an ELM327 interpreter that can be plugged into the 16-pin OBD port to communicate with the vehicle ECUs and check for faults in any of the vehicle components, transmitting data to the mobile app via Bluetooth.

Fig.6 IC description of the ELM327 [11]

The ELM327 interpreter is based on a PIC18 device. This integrated circuit provides a link between the user and the on-board diagnostic systems of the vehicle by acting as a bridge between the OBD port and a standard RS232 interface. It can automatically interpret nine different vehicle protocols and report on the health of the car through its defined AT commands. In addition, it provides support for high-speed communication, a low-power sleep mode, and the J1939 truck and bus standard [11]. Fig. 6 shows the pin details of the IC.
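Once a Mode 01 request is answered, the ELM327 returns an ASCII hex string that the app must decode. The helper below is an illustrative sketch (the function name is ours); the decoding formulas for PID 0x0C (engine RPM) and PID 0x05 (coolant temperature) are the standard OBD-II ones.

```python
def parse_pid_response(resp: str):
    """Decode an ASCII ELM327 Mode 01 reply such as '41 0C 1A F8'.

    Illustrative sketch; real replies may also carry prompts or headers
    that would need stripping first.
    """
    parts = resp.strip().split()
    assert parts[0] == "41", "expected a Mode 01 response"
    pid = int(parts[1], 16)
    data = [int(b, 16) for b in parts[2:]]
    if pid == 0x0C:                  # engine RPM = (256*A + B) / 4
        return ("rpm", (256 * data[0] + data[1]) / 4.0)
    if pid == 0x05:                  # coolant temperature = A - 40 (deg C)
        return ("coolant_c", data[0] - 40)
    return ("raw", data)
```

For example, the reply `41 0C 1A F8` decodes to 1726 RPM, and `41 05 7B` decodes to 83 °C.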

HARDWARE

Fig.7 Hardware representation of the device


Car Health Monitoring Device Using Raspberry Pi


SOFTWARE

Fig.8 Working of the device

V. PROPOSED METHODOLOGY

1. First, the device is connected to the phone through Bluetooth.

2. Bluetooth is used for pairing because it generates a unique pairing ID, which later helps when multiple devices are present, so that data can be sent to each device.

3. The SIM card has a data plan, so once the device and phone are paired, the data is sent over the SIM's internet connection to WebSockets, and through these WebSockets the data is delivered to the app, as shown in the figure.

4. Since the raw data will not be understandable to the user, the backend cleans and parses it into tables and graphs so that it is understandable to a normal user.

5. Messages regarding accident and towing conditions are checked through an algorithm; here the accelerometer is used to detect and differentiate between the two alerts.

6. The output of the algorithm is used to send messages to the user's phone.

Since a large amount of data is generated, it is also used for data analysis: we predict the future dates when the vehicle will need servicing, as well as driver behavior.
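The accident/towing differentiation of step 5 can be sketched as a threshold rule on accelerometer magnitude: a sharp high-g spike suggests a collision, while a sustained mild deviation with the engine off suggests towing. The thresholds and function below are hypothetical, not the device's calibrated values.

```python
import math

# Hypothetical thresholds (in g); real values would be calibrated on the device.
ACCIDENT_G = 4.0   # a sharp, high-magnitude spike suggests a collision
TOWING_G = 1.3     # a sustained mild deviation while the engine is off

def classify_event(samples, engine_on: bool) -> str:
    """Classify accelerometer samples (ax, ay, az in g) into alerts."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    if max(mags) >= ACCIDENT_G:
        return "accident"
    # Towing: the vehicle moves (acceleration deviates from the 1 g of rest)
    # while the engine is off, for most of the observation window.
    if not engine_on and sum(m > TOWING_G for m in mags) > len(mags) // 2:
        return "towing"
    return "normal"
```

A resting vehicle reads about 1 g, so neither threshold fires; only the labelled conditions produce alert messages.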

VI. EXPECTED RESULT

The app on the user's phone will display the data and produce graphs based on the results. It will also show predictions based on the past 30 days of data.
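One simple way to produce such a prediction from 30 days of data is a linear trend fit. The sketch below (our simplification, not the paper's actual backend analysis) fits a least-squares line to daily odometer readings and estimates the days remaining until a hypothetical service threshold.

```python
def predict_service_day(daily_km, service_interval_km=10000.0):
    """Fit a linear trend to daily odometer readings (needs >= 2 points)
    and estimate how many days remain until the service threshold.

    Illustrative sketch; the threshold and approach are assumptions.
    """
    n = len(daily_km)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_km) / n
    # Ordinary least-squares slope (km driven per day).
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_km)) \
        / sum((x - mean_x) ** 2 for x in xs)
    remaining = service_interval_km - daily_km[-1]
    return remaining / slope if slope > 0 else float("inf")
```

With a steady 100 km/day over 30 days (ending at 2900 km), the estimate is 71 days to reach the 10000 km threshold.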

REFERENCES

[1] www.sae.org, visited 12/09/2018.

[2] “OBD-II and Second Generation Scan Tool”, NAPA Institute of Automotive Technology, 1998.

[3] Al Santini, “OBD-II: Function, Monitors and Diagnostic Techniques”, Cengage Learning, Jun-2010.

[4] J. Bachiochi, "Vehicle Diagnostics," From the Bench, Issues 251/252/253, 2011. See also http://www.totalcardiagnostics.com and http://www.aa1car.com, visited 12/09/2018.

[5] P. Dzhelekarski and D. Alexive, “Reading And Interpreting Diagnostic Data From Vehicle OBD-II System”, Sozopol Bulgaria Electronics, 2005.

[6] D. Siewiorek, "Generation Smartphone," IEEE Spectrum, Sep. 2012.

[7] Martin Geier, Martin Becker, Daniel Yunge, Benedikt Dietrich, Reinhard Schneider, Dip Goswami, Samarjit Chakraborty, “Let’s put the Car in your Phone!”, Institute for Real-Time Computer Systems, TU Munich, Germany, 2013.

[8] S. Baek and J.-W. Jang, Department of Computer Engineering, Dong-Eui University, Republic of Korea, "Implementation of integrated OBD-II connector with external network," Nov. 2014.

[9] "On-Board Diagnostics ICs," http://elmelectronics.com/obdic.html, 2015, visited 12/09/2018.




A Review on Relay Selection Schemes in Cognitive Radio Networks

Deepmala Trivedi and Gopal Singh Phartiyal
Department of Electronics and Communication Engineering
Indian Institute of Technology Roorkee, Roorkee, Uttarakhand, India

[email protected] and [email protected]

Abstract— This study discusses some of the major techniques proposed and developed by researchers for relay selection in wireless data communication networks. First, various techniques are grouped on the basis of the network performance parameters they focus on. Then, techniques under the same parameter are discussed one by one. The network performance parameters considered in this study are: distance to/from relays, battery energy, signal-to-noise ratio (SNR), information security, and channel interference. The study suggests that the techniques are application and user specific.

Keywords—Relay selection, wireless networks, SNR, channel interference, information security.

I. INTRODUCTION

Day by day, the number of users utilizing wireless communication networks is increasing. As a result, congestion in wireless networks increases. One of the major issues that arise due to this congestion is inefficient spectrum utilization in these complex wireless systems. Cognitive Radio (CR) is an enabling technology for dynamic spectrum access [1]. This technology addresses the spectrum scarcity issue experienced by numerous nations. CR allows the secondary users (SUs), or unlicensed users, to share the limited spectrum with the primary users (PUs), or licensed users, judiciously with the help of spectrum sharing mechanisms.

Two classification scenarios for spectrum sharing mechanisms in CRs are provided in the literature [2]. The first classification is based on the knowledge required to coexist in the primary network; in this scenario, three types of spectrum sharing mechanism exist, namely overlay, underlay, and interweave [3]. The second classification is based on how SUs access the licensed band: if SUs access the spectrum opportunistically, it is called dynamic spectrum sharing, while another technique, cooperative spectrum sharing, uses spatial diversity as information to access the spectrum [2]. Each of these spectrum sharing mechanisms has advantages and disadvantages depending on the requirements of the network. The choice of spectrum sharing mechanism affects the choice of relay selection scheme.

At the physical layer of any spectrum sharing scheme, the signal is transmitted and received with the help of antennas. Often the distance between the transmitter antenna and the receiver antenna is much larger than the antenna's transmission range, so direct communication is not feasible. To mitigate this issue, a "relay" is used. A relay is a device (amplifier/modem) which receives a weak and corrupted signal from a transmitter and retransmits an amplified or uncorrupted signal to a receiver. Using relays between a transmitter and a receiver provides multiple advantages, such as extended coverage, enlarged capacity, enhanced signal strength, lower transmission power requirements, and quicker network rollout [4]. The advantages of relays are discussed further in Section II. Relays are classified, first, on the basis of the relay strategy used [5] and, second, on the basis of the mode of operation [6]. These categories are discussed further in Section III.

In a CR network, there are multiple relays, called 'nodes', between a transmitter and a receiver, which are ready to participate in the signal transmission process and become a 'relay node'. Note that until a relay is selected for transmission it is termed a node, and once selected it is termed a 'relay node'. For the successful functioning of any spectrum sharing mechanism, the relay selection scheme plays an important role [7]. Also note that later in this study the term 'wireless network' is used instead of 'cognitive radio network' for ease of explanation; the relay schemes discussed here work similarly in both networks. Various relay selection algorithms are discussed in Section IV of this study, along with selection criteria such as energy, SNR, and capacity enhancement. Section V concludes the study.

II. RELAY FUNCTIONALITY AND ADVANTAGES

The relaying technique exploits the broadcast attributes of wireless signals: the transmitter transmits the signal to both the receiver and the relays. For example, in equations (1a) and (1b),


$Y_{s,d}$ and $Y_{s,r}$ are the received signals at the receiver (destination) and the relay nodes, respectively:

$Y_{s,d} = \sqrt{P_s}\, H_{s,d}\, x + \eta_{s,d}$   (1a)

$Y_{s,r} = \sqrt{P_s}\, H_{s,r}\, x + \eta_{s,r}$   (1b)

where $P_s$ is the transmitted power at the transmitter, $x$ is the transmitted symbol, and $H_{s,d}$ and $H_{s,r}$ are the channel coefficients for the transmitter-receiver and transmitter-relay links, respectively; $\eta_{s,d}$ and $\eta_{s,r}$ are additive white complex Gaussian noise terms [8] [9].

In this section, the core benefits of using relays in communication networks are discussed. For example, using multi-hop relay networks can enhance the transmission range of a base station. Figure 1 demonstrates how to enhance coverage in a network using multi-hop relays [10].

Figure 1: Base station network with relays [10]

Another issue resolved by the use of relaying is weak signal strength over long-distance communication. Communication without relays is prone to interference and noise. In a relay-based network, on the other hand, a relay receives the transmitted signals and amplifies/decodes them, after which the interference and noise are suppressed or removed from the signals, thereby increasing signal strength and signal-to-noise ratio (SNR).

Further, it is shown in [11] that decreasing the distance between transmitter and receiver increases channel capacity. Since it is not possible to bring the transmitter and receiver close to each other in long-distance communication, relays are used to keep the channel capacity intact over long distances. Further, the use of relay networks removes the need for supplementary base stations, hence reducing maintenance costs.

Finally, through relay networks, high data rates are achievable because users are near a mini RF entrance node [12]. Decreased transmission power and amplified capacity are also benefits of employing relays in communication networks.

III. TYPES OF RELAYS

Relays are classified based on two criteria. First classification is based on the type of relay strategy used in relay networks. This classification is discussed in part A. Second classification which is discussed in part B is based on the mode of operation of relay.

A. Relay Strategy based Classification:

In this classification, three relaying strategies are discussed namely; amplify and forward (AF), decode and forward (DF) and, compress and forward (CF).

Amplify and Forward (AF) strategy: In this strategy, relays receive the signal from the transmitter and amplify it; the amplified version of the signal is forwarded towards the receiver [13]. These relays are sometimes called 'repeaters' by the communication community. The amplification in the AF relay strategy is based on equation (2).

$f_{AF}(Y_{s,r}) = \beta\, Y_{s,r}$   (2)

$\beta \le \sqrt{\dfrac{P_r}{E\left[|H_{s,r}|^2\right] E\left[|x|^2\right] + E\left[|\eta_{s,r}|^2\right]}}$   (3)

where $\beta$ is the relay transmit average power constraint coefficient and $P_r$ is the average transmit power at the relay; $\beta$ is derived as suggested in [14].

With this strategy, signal strength can be improved whenever needed. However, the major disadvantage of the AF strategy is 'noise amplification' along with the signal; therefore, the SNR does not improve as significantly as the signal strength does. Despite this issue, it is an attractive strategy for relay networks due to its simplicity.
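The noise-amplification point can be made concrete: assuming the AF gain relation $f_{AF}(Y) = \beta Y$ of equation (2), scaling signal and noise by the same $\beta$ leaves the relay-input SNR unchanged, so the gain alone cannot improve it. The numbers below are illustrative.

```python
def amplify_forward(y_sr, beta):
    """AF relay output f_AF(Y_sr) = beta * Y_sr (equation (2))."""
    return beta * y_sr

# Illustrative: the same beta multiplies both signal and noise amplitudes,
# so the SNR (power ratio) before and after amplification is identical.
signal, noise = 2.0, 0.5
beta = 3.0
snr_in = (signal / noise) ** 2
snr_out = (amplify_forward(signal, beta) / amplify_forward(noise, beta)) ** 2
```

Both ratios equal 16 here; the end-to-end SNR at the destination is a separate matter, since the second hop adds its own noise.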

Decode and Forward (DF) strategy: In this strategy, the relay receives the signal from the transmitter, decodes it, removes the noise, and re-encodes it. These re-encoded signals are forwarded towards the receiver. Decoding and encoding capability in relays compounds the operational complexity


of relay networks. Still, this strategy is utilized because it improves the SNR significantly [15].

Compress and Forward (CF) strategy: In this strategy, the relay receives the signal from the transmitter, quantizes it, and re-encodes it. These re-encoded signals are forwarded towards the receiver. This scheme is helpful when relays cannot decode the signals.

B. Mode of Operation based Classification:

Under this classification, two relaying techniques are explained namely half duplex relaying and full duplex relaying.

Half Duplex Relaying: In this mode of relaying, receiving the signal from the transmitter and forwarding it towards the receiver is done either at different frequencies or in different time slots, with the help of a multiplexer. This mode of operation is simple and less complex, but its spectral efficiency is low.

Full Duplex Relaying: In this mode of relaying, receiving the signal from the transmitter and forwarding it towards the receiver is done at the same frequency and in the same time slot. Full duplex relaying is very hard to implement in practice because of the 'self-interference' phenomenon occurring in the network. Many interference cancellation schemes have been proposed, after which full duplex relaying gained popularity [16][17].

This section summarized how a relay performs once it is used for relaying. But in a generic wireless network there are numerous relays available for a transmitter to send its signal to, of which only one, or a few, are to be selected for relaying at a particular frequency and/or time. A relay selection scheme selects the best relay out of the multiple available relays to transmit the signal further. Hence, relay selection is crucial for the efficient performance of spectrum access mechanisms and, in turn, of wireless networks. In the next section, various relay selection schemes are explained and briefly discussed; the discussion is categorized by network performance parameter.

IV. RELAY SELECTION SCHEMES

There are various ways to choose the best relay. The major relay selection schemes (RSS) present in the literature are discussed in this section: for example, RSS based on the optimization of parameters such as energy efficiency, SINR, interference, capacity enhancement, data rate, security, and distance. Note that in the remainder of this section a relay is termed a 'node' until it is selected for relaying; this helps separate selected and unselected nodes.

A. Distance based RSS:

In general, these schemes select relays based on the distances between the transmitter and the nodes. In [18], the authors proposed a distance-based forwarding node selection algorithm that sets the value of a timer to overcome the large end-to-end delay problem observed in previous reports [19]; in those reports, random value-based forwarding node selection algorithms were used. Lower end-to-end delay enables reliable and rapid packet delivery over an unreliable wireless network. The major idea in that paper is to control the value of the distance between two adjacent reference nodes (r = 0.5). In [20], the authors proposed a unique multilayer scheme for virtual multi-input single-output (MISO) links in ad hoc networks; its main advantage is an increase in transmission range at the cost of slight channel interference, and the authors also reduced the end-to-end delays between nodes. In [21] and [22], the authors proposed geographic random forwarding (GeRaF) RSS for ad hoc and sensor networks. In these schemes, selection of the relaying nodes is random; every node knows its own location and the location of the final destination. The studies suggest that no topology information or routing tables are needed at each node: knowledge of positions is enough for node selection.
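A minimal distance-based selection can be sketched as picking the node whose worse hop (to the source or to the destination) is shortest. This is a generic distance heuristic for illustration, not the specific algorithm of [18]-[22].

```python
import math

def nearest_bottleneck_relay(src, dst, nodes):
    """Pick the node whose longer hop (to source or destination) is shortest.

    Points are (x, y) tuples. A generic distance heuristic, not the exact
    algorithm of [18]-[22].
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(nodes, key=lambda n: max(dist(src, n), dist(n, dst)))
```

For a source at (0, 0) and destination at (10, 0), the midpoint node (5, 0) wins over nodes that are close to only one endpoint, since the metric penalizes the weaker hop.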

B. Energy efficiency based RSS:

These schemes focus on optimal utilization of transmitter/node energy sources.

In [23], the authors designed an energy-efficient relay selection scheme for DF-strategy-based cooperative transmission. They proposed an adaptive power allotment approach in which power is distributed non-uniformly among the cooperative transmitters/nodes; the total power consumption of the network reduces considerably using this approach. In [24], the authors proposed an energy-efficient relay selection scheme in which, first, various cooperative sets of nodes are framed for a source according to its transmission range; then the energy-efficient cooperative-sets based maximum weighted matching (EECS-MWM) algorithm is applied for relay selection. Utilization of this scheme needs precaution, however, because the energy efficiency of the system decreases as the SNR threshold increases. In [25], the authors proposed two refinements to the basic framework of cooperative relaying. The first is the concept of 'relay selection on demand': relays are selected only when the receiver needs them; otherwise they remain in sleep mode. The second is the concept of 'early retreat': relays with poor channel conditions do not take part in the relay selection process.
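The interplay between channel quality and battery energy can be sketched as a two-stage rule: filter out nodes below an SNR threshold (in the spirit of the 'early retreat' idea), then pick the survivor with the most residual energy. This is a generic sketch, not the exact algorithms of [23]-[25].

```python
def energy_aware_relay(nodes, snr_threshold):
    """Among nodes whose link SNR meets the threshold, pick the one with
    the most residual battery energy.

    nodes: list of (name, snr_db, energy_joules) tuples.
    A generic sketch in the spirit of [23]-[25], not their exact algorithms.
    """
    eligible = [n for n in nodes if n[1] >= snr_threshold]
    if not eligible:
        return None   # 'early retreat': no node has an adequate channel
    return max(eligible, key=lambda n: n[2])
```

Raising the threshold shrinks the eligible set, which mirrors the caveat about [24]: a stricter SNR threshold can leave only energy-poor candidates (or none at all).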

C. Security based RSS:

Due to the broadcast nature of wireless channels, concerns about privacy and security in wireless networks have grown,


particularly in military and country-wide security applications.

In [26], the authors proposed a paired-relay selection scheme for the DF protocol. In this scheme, the first relay, called the 'helper', transmits a confidential message to the legitimate receiver, while the second relay transmits a jamming signal to corrupt the signal received by the eavesdropper. The authors suggested four strategies for deploying this scheme: random relay and random jammer (RRRJ), random jammer and best relay (RJBR), best relay and best jammer (BRBJ), and best relay and no jammer (BRNJ). The study also demonstrates the tradeoff between secrecy performance and implementation cost. In [27], the authors formulated a two-step strategy for relay selection in a DF-protocol-based relay network. Here, the transmitter and nodes are assumed to be located in a single cluster, while the receiver and eavesdroppers are at faraway locations outside this cluster. In step 1, the transmitter transmits its message locally within the cluster at low power; these transmissions are assumed to be secure. In step 2, the relays decode the received message, and then the transmitter and the relay cooperatively transmit a weighted version of the message (using knowledge of the channel state information (CSI)) to the receiver.
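The BRBJ idea from [26] can be sketched as two greedy picks over distinct nodes: the relay maximizing the legitimate-link SNR and the jammer maximizing the interference seen by the eavesdropper. This is a simplified sketch; the tuple layout and metrics are our assumptions, not the paper's exact formulation.

```python
def best_relay_best_jammer(nodes):
    """BRBJ-style pairing (simplified sketch of [26]).

    nodes: list of (name, snr_to_receiver, interference_at_eavesdropper),
    with at least two entries. Returns (relay_name, jammer_name), distinct.
    """
    relay = max(nodes, key=lambda n: n[1])             # best legitimate link
    jammer = max((n for n in nodes if n is not relay),  # best at hurting Eve
                 key=lambda n: n[2])
    return relay[0], jammer[0]
```

The distinctness constraint matters: the same node cannot simultaneously forward the confidential message and jam, which is exactly the helper/jammer split described above.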

D. SNR based RSS:

In AF protocols, when a relay amplifies a signal, the noise associated with the signal is also amplified, so the SNR goes down; in DF protocols, inefficient decoding causes SNR degradation. Obtaining optimal SNR through relay selection is discussed in this section.

In [28], the authors presented a scheme based on the artificial bee colony (ABC) algorithm. The algorithm first searches all possible relay paths (using employed bees) and calculates the corresponding SNRs, then shares this information with the other bees (onlooker and scout bees). Through this iterative process, the best relay route (food source) is identified; the main purpose of the whole bee colony is to find a relay route that maximizes the SINR. In [29], the authors proposed a DF-based cooperative relaying scheme. Because inefficient decoding in the conventional DF protocol causes SINR degradation and error propagation, the authors suggested an optimum-threshold-based relay selection algorithm over Nakagami-m fading channels. They showed that optimum-threshold-based relay selection schemes are effective in minimizing the error propagation arising from detection errors, and considered virtual noise (VN), multiple relay selection (MRS), and cooperative MRS (C-MRS) detection schemes for the evaluation.

E. Data Rate (Channel Capacity) based RSS:

The demand for high data rates from wireless network users, such as mobile users, is rapidly increasing due to new applications such as video streaming services and online gaming. Therefore, communication networks are required to provide high data rates and improved channel capacity. The relay selection schemes discussed in this section focus on algorithms that can achieve high data rates and improved capacity.

In [30], the authors proposed a reverse auction mechanism to select a node from among the available nodes to act as a relay for a transmission. The auction mechanism is employed for three cases: first, when nodes are allocated a fixed transmission power; second, when nodes are allocated the transmission power required to achieve a specific data rate; and third, when the transmission power of nodes is set so as to maximize the base station's utility. The proposed auction scheme outperformed an auction scheme based on the widely used Vickrey-Clarke-Groves (VCG) mechanism in terms of the data rate achieved by the destination node.

F. Interference based RSS:

In most early work on relay selection schemes, multi-user interference is ignored, first because fewer users utilized the network, and second because including a multi-user interference study alongside the other issues would have made the solution more complex and less interpretable. These previously reported schemes become inefficient in the presence of multi-user interference, because interference limits the diversity gain and also bounds the channel capacity. Now, with advanced, reliable, and sustainable networks, such studies are feasible. For example, in [31], the authors studied the effect of multi-user interference on relay selection schemes with the AF protocol and proposed a max-min relay selection for legacy AF systems that accounts for multi-user interference.
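The core of a max-min rule like that of [31] is choosing the relay whose weaker hop is strongest, since a two-hop path is only as good as its bottleneck. The sketch below keeps just that rule; the interference terms that [31] folds into the per-hop metrics are omitted.

```python
def max_min_relay(links):
    """Max-min relay selection over two-hop SNRs.

    links: dict mapping relay name -> (snr_source_relay, snr_relay_dest).
    Returns the relay whose weaker hop is strongest. The multi-user
    interference modelling of [31] is omitted in this sketch.
    """
    return max(links, key=lambda r: min(links[r]))
```

A relay with one excellent hop and one terrible hop loses to a relay with two moderate hops, which is precisely why the bottleneck (min) is the quantity to maximize.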

V. CONCLUSION

This study briefly discussed the role and importance of relays in wireless data communication networks. Relays are classified on the basis of mode of operation and relaying strategy, and the relay selection technique is pivotal to an efficient relaying mechanism. Popular as well as recent schemes were discussed and critically analyzed. There are parameters on which only a few relay selection schemes have been developed; those articles are not discussed in this study. In the recent literature, relay selection schemes based on battery energy and information security are trending. Extensive reviews focused on one particular network performance parameter can provide in-depth information on the current state of the art.

REFERENCES

[1] Min Song, Chunsheng Xin, Yanxiao Zhao, and Xiuzhen Cheng, “Dynamic spectrum access: from cognitive radio to network radio,” IEEE Wirel. Commun., vol. 19, no. 1, pp. 23–29, Feb. 2012.


[2] P. Thakur, G. Singh, and S. N. Satasia, “Spectrum sharing in cognitive radio communication system using power constraints: A technical review,” Perspect. Sci., vol. 8, pp. 651–653, Sep. 2016.

[3] B. Kumar, S. Kumar Dhurandher, and I. Woungang, “A survey of overlay and underlay paradigms in cognitive radio networks,” Int. J. Commun. Syst., vol. 31, no. 2, p. e3443, Jan. 2018.

[4] C. Cai, Y. Cai, X. Zhou, W. Yang, and W. Yang, “When Does Relay Transmission Give a More Secure Connection in Wireless Ad Hoc Networks?,” IEEE Trans. Inf. Forensics Secur., vol. 9, no. 4, pp. 624–632, Apr. 2014.

[5] M. Rizinski and V. Kafedziski, “Outage probability of AF, DF and CF cooperative strategies for the slow fading relay channel,” in 2013 11th International Conference on Telecommunications in Modern Satellite, Cable and Broadcasting Services (TELSIKS), 2013, pp. 609–612.

[6] Kanghee Lee, H. M. Kwon, E. M. Sawan, Hyuncheol Park, and Y. H. Lee, “Half-duplex and full-duplex distributed relay systems under power constraints,” in 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2013, pp. 684–689.

[7] Tao Jing, Shixiang Zhu, Hongjuan Li, Xiuzhen Cheng, and Yan Huo, “Cooperative relay selection in cognitive radio networks,” in 2013 Proceedings IEEE INFOCOM, 2013, pp. 175–179.

[8] A. Sendonaris, E. Erkip, and B. Aazhang, “User cooperation diversity-part I: system description,” IEEE Trans. Commun., vol. 51, no. 11, pp. 1927–1938, Nov. 2003.

[9] A. El Gamal and T. M. Cover, “Multiple user information theory,” Proc. IEEE, vol. 68, no. 12, pp. 1466–1483, 1980.

[10] B. Razeghi, G. A. Hodtani, and S. A. Seyedin, “On the coverage region of MIMO two-hop amplify-and-forward relay network,” in 7’th International Symposium on Telecommunications (IST’2014), 2014, pp. 1035–1039.

[11] V. Chandrasekhar, J. Andrews, and A. Gatherer, “Femtocell networks: a survey,” IEEE Commun. Mag., vol. 46, no. 9, pp. 59–67, Sep. 2008.

[12] M. Iwamura, H. Takahashi, and S. Nagata, "LTE-Advanced Relay Technology and Self-backhauling," NTT DOCOMO Technical J., vol. 12, pp. 29–36, 2011.

[13] W.-C. Choi, S. Kim, S.-R. Jin, and D.-J. Park, “Average approach to amplify-and-forward relay networks,” in 2009 9th International Symposium on Communications and Information Technology, 2009, pp. 1195–1196.

[14] J. N. Laneman and G. W. Wornell, “Energy-efficient antenna sharing and relaying for wireless networks,” in 2000 IEEE Wireless Communications and Networking Conference. Conference Record (Cat. No.00TH8540), vol. 1, pp. 7–12.

[15] K. Wang, C. Cai, and Y. Cai, “Performance analysis of selection decode-and-forward relay networks in generalized fading channels,” in 2011 International Conference on Wireless Communications and Signal Processing (WCSP), 2011, pp. 1–5.

[16] Z. Tong and M. Haenggi, “Throughput Analysis for Full-Duplex Wireless Networks with Imperfect Self-interference Cancellation,” Notre Dame, USA, 2015.

[17] D. Korpi et al., “Advanced self-interference cancellation and multiantenna techniques for full-duplex radios,” in 2013 Asilomar Conference on Signals, Systems and Computers, 2013, pp. 3–8.

[18] M. Chen, X. Liang, V. Leung, and I. Balasingham, “Multi-Hop Mesh Cooperative Structure Based Data Dissemination for Wireless Sensor Networks,” in International Conference on Advanced Communication Technology, 2009, pp. 102–106.

[19] Qiang Dou et al., “An energy efficient routing protocol for Wireless Sensor Network,” in 2012 18th Asia-Pacific Conference on Communications (APCC), 2012, pp. 823–827.

[20] G. Jakllari, S. V. Krishnamurthy, M. Faloutsos, P. V. Krishnamurthy, and O. Ercetin, “A Cross-Layer Framework for Exploiting Virtual MISO Links in Mobile Ad Hoc Networks,” IEEE Trans. Mob. Comput., vol. 6, no. 6, pp. 579–594, Jun. 2007.

[21] M. Zorzi and R. R. Rao, “Geographic random forwarding (geraf) for ad hoc and sensor networks: multihop performance,” IEEE Trans. Mob. Comput., vol. 2, no. 4, pp. 337–348, Oct. 2003.

[22] M. Zorzi and R. R. Rao, “Geographic random forwarding (geraf) for ad hoc and sensor networks: energy and latency performance,” IEEE Trans. Mob. Comput., vol. 2, no. 4, pp. 349–365, Oct. 2003.

[23] Z. Sheng, J. Fan, C. H. Liu, V. C. M. Leung, X. Liu, and K. K. Leung, “Energy-Efficient Relay Selection for Cooperative Relaying in Wireless Multimedia Networks,” IEEE Trans. Veh. Technol., vol. 64, no. 3, pp. 1156–1170, Mar. 2015.

[24] Yun Li, Chao Liao, Xue Zhu, M. Daneshmand, and Chonggang Wang, “Optimal energy-efficient relay selection in cooperative cellular networks,” in 2013 19th IEEE International Conference on Networks (ICON), 2013, pp. 1–5.

[25] H. Adam, C. Bettstetter, and S. M. Senouci, “Adaptive relay selection in cooperative wireless networks,” in 2008 IEEE 19th International Symposium on Personal, Indoor and Mobile Radio Communications, 2008, pp. 1–5.

[26] Y. Liu, L. Wang, T. T. Duy, M. Elkashlan, and T. Q. Duong, “Relay Selection for Security Enhancement in Cognitive Relay Networks,” IEEE Wirel. Commun. Lett., vol. 4, no. 1, pp. 46–49, Feb. 2015.

[27] Lun Dong, Z. Han, A. P. Petropulu, and H. V. Poor, “Secure wireless communications via cooperation,” in 2008 46th


Annual Allerton Conference on Communication, Control, and Computing, 2008, pp. 1132–1138.

[28] J. Wang, X. Li, and K. Ma, “Multiple Relay Selection Scheme Based on Artificial Bee Colony Algorithm,” in 2013 Third International Conference on Instrumentation, Measurement, Computer, Communication and Control, 2013, pp. 1055–1059.

[29] E. Aydin and H. Illian, “SNR-based relay selection scheme for cooperative relay networks,” in 2015 International Wireless Communications and Mobile Computing Conference (IWCMC), 2015, pp. 448–453.

[30] M. V. S. Aditya, P. Priyanka, and G. S. Kasbekar, “Truthful reverse auction for relay selection, with high data rate and base station utility, in D2D networks,” in 2017 Twenty-third National Conference on Communications (NCC), 2017, pp. 1–6.

[31] I. Krikidis, J. Thompson, S. Mclaughlin, and N. Goertz, “Max-min relay selection for legacy amplify-and-forward systems with interference,” IEEE Trans. Wirel. Commun., vol. 8, no. 6, pp. 3016–3027, Jun. 2009.




Modelling and Performance Analysis of Distributed Generation Systems Integrated with Distribution Grid

Nikita Gupta
Electrical Engineering, Babu Banarasi Das University, Lucknow, India
Email: [email protected]

Abstract— The need for a smart electrical system with lower technical losses and less environmental pollution has led to the advent of Distributed Generation (DG). DGs are a variety of technologies that produce electrical energy near the load centers. The most widely utilized technologies are solar panels and Combined Heat and Power (CHP). DGs are used as backup power devices to increase reliability and to reduce network costs and transmission losses. In this paper, the impact of solar photovoltaic cells on the operation of the grid is discussed. For analyzing the performance characteristics, the model has been developed in MATLAB/SIMULINK.

Keywords—Distributed generation, Power quality, Islanding, Wavelet, Photovoltaic Array.

I. INTRODUCTION

Nowadays, DG has become a popular alternative source of energy. Different types of DG technologies are available, based on renewable or non-renewable resources. Renewable resources are solar, wind, hydropower, and geothermal [1]; among non-renewable resources, CHP, fuel cells, micro turbines, and internal combustion engines are used. DGs can also include energy storage devices. Integration of DGs into the grid creates various issues that affect the whole system technically and commercially [2]. In this paper, integration of a DG, i.e., a photovoltaic array installed at the load, is used to analyze its impacts on the grid. In this introductory part, the role of DG integration and its impacts on grid operation are discussed. DG units are much smaller generating units than conventional power plants, so they consist of small-capacity generators and produce electricity near the load centres [3]. DGs can work in two modes: interconnected and islanded. In interconnected mode, the photovoltaic cell supplies power to both the load and the source. In islanded mode, the voltage at the source is interrupted and the photovoltaic cell supplies power only to the load. DG integration gives many benefits, such as:

• It has modular and flexible technologies.
• It can be located near the load it serves.
• It can comprise multiple power generation and storage components.
• It can decrease unit manufacturing and transportation costs (cost-effectiveness).
• It can improve energy efficiency and decrease line losses.
• It can help in social welfare and profit maximization.

So, DG integration can play a vital role in meeting increased energy demand, given the benefits above. But the integration of DGs into the distribution grid creates planning and operational issues: it can affect voltage level control, power quality, stability, and the protection philosophy. Due to the integration of DGs, a voltage level rise or fall may occur, depending on the power generated by the DG. Power quality issues such as voltage flicker, sags, swells, harmonics, and interruptions also arise after DG integration in the grid. This can lead to instability in the system, whether voltage, transient, or small-signal instability. Instability negatively impacts the protection philosophy, which may lead to false tripping, blinding of protection, unsynchronized reclosing, coordination issues, islanding problems, and fault level issues. The influence of a DG system depends on the size, kind, and nature of the DG units, and affects both static and dynamic aspects [4]. The static issue is voltage level control; the dynamic issues are power quality, stability, and protection.

Section II describes the photovoltaic system integrated with the grid. Section III presents the simulation modelling in MATLAB/SIMULINK, Section IV gives the results and analysis, and the last section presents the conclusion and future scope.

II. PHOTOVOLTAIC SYSTEM INTEGRATED WITH GRID

Among all DG technologies, solar photovoltaic energy is the most popular, fulfilling the bulk of DG energy requirements [5]. In this section the structural representation of a photovoltaic system integrated with the grid is shown in Fig. 1.

It comprises photovoltaic arrays and a DC/DC converter. The photovoltaic arrays provide electrical energy, and the converter is used for controlling the voltage of the arrays with respect to

ISBN: 978-93-5291-969-7. Proceedings of Second International Conference on Computing, Communication and Control Technology (IC4T-2018)

Modelling and Performance Analysis of Distributed Generation Systems integrated with Distribution Grid

Proceedings of IC4T, 2018 69

Fig. 1. Integration of PV arrays

the grid voltage and also for extracting maximum power from the arrays [6]. By controlling the duty cycle of the converter switch, Maximum Power Point Tracking (MPPT) is achieved. MPPT can be achieved with various techniques, such as solving the voltage-power equation of the solar cell mathematically, the Perturbation and Observation (P and O) algorithm, the Incremental Conductance (IncCond) method, maximum power point forecasting based on the current, and maximum power point forecasting based on the voltage.

In this work, the P and O algorithm is utilized for maximizing the output voltage and efficiency. In this method, a small perturbation is applied to the output voltage of the array and the resulting power is measured. If the perturbation produces an increment in the output power, the next perturbation is applied in the same direction. If there is a reduction in the output power, the next perturbation is applied in the reverse direction, as shown in Fig. 2. This process continues until the maximum power point of the photovoltaic arrays is reached.

Fig. 2. Flow chart of algorithm (P and O)
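As an illustration, the P and O loop can be sketched in a few lines of Python. The quadratic pv_power curve below is a hypothetical stand-in for a real panel characteristic (it is not the module model used in this paper), with its maximum placed near 54 V and 305 W to echo the per-module figures in Table I:

```python
# Minimal Perturb-and-Observe (P and O) MPPT sketch.
# pv_power is a toy PV curve with a single maximum near v = 54 V.

def pv_power(v):
    """Hypothetical PV power curve (W) as a function of voltage (V)."""
    return max(0.0, 305.0 - 0.5 * (v - 54.0) ** 2)

def perturb_and_observe(v=40.0, step=0.5, iterations=200):
    """Track the maximum power point by repeatedly perturbing the voltage."""
    p_prev = pv_power(v)
    direction = +1                     # initial perturbation direction
    for _ in range(iterations):
        v += direction * step          # apply the perturbation
        p = pv_power(v)
        if p < p_prev:                 # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point ~{v_mpp:.1f} V, {p_mpp:.1f} W")
```

In a real converter the perturbation is applied through the duty cycle rather than directly to the voltage; the small steady-state oscillation around the maximum power point seen here is the known drawback of P and O.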

III. SIMULATION MODELING

The modelling has been done, both with and without photovoltaic integration, on the MATLAB/SIMULINK platform.

A. Simulation model without Photovoltaic array

The simulation model of the distribution grid without the photovoltaic array is shown in Fig. 3. It consists of a three-phase source, line impedances, a transformer, a non-linear load and a circuit breaker.

B. Simulation model with Photovoltaic array

The simulation model of the photovoltaic system integrated with the grid is shown in Fig. 4. This model contains a photovoltaic array connected at the 25 kV grid. The model has a source, line impedances, non-linear loads and a solar array module with MPPT. MPPT is performed through the DC/DC converter using the P and O algorithm, which drives the system to the maximum power point. A DC link is used between the photovoltaic array and the inverter to ensure ripple-free input current from the solar panel. The parameters used in the model under normal conditions, with their ratings, are specified in Table I.

Fig. 3. Model without integration

Table I. Variables for Model

PARAMETERS                        VALUES
Supply frequency                  50 Hz
3-phase coupling transformer      100 kVA, 220 V / 25 kV
Series and shunt resistances      0.37152 Ω and 269.59 Ω
Maximum power per module          305.226 W
Photovoltaic array rating         100 kW
Parallel strings in array         66, with 5 series-connected modules per string
Open-circuit voltage              64.2 V
Short-circuit current             5.96 A
Irradiance                        1000 W/m²


Nikita Gupta


Fig. 4. Model with integration

The detailed model contains the following elements:

PV Arrays

In this model, PV arrays are used to generate electrical power. The power from a single panel is not sufficient, so a number of panels are connected in series to obtain a higher voltage and in parallel to obtain a higher current [7]. By connecting different panels we can build PV arrays that provide the desired output. In this model the arrays are used to produce 100 kW of power.
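As a quick consistency check of the array rating, total power is the product of the module count and the per-module maximum power, assuming the 305.226 W figure in Table I is the per-module maximum and the arrangement is 5 series modules per string across 66 parallel strings:

```python
# Back-of-the-envelope check of the 100 kW array rating (values from Table I,
# treating 305.226 W as the maximum power of a single module).

modules_per_string = 5       # series connection raises the string voltage
parallel_strings = 66        # parallel connection raises the array current
p_module = 305.226           # W at the maximum power point

p_array = modules_per_string * parallel_strings * p_module
print(f"array rating ≈ {p_array / 1000:.1f} kW")   # ≈ 100.7 kW, i.e. the ~100 kW rating
```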

Boost Converter

A boost converter steps the voltage up from a low DC level to a higher DC level. The output voltage depends on the input voltage, the inductor rating, the storage rating of the capacitor and the switching speed [8]. In this model a 5 kHz boost converter has been used to increase the voltage up to 500 V.
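In continuous conduction mode an ideal boost converter obeys V_out = V_in / (1 - D), where D is the duty cycle. A minimal sketch, using an assumed input voltage of 273.5 V (only the 500 V DC-link target comes from this model):

```python
# Ideal boost converter relation V_out = V_in / (1 - D), continuous
# conduction mode, lossless. The 273.5 V input is a hypothetical example.

def boost_duty_cycle(v_in, v_out):
    """Duty cycle needed to step v_in up to v_out (ideal converter)."""
    if v_out <= v_in:
        raise ValueError("boost converter can only step voltage up")
    return 1.0 - v_in / v_out

d = boost_duty_cycle(273.5, 500.0)
print(f"duty cycle ≈ {d:.2f}")   # ≈ 0.45
```

In the simulated model, the MPPT algorithm adjusts this duty cycle so that the array operates at its maximum power point while the DC link is held at 500 V.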

Voltage Source Inverter (VSI)

The voltage conversion from DC to AC is achieved through inverters [9-10]. Here the inverter has been used to change the DC voltage of 500 V into an AC voltage of 260 V.

IV. RESULT AND ANALYSIS

A. Inverter DC Voltage Characteristics

With the integration of the photovoltaic array, the DC voltage at the inverter input becomes 500 V, as shown in Fig. 5.

Fig. 5. Characteristic of DC voltage w.r.t time

B. Power Characteristics at bus

With the integration of the photovoltaic array, the power at the bus becomes 106.13 W, as shown in Fig. 6.

Fig. 6. Characteristics of bus power w.r.t time

C. Voltage Characteristics at bus

With the integration of the photovoltaic array, the characteristics of the voltage at the bus are as shown in Fig. 7.

Fig. 7. Characteristics of bus voltage w.r.t time


D. Current Characteristics at bus

With the integration of the photovoltaic array, the characteristics of the current at the bus are as shown in Fig. 8.

Fig. 8. Characteristics of bus current w.r.t time

V. CONCLUSION

In this paper, the integration of solar arrays with the distribution grid has been carried out to analyze the impacts before and after integration. Various quantities such as bus voltages, line voltages and load current have been obtained and plotted with respect to time. This study suggests that the integration is technically feasible, can help fulfil the ever-increasing demand for energy and, being a clean energy resource, can also reduce environmental degradation. However, some technical issues are evident from the characteristics, and hence DG integration with proper control techniques is required for its proper utilization.

REFERENCES

[1] Khoan Tran and M. Vaziri, "Effects of dispersed generation (DG) on distribution systems," IEEE Power Engineering Society General Meeting, vol. 3, pp. 2173-2178, 12-16 June 2005.

[2] B. Hussain and S. M. Sharkh, "Impact studies of distributed generation on power quality and protection setup of an existing distribution network," International Symposium on Power Electronics, Electrical Drives, Automation and Motion (SPEEDAM), pp. 1243-1246, 14-16 June 2010.

[3] H. Falaghi and M. R. Haghigam, "Distributed generation impacts on electric distribution systems reliability: sensitivity analysis," Proc. of International Conference on "Computer as a Tool", pp. 22-24, November 2005.

[4] Edward J. Coster, Johanna M. A. Myrzik, and Bas Kruimer, "Integration issues of distributed generation in distribution grids," Proceedings of the IEEE, vol. 99, pp. 29-39, 2011.

[5] D. Lu, T. Zhou, H. Fakham, and B. Francois, "Design of a power management system for a PV station including various storage technologies," 13th International Power Electronics and Motion Control Conference (EPE-PEMC), Poznan, 1-3 September 2008.

[6] A. P. Yadav, S. Thirumaliah, and G. Harith, "Comparison of MPPT algorithms for DC-DC converters based PV systems," International Journal of Innovative Research in Science, Engineering and Technology, vol. 6, issue 6, pp. 12103-12110, June 2017.

[7] K. Nishioka, N. Sakitani, Y. Uraoka, and T. Fuyuki, "Analysis of multicrystalline silicon solar cells by modified 3-diode equivalent circuit model taking leakage current through periphery into consideration," Solar Energy Materials and Solar Cells, vol. 91, no. 13, pp. 1222-1227, 2007.

[8] Hyun-Lark Do, "A soft-switching DC/DC converter with high voltage gain," IEEE Transactions on Power Electronics, vol. 25, no. 5, pp. 1193-1200, May 2010.

[9] A. Lindberg and T. Larsson, "PWM and control of three level voltage source converters in an HVDC back-to-back station," Sixth International Conference on AC and DC Power Transmission, pp. 297-302, 29 April - 3 May 1996.

[10] M. Vaziri, S. Vadhva, T. Oneal, and M. Johnson, "Smart grid, distributed generation, and standards," IEEE Power and Energy Society General Meeting, pp. 1-8, 24-29 July 2011.


A Peek into Renewable Energy Systems of India

Dr. Nandita Kaushal

Department of Public Administration, University of Lucknow
Email: [email protected]

Abstract - Energy is a highly crucial input for the sustenance of life. At the global level, energy finds a marked place among the Sustainable Development Goals. India is one of the leading global energy producers and consumers. Its power sector makes use of both conventional and non-conventional (renewable) energy resources. However, despite the vast availability of the latter resources, their share in total power generation is quite small.

Keywords-Energy, Electricity, Socio-Economic Development, Power Generation, Renewable Energy Resources.

I. INTRODUCTION

Energy is such a basic and crucial input for the sustenance of life and the well-being of all living beings that it is inconceivable to imagine life or development of any sort without it. One of the pivotal catalysts driving the process of socio-economic development is the availability of dependable energy supplies. The demand for energy is increasing unabated globally, and considering its over-riding importance for the development and prosperity of nations, it is imperative that all issues related to the energy sector be approached judiciously and in a responsible manner.

At the global level, energy finds a marked place among the Sustainable Development Goals, adopted by the United Nations in 2015, as Goal 7. It is profoundly related to all the other goals, as their achievement is closely linked to access to energy. The policy document states that, to secure universal access to cost-effective energy by 2030, more investment needs to be made in clean energy sources. A key goal before policy scientists is to enlarge infrastructure and advance technology so as to ensure the provision of clean energy in every developing country. This can help spur growth and protect the environment (United Nations Development Programme, Sustainable development goals).

II. NATIONAL POWER SECTOR SCENE

India holds a distinguished position in the world in terms of production and consumption of energy. The country's power sector is among the most diversified in the world and utilizes both conventional and non-conventional (renewable) energy resources for power generation. The demand for electricity is increasing fast, mainly owing to continuous economic development, and this is necessitating acceleration of the installed generating capacity of power stations. In April 2018 the total installed capacity of the country's power stations was 343.79 gigawatts (GW). The most significant power source in the country is thermal power, followed by renewable energy, hydro power and nuclear power. In the financial year 2018, electricity production was 1,201.543 billion units, a growth of about 55.72% over the financial year 2010 level. Electricity production in the country has advanced at a compound annual growth rate of 5.69% during the financial years 2010-2018 (India Brand Equity Foundation, Power sector in India).
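The cumulative growth and CAGR figures quoted above are mutually consistent, as a quick check shows (assuming the growth is measured over the eight years from FY2010 to FY2018):

```python
# Consistency check: a 5.69% compound annual growth rate over 8 years
# implies roughly the stated ~55.7% cumulative growth in generation.

cagr = 0.0569
years = 8
cumulative_growth = (1 + cagr) ** years - 1
print(f"cumulative growth ≈ {cumulative_growth * 100:.1f}%")   # ≈ 55.7%
```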

III. CHIEF NATIONAL RENEWABLE ENERGY SYSTEMS

Renewable energy sources are plentiful in India, and the energy from these sources supplements the energy obtained from conventional sources. India is among the countries with the largest renewable energy generation in the world. As per data from the union Ministry of New and Renewable Energy, the grid-interactive power capacity of renewable energy sources was 71,325.42 megawatts (MW), while the off-grid/captive power capacity of these sources was 1,073.37 MW, up to June 2018 (Ministry of New and Renewable Energy, Programme/Scheme wise physical progress).

The National Institute of Wind Energy has estimated that the country has wind potential in excess of 300 GW at a hub height of 100 metres, solar potential of ~750 GW and small hydro potential of ~20 GW. The cumulative renewable power installed capacity was 62.84 GW as of December 2017 (Ministry of New and Renewable Energy, Annual report 2017-18).

The Central Electricity Authority has presented estimates of electricity production from renewables, which for the month of May 2018 are given in Table I.

Table I shows that electricity generation from renewable sources at the all-India level in May 2018 was 09.26 billion units. In terms of source, maximum electricity was generated from wind power, followed by solar power, bagasse, small hydro, biomass and others.


Table I: Source-wise all-India electricity generation from renewable sources (May 2018)

Sl. No.  Renewable Energy Source   Electricity Generation (Billion Units)
1.       Wind                      04.44
2.       Solar                     03.32
3.       Biomass                   00.25
4.       Bagasse                   00.73
5.       Small Hydro               00.47
6.       Others                    00.04
7.       Total                     09.26

Source: Ministry of Power's Central Electricity Authority, New Delhi, Executive summary on power sector, 1B, June 2018

1. Wind power

India's position is fourth in the world in terms of wind power production, next to China, the United States of America and Germany. As per the annual report for the year 2017-18 of the union Ministry of New and Renewable Energy, 32,848.46 megawatts (MW) of wind power capacity had been installed up to December 2017 (Ministry of New and Renewable Energy, Annual report 2017-18). The installed capacity varies among the states.

2. Biomass

Biomass is a preeminent renewable energy source in India. In this source, energy is extracted from biomass and residues from the farm sector. The state-wise break-up of its installed capacity is given in Table II.

Table II: Grid connected biomass / bagasse power plants' installed capacity (up till December 2017)

Sl. No.  State            Installed Capacity (MW)
1.       Maharashtra      2,065.0
2.       Uttar Pradesh    1,957.5
3.       Karnataka        1,604.6
4.       Tamil Nadu       893.0
5.       Andhra Pradesh   378.2
6.       West Bengal      300.0
7.       Chhattisgarh     228.0
8.       Punjab           194.0
9.       Telangana        158.1
10.      Haryana          121.4
11.      Rajasthan        119.3
12.      Bihar            113.0
13.      Madhya Pradesh   93.0
14.      Uttarakhand      73.0
15.      Gujarat          65.3
16.      Odisha           50.4
         Total            8,414

Source: Union Ministry of New and Renewable Energy, New Delhi, Annual report 2017-18, Chapter 3: Power from renewables, biomass power and bagasse co-generation programme, Achievements, Page no. 23

As evident from the above table, the highest installed capacity of biomass / bagasse power plants is in Maharashtra, followed by Uttar Pradesh, Karnataka, Tamil Nadu, Andhra Pradesh, West Bengal, Chhattisgarh, Punjab, Telangana, Haryana, Rajasthan, Bihar, Madhya Pradesh, Uttarakhand, Gujarat and Odisha.

3. Hydroelectric power

Hydroelectric energy is a prominent renewable energy source in India. The Central Electricity Authority has estimated the hydroelectric potential development in the country. According to its estimates, the identified capacity of hydroelectric power up to March 31, 2017 was 1,48,701 megawatts (MW), of which 39,692.8 MW was under operation and 10,848.5 MW was under construction. Region-wise estimates of hydroelectric potential development are presented in Table III.

An assessment of Table III shows that at the all-India level only 27.31 per cent of the total identified hydroelectric capacity was under operation, while only 7.47 per cent was under construction, up to March 31, 2017. This makes the under-utilization of hydroelectric potential quite evident. Among the regions, the highest potential was in the north-eastern region, followed by the northern, southern, eastern and western regions. In all the regions the potential is under-utilized. However, the highest share of identified capacity under operation was in the western region and the lowest in the north-eastern region.


Table III: Region-wise status of hydro electric potential development
(In terms of installed capacity - above 25 megawatts) (As on 31.03.2017)

Sl. No.  Region          Identified Capacity as per     Capacity under        Capacity under
                         Reassessment Study (MW)        Operation (MW) {%}    Construction (MW) {%}
1.       Northern        53,395                         18,527.3 {35.45}      4,898.5 {09.37}
2.       Western         8,928                          5,552.0 {68.28}       400.0 {04.92}
3.       Southern        16,458                         9,653.1 {60.75}       1,150.0 {07.24}
4.       Eastern         10,949                         4,718.5 {44.18}       1,446.0 {13.54}
5.       North-eastern   58,971                         1,242.0 {02.13}       2,954.0 {05.06}
6.       All-India       1,48,701                       39,692.8 {27.31}      10,848.5 {07.47}

Source: Ministry of Power's Central Electricity Authority, Government of India, New Delhi, Status of hydro electric potential development, as on 31.03.2017



The major hydro-electric power plants in the country are Tehri dam (Uttarakhand), Koyna dam (Maharashtra), Srisailam dam (Andhra Pradesh), Nathpa Jhakri dam (Himachal Pradesh), Sardar Sarovar dam (Gujarat), Bhakra Nangal dam (Himachal Pradesh), Chamera I (Himachal Pradesh), Sharavathi dam (Karnataka), Indira Sagar dam (Madhya Pradesh), Karcham Wangtoo hydroelectric plant (Himachal Pradesh), Pandoh dam (Himachal Pradesh), Nagarjuna Sagar dam (Andhra Pradesh), Purulia hydroelectric project (West Bengal), Idukki dam (Kerala), Salal dam (Jammu and Kashmir), Indravati dam (Odisha), Ranjit Sagar dam (Punjab), Omkareshwar dam (Madhya Pradesh), Belimela dam (Odisha) and Teesta dam (Sikkim) (Jagran Josh, List of Hydro Power Plants in India).

4. Solar power

The solar power sector is an emerging sector, and India's position is sixth in the world in terms of solar power installed capacity (Press Information Bureau, Year End Review 2017 – MNRE). Solar power is being used for heating, lighting, water pumping, rainwater harvesting, refrigeration, air conditioning and various other purposes. The main solar photovoltaic power stations in the country are Kurnool solar park (Andhra Pradesh, 1,000 MW, the largest solar park in the world); Kamuthi solar power project (Tamil Nadu); Charanka solar park (Gujarat); Welspun solar M.P. project (Madhya Pradesh); ReNew Power, Nizamabad (Telangana); Sakri solar plant (Maharashtra); Manamunda solar project (Odisha); Tata Power solar systems, Rajgarh (Madhya Pradesh); Welspun Energy, Phalodhi (Rajasthan); Jalaun solar power project (Uttar Pradesh); Bitta solar power plant (Gujarat); Dhirubhai Ambani solar park, Pokharan (Rajasthan); Rajasthan photovoltaic plant (Rajasthan); Welspun, Bathinda (Punjab); Moser Baer solar farm, Banaskantha (Gujarat); Lalitpur solar power project (Uttar Pradesh); Mithapur solar power plant (Gujarat); Kadodiya solar park (Madhya Pradesh); Bolangir solar power project (Odisha); Waa solar power plant, Surendranagar (Gujarat); and solar power plants of NTPC (Madhya Pradesh, Odisha) (Wikipedia, Solar power in India).

IV. CONCLUSION

The demand for energy has been growing continuously in India owing to increasing economic development, industrialization, urbanization and population, and this is putting strain on conventional energy resources. Despite the vast availability of renewable energy reserves in the country, their share in total power generation is quite small. The under-utilization of these resources is contributing to the rising energy deficit in the country. In the interest of all, these resources should be duly harnessed and utilized.

REFERENCES

[1] Central Electricity Authority, Ministry of Power, Government of India, New Delhi, “Executive summary on power sector”, June 2018, http://www.cea.nic.in/reports/monthly executivesummary/2018/exe_summary-06.pdf accessed on 14.08.2018

[2] Central Electricity Authority, Ministry of Power, Government of India, New Delhi, “Status of hydro electric potential development, as on 31.03.2017”, http://www.cea.nic.in/reports/monthly/hydro/2017/hydro_potential_region-03.pdf accessed on 14.08.2018

[3] Jagran Josh, “List of hydro power plants in India”, https://www.jagranjosh.com/general-knowledge/list-of-hydro-power-plants-in-india-1501159366-1 accessed on 14.08.2018

[4] Ministry of Finance, Government of India, New Delhi, “Economic survey 2017-18, Chapter 05: Sustainable development, energy and climate change”, http://mofapp.nic.in:8080/economicsurvey/pdf/068-079_Chapter_05_Economic_Survey_2017-18.pdf accessed on 30.07.2018

[5] India Brand Equity Foundation, Industry, “Power sector in India”, https://www.ibef.org/industry/power-sector-india.aspx accessed on 30.07.2018

[6] Ministry of New and Renewable Energy, Government of India, New Delhi, “Annual report 2017-18, Chapter 3: Power from renewables”, https://mnre.gov.in/file-manager/annual-report/2017-2018/EN/pdf/chapter-3.pdf accessed on 30.07.2018

[7] Ministry of New and Renewable Energy, Government of India, New Delhi, “Programme/scheme wise physical progress in 2018-19 and cumulative up to June 2018”, https://mnre.gov.in/physical-progress-achievements accessed on 30.07.2018

[8] Press Information Bureau, Government of India, Ministry of New and Renewable Energy, New Delhi, “Year-end review 2017 – MNRE”, December 27, 2017, http://pib.nic.in/newsite/PrintRelease.aspx?relid=174832 accessed on 16.08.2018

[9] United Nations Development Programme, “Sustainable development goals”, http://www.undp.org/content/undp/en/home/sustainable-development-goals.html accessed on 30.07.2018

[10] United Nations Development Programme, “Sustainable development goals, Goal 7: Affordable and clean energy”, http://www.undp.org/content/undp/en/home/sustainable-development-goals/goal-7-affordable-and-clean-energy.html accessed on 30.07.2018

[11] Wikipedia, “Renewable energy in India”, https://en.wikipedia.org/wiki/Renewable_energy_in_India accessed on 30.07.2018

[12] Wikipedia, “Solar power in India”, https://en.wikipedia.org/wiki/Solar_power_in_India#cite_note-ach-1 accessed on 16.08.2018


Cognitive Trading Framework

1Yuan Huang, 2Kamal Kumar Rathinasamy and 3Neelabh Srivastava

1Research Center, Chicago, IL (USA)
2Architecture & Consulting, Bangalore, India
3Centre of Excellence - Oil & Gas Domain, London, United Kingdom
Email: [email protected], [email protected] and [email protected]

Abstract – Artificial Intelligence (AI) techniques are being applied across various industries to mimic cognitive functions associated with humans, such as comprehension and problem solving. As machines become increasingly capable, some earlier capabilities such as Optical Character Recognition (OCR) are now being excluded from the scope of AI. In the new world of AI, capabilities like interpreting text and voice commands, algorithmic actions such as playing chess or autonomously operating cars, and other simulations are classified as AI.

Industry 4.0 is about leveraging Cognitive technologies like Natural Language Processing and Advanced Machine Learning to harmonize a smart environment – from smart factories to smart transportation to smart cities and even smart agriculture and health care.

In the oil and gas industry, there is a lot of potential for AI-based applications, allowing businesses to perform real-time operations and capture new market opportunities by deriving insights from transactional information on existing assets/inventory, supply chains, etc. An integrated cognitive platform is key for O&G companies looking to find value by leveraging Artificial Intelligence and algorithm-based trading. As part of this research, various NLP and neural-network-based deep learning models are evaluated for a futuristic commodity trading platform.

Keywords – Artificial Intelligence, Natural Language Processing, Advanced Machine Learning, Neural Networks, Supervised Learning, Unsupervised Learning, Real Time Analytics

I. INTRODUCTION

In an industry as diverse as Oil & Gas, there are multiple business objectives for applying AI-based solutions; the key ones are improving reliability, optimizing operations, and creating new value.

In the context of commodity trading, huge volumes of transactional data are generated every second, requiring interpretation and number crunching for informed decision making.

Extracting details from large volumes of unstructured text demands machine reading systems that can read natural language documents and perform reading comprehension and automated question answering.

This paper aims at a smart trading solution using cognitive techniques like Natural Language Processing and advanced Machine Learning, thereby helping O&G companies become operationally more efficient.

II. HOW THINGS WORK TODAY AND ITS CHALLENGES

The business process as it exists today (depicted in Fig. 1) consists of multiple steps and is highly inefficient. It is an activity that poses financial and reputational risks for any organization. Key challenges are:

• Duplication of Effort

• Risk of losing/misinterpreting information

• Unavailability of real time data

III. DESIGN CONSIDERATIONS

These challenges pose the following questions:

• How to capture information from various channels (e.g. e-mails, phone conversation, chat messages etc.) to avoid data loss?

• How to make machine comprehend the context and extract relevant data to avoid data misinterpretation?

• How to build a framework for real time information and analytics?

Fig. 1 – Trade Capture Process

• Natural Language Processing (NLP) is a field of artificial intelligence that enables computers to analyze and understand human (natural) language.

• Traditional NLP models make heavy use of linguistic annotation, structured world knowledge, semantic parsing and similar NLP pipeline outputs to solve question answering.

• Neural networks, such as Sequential Models, have successfully been applied to a range of tasks in NLP including sentiment analysis, language modelling, machine translation, reading comprehension and question answering.

NLP models using Deep Learning and Neural Network techniques were recommended for further evaluation to build the framework of a potential solution, as they have been preferred for solving question answering problems.

IV. ASSESSMENT OF NLP MODELS

NLP Centric Models -

• Frame-Semantic Parsing: Frame-semantic parsing attempts to identify predicates and their arguments, giving models access to information about “who did what to whom”. This kind of annotation lends itself to being exploited for question answering.

This parser has poor coverage, with many relations not being picked up because they do not adhere to the default predicate-argument structure. It also does not trivially scale to situations where several sentences are required to answer a query, and hence it is not fit for purpose in this context.

• Word Distance Model: This model aligns the placeholder of the cloze-form question with related data/attributes in the context document, and computes a distance measure between the question and the context around the aligned entity. The score is calculated by summing, for every word in the query, the distance to its closest match in the document.

On machine reading data where natural questions rather than cloze queries are used, this model performs significantly worse. However, in the context of a bi-directional conversation between a trader and a counterparty/broker, all queries can be framed as cloze queries; hence this limitation can be ignored.
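A toy sketch of the word-distance idea in Python. This is our own simplification for illustration (the names, example sentences and the simple capped distance are assumptions, not the published model):

```python
# Toy word-distance scorer for cloze queries: each candidate answer is
# scored by how close the query's words appear, in the document, to the
# candidate's position. Lower total distance = better alignment.

def word_distance_score(doc_tokens, query_tokens, candidate, max_dist=8):
    """Score a candidate answer; float('inf') if it never occurs."""
    positions = [i for i, t in enumerate(doc_tokens) if t == candidate]
    if not positions:
        return float("inf")
    best = float("inf")
    for pos in positions:                  # try each mention of the candidate
        total = 0
        for q in query_tokens:
            if q == "X":                   # the cloze placeholder itself
                continue
            dists = [abs(i - pos) for i, t in enumerate(doc_tokens) if t == q]
            total += min(dists) if dists else max_dist
        best = min(best, total)
    return best

doc = "the trader sold crude to acme at 70 dollars".split()
query = "trader sold crude to X".split()
# "acme" sits right next to the query words, "dollars" does not.
assert word_distance_score(doc, query, "acme") < word_distance_score(doc, query, "dollars")
```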

Neural Models -

• Deep LSTM Reader: Long Short-Term Memory (LSTM) models have been considerably successful in tasks such as machine translation and language modelling. In the context of data extraction, Deep LSTMs have shown a high success rate at embedding long sequences into a vector representation to perform end-to-end translation into another language. The neural model for reading comprehension tests the ability of Deep LSTM encoders to handle significantly longer sequences.

Text transcript documents are fed one word at a time into a Deep LSTM encoder, after a delimiter the query is also fed into the encoder. Alternatively, the document can be fed following the query. Given the embedded document and query, the network predicts which token in the document answers the query.

The Deep LSTM Reader must propagate dependencies over long distances in order to connect queries to their answers. Though a significant improvement over the NLP-centric Word Distance Model, the fixed-width hidden vector forms a bottleneck for information flow between segments of information within a context.

• Attentive Reader: The Attentive Reader employs an attention mechanism over the tokenized context document to correlate pre- and post-fact information in the input document. The Attentive Reader is able to focus on the passages of a context document that are most likely to inform the answer to the query.

• Impatient Reader: This model equips the Attentive Reader with the ability to trace and refer back to sections of the document as each query token is read, allowing it to incrementally accumulate information from the document. At the end it produces a final context representation from which the related data/attribute is extracted/predicted as the answer to the query.

Attentive Reader and Impatient Reader models resolve cloze-style queries where the answers are single words or entities. However, these models are often slow for both training and inference due to the sequential nature of recurrent neural networks (RNNs).

• Bi-Directional Attention Flow Model (BiDAF): BiDAF consists of multiple processing layers that represent the context at different levels of granularity. Its attention flow mechanism is well suited to a conversation between two or more entities. This model outperforms the Attentive Reader and Impatient Reader models.

Upon detailed assessment of the various models, BiDAF was selected as the core engine, given its higher accuracy in data comprehension and extraction.

V. CONCEPTUAL ARCHITECTURE

An AI-based framework is proposed with the following solution elements:

• Machine reading of unstructured communications from various sources (a combination of the following forms a text corpus) [5]:

• Text Transcripts

• Voice calls / instructions

• IMs

• E-mails

• Applying NLP and a neural model (Bi-Directional Attention Flow) to extract specific attributes from the text corpus.

Fig. 2 – Data Contextualization Layers in BiDAF Model [2]

• Transforming extracted data to enterprise vocabulary (business keys) using Predictive Text Translation.

• Automated data persistence using enterprise application APIs.

VI. CONCLUSION

The research indicates that Automated Trading is feasible by leveraging Cognitive techniques like NLP and Advanced Machine learning.

The proposed framework addresses the challenges/issues identified in the business process and helps reduce operational inefficiencies to an extent of 50-60%. The model needs to be further fine-tuned, like any other machine learning solution, to achieve higher accuracy.

Industry is beginning to see the importance of Artificial Intelligence for future growth and success [4]. However, it is not as simple as replacing humans with machines. Creating and capturing value from AI-based solutions requires clearly identified primary business objectives before implementing any of the Cognitive techniques.

REFERENCES

[1] https://www.businessinsider.com/google-meredith-whittaker-ai-mindreading-2018-9

[2] https://web.stanford.edu/class/cs224n/reports/2760988.pdf

[3] https://www.tensorflow.org/

[4] http://www.openenergi.com/artificial-intelligence-future-energy/

[5] http://fortune.com/2016/09/14/data-machine-learning-solar/

Fig. 3 – Cognitive Trading Framework - Architecture Diagram

Proceedings of Second International Conference on Computing, Communication and Control Technology (IC4T-2018), ISBN: 978-93-5291-969-7

An overview on Millimeter Wave MIMO for 5G Communication systems

Gaurav Sharma, Nidhi Agarwal and Vivek Mishra
Department of Electronics and Communication Engineering, Shri Ramswaroop Memorial University

[email protected], [email protected] and [email protected]

Abstract – Communication at millimeter wave (mmWave) frequencies is emerging as a promising technology in this new era of wireless communication. The enormous unused mmWave raw spectrum provides higher-bandwidth communication channels, increasing mobility in both local and personal area networks. There are numerous applications, such as the design of 5G cellular systems, vehicular area networks, ad hoc networks, cognitive radio networks and several wearable devices. The service quality of 5G systems can be improved in terms of high throughput, low latency and peak data rates, along with high spectral and energy efficiency. To achieve this, large antenna arrays at the transmitter and receiver incorporate multiple-input multiple-output (MIMO) signal processing techniques. The efficient design of mmWave MIMO still remains an open challenge under the present scenario. This paper reviews the concepts behind mmWave MIMO and the challenges involved in signal processing, considering the hurdles faced by conventional MIMO at high carrier frequencies.

Index Terms- millimeter wave, spectral efficiency, energy efficiency, throughput, latency, beamforming, beam combining, path loss, RF chains, channel estimation, channel state information.

I. INTRODUCTION

Nowadays, mmWave has defined a new paradigm, attracting a large number of consumers to meet their extreme data demands [1]. It utilizes spectrum from 30 GHz to 300 GHz, while present wireless systems operate at carrier frequencies just below 6 GHz. The most significant benefit of mmWave carrier frequencies is that they provide large spectral channels. For instance, 2 GHz channels are ideally suited to systems operating in the 60 GHz unlicensed mmWave band. Large spectral channels by themselves bring higher data rates.

Although the mmWave spectrum offers a promising route to increasing system capacity, intense research is still needed on channel propagation characteristics for mobile access networks in densely populated urban areas. As compared to present cellular bands, the smaller wavelength at mmWave frequencies results in larger attenuation, largely due to atmospheric absorption in certain bands: 60, 180 and 380 GHz. This attenuation amounts to only a fraction of a dB over distances of a few meters from the transmitting antenna, and to a few dB over a distance of around 1 km [2], [3]. Therefore, these mmWave bands promise to be well suited to local or personal area networks with distances of a few meters from small transmitting antennas [2], [4].

Consumer radios were first standardized in the 60 GHz unlicensed band, which is the most widely used and researched mmWave band. It is primarily used for WirelessHD [5] and WiGig WLAN devices [6], carrying highly compressed high-definition video at enormous data rates and incorporating beamforming techniques for up to four antenna arrays.

On account of the limited spectrum in the sub-6 GHz bands, mmWave has garnered deep interest from academia, R&D centers and several government agencies working on 5G cellular systems [7]-[12]. Back in 2014, the Federal Communications Commission of the USA was the first to issue a notice of inquiry (NOI) to develop a better understanding of the spectral bands beyond 24 GHz for commercial cellular mobile applications [13], [14]. It asked several questions regarding specifications, allocation of bandwidths, health effects and more, to gain deep knowledge regarding its feasibility in future wireless communication [15].

MmWave also provides a significant wireless backhaul option. Conventional backhaul requires expensive directional antennas at 60 GHz [1]. On the other hand, low-cost adaptive antenna arrays at mmWave are being developed to provide coverage in densely distributed urban areas. Although they cover small distances, they avoid the large expenditure of deploying optical fiber cables [16], [17].

There are several other applications as well. For instance, mmWave plays a vital role in providing high data rates between vehicles, so that several autonomous vehicles can be connected with each other in future. It promises low latency (<1 ms) to allow driverless movement of vehicles. MmWave has other benefits too: it can connect cellphones, smart watches and virtual reality headsets with high-speed wearable networks [18].

II. CONSTRAINTS IN SIGNAL PROCESSING

The most important aspect to be looked after in mmWave cellular systems is signal processing. It is different from that used at lower frequencies [19], [20]. The practical differences are: (i) the hardware design parameters involved on account of the high frequency and larger number of spectral channels; (ii) different channel modelling needs to be done; (iii) large antenna arrays have to be employed at both transmitter and receiver.

Several hardware challenges occur due to these large antenna arrays, such as overall power consumption and circuit design. The proposed solution is to partition the various signal processing tasks between the analog and digital domains [21]-[24]. This leads to less complexity in the circuit design, as the number of RF chains along with the large number of analog-to-digital converters (ADCs) is considerably reduced.

The smaller wavelengths at mmWave frequencies affect the channel models because signal propagation characteristics are different for small-wavelength signals [1]. The key differences observed are: (i) lower diffraction due to the reduced Fresnel zone; (ii) severe penetration losses. Therefore, the design of mmWave communication systems is shaped by the combined impact of hardware challenges, channel models and large antenna arrays. However, relays and co-operative diversity can eventually enhance coverage and improve overall signal strength in mmWave cellular systems.

III. CHANNEL MODELS AND PROPAGATION CHARACTERISTICS

Propagation characteristics are crucial at mmWave mainly due to the small wavelength of the signals being propagated. Therefore, knowledge of these characteristics is important prior to developing signal processing algorithms.

A. Effect of Path Loss

Friis’ law relates transmitted power Pt and received power Pr under far-field conditions as [25]:

Pr = Pt Gt Gr (λ / 4πd)^2    (1)

where Gt and Gr are the transmit and receive antenna gains, d is the separation distance between transmitter and receiver, and λ is the wavelength. It is clearly evident from Friis’ law that the isotropic path loss (i.e. the ratio Pt/Pr under unity antenna gains Gt = Gr = 1) is proportional to λ^(-2). Therefore, high path loss is experienced at mmWave frequencies. This higher path loss can be overcome by employing more antenna elements within the same area. The average path loss (in dB) can be modeled using a linear model of the form:

PL(d) = α + 10β log10(d) + ξ    (2)

where α and β are the parameters of the linear model, d is the distance and ξ is a lognormal term accounting for variance due to shadowing. On taking β = 2, Friis’ law (1) becomes a special case of model (2).
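A minimal numerical sketch of the free-space and log-distance path loss models, assuming unity antenna gains and purely illustrative frequencies and distances, shows the λ^(-2) penalty directly:

```python
import math

# Free-space (Friis) path loss and the log-distance model, in dB.
# Frequencies and distances below are illustrative examples only.

def friis_path_loss_db(freq_hz, d_m):
    """Isotropic free-space path loss (Gt = Gr = 1), in dB."""
    lam = 3e8 / freq_hz                      # wavelength lambda = c / f
    return 10 * math.log10((4 * math.pi * d_m / lam) ** 2)

def log_distance_path_loss_db(alpha, beta, d_m, shadowing_db=0.0):
    """PL(d) = alpha + 10*beta*log10(d) + xi; beta = 2 recovers free space."""
    return alpha + 10 * beta * math.log10(d_m) + shadowing_db

# Moving from 2.4 GHz to 60 GHz at the same distance costs
# 20*log10(60/2.4), roughly 28 dB of extra isotropic path loss.
extra_loss = friis_path_loss_db(60e9, 100) - friis_path_loss_db(2.4e9, 100)
```

This extra loss is exactly what the additional antenna elements packed into the same physical area are meant to recover.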

B. Blockage and Unwanted Outages

Under mmWave frequencies, there is a severe problem of signal blockage due to some unavoidable factors. For example, human bodies alone can cause attenuation of 20 to 35 dB, whereas building materials may lead to even higher attenuation of 40 to 80 dB [7], [26], [27]-[29]. Several other common factors like humidity and rain can also cause signal loss in long-range mmWave links. At such high frequencies, diffraction is not capable of carrying signals over large distances [30].

In order to measure the effects of blocking, communication systems use a two-state model (LOS and NLOS) or even a three-state model (LOS, NLOS and signal outage).

C. Channel Estimation

Channel estimation is used to configure the analog and digital beamformers in the mmWave regime. Traditional MIMO channel estimation does not support mmWave systems that utilize analog precoding and combining, because channel measurements are made digitally after convolution with the analog precoding and combining vectors. As a result, the entries of the channel matrix are not directly accessible. Moreover, traditional channel estimation requires the training of several channel coefficients and large training sequences. This methodology is not feasible if the channel characteristics change frequently with time and need to be estimated repeatedly. Although beam training can reduce the effort of channel estimation, it is incapable of providing a platform for implementing large algorithms at both transmitter and receiver. For instance, interference cancellation and multiuser MIMO access are complicated by the repeated cycles.

mmWave spectral channels are sparse in the time and angular dimensions [31]. Several compressive sensing techniques utilize this channel sparsity property of mmWave systems to reduce the effort of beam training. In this technique, random beamforming is used at both the base station and the end users for estimating the downlink channel parameters. By employing random compressive sensing, all mobile equipment can estimate their channels simultaneously due to the random nature of the transmitted beams. These techniques help in channel estimation using a small number of RF measurements. Therefore, overall system complexity is reduced.

D. Beam Combining

With the advancement of mmWave wideband systems, large numbers of directional antennas will be incorporated as the RF carrier frequency rises in order to obtain sufficiently high SNR. With such large-gain antennas, beam combining is able to actively detect the strongest outgoing and incoming beams at the transmitter and receiver respectively. In this way, multiple antennas can enhance the SNR of incoming signals, and path loss is reduced drastically at any antenna pointing angle [32], [33]. There are two ways of performing beam combining: coherent and non-coherent. In coherent beam combining, signal components from different paths at various angles are aligned in time; the square roots of the strongest individual received powers (their equivalent voltages) are summed together, and this result is finally squared. In order to avoid the same beam being counted twice, only unique angle combinations are selected.

P_coherent = ( Σ_i √P_i )^2    (3)

where Pi indicates the individual strongest received powers from particular antenna pointing-angle combinations. In non-coherent beam combining, each unique beam is detected and the received powers of the strongest beam combinations are simply summed:

P_non-coherent = Σ_i P_i    (4)
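The two combining rules can be checked with a small Python sketch; the beam powers below are illustrative, unitless values, not measured data:

```python
import math

# Coherent vs. non-coherent combining of the strongest received beam
# powers P_i. Coherent combining sums equivalent voltages (square roots
# of powers) before squaring, so it never does worse than a plain sum.

def coherent_combine(powers):
    """(sum_i sqrt(P_i))^2 - voltages aligned in time, then squared."""
    return sum(math.sqrt(p) for p in powers) ** 2

def noncoherent_combine(powers):
    """sum_i P_i - received powers of unique beams simply added."""
    return sum(powers)

beams = [4.0, 1.0, 1.0]  # illustrative strongest unique beam powers
gain_db = 10 * math.log10(coherent_combine(beams) / noncoherent_combine(beams))
```

For these example values, coherent combining yields (2+1+1)^2 = 16 versus 4+1+1 = 6 non-coherently, a combining gain of a few dB.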

IV. VARIOUS ARCHITECTURES USING MIMO SCHEME

A conventional MIMO scheme employs only a few antennas (up to eight), whereas mmWave systems can pack a significantly larger number of antenna elements (32 to 256) within a small physical area owing to the very small wavelength.

There are distinctive architectural differences between a conventional MIMO system at sub-6 GHz frequencies and one at mmWave band frequencies. There are practical limitations at high carrier frequencies, which include the complexity involved due to hardware constraints and the increased cost, as each practical antenna needs a separate RF chain [34], [35]. Moreover, a power amplifier along with an RF chain has to be placed behind each antenna, which is itself quite challenging, and a large amount of power is drawn on account of the large number of RF chains.

A. Analog Beamforming

It is the easiest way of employing MIMO in mmWave cellular systems and is applicable at both transmitter and receiver.

Fig. 1 Traditional MIMO architecture for sub-6GHz.

Fig. 2 Analog beamforming in mmWave MIMO system.

Analog beamforming uses a series of digitally controlled phase shifters. In figure (2), several antennas are connected to a single RF chain through various phase shifters at both the transmitting and receiving end. The phase shifters are weighted adaptively to steer the beam in order to maximize the received signal power.

The major drawbacks associated with this beamforming are losses due to the weighted phase shifters, noise and non-linearity. It is practically difficult to finely tune the beams and avoid unwanted nulls. By employing a single beamformer, analog beamforming provides single-stream transmission for a single user only, so the benefits of MIMO technology are not fully utilized to support multiple users.
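A toy Python sketch of the phase-shifter bank illustrates the idea: one RF chain drives N antennas, and the phase taps are chosen to co-phase a plane wave arriving from the steering angle. The array geometry (half-wavelength-spaced uniform linear array) and angles are illustrative assumptions.

```python
import cmath
import math

# Analog beamforming with digitally controlled phase shifters: the
# weights steer a single beam toward a chosen angle; gain away from
# that angle falls off sharply.

N = 8  # antenna elements behind one RF chain

def steering_vector(theta):
    """Array response of a half-wavelength-spaced uniform linear array."""
    return [cmath.exp(1j * math.pi * k * math.sin(theta)) for k in range(N)]

def array_gain(weights, theta):
    """|w^H a(theta)|: combined output amplitude for a unit plane wave."""
    a = steering_vector(theta)
    return abs(sum(w.conjugate() * x for w, x in zip(weights, a)))

target = math.radians(30)
w = [v / math.sqrt(N) for v in steering_vector(target)]  # normalized phase taps

on_beam = array_gain(w, target)              # full coherent gain, sqrt(N)
off_beam = array_gain(w, math.radians(-45))  # far lower away from the beam
```

Because only one set of phase taps exists, the whole array forms exactly one beam at a time, which is the single-stream, single-user limitation noted above.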

B. mmWave Hybrid Precoding and Combining

This hybrid structure fully utilizes the benefits of MIMO technology at mmWave frequencies. In this methodology, the entire MIMO optimization process is divided between the analog and digital domains. The analog precoder/combiner exploits beamforming gains, whereas the digital (baseband) precoder/combiner exploits multiplexing gains.

In figure (3), the BS comprises Nt antennas and Lt RF chains, and the mobile station has Nr antennas and Lr RF chains, such that Nt > Lt and Nr > Lr (for analog beamforming, Ns = Lt = Lr = 1) [5]. This hybrid technique provides spatial multiplexing by incorporating multi-user MIMO. Suppose FRF and FBB represent the analog and digital (baseband) precoding matrices at the base station, and WRF and WBB represent the analog and digital (baseband) combining matrices at the end users. H represents the channel matrix, with s and n being the transmitted signal and received noise vectors respectively. After combining at the receiver, the received signal is represented as:

y = WBB^H WRF^H H FRF FBB s + WBB^H WRF^H n    (6)
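The matrix dimensions in the hybrid transmit/receive chain can be sanity-checked with a pure-Python sketch. The sizes are illustrative (Nt > Lt at the BS, Nr > Lr at the mobile station), and the matrices are random placeholders rather than optimized precoders.

```python
import random

# Dimensionality sketch of a hybrid transceiver:
# y = W_BB^H W_RF^H (H F_RF F_BB s + n), with analog stages restricted
# to unit-modulus (phase-shifter) entries.

random.seed(0)
Nt, Lt, Nr, Lr, Ns = 16, 4, 8, 2, 2

def crand(rows, cols):
    """Random complex Gaussian matrix as nested lists."""
    return [[complex(random.gauss(0, 1), random.gauss(0, 1))
             for _ in range(cols)] for _ in range(rows)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def herm(A):
    """Conjugate (Hermitian) transpose."""
    return [[A[r][c].conjugate() for r in range(len(A))]
            for c in range(len(A[0]))]

def phase_only(A):
    """Analog stages use phase shifters only: unit-modulus entries."""
    return [[z / abs(z) for z in row] for row in A]

H = crand(Nr, Nt)                                      # channel matrix
F_RF, F_BB = phase_only(crand(Nt, Lt)), crand(Lt, Ns)  # BS analog/digital precoders
W_RF, W_BB = phase_only(crand(Nr, Lr)), crand(Lr, Ns)  # MS analog/digital combiners
s = [[complex(1, 0)] for _ in range(Ns)]               # Ns transmitted streams
n = crand(Nr, 1)                                       # receiver noise

x = matmul(matmul(F_RF, F_BB), s)                      # Nt x 1 transmitted vector
Hx = matmul(H, x)
r = [[Hx[i][0] + n[i][0]] for i in range(Nr)]          # Nr x 1 received vector
y = matmul(herm(W_BB), matmul(herm(W_RF), r))          # Ns x 1 combined output
```

The digital stages see only Lt (or Lr) effective ports, which is precisely why so few RF chains and ADCs are needed.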

Hybrid precoding provides a trade-off between the system's hardware complexity and overall performance gains. Owing to the smaller number of RF chains needed, the hybrid structure is a cost- and power-efficient alternative. In comparison to fully digital pre-coders, it ensures near-optimal data rates along with more freedom in designing the pre-coding matrices. Therefore, it can perform far more complicated processing in addition to supporting multi-stream transmission and multiplexing.

Fig. 3 mmWave MIMO system using Hybrid Precoding and Combining.

Conversely, due to the limited number of RF chains at both the precoding and combining stages, the multiplexing gain is limited. Since the spectral channels of mmWave systems are sparse in nature, overall performance can nonetheless approach that of the fully digital counterpart. The main problem lies in the complex design of the analog (FRF) and digital (FBB) pre-coder matrices. This creates difficulty in channel estimation for mmWave systems, as estimating the entries of large channel matrices incurs a lot of training overhead.

C. Low Resolution Receivers

Another efficient alternative to the hybrid architecture at the receiver is to reduce the resolution, and hence the power consumption, of the ADCs. Here, a pair of low-resolution ADCs is employed to sample the in-phase and quadrature components of the demodulated signal at the output of each RF chain. This permits more RF chains with fewer ADC bits, reducing the complexity of the conventional hybrid architecture. In particular, a one-bit ADC has negligible power consumption in comparison to the other front-end components, whereas the interfacing circuits used for connecting digital components to the DACs/ADCs consume considerable power [36].

There are several signal processing constraints when using few-bit and one-bit ADCs, because channel state information (CSI) plays a different role in this case. It may result in precoding optimizations that are not suitable for one-bit ADCs. Moreover, obtaining CSI is more difficult [37].
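The one-bit quantizer behind this architecture can be sketched as follows; the sample values are illustrative and not a model of any specific front end:

```python
# One-bit ADC pair per RF chain output: two comparators keep only the
# signs of the in-phase and quadrature components, discarding amplitude
# information entirely.

def one_bit_adc(sample):
    """Quantize a complex baseband sample to one of four corner points."""
    sign = lambda v: 1.0 if v >= 0 else -1.0
    return complex(sign(sample.real), sign(sample.imag))

rx = [complex(0.8, -2.3), complex(-0.1, 0.4), complex(3.0, 1.5)]
quantized = [one_bit_adc(z) for z in rx]
# Every quantizer output has the same magnitude: amplitude is gone,
# which is why CSI acquisition and precoding change with one-bit ADCs.
```

Only the quadrant of each sample survives quantization, so conventional amplitude-based channel estimators no longer apply directly.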

V. CONCLUSION

Communication in mmWave systems has to deal with a large number of constraints beyond the change in carrier frequency alone. Designing efficient pre-coding and receiver structures is going to be a big challenge for future researchers and academicians in the development of mmWave cellular systems. Moreover, signal processing is going to play a critical role in its overall advancement. The hardware challenges can be overcome by using optimal beamforming, pre-coding/combining and efficient channel estimation algorithms. Furthermore, problems related to channel estimation and modeling have to be dealt with efficiently in order to utilize mmWave MIMO systems for local area networks, vehicular networks, personal area networks and more advanced future applications like smart homes, smart cities, and the Internet of Things (IoT). Overall, mmWave wireless systems have huge potential in the years to come.

Fig. 4 One-bit low resolution receiver.

REFERENCES

[1] T. S. Rappaport, R. W. Heath Jr., R. C. Daniels, and J. Murdock, Millimeter Wave Wireless Communications. Prentice-Hall, September 2014.

[2] T.S. Rappaport, R. W. Heath, Jr., R. C. Daniels, and J. N. Murdock, Millimeter Wave Wireless Communications. Englewood Cliffs, NJ, USA: Prentice Hall, 2015.

[3] F. Gutierrez, S. Agarwal, K. Parrish, and T. S. Rappaport, “On-chip integrated antenna structures in CMOS for 60 GHzWPAN systems,” IEEE J. Sel. Areas Commun., vol. 27, no. 8, pp. 1367–1378, Oct. 2009.

[4] T.S.Rappaport et al., “Millimeter wave mobile communications for 5G cellular: It will work!” IEEE Access, vol. 1, pp. 335–349, May 2013.

[5] “WirelessHD Specification Overview,” Tech. Rep., 2010.

[6] “ISO/IEC/IEEE International Standard for Information technology– Telecommunications and information exchange between systems– Local and metropolitan area networks–Specific requirements-part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 3: Enhancements for Very High Throughput in the 60 GHz Band (adoption of IEEE Std 802.11ad-2012),” ISO/IEC/IEEE 8802-11:2012/Amd.3:2014(E), pp. 1– 634, March 2014.

[7] Z. Pi and F. Khan, “An introduction to millimeter-wave mobile broadband systems,” IEEE Commun. Mag., vol. 49, no. 6, pp. 101–107, 2011.

[8] T. S. Rappaport, S. Shu, R. Mayzus, Z. Hang, Y. Azar, K. Wang, G. N. Wong, J. K. Schulz, M. Samimi, and F. Gutierrez, “Millimeter wave mobile communications for 5G cellular: It will work!” IEEE Access, vol. 1, pp. 335–349, 2013.

[9] M. R. Akdeniz, Y. Liu, M. K. Samimi, S. Sun, S. Rangan, T. S. Rappaport, and E. Erkip, “Millimeter wave channel modeling and cellular capacity evaluation,” IEEE J. Sel. Areas Commun., vol. 32, no. 6, pp. 1164–1179, June 2014.

[10] W. Roh, J.-Y. Seol, J. Park, B. Lee, J. Lee, Y. Kim, J. Cho, K. Cheun, and F. Aryanfar, “Millimeter-wave beamforming as an enabling technology for 5G cellular communications: theoretical feasibility and prototype results,” IEEE Commun. Mag., vol. 52, no. 2, pp. 106–113, 2014.

[11] T. Bai and R. W. Heath Jr., “Coverage and rate analysis for millimeter-wave cellular networks,” IEEE Trans. Wireless Commun., vol. 14, no. 2, pp. 1100–1114, Feb 2015.

[12] Federal Communications Commission, “FCC 15-138 notice of proposed rule making,” Oct 2015.

[13] “Before the federal communications commission,” Federal Communications Commission, Washington, DC, USA, FCC 14-154, Oct. 2014. [Online]. Available: https://apps.fcc.gov/edocs_public/attachmatch/FCC- 14-154A1.pdf

[14] Federal Communications Commission, Washington, DC, USA, “FCC 14- 177,” Jan. 2015. [Online]. Available: http://apps.fcc.gov/ecfs/proceeding/view?name=14-177

[15] “Spectrum above 6 GHz for future mobile communications,” Ofcom, London, U.K., Feb. 2015. [Online]. Available: http://stakeholders.ofcom.org.uk/binaries/consultations/above-6ghz/summary/spectrum_above_6_GHz_CFI.pdf

[16] S. Hur, T. Kim, D. J. Love, J. V. Krogmeier, T. A. Thomas, and A. Ghosh, “Millimeter wave beamforming for wireless backhaul and access in small cell networks,” IEEE Trans. Commun., vol. 61, no. 10, pp. 4391–4403, Oct. 2013.

[17] C. Dehos, J. L. González, A. De Domenico, D. Kténas, and L. Dussopt, “Millimeter-wave access and backhauling: the solution to the exponential data traffic increase in 5G mobile communications systems?” IEEE Commun. Mag., vol. 52, no. 9, pp. 88–95, September 2014.

[18] A. Pyattaev, K. Johnsson, S. Andreev, and Y. Koucheryavy, “Communication challenges in high-density deployments of wearable wireless devices,” IEEE Wireless Commun., vol. 22, no. 1, pp. 12–18, February 2015.

[19] A. Alkhateeb, M. Jianhua, N. González-Prelcic, and R. W. Heath Jr., “MIMO precoding and combining solutions for millimeter-wave systems,” IEEE Commun. Mag., vol. 52, no. 12, pp. 122–131, December 2014.

[20] J. Brady, N. Behdad, and A. M. Sayeed, “Beamspace MIMO for millimeter-wave communications: System architecture, modeling, analysis and measurements,” IEEE Trans. Antennas Propag., vol. 61, no. 7, pp. 3814–3827, July 2013.

[21] A. M. Sayeed and N. Behdad, “Continuous aperture phased MIMO: Basic theory and applications,” Proc. 2010 Annual Allerton Conference on Communications, Control and Computers, pp. 1196–1203, Sep. 2010.

[22] O. El Ayach, S. Rajagopal, S. Abu-Surra, Z. Pi, and R. W. Heath Jr., “Spatially sparse precoding in millimeter wave MIMO systems,” IEEE J. Sel. Areas Commun., vol. 13, no. 3, pp. 1499–1513, March 2014.

[23] A. Alkhateeb, O. El Ayach, G. Leus, and R. W. Heath Jr., “Channel estimation and hybrid precoding for millimeter wave cellular systems,” IEEE J. Sel. Topics Signal Process., vol. 8, no. 5, pp. 831–846, Oct 2014.

[24] S. Han, C.-L. I, Z. Xu, and C. Rowell, “Large-scale antenna systems with hybrid analog and digital beamforming for millimeter wave 5G,” IEEE Commun. Mag., vol. 53, no. 1, pp. 186–194, January 2015.

[25] T. S. Rappaport, Wireless Communications: Principles and Practice, 2nd ed. Upper Saddle River, NJ: Prentice Hall, 2002.

[26] C. R. Anderson and T. S. Rappaport, “In-building wideband partition loss measurements at 2.5 and 60 GHz,” IEEE Trans. Wireless Commun., vol. 3, no. 3, pp. 922 – 928, May 2004.

[27] K. C. Allen, N. DeMinco, J. R. Hoffman, Y. Lo, and P. B. Papazian, Building penetration loss measurements at 900 MHz, 11.4 GHz, and 28.8 GHz, ser. NTIA report – 94-306. Boulder, CO: U.S. Dept. of Commerce, National Telecommunications and Information Administration, 1994.

[28] A. V. Alejos, M. G. Sánchez, and I. Cuiñas, “Measurement and analysis of propagation mechanisms at 40 GHz: Viability of site shielding forced by obstacles,” IEEE Trans. Veh. Technol., vol. 57, no. 6, pp. 3369–3380, 2008.

[29] H. Zhao, R. Mayzus, S. Sun, M. Samimi, J. K. Schulz, Y. Azar, K. Wang, G. N. Wong, F. Gutierrez, and T. S. Rappaport, “28 GHz millimeter wave cellular communication measurements for reflection and penetration loss in and around buildings in New York City,” in Proc. IEEE Int. Conf. Commun. (ICC), 2013.

[30] H. J. Liebe, “MPM–An atmospheric millimeter-wave propagation model,” International Journal of Infrared and Millimeter Waves, vol. 10, no. 6, pp. 631–650, 1989.

[31] J. Wang, Z. Lan, and C. W. Pyo, “Beam codebook based beamforming protocol for multi-Gbps millimeter-wave WPAN systems,” IEEE J. Sel. Areas Commun., vol. 27, no. 8, pp. 3-4, 2009.

[32] S. Sun, T. S. Rappaport, R. W. Heath, A. Nix, and S. Rangan, “Mimo for millimeter-wave wireless communications: beamforming, spatial multiplexing, or both?” IEEE Commun. Mag., vol. 52, no. 12, pp. 110–121, Dec. 2014.

[33] S. Sun and T. S. Rappaport, “Multi-beam antenna combining for 28 GHz cellular link improvement in urban environments,” in Proc. IEEE GLOBECOM, Dec. 2013, pp. 3754–3759.

[34] J. Zhang, X. Huang, V. Dyadyuk, and Y. Guo, “Massive hybrid antenna array for millimeter-wave cellular communications,” IEEE Wireless Commun., vol. 22, no. 1, pp. 79–87, February 2015.

[35] C. Doan, S. Emami, D. Sobel, A. Niknejad, and R. Brodersen, “Design considerations for 60 GHz CMOS radios,” IEEE Commun. Mag., vol. 42, no. 12, pp. 132–140, Dec 2004.

[36] G. P. Fettweis, “Hetnet wireless fronthaul: The challenge missed,” in IEEE Commun. Theory Workshop, 2014.

[37] J. Mo and R. W. Heath, Jr, “Capacity Analysis of One-Bit Quantized MIMO Systems with Transmitter Channel State Information,” ArXiv e-prints, Oct. 2014.


LabVIEW Implementation of Hamming (n,k) Channel Coding Scheme

Alkesh Agrawal, Ashish Anand and Mukul Misra
Faculty of Electronics & Communication Engineering, Shri Ramswaroop Memorial University, Lucknow, India

[email protected], [email protected], [email protected]

Abstract— The paper presents the LabVIEW (Laboratory Virtual Instrument Engineering Workbench) implementation of the Hamming (n,k) channel coding scheme with k=4 and n=7. Channel coding schemes add redundancy to the message bits according to certain pre-defined rules prior to transmission, making the reception of noisy information easier at the receiver level. Hamming (7,4) is a single-error-correcting channel coding scheme which is implemented in four stages: Encoder stage, Channel, Syndrome stage, and Decoder stage. The uniqueness of this work is that the Hamming (n,k) channel coding scheme is implemented on the licensed version of LabVIEW and the National Instruments (NI) myRIO hardware kit, and the results obtained match the standard results.

Keywords—block codes, encoder, syndrome pattern, decoder, redundancy

I. INTRODUCTION

In a Digital Communication Link, coding is involved at three different levels. At the source level, Source Coding schemes are implemented: symbols with known a-priori probabilities are coded so that the most probable symbols are assigned the shortest codes, reducing redundancy and increasing the information content per source symbol. The next level of coding is Channel Coding schemes, where, on the contrary, redundancy is deliberately added to the message contents according to pre-defined rules, the parity (redundant) bits being generated from the message bits alone. The third level of coding is Line Coding schemes, which are the electrical representations of the bit patterns to be transmitted.

The channel coding schemes play a vital role in the reception of information transmitted in the presence of noise. They make the link more immune to error, so that information corrupted by noise can be easily detected and corrected at the receiver level. Channel coding schemes are broadly classified as Block Codes, Cyclic Codes, and Convolutional Codes.

Channel coding schemes find application in many communication systems. For example, Reed-Solomon (R-S) codes are used to detect burst errors on a CD [1]. Convolutional codes [2] and Walsh codes [3-4] are used in mobile communication. Other types of channel coding schemes, like Turbo codes [5-6] and Low Density Parity Check (LDPC) codes [7], are used in modems and telephone transmission systems.

II. CHANNEL CODING SCHEME: HAMMING (N,K) BLOCK CODES

Hamming (7,4) is a specific subset of Hamming (n,k) block codes, in which k message bits are transformed into n code-word bits by the addition of (n-k) parity bits to the k message bits.

For Hamming (n,k) codes:

n = 2^m - 1 (1)

k = 2^m - 1 - m (2)

where m ≥ 3.
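As a quick check of Eqns. (1) and (2), the valid Hamming code sizes can be enumerated for small m. The following Python sketch is illustrative only (it is not part of the paper's LabVIEW implementation, and the function name is ours); it confirms that m = 3 yields the (7,4) code used here.

```python
# Hamming code parameters from Eqns. (1)-(2): n = 2^m - 1, k = 2^m - 1 - m.
def hamming_params(m):
    """Return (n, k) for a Hamming code with m >= 3 parity bits."""
    if m < 3:
        raise ValueError("Hamming codes require m >= 3")
    n = 2 ** m - 1
    k = n - m
    return n, k

for m in (3, 4, 5):
    print(m, hamming_params(m))
# m = 3 gives the (7,4) code implemented in this paper.
```

Larger m gives the (15,11) and (31,26) codes, which follow the same construction.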

The Channel Coding Scheme is implemented in three steps. The first step involves the Encoding process, the second step involves Syndrome calculation, and the third step involves the Error detection and correction process.

Hamming (7,4) block codes have double-error-detecting (t2 = 2) and single-error-correcting (t1 = 1) capability, calculated from the minimum distance (dmin) using Eqn. (3) and Eqn. (4).

dmin ≥ 2t1 + 1 (3)

dmin ≥ t2 + 1 (4)

III. ALGORITHM

The algorithm has been implemented in LabVIEW. The step-wise description is as follows:

Step 1. Fix m=3 and k=4. Take the Generator Matrix [G]=[P|Ik] of order 4x7, in which the Parity sub-matrix is of order 4x3 and the Identity sub-matrix is of order 4x4. Generate code words of 7 bits each, [C1, C2, C3, C4, C5, C6, C7], using the matrix calculation [C]=[m][G], where C1 (first parity bit) depends on (m1, m2, m3), C2 (second parity bit) depends on (m2, m3, m4), C3 (third parity bit) depends on (m1, m3, m4), C4 depends on m1, C5 depends on m2, C6 depends on m3, and C7 depends on m4.
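Step 1 can also be sketched in ordinary code. The fragment below is an illustrative Python re-creation of the LabVIEW encoder, assuming the generator matrix [G] = [P|I4] implied by the stated parity dependencies; it additionally confirms that the code's minimum distance is 3, consistent with Eqns. (3) and (4).

```python
# Assumed [G] = [P | I4]: columns C1..C3 are the parity bits, C4..C7 the
# message bits, built from the dependencies stated in Step 1.
G = [
    [1, 0, 1, 1, 0, 0, 0],  # row for m1: contributes to C1, C3, C4
    [1, 1, 0, 0, 1, 0, 0],  # row for m2: contributes to C1, C2, C5
    [1, 1, 1, 0, 0, 1, 0],  # row for m3: contributes to C1, C2, C3, C6
    [0, 1, 1, 0, 0, 0, 1],  # row for m4: contributes to C2, C3, C7
]

def encode(m):
    """[C] = [m][G] over GF(2); m is a list of 4 message bits."""
    return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

# All 16 codewords; for a linear code, dmin equals the minimum weight
# of a nonzero codeword (3 for Hamming (7,4)).
codebook = [encode([b >> 3 & 1, b >> 2 & 1, b >> 1 & 1, b & 1])
            for b in range(16)]
dmin = min(sum(c) for c in codebook if any(c))
print(encode([1, 0, 1, 1]))  # [0, 0, 1, 1, 0, 1, 1]
print(dmin)                  # 3
```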

Step 2. The message bits encoded into the code-word are transmitted, and a noisy version of the code-word is received through a channel with a provision for introducing an error in a single bit.

Proceedings of Second International Conference on Computing, Communication and Control Technology (IC4T-2018), ISBN: 978-93-5291-969-7

Alkesh Agrawal, Ashish Anand and Mukul Misra

Proceedings of IC4T, 2018

[R]=[E1, C2, C3, C4, C5, C6, C7] or [R]=[C1, E2, C3, C4, C5, C6, C7] or [R]=[C1, C2, C3, C4, C5, C6, E7]

Step 3. The received code-word (r) is processed with the parity check matrix [H] of the form [In-k | P^T] and order 3x7. The syndrome pattern is calculated as [S]=[r][H^T]. The syndrome pattern is a 3-bit pattern (100, 010, 001, 101, 110, 111, 011) for an error in position (1, 2, 3, 4, 5, 6, 7) respectively.

Step 4. For the generated syndrome pattern [S] = (100, 010, 001, 101, 110, 111, 011), the error pattern is calculated as [e] = (1000000, 0100000, 0010000, 0001000, 0000100, 0000010, 0000001) respectively. The actual transmitted code-word is estimated as [C] = XOR{[R],[e]}.
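Steps 2-4 can likewise be sketched as an illustrative Python re-creation (not the paper's LabVIEW diagrams), assuming the parity check matrix [H] = [I3|P^T] that matches the syndrome table above: a single bit flipped by the channel yields the corresponding 3-bit syndrome, which locates and corrects the bit.

```python
# Assumed [H] = [I3 | P^T] (3x7). An error in bit i produces a syndrome
# equal to the ith column of H, per the table in Step 3.
H = [
    [1, 0, 0, 1, 1, 1, 0],
    [0, 1, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 0, 1, 1],
]

def syndrome(r):
    """[S] = [r][H^T] over GF(2) for a received 7-bit word r."""
    return [sum(r[j] * H[i][j] for j in range(7)) % 2 for i in range(3)]

def correct(r):
    """Step 4: flip the bit whose H-column matches the syndrome."""
    s = syndrome(r)
    if s == [0, 0, 0]:
        return r  # no detectable error
    cols = [[H[i][j] for i in range(3)] for j in range(7)]
    pos = cols.index(s)
    return [bit ^ (1 if j == pos else 0) for j, bit in enumerate(r)]

c = [0, 0, 1, 1, 0, 1, 1]  # a valid codeword (message 1011)
r = c[:]
r[6] ^= 1                  # Step 2: channel flips the 7th bit
print(syndrome(r))         # [0, 1, 1] -> error in position 7
print(correct(r) == c)     # True
```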

IV. LABVIEW IMPLEMENTATION

The algorithm designed for the Hamming (n,k) channel coding scheme is implemented in LabVIEW in four stages, corresponding to the steps of the algorithm.

The first stage is the LabVIEW implementation of the Encoder, as depicted in Step 1. The input is given by a 16-position rotary DIP switch, as shown in Fig. 1(a). The switch is interfaced using the National Instruments (NI) myRIO kit. The output from the switch is used to generate the 4-bit message word in software with the help of a Digital Input block. The output bits are stored into an array using a Build Array block. Each message bit is separated out from the 4-bit pattern and, using the exclusive-or operator, the 7-bit codewords are generated, as shown in Fig. 1(b).

The second stage is the LabVIEW implementation of the channel, as depicted in Step 2. The channel has been implemented using a case structure, in which each case represents the introduction of an error at a single bit position. Fig. 2 shows the implementation of the channel with the introduction of an error in the seventh bit position, where the 7th bit is inverted from 1 to 0 by the case structure.

Fig. 1 (a). LabVIEW implementation of 4-bit message generation by NI myRIO kit.

The third stage is the LabVIEW implementation of the syndrome bit pattern, as depicted in Step 3, where the received 7-bit codeword is processed with the parity check matrix given by the equation:

[S] = [r][H^T] (5)

as shown in Fig. 3.

Fig. 1 (b). LabVIEW implementation of Encoder

Fig. 2. LabVIEW implementation of Channel


LabVIEW Implementation of Hamming (n,k) Channel Coding Scheme


Fig. 3. LabVIEW implementation of generation of syndrome bit pattern.

The fourth stage is the LabVIEW implementation of the decoder, as depicted in Step 4. In this stage, the syndrome bit pattern generated in the third stage is matched against the columns of [H]. A match of the generated syndrome bit pattern with the ith column signifies an error in the ith bit. In this decoder stage, a case structure is used which identifies the position of the erroneous bit and then rectifies it. Fig. 4 shows the implementation of the decoder.

V. RESULTS

Fig. 5 shows the front panel for the complete implementation of the Hamming (7,4) series, which includes the generation of the 4-bit message word using the blocks shown in Fig. 1(a), generation of the 7-bit codeword using the blocks shown in Fig. 1(b), stuffing of a 1-bit error using the blocks shown in Fig. 2, generation of the 3-bit syndrome pattern using the blocks shown in Fig. 3, and decoding of the transmitted codeword using the blocks shown in Fig. 4.

Fig. 4. LabVIEW implementation of Decoder

Fig. 5. Front Panel of LabVIEW implementation of Hamming (n,k) series with 4-bit message word, 7-bit codeword, bit error in 6th bit, syndrome pattern, and decoded output.

Fig. 6. 4-bit message words and 7-bit codewords extracted from Front Panel of LabVIEW implementation.

Fig. 7. 1-bit error stuffing and 3-bit syndrome patterns extracted from Front Panel of LabVIEW implementation.

Fig. 6 shows the tabulated 4-bit message words generated by rotating the DIP switch, and the 7-bit codewords generated from those message words through the LabVIEW software implementation.

Fig. 7 shows the tabulated 1-bit errors stuffed and the 3-bit syndrome pattern generated for every bit-error position through the LabVIEW software implementation.

The Hamming (7,4) channel coding series has been implemented using the LabVIEW software and the National Instruments (NI) myRIO kit in four steps. The first step was the generation of the 4-bit message words (0000 to 1111) and the corresponding 7-bit codewords, the second step was 1-bit error stuffing, the third step was the generation of the 3-bit syndrome pattern, and the fourth step was decoding the transmitted 7-bit codeword. The results generated exactly match the standard results [8-9]. The whole process can be made more effective by simultaneously presenting the results on an LCD display, so that it can be used in real-time situations.

ACKNOWLEDGMENT

The author(s) acknowledge Shri Ramswaroop Memorial University for providing the licensed LabVIEW software and hardware circuit boards to work on this project.

REFERENCES

[1] I. S. Reed and G. Solomon, "Polynomial Codes over Certain Finite Fields," Journal of the Society for Industrial and Applied Mathematics (SIAM), vol. 8, pp. 300-304, 1960.

[2] A. J. Viterbi, "Convolutional codes and their performance in communications," IEEE Trans. Commun. Technol., vol. 19, pp. 751-771, 1971.

[3] M. Amadei, U. Manzoli and M. L. Merani, “On the assignment of Walsh and quasi-orthogonal codes in a multicarrier DS-CDMA system with multiple classes of users,” Global Telecommunications Conference, GLOBECOM ‘02. IEEE, Taipei, Taiwan, vol. 1, pp. 841-845, 2002.

[4] A. G. Shanbhag and J. Holtzman, "Optimal QPSK Modulated Quasi-Orthogonal Functions for IS-2000," Conference Record of the 2000 IEEE Sixth International Symposium on Spread Spectrum Techniques and Applications (ISSSTA), vol. 2, pp. 756-760, 2000.

[5] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: turbo-codes," IEEE Trans. Commun., vol. 44, pp. 1261-1271, 1996.

[6] C. Berrou, R. Pyndiah, P. Adde, C. Douillard and R. Le Bidan, “An overview of turbo codes and their applications,” The European Conference on Wireless Technology, pp. 1-9, 2005.

[7] S.-Y. Chung, G. D. Forney, T. Richardson, and R. Urbanke, "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Communication Letters, vol. 5, pp. 58-60, 2001.

[8] S. Lin and D. J. Costello, “Error Control Coding,” 2/e, Pearson Education, 2005.

[9] S. Gravano, “Introduction to Error Control Coding,” Oxford University Press, 2007.


The Application of AI: A Clustering Approach using K-Means Algorithm

1Saurabh Dixit and 2Arun Kumar Singh
1Central Institute of Plastics Engineering and Technology, Lucknow
2Regional Engineering College
[email protected] and [email protected]

Abstract—Artificial Intelligence (AI) has gathered tremendous momentum over the last decade, partly due to the enormous upsurge in computing capabilities and partly due to a change in lifestyle in which people spend a lot of time storing and retrieving data. Machine Learning is a specialized branch of AI in which computers or machines mimic humans in the sense that they learn from past experience. The experience, in this case, is huge volumes of data which convey meaningful results when manipulated by a suitable algorithm. By means of training algorithms, an understanding of the inter-relationships between input variables is developed and can be used for interpreting the outcome as well as optimizing the design parameters. Systems learn from internal training data models and adjust their parameters accordingly. In a typical artificial intelligence scenario, there is a lot of data and the system is too complex for numerical analysis. It requires significant technical expertise to combat the challenges associated with developing a predictive model, and time is also a critical factor in the analysis of data. The various learning algorithms have been classified into clustering, classification, and regression. Clustering is used to segment data into natural subgroups; classification is employed to build a model to predict groups for new observations; regression is a technique used to hone predictive models which relate inputs to an output, known as the response. In this paper, an overview of Machine Learning is provided, and an unsupervised form of learning is evaluated using the K-means clustering algorithm to optimize the location of offices for a courier company. A MATLAB environment is used for simulation.

Keywords—Artificial Intelligence, Clustering, K-means, Machine Learning, Supervised Learning

I. INTRODUCTION

Machine Learning is a specialized branch of Artificial Intelligence (AI). Machine learning has witnessed tremendous growth and is a cutting-edge technology in the field of computing, the design of logical algorithm patterns, and the analysis of complex data structures. The ubiquitous interest in machine learning is conditioned by factors similar to those responsible for the popularity of data mining and Bayesian analysis [1-2]. Machine learning, which has evolved due to the enhanced capability of computers, is an amalgamation of algorithms and data, where the algorithms act as the machine while the data sets invoke the learning pattern. Arthur Samuel coined the definition of Machine Learning as the field of study that gives computers the ability to learn without being explicitly programmed. It teaches computers or machines to learn from experience. Fig. 1 depicts the classification of Machine Learning, with Supervised and Unsupervised learning as the two broad categories [3]. In Supervised learning, predictive models are built in the presence of a definite response. Regression and Classification are the key techniques employed in Supervised Learning. Regression is a statistical process for estimating the relationship between a dependent variable y and one or more independent variables x. It is widely used for prediction and forecasting. If the values used for prediction lie within the range of the dataset, the process is known as interpolation; if they lie outside the dataset, the process is known as extrapolation. Classification involves running an algorithm on training data from a group of observations drawn from a distribution, such as blood status, malignant tumour, a jet's sound profile, defect, or colour.
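To make the interpolation/extrapolation distinction concrete, the following sketch fits a least-squares line to a small toy data set (the data and function names are ours, not from the paper) and then predicts inside and outside the observed range of x.

```python
# Simple linear regression y = a + b*x by least squares (illustrative).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept a, slope b

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2x

a, b = fit_line(xs, ys)

def predict(x):
    return a + b * x

print(predict(2.5))  # interpolation: x lies within [1, 5]
print(predict(8.0))  # extrapolation: x lies outside the observed range
```

A prediction at x = 2.5 is interpolation, since the model was fitted on x between 1 and 5; the prediction at x = 8 is extrapolation and is generally less reliable.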

In Unsupervised learning, by contrast, there is no definite response, and the input data provides the learning platform. Clustering is the dominant technique employed in Unsupervised learning. It is used to arrange computing clusters, as in social network analysis, to identify distinct segments in a market, and to examine and analyse astronomical data. Cluster analysis thus includes categorizing and dividing a large group of input variables into clusters which share some similarity according to an algorithm.

It is possible to analyse a model with vast amounts of complex data even when there is no governing equation between the data variables. The scale of the data is an important parameter and varies directly with the accuracy of the training model. A typical Machine Learning scenario comprises a situation in which the rules of a task are changing along with a corresponding change in the nature of the data. In such a scenario the training algorithm needs to adapt, as in automated trading, energy demand forecasting, and predicting shopping trends. Machine learning algorithms therefore deploy ingenious methods to extract information directly from data without depending on a predetermined equation as a model. The algorithms intuitively enhance their performance as the number of samples available for learning increases. Many domains and applications are taking advantage of machine learning algorithms, namely Medical, Vision, Robotics, Natural Language Processing, Financial, and Business Intelligence [4-5]. The machine learning model works on the following pattern:

i. Choose the training experience features

ii. Choose the target function

iii. Choose how to represent target function

iv. Choose a learning algorithm to infer the target function.

In this paper, an overview of Machine Learning is provided along with an illustration of an Unsupervised learning algorithm. The K-means clustering algorithm is simple and effective: a set of observations is partitioned into natural groupings or clusters of patterns in such a way that the measure of similarity between any pair of observations assigned to each cluster minimizes a specified cost function [6]. This algorithm is employed to optimize the office locations of a courier company so that the cost function is minimized. Section II elaborates the K-means algorithm, Section III presents the results, and Section IV draws the conclusion.

II. K-MEANS ALGORITHM

It is a form of Unsupervised learning which comprises arranging the training set into coherent groups. Training sets or data points are clustered into patterns to minimize the specified cost function. The K-means algorithm partitions the data into a given number of mutually exclusive clusters according to a cost function. The function returns the index of the cluster to which it has assigned each observation. The cost function is characterized by the measure of similarity between any pair of data points.

Fig. 1 Classification of Machine Learning

Let {x(1), x(2), …, x(N)} denote a set of multidimensional observations. This set of N observations (data points) is to be partitioned into K clusters (K < N).

To optimize the clustering, the cost function used is:

J = Σ (j = 1 to K) Σ (x(i) ∈ Cj) ||x(i) − μj||²

where ||x(i) − μj|| is the Euclidean distance between observation x(i) and the centroid μj of cluster Cj. As an example, observations may be arranged into 2 clusters, Small and Large, based on height and weight. The various steps involved in developing the cost function are summarized as:

i. The number of clusters, K is chosen.

ii. Each cluster is initialized by randomly picking one point per cluster. This point is referred to as the centroid.

iii. Each point is placed in the cluster whose current centroid it is nearest to.

iv. After all points are assigned, the locations of the centroids are updated.

The algorithm

i. Input: K (number of clusters)

ii. Training set X(1), X(2), ……. X(N)

iii. Randomly initialize K cluster centroids μ1, μ2, …, μK

iv. Repeat {
    for i = 1 to N:
        c(i) = index (from 1 to K) of the cluster centroid closest to X(i)
    for j = 1 to K:
        μj = mean of the points assigned to cluster j
}

The above algorithm can be summarized as dividing the observations into a given number of groups. An iterative push-and-pull approach is then employed, and centroids are used to form the groups. K-means uses random initial centroid locations. Fig. 2 depicts the flowchart of the K-means algorithm as described earlier. The number of clusters is decided intuitively. Each cluster evaluates its centroid, the distance of each observation from the centroid is measured, and each observation is grouped into the cluster from whose centroid it is at the minimum distance. Thus, this algorithm is useful for optimizing the distance traveled by a traveling salesman, as in location routing [7-8].
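The iterative procedure above can be rendered as a compact sketch. The Python below is illustrative (the paper uses MATLAB, and the toy coordinates are our stand-in for office locations); it follows the same assign-then-update loop.

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal K-means: random init, then repeat assign/update until stable."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # step iii: random initial centroids
    assign = []
    for _ in range(iters):             # step iv: repeat
        # c(i): index of the nearest centroid for each observation
        assign = [min(range(k), key=lambda j: math.dist(p, centroids[j]))
                  for p in points]
        # mu_j: mean of the points currently assigned to cluster j
        new = []
        for j in range(k):
            members = [p for p, c in zip(points, assign) if c == j]
            new.append(tuple(sum(d) / len(members) for d in zip(*members))
                       if members else centroids[j])
        if new == centroids:           # centroids stopped moving
            break
        centroids = new
    return centroids, assign

# Two well-separated groups of toy "office locations"
offices = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centroids, assign = kmeans(offices, k=2)
print(sorted(centroids))
```

With two well-separated groups like this toy data, the centroids converge to the two group means regardless of the random initialization.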


Fig. 2 Flowchart depicting K-means Algorithm

III. RESULTS AND DISCUSSION

Fig. 3 depicts a scenario with randomly generated data in a coordinate system. The coordinate system represents the locations of a courier company scattered randomly over a given geographical region. To optimize cost and financial viability, Unsupervised learning is employed in the form of K-means clustering to group the office locations into two clusters which minimize the distance between them. In Fig. 4 the locations have been arranged in accordance with the K-means algorithm, which uses the centroid approach along with the push-and-pull approach to form the groups. There are two clusters, red and blue, each with its respective centroid. The observation points in Fig. 3 have been either pulled towards or pushed away from a given centroid so as to minimize the cost function. Therefore, the locations represented by the coordinate system minimize the distance traveled by a traveling salesperson. Although only two clusters are depicted in Fig. 4, the illustration can be extended to a higher number of clusters, with each cluster having its own centroid. Thus, each cluster can represent a city center whose coordinates are aligned with the distances over a given geographical area. This prototype model can be extended to a real-world scenario, with the benefits of cost and time efficiency.

Fig. 3 Randomly generated data (Office locations)

Fig. 4 Office locations in two clusters characterized by K-means algorithm

IV. CONCLUSION

In this paper, an illustration is provided of how Machine learning can be implemented to optimize office locations. Machine Learning, which has stemmed from AI, is pervading all walks of life and is a hot topic of interest and research. The applications of AI are becoming ubiquitous with the enhanced computing capability and advanced memory and storage capacity of computers. Unsupervised learning is used in the form of K-means clustering, where randomly distributed data or observation points are grouped into clusters such that the cost function, based on the Euclidean distance of the observation points from the centroid of each cluster, is minimized. This prototype model can be implemented in a practical scenario to locate the offices of a courier company.


REFERENCES

[1] A. Nayak and K. Dutta, "Impacts of machine learning and artificial intelligence on mankind," 2017 International Conference on Intelligent Computing and Control (I2C2), 2017.

[2] E. Alpaydin, Introduction to Machine Learning, MIT Press, 2014.

[3] J. Smith, "Machine Learning using MATLAB," CreateSpace Independent Publishing Platform, 2017.

[4] Machine Learning with MATLAB, MathWorks (Web Resource)

[5] Andrew Ng, "Machine Learning with MATLAB," Stanford University (Web Resource)

[6] S. Haykin, "Neural Networks and Learning Machines," Pearson, 2016.

[7] M. S. Zaeri, J. Shahrabi, M. Pariazar, and A. Morabbi, "A combined spatial cluster analysis - traveling salesman problem approach in location-routing problem: A case study in Iran," 2007 IEEE International Conference on Industrial Engineering and Engineering Management, pp. 1599-1602.

[8] G. Gutin and A. P. Punnen, "The Traveling Salesman Problem and Its Variations," Springer, 2002.


The Internet of Things (IoT) Revolution in India

Himani Kamboj

Telecom Regulatory Authority of India
New Delhi, India
[email protected]

Abstract—The digital market has gone through various transformations in the last couple of years. One of them is the emergence of the Internet of Things (IoT). The myriad of opportunities arising from IoT has inspired a surge in innovation. Governments worldwide are using IoT-based solutions for social and economic development, and a lot of work is being done in India in the IoT space. This study paper provides a detailed overview of the IoT landscape in India and discusses Government and Industry initiatives. It also highlights some of the key challenges that may hinder the growth of the IoT sector in India.

Keywords— India, Internet of Things (IoT), Machine-to-Machine communication (M2M)

I. INTRODUCTION

IoT can be defined as the network of interconnected sensor-equipped electronic devices that exchange data and connect with each other. It is the interplay of the telecom, electronic hardware, and software industries [3].

IoT is not a new concept. The history of IoT dates back to 1999, when Radio-Frequency Identification (RFID) technology gained widespread attention. Over the last decade, IoT has gained significant traction due to the collaboration between various industries and academia.

IoT as a new growth engine for the Information and Communications Technology (ICT) industry has sparked global interest. A world populated with connected things is expected to offer numerous benefits in all sectors of the economy. The data generated through IoT can enable optimized productivity across industries, create truly smart cities with smart grids, smart transport systems and smart homes, bring efficiency to the delivery of health services, improve agricultural yields, offer personalized customer experience, and enhance public safety. Some IoT applications specific to certain sectors are listed in Table I.

Table I. IoT Applications

Automotive: Autonomous cars, Fleet management, Predictive maintenance, Asset tracking
Transportation: Electronic toll, Parking management, Traffic control
Agriculture: Irrigation control, Environment sensing, Animal tracking, Remote monitoring and control of equipment, Scheduled release of pesticides
Retail: Personalized customer experience, Inventory management, Point of sale (PoS) terminals
Healthcare: Remote monitoring of patients, Wearable health devices
Utilities: Smart energy meters, Smart grid

It has been seen that the terms M2M and IoT are sometimes used interchangeably, but they are not the same. In fact, M2M communication is a key component of the IoT architecture: IoT is a wider concept incorporating people, data, processes, and things (M2M communication) [5]. Fig. 1 shows the components of the IoT ecosystem.

A. IoT Technologies

a) Connectivity layer: There is no one-size-fits-all connectivity solution for networking devices across all IoT use cases. Several technologies, as listed in Table II, can be used to satisfy varying connectivity requirements [4]. The existing telecom network is a ready infrastructure that can be used for providing connectivity in IoT-based services, and telecom access technology is also evolving rapidly to meet the requirements of IoT. The fifth generation of cellular network technology (5G), the latest iteration of cellular technology, is expected not only to provide faster access but also to act as an information duct built to connect billions of IoT devices. With the rollout of 5G networks in the near future, critical IoT applications will mostly ride over them. Apart from the telecom network, many new approaches are being experimented with and adopted, for example Low Power Wide Area Network (LPWAN) technologies, which are particularly designed for Machine Type Communication (MTC). These technologies are specialized for interconnecting devices that send small amounts of data over a wide area with high power efficiency, so as to maintain battery life for many years.

Fig. 1. IoT – Building Blocks

b) Service Layer: As more and more things are expected to get connected to the IoT, the volume of data associated with and generated by IoT devices will increase exponentially. To find relevant information in such large volumes of data, we need Artificial Intelligence (AI). Through Machine Learning (an AI technology), we can identify patterns and detect variations in big data sets. For extracting insight from data that is otherwise difficult for humans to review, other AI technologies such as computer vision and speech recognition can also be used. To bring real value to IoT use cases, IoT data will need proper analysis.

B. Global IoT Market

Industry analysts estimate that the market for IoT services will grow significantly (as shown in Chart I) [2]. There are now over 17.5 Billion connected devices in the world, a figure estimated to rise to 31.4 Billion by 2023.

Global spending in the IoT market is predicted to reach USD 772.5 Billion in 2018, an increase of 14.6% from 2017 [6]. It is also expected that worldwide IoT spending will experience a compound annual growth rate (CAGR) of 14.4% through the 2017-21 period, outstripping the USD 1 Trillion mark in 2020 and reaching USD 1.1 Trillion in 2021. Modules/sensors, connectivity, and ongoing services (content as a service) would continue to account for the majority of revenue in the IoT market (as shown in Chart II) in 2021. Analytics software, security software, and IT services are expected to witness the highest growth between 2017 and 2021 [5].

Table II. Connectivity solutions for IoT

Wide area network technologies: 2G, 3G, 4G, 5G
Low power wide area network technologies (LPWAN): LoRa, NB-IoT, Sigfox, Ingenu, Weightless, SilverSpring's Starfish, EC-GSM
Short range technologies: NFC, RFID, IrDA, Bluetooth, Z-Wave, Zigbee, Wi-SUN, Wi-Fi, Li-Fi, Thread
Fixed line technologies: Optical Fiber

Chart I. IoT connections outlook

Chart II. Proportion of global IoT revenue by technology, 2017-2021

II. IOT IN INDIA

A. Indian IoT Market- Projection

The M2M connections in India are likely to reach 429 Million by 2021 [5]. Chart III shows the number of connected devices in India from 2016 to 2021.

In India, various industries are adopting IoT to improve efficiency and reduce costs. Industries that would be at the forefront of adopting M2M solutions by 2020 include automotive and transportation, utilities, finance and insurance, retail, and healthcare [5]. Table III shows the proportion of M2M connections sector-wise.

The proliferation of connected devices is estimated to create USD 270 Billion IoT market in India by 2022. The expected sector-wise IoT revenue is shown in Chart IV.

Chart III. India M2M connections forecast (in Millions)


Table III. Proportion of M2M connections in India by industry verticals, 2020

Automotive and transportation: 24%
Energy and utilities (excluding oil and gas): 15%
Finance and insurance: 13%
Retail and wholesale: 11%
Government: 10%
Oil and gas: 9%
Healthcare: 6%
Manufacturing: 5%
Others (business services, education, agriculture, construction, etc.): 7%

Chart IV. Sector-wise IoT revenue, 2022

*As per the data provided by Ericsson

B. Policy and Regulation

The Indian Government is working actively to provide clear and consistent regulations and policies for IoT/ M2M communication. Recognizing the potential of M2M communications to advance all aspects of Indian society, it incorporated M2M as early as 2012 in its National Telecom Policy (NTP-2012).

In 2015, the Department of Electronics and Information Technology (DeitY) released the "Draft Policy on Internet of Things" [3], which focuses on creating a USD 15 Billion IoT industry in India by 2020. To achieve this, three focus areas were identified: (1) capacity development (both human and technology) for IoT-specific skill-sets for domestic as well as global markets, (2) research and development for all the assisting technologies, and (3) development of IoT products specific to Indian needs in all possible domains.

In May 2015, the Department of Telecom (DoT) published the "National Telecom M2M Roadmap" to guide the development of M2M-related policies. The Roadmap focuses on the communication aspects of M2M, with the aim of having interoperable standards, policies, and regulations suited to Indian conditions across sectors in the country [1]. The Telecom Engineering Centre (TEC) of DoT also came out with 9 technical reports on M2M detailing sector-specific requirements/use cases, to carry out gap analysis and future action plans with possible models of service delivery.

DoT also updated the Mobile Numbering Plan to meet the numbering requirements of billions of SIM-connected devices. A separate 13-digit M2M Mobile Numbering Plan has been finalized for SIM-based M2M devices.

To address concerns such as interface issues with Telecom Service Providers (TSPs), KYC, and security, DoT in its "National Telecom M2M Roadmap 2015" envisaged a registration-based regime for M2M Service Providers utilizing telecom facilities from authorized TSPs. In May 2016, DoT issued draft guidelines for the registration mechanism for M2M Service Providers.

In order to have the inter-ministerial coordination to address all issues related to M2M, DoT constituted an Apex body on M2M incorporating participation from Ministries and other Government departments. DoT also constituted M2M Consultative Committee incorporating representatives from Telecom Standards Development Society of India (TSDSI), Bureau of Indian Standards (BIS), and sectoral industry representative bodies to bring M2M industry concerns and regulatory bottlenecks to the notice of Apex body. M2M Review Committee was also formed by DoT to support the implementation of actionable points evolved from National Telecom M2M Roadmap.

The Telecom Regulatory Authority of India (TRAI) submitted its recommendations on Machine-to-Machine (M2M) Communications to DoT in September 2017, covering licensing, spectrum, e-SIM, roaming, QoS, and security & privacy related issues [4]. TRAI has unbundled each layer of the M2M ecosystem and has provided recommendations accordingly.

• For the connectivity layer, TRAI has recommended permitting existing TSPs, ISPs, and other connectivity providers like UL (VNOs) to provide M2M connectivity with just amendments to their licensing agreements. For connectivity providers using the unlicensed bands, TRAI has recommended a light-touch regulation. A registration process is recommended for connectivity providers using PAN/WAN for commercial purposes. UL (M2M), a new authorization under UL, is recommended for connectivity providers rolling out LPWAN networks.

• For M2M service providers, TRAI has concurred with DoT’s view of mandating MSPs to register with the government under the M2M Service Providers Registration.

Himani Kamboj, “The Internet of Things (IoT) Revolution in India,” Proceedings of IC4T, 2018

According to TRAI, this will provide them recognition as a registered entity with the Indian Government and will boost their business globally.

• For critical applications, in which any inconsistency in availability or service delivery quality can cause significant harm to customers, TRAI has recommended the use of only licensed spectrum.

• For the manufacturing/importing of M2M devices in India, TRAI has recommended that the government issue comprehensive guidelines. TRAI has also recommended creating a National Trust Centre (NTC) for the certification of M2M devices and applications.

• Considering the need for more spectrum for access services due to the likely high influx of connected devices owing to M2M/IoT services, TRAI has recommended additional delicensed spectrum in the 800 & 900 MHz bands.

• In the context of roaming, TRAI has recommended keeping national roaming for M2M/IoT under forbearance. To keep uniformity of processes, TRAI has allowed international roaming in M2M under the GSMA “M2M Annex” framework.

• On the issue of data protection in the M2M sector, since a paper on “Privacy, Security, and Ownership of Data in the Telecom Sector” was under consultation, TRAI in its recommendations has mentioned that, after due deliberation, a comprehensive recommendation on data protection will be issued.

In May 2018, DoT issued directions for implementing restrictive features for SIM cards used specifically for M2M communication services (M2M SIMs), KYC instructions for issuing M2M SIMs to entities providing M2M communication services under bulk category and instructions for using embedded-SIMs (e-SIMs).

C. Government Projects

The Indian Government has taken several initiatives which are to be driven by IoT deployments. One of the topmost initiatives is the Digital India Programme, which aims at “transforming India into a digitally empowered society and knowledge economy”. This programme is expected to provide the required impetus for the development of the IoT industry ecosystem in the country.

The Ministry of Urban Development’s vision of setting up 100 Smart Cities, the Ministry of Power’s project to set up 14 Smart Grid pilots, and the Ministry of Road Transport’s decision to mandate GPS in commercial passenger vehicles with more than 22 seats will ignite the proliferation of the IoT/M2M communication market in India.

The establishment of the Centre of Excellence – Internet of Things (CoE-IoT) by the Department of Electronics and Information Technology (DeitY), NASSCOM, and the Education and Research Network (ERNET) will foster Indian start-ups in the IoT space.

D. Industry Initiatives

Presently, some operators have set up networks in the ISM band (delicensed band), specifically for M2M communication. The existing GSM/CDMA/landline networks also support IoT operation. Many telecom operators in India have taken initiatives in the IoT segment. Some of the initiatives are-

a) Tata Communications has partnered with Hewlett Packard Enterprise to support the roll-out of LoRa based platform in India.

b) Bharti Airtel and Bharat Sanchar Nigam Limited have partnered with Nokia to create a strategic roadmap for developing IoT in India.

c) Reliance Communications’ IoT unit has entered into a strategic partnership with IoT software platform provider Cumulocity to deliver IoT solutions.

d) Reliance Jio Infocomm Limited has signed a contract with AirWire Technologies to offer the latter’s connected car IoT devices to customers in India.

Apart from the big players, many start-ups are also working in the IoT space. According to the IoT Start-up Directory 2017, released by the India Electronics and Semiconductor Association (IESA) and The Indus Entrepreneurs (TiE) at the IoTNext 2017 Summit, there are over 971 start-ups across the nation working in the field of IoT.

III. KEY CHALLENGES

IoT in India may face a number of challenges, which, if unresolved, can hinder growth prospects. It is vital to carefully analyze the issues and develop a robust framework that promotes inclusive growth. Some of the key challenges are-

a) Data Security and Privacy: With the large-scale adoption of IoT, many more activities around us will move online, raising security concerns manifold. It is therefore important to adopt standards, modify the IT Act in accordance with the present dynamics of the digital market, and incorporate security- and privacy-preserving techniques in systems. Transparent mechanisms should be put in place to boost the confidence of users so that the fear of loss/misuse of data is alleviated. A balanced and complete set of rules for data privacy and security is required to encourage investment and innovation in the emerging IoT sector.

b) Cross-border data flow: The data generated from trillions of devices will be stored across continents, with minimal information to the entity generating the data about its usage and ownership. If servers are located outside the country, this will raise regulatory compliance challenges, data security issues, and difficulties for Law Enforcement Agencies (LEAs) in lawful interception. The security threats are evolving and dynamic, and the government should address them dynamically. In today’s connected world, free movement of data to where it is needed is important. However, protection of critical and sensitive data cannot be neglected. Instead of restricting cross-border data flow, the government should regulate it. To address the difficulties faced by LEAs in lawful interception, the government should focus on increasing and strengthening Mutual Legal Assistance Treaties (MLATs).

c) Policy and Regulatory Framework: IoT will benefit every global citizen, but the use cases will vary widely from nation to nation. Advanced nations might target increases in productivity using IoT, while developing nations might identify the delivery of social benefits as a primary goal. In either scenario, a national roadmap is a prerequisite for this disruptive technological concept to become a success. Clarity in the policy and regulatory environment will embolden stakeholders as well as consumers to make investments and embrace the changes proactively. Also, IoT applications will percolate to various industrial verticals, and new business models, specific to a sector or at times cross-sectoral, may come up as a result of customization of services using IoT. In such a situation, it is difficult to have a single regulatory framework. Hence, the regulations should be two-tiered, with a basic common IoT framework applicable to all IoT applications and separate service-specific guidelines to cater to the specific needs of each segment.

d) Standardization Process: Technology companies are working independently of each other to provide solutions using different frameworks and platforms. As a result, many devices cannot integrate with one another: too many standards cause interoperability issues between different platforms/vendors. It is vital to overcome the interoperability issue for wide-scale adoption. Global harmonisation of standards is mandatory in the IoT environment so that machines and devices can move across national boundaries seamlessly. National standards organisations should derive from the global processes to ensure tangible results.

e) Network coverage: Despite all the efforts made to provide coverage across the length and breadth of the nation, we still have to work on the performance of the telecom network in our country. Certain critical applications of IoT require a highly reliable network; deficient QoS for these services could result in serious consequences. The issue of connectivity needs to be addressed for a successful implementation of IoT in the country.

f) Device security: Device security is paramount since many scenarios are devoid of human intervention in a connected society. One rogue machine can play havoc and disrupt an entire city in a networked society. Hence, security by design should be mandated for every device that forms part of the IoT environment. Global and national device certification agencies can take up the onerous task of certifying the integrity of connected devices.

g) Affordability: What comes with connectivity is the affordability of devices and internet services in the market. If people cannot afford these services, they will be left underserved in a rapidly changing, technology-driven economy. Apart from affordability, public investments in ICT education, research, and human resource development must catch up across all regions; otherwise, institutions and citizens will not be able to absorb the technological advancements brought about by IoT.

IV. CONCLUSION

IoT is emerging as the next wave of the telecom revolution. To benefit from this disruptive technology, we should create an enabling environment that can foster the growth of IoT services in India. The scale of IoT is humongous and requires large-scale participation not just from the government but also from industries, corporates, start-ups, academic institutions, and citizens to bring it to fruition and increase investment in the sector. Effective cooperation and collaboration amongst regional countries is also essential for the IoT market to flourish: sharing experiences among countries would give more fruitful results in overcoming problems and challenges.

REFERENCES

[1] Department of Telecom (DoT), India, “National Telecom M2M Roadmap,” 2015.

[2] Ericsson, “Mobility Report,” 2018.

[3] Department of Electronics and Information Technology (DeitY), “Draft Policy on Internet of Things,” 2015.

[4] Telecom Regulatory Authority of India (TRAI), “Recommendations on Spectrum, Roaming and QoS related requirements in M2M communications,” 2017.

[5] Federation of Indian Chambers of Commerce and Industry (FICCI), “M2M-changing lives of 130 crore+ Indians,” 2018.

[6] International Data Corporation (IDC), “Worldwide Semiannual Internet of Things Spending Guide,” 2017.

[7] Machina Research, “Service Provider Opportunities & Strategies in the Internet of Things,” 2015.


MQTT Protocol Simulation Using MIMIC Simulator

1MV Ramachandra Praveen, 2Prakhar Srivastava, 3Dr Ragini Tripathi, 4Razia Sultana

1Indian Air Force, New Delhi
2Scientist/Engineer-SE, Space Application Centre, Indian Space Research Organization, Ahmedabad
3Professor, EC, Shri Ramswaroop Memorial Group of Professional Colleges, Lucknow
4Assistant Communication Engineer, Bangladesh Meteorological Department, Dhaka

[email protected], [email protected], [email protected], [email protected]

Abstract - The Internet of Things (IoT) is slowly emerging as a game changer. Machine-to-Machine (M2M) communication, a subset of the Internet of Things, is likewise prevalent in numerous applications. The Message Queuing Telemetry Transport (MQTT) protocol, widely used in M2M applications, offers considerable advantages over other protocols due to its publish/subscribe architecture. For deploying MQTT in applications involving a large number of sensors, it is imperative that a simulation of such a large number of sensors be carried out to arrive at aggregate data rates. This will help designers plan and implement large-scale sensor-actuator networks using the MQTT protocol on bandwidth-constrained links. The aim of this paper is to discuss the simulation of the MQTT protocol using the MIMIC Simulator for calculating the aggregate data rate of sensors.

Keywords- Data rate, Machine to Machine Communication, Message Queuing Telemetry Transport Protocol.

INTRODUCTION

The Internet of Things, or IoT as it is popularly called, is a revolutionary technology that is bound to influence all spheres of human life and engineering. IoT has many fields of application, like life sciences, energy metering, safety and security, industrial automation, etc. Machine-to-Machine (M2M) communication is a sub-domain of IoT in which devices communicate with devices. M2M communication is evolving rapidly due to the pace at which automation is being carried out in many fields. The M2M communications market is expected to have large potential, with about 600 million M2M units in the field by 2020. The critical components of IoT are the communication protocols, which integrate heterogeneous sensors with multiple kinds of actuators and enable smooth and reliable data transfer.

IoT environments in general, and M2M implementations in particular, have sensor-actuator architectures as the key functionality. The sensor element may be a single sensor or a large number of sensors deployed over a wide geographical area. Generally, the sensors operate in constrained environments where there is a shortage of processing power at the end devices. Moreover, the bandwidth available for relaying the sensor information to the central location is also limited. A typical real-life scenario would be sensors deployed along a long multi-nation oil pipeline to monitor any leakage. Since such pipelines pass through inhospitable terrain like forests and deserts, the sensor elements have to operate on battery or solar power, which is limited. Another example would be Automatic Weather Stations (AWS) deployed at remote locations. Access technologies like cellular or terrestrial communications are generally not present in such areas for providing backhaul connectivity; hence, satellite communication is the only mode for backhaul connectivity, and satellite links too are bandwidth constrained. These factors highlight the necessity of a lightweight protocol that caters to both constraints: the limited processing power/battery life of end devices and the limited bandwidth of backhaul links.

The important protocols that suit most IoT implementations are Message Queuing Telemetry Transport (MQTT) and the Constrained Application Protocol (CoAP) [1]. However, the significant difference between these two is that CoAP works on a request/response mechanism whereas MQTT is a publish/subscribe protocol. Publish/subscribe protocols meet IoT requirements better than request/response, since clients do not have to request updates; network bandwidth usage decreases and the need for computational resources drops [2]. CoAP follows the REST model of HTTP, so Web-style security approaches carry over to it, while MQTT uses TLS for security. CoAP is User Datagram Protocol (UDP) based, and hence its traffic is lighter, but at the cost of reliable delivery; MQTT is Transmission Control Protocol (TCP) based. Hence CoAP experiences more packet losses than MQTT, though results may vary depending upon QoS and traffic conditions, and MQTT experiences lower delay than CoAP [3]. Since CoAP follows a request/response model, a sender requires the exact IP address of the destination node to communicate and post data. This poses a problem in scenarios where the nodes are behind NAT (Network Address Translation). Such scenarios are best served by the MQTT protocol, since in MQTT the data transfer architecture is decoupled: the sensors can be fully unaware of the controllers’ (or actuators’) IP addresses. The sensors send, or publish, their data to a middleman called the Broker; the controllers need only the IP address of the Broker and access the sensors’ information by subscribing to it. This publish/subscribe
mechanism is the biggest advantage through which scalable sensor-actuator networks can be implemented.
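The decoupling described above can be illustrated with a minimal in-memory sketch (plain Python, not an actual MQTT implementation): publishers and subscribers share only a broker and a topic string, never each other's addresses.

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List

Handler = Callable[[str, bytes], None]

class Broker:
    """Toy stand-in for an MQTT broker: routes payloads by exact topic."""

    def __init__(self) -> None:
        self._subs: DefaultDict[str, List[Handler]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Handler) -> None:
        # A subscriber registers interest in a topic, not in a publisher.
        self._subs[topic].append(handler)

    def publish(self, topic: str, payload: bytes) -> None:
        # The publisher hands data to the broker; the broker fans it out.
        for handler in self._subs[topic]:
            handler(topic, payload)

received = []
broker = Broker()
broker.subscribe("pipeline/leak", lambda t, p: received.append((t, p)))
broker.publish("pipeline/leak", b"pressure-drop")  # sensor side
```

Real brokers add wildcard matching, QoS handling, and persistence on top of this routing core, but the decoupling is the same: neither endpoint holds the other's address.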

In scenarios where guaranteed and secure delivery of sensor data with minimum packet loss is mandatory, MQTT definitely scores over CoAP and emerges as the most suitable option. However, to gain a complete understanding of the protocol, it is imperative that a simulation of the protocol with experimental/emulated data be carried out.

This paper discusses the MQTT protocol specifications and its simulation using the MIMIC Simulator from Gambit Communications.

MQTT PROTOCOL

I-Basics of MQTT Protocol

MQTT is an ISO-standard, publish/subscribe-based “lightweight” messaging application layer protocol for use on top of TCP/IP. MQTT was created by Dr. Andy Stanford-Clark of IBM and Arlen Nipper of Arcom (now Eurotech) in 1999 for the monitoring and control of gas pipelines. It is designed for connections with remote locations where end-terminal processing power or bandwidth, or both, are limited.

Presently, MQTT Ver. 3.1 is in vogue. MQTT-SN (MQTT for Sensor Networks) is a variation of the main protocol catering to the needs of embedded devices running protocols like ZigBee. MQTT’s design goal is to overcome these limitations and connect different sensors and actuators irrespective of the underlying technologies. The code overhead of MQTT is so small that it can be implemented on some of the smallest devices with meagre computing power.

MQTT is a publish/subscribe protocol. The publisher can be a single sensor, an aggregator node, or any device that can collate information and connect to a central site/server. The client or sensor agent initiates a connection by sending a CONNECT message to a server, or Broker, which responds with a CONNACK (Connection Acknowledgment) message. The publisher publishes information under heads called ‘Topics’; a topic looks like a hierarchical file path. The publish/subscribe method of messaging requires an intermediary that can communicate with publishers and subscribers on either side; in MQTT, this is called the message broker. The broker collects data on various topics from various publishers and distributes it to one or more subscribers based on their topics of subscription. “Publish” in the MQTT scenario refers to giving information about a given topic to a server that is configured as an MQTT message broker. Clients publish information under a specific topic by using a PUBLISH message. Topics are hierarchical in nature, with the levels separated by a delimiter (/).

An example topic hierarchy would be /home/lights/front room lights. The broker then pushes the information to subscribers who have earlier subscribed to these topics. Clients can subscribe to a specific level of a topic’s hierarchy or can use certain wildcard characters if they want to subscribe to more than one topic in the hierarchy. The acceptable wildcard characters are # (hash), a multi-level wildcard, and + (plus), which acts as a single-level wildcard. The UNSUBSCRIBE message is used by a subscriber client to stop receiving messages on a particular topic; the broker responds with an UNSUBACK message acknowledging the UNSUBSCRIBE. The client finally terminates the connection with the broker by sending a DISCONNECT message.
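The wildcard rules above can be captured in a few lines. The sketch below is a simplified matcher (it ignores spec corner cases such as $-prefixed system topics), not a broker's actual implementation:

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """Check whether a topic matches a subscription pattern.

    '+' matches exactly one level; '#' (valid only as the last level)
    matches any number of remaining levels.
    """
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":                       # multi-level wildcard: match the rest
            return True
        if i >= len(t_levels):             # pattern is deeper than the topic
            return False
        if p != "+" and p != t_levels[i]:  # '+' matches any single level
            return False
    return len(p_levels) == len(t_levels)  # no wildcard left: lengths must agree
```

For example, `home/+/temperature` matches `home/kitchen/temperature` but not `home/kitchen/lights`, while `home/#` matches everything under `home`.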

II-Working of MQTT Protocol

Every MQTT communication has four stages: it starts with a connection, followed by an optional authentication, then communication, and finally the session is terminated. A client first establishes a TCP/IP connection to the broker on a standard port or a custom port as published by the broker.

Non-encrypted communication uses port 1883 and encrypted communication uses port 8883 as standard ports. Encrypted communication is based on SSL/TLS: the publisher/subscriber validates the certificate of the broker before trusting it to publish or subscribe to information. To ensure mutual trust, clients too may present their SSL/TLS certificates to the broker to prove their authenticity, though this is rarely done in practice. It is to be noted that the introduction of encryption and authentication consumes power. Hence, in power-constrained deployments, authentication is carried out by specifying a username and password in clear text as part of the CONNECT message sent by the client to the server during the handshake. Open brokers published on the Internet will accept anonymous clients with blank usernames and passwords. Due to the resource overheads involved, authentication is rarely used in MQTT.

The MQTT publish/subscribe mechanism using HiveMQ, a public broker, is illustrated in Figure 1.

Figure 1. Illustration of the publish/subscribe mechanism of MQTT through a broker

III- QoS Levels of MQTT

The code footprint of the MQTT protocol is small, making it a lightweight protocol. Each message has a fixed header and an optional variable header; the fixed header is 2 bytes, and the payload size is limited to 256 MB. A Quality of Service (QoS) level is also specified for each message. MQTT is designed to work well on unreliable networks and as such provides three levels of Quality of Service for different situations, allowing clients to specify the reliability they desire. The QoS level determines how an MQTT packet will be handled by the protocol stack. As the QoS level increases, reliability increases, but latency and bandwidth requirements increase as well. The various QoS levels are given below:
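The fixed header stays small because a packet's remaining length uses a variable-length encoding of one to four bytes, which is also where the 256 MB payload ceiling comes from. A sketch of that encoding, following the MQTT specification:

```python
def encode_remaining_length(n: int) -> bytes:
    """MQTT 'remaining length' encoding: 7 bits of value per byte, with
    the high bit set while more bytes follow (max 4 bytes = 268,435,455)."""
    if not 0 <= n <= 268_435_455:
        raise ValueError("remaining length out of range")
    out = bytearray()
    while True:
        n, digit = divmod(n, 128)
        out.append(digit | (0x80 if n else 0))  # continuation bit if more remains
        if not n:
            return bytes(out)
```

Lengths up to 127 fit in one byte, so a small sensor reading needs only the 2-byte fixed header.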

(i) QoS Level 0: This is the lowest level of QoS. No acknowledgement is required, and MQTT relies on the TCP/IP layer for reliable delivery. At this level, the publisher sends the message in a PUBLISH packet to the broker at most once, and the broker likewise forwards the message to each subscriber at most once. There is no acknowledgement and hence no mechanism to check whether message delivery was successful. This QoS level is therefore also referred to as “at most once,” “QoS0,” or “fire and forget.”

(ii) QoS Level 1: In applications where acknowledgement is required, QoS Level 1 is used. This is an acknowledged service: the message is delivered to the broker at least once, and possibly more than once. So, in addition to the PUBLISH packet, the broker exchanges a PUBACK packet with the publisher, and the subscriber exchanges a PUBACK packet with the broker. Since the sender retransmits the packet on non-receipt of a PUBACK, the issue of dealing with duplicates has to be handled by higher layers. This QoS level is also called “at least once” or “QoS1.”

(iii) QoS Level 2: This QoS level is used for assured delivery. In this mechanism, two pairs of packets are used: the first pair is PUBLISH/PUBREC (Publish/Publish Received) and the second pair is PUBREL/PUBCOMP (Publish Release/Publish Complete). These packet pairs ensure that there are no duplicates and the packet is reliably delivered exactly once. Since it involves a four-packet handshake, it is the most expensive level. This QoS level is also called “exactly once” or “QoS2.”
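Summarising the three levels, the control packets exchanged per published message can be tabulated as follows (a sketch using the packet names above):

```python
# Control packets exchanged per published message at each MQTT QoS level.
QOS_EXCHANGES = {
    0: ("PUBLISH",),                                 # at most once / fire and forget
    1: ("PUBLISH", "PUBACK"),                        # at least once
    2: ("PUBLISH", "PUBREC", "PUBREL", "PUBCOMP"),   # exactly once
}

def packets_per_message(qos: int) -> int:
    """Number of MQTT control packets exchanged for one published message."""
    return len(QOS_EXCHANGES[qos])
```

The growing packet count is exactly the latency and bandwidth trade-off noted above: QoS2 costs four packets per message against one for QoS0.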

SIMULATION OF MQTT

The MQTT protocol is widely used in scenarios where a large number of sensors are present and the sensed information is to be relayed to a central monitoring site through bandwidth-constrained links. To arrive at the aggregate data rate that would come from a large number of sensors, a software simulation using the MIMIC MQTT Simulator of Gambit Communications is carried out.

The MIMIC MQTT Protocol Module is an optional facility that simulates the Internet of Things (IoT) Message Queuing Telemetry Transport standard. Each MIMIC agent instance configured with MQTT acts as a publisher client (e.g., a sensor) that can be configured to connect to a broker (public or private) and publish information. This information can be subscribed to by clients connecting to the broker and specifying the relevant topic name. Once MQTT is loaded, any agent instance configured to support the MQTT protocol will be able to publish MQTT messages to an MQTT broker.
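What such a simulation ultimately measures can also be sanity-checked by hand: for N identical publishers, the steady-state uplink rate follows directly from the PUBLISH packet length and the sampling interval. A back-of-the-envelope sketch (the 435-byte packet length and 25-node count are the values used in this paper's test case):

```python
def aggregate_data_rate_bps(publish_len_bytes: int,
                            interval_s: float,
                            n_sensors: int) -> float:
    """Aggregate uplink rate in bits/s for n_sensors identical publishers,
    each sending one PUBLISH of publish_len_bytes every interval_s seconds."""
    return publish_len_bytes * 8 / interval_s * n_sensors

# 25 sensors, 435-byte PUBLISH (incl. IP header), 1 s sampling interval:
rate = aggregate_data_rate_bps(435, 1.0, 25)   # 87,000 bit/s = 87 kbit/s
```

Captured traffic can run slightly higher than this estimate once TCP acknowledgements, keep-alive packets, and link-layer framing are included.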

I-Configuring MIMIC MQTT Simulator

The MQTT configuration pane is used to configure the parameters for an MQTT session. The main parameters set for the simulation in MIMIC are given below:

Config File: This mandatory parameter specifies the MQTT configuration file, to determine what MQTT packet is generated by the corresponding sensor. We will not be able to start the agent unless this parameter is set. We can also edit configuration files directly.

Broker Address: This mandatory parameter specifies the address of the broker server.

Port: This optional parameter specifies the broker port to use. The default port is 1883.

Client ID: By default, the client identifier field in the CONNECT message will be a string like mimic/mqtt/00001, with the last number indicating the agent instance number. We can override this string with this configurable parameter. If the string ends with #, it is treated as a prefix, and the client identifier sent to the server will replace the # with the agent instance number.

Username: This optional parameter specifies the username to be used for authentication.

Password: This optional parameter specifies the password to be used for authentication.

Use TLS: This optional Boolean parameter determines whether TLS security is enabled for this simulated sensor. The default is false.

TLS Config File: If the sensor is configured to use TLS security as above, then this parameter specifies the TLS configuration to use. If no configuration file is specified, the agent’s TLS client will not have a default certificate. To use the sample certificate included in the MIMIC installation, we have to load the tls.cfg file.
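Of these parameters, the Client ID prefix behaviour described above can be sketched as follows; the five-digit zero-padding is an assumption inferred from the mimic/mqtt/00001 example, not documented MIMIC behaviour:

```python
def client_id_for(instance: int, template: str = "mimic/mqtt/#") -> str:
    """Expand a MIMIC-style client-ID template: a trailing '#' is treated
    as a prefix marker and replaced by the agent instance number."""
    if template.endswith("#"):
        return template[:-1] + f"{instance:05d}"  # assumed 5-digit padding
    return template  # no trailing '#': the string is used verbatim
```

So agent 1 with the default template would present `mimic/mqtt/00001` in its CONNECT message, while a fixed template without `#` would be sent unchanged for every agent.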

II- MIMIC Files for Simulation

MIMIC Simulator is mainly governed by two types of files for running an MQTT simulation: the config (CFG) file and the JSON (JavaScript Object Notation) file. They are explained below:

(i) CFG File: The config file determines the simulation scenario. It specifies four important parameters: Topic, Count, Interval, and QoS.

(ii) JSON File: This file defines the actual payload content that will be delivered in an MQTT packet. JSON is a lightweight, language-independent data format. We can customize the JSON file to include data from different types of sensors. For our simulation, this file is configured to carry data from accelerometer, gyroscope, pressure, temperature, humidity, and light sensors, among others.
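For illustration, a payload of the kind such a JSON file might emit can be built and encoded as below; the field names here are hypothetical, not MIMIC's actual schema:

```python
import json

# Hypothetical sensor payload mirroring the parameters listed above.
payload = {
    "temperature_c": 24.6,
    "pressure_hpa": 1012.8,
    "humidity_pct": 61.0,
    "light_lux": 320,
    "accel_g": [0.01, -0.02, 0.98],
    "gyro_dps": [0.1, 0.0, -0.1],
}

# Serialized bytes of this form would be carried as the PUBLISH payload.
encoded = json.dumps(payload).encode("utf-8")
```

The serialized size of such a payload, plus MQTT, TCP, and IP headers, is what shows up as the PUBLISH packet length in a capture.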

III- Running MQTT Simulation

In the present simulation, MIMIC is configured with 25 sensor nodes (the maximum allowed in the trial version of MIMIC). Each sensor node is configured to sense seven different kinds of physical parameters, like temperature, pressure, and relative humidity, and to publish these sensed parameters under a topic to an open broker running at www.iot.eclipse.org once every second with QoS Level 0 (fire and forget). The MIMIC Simulator generates MQTT packets as per the parameters specified by the config file. These packets can be captured by suitable packet-sniffing software like Wireshark for analysis. The screenshot of MIMIC depicting the 25 sensor nodes is given in Figure 2.

The sensor data generated by the MIMIC Simulator, published at www.iot.eclipse.org under the topic mimic/mqtt/00001, can be viewed or subscribed to using MQTT Spy, an open-source MQTT client utility. MQTT Spy needs to be configured with the broker’s address and the topic name, thereby making the sensor data published by the sensor agents available to the subscriber. The significant advantage of the MQTT protocol in terms of decoupling of publisher and subscriber can be well appreciated here: the subscriber needs to know only the address of the broker and the topic name to get the sensor data sent by the publisher, and similarly the publisher needs only the IP address of the broker to publish the sensor information. Figure 3 depicts the publisher sending sensor data to the broker and the MQTT Spy subscriber getting the published data from the broker.

IV- Results

In the test case considered for simulation, we have choosen a total of 25 sensor nodes each sensing seven different physical parameters (Pressure, Temperature, Relative Humidity, Gyro values, Accelerometer, Magneto meter and Light). From the

Figure 2. Mimic mqtt simulator running with 25 agentsFigure 3. Screen shot of agents publishing sensor data and subscriber

receiving data.

Page 118: Proceedings of Second International Conference on ...ISBN: 978-93-5291-969-7 ICT4 Proceedings of Second International Conference on Computing, Communication and Control Technology

MV Ramachandra Praveen, Prakhar Srivastava, Dr Ragini Tripathi and Razia Sultana

Proceedings of IC4T, 2018100

Wireshark packet capture files we arrive at the following packet length details (including the IP Header):

Based on the above data, we can arrive at the aggregate data rate for all 25 sensors using the formula:

Aggregate Data Rate = {(Sampling interval in seconds* PUBLISH message length)} * No of Sensor nodes (1)

various factors like sampling frequency, resolution of sensor, number of sensors that a given band width can support etc, before going for the actual design or deployment.

FUTURE SCOPE

The Present Simulation is carried out with trial version of MIMIC using only 25 nodes. The licensed version of MIMIC that can support upto 25000 nodes can be utilised for simulation. Also, custom made CFG and JSON files may be employed to generate more realistic simulation of MQTT Protocol.

REFERENCES

[1] Priyanka Thota and Yoohwan Kim, "Implementation and Comparison of Protocols for M2M Communications", 4th International Conference on Applied Computing and Information Technology, Las Vegas, USA, 12-14 Dec 2016.

[2] Vasileios Karagiannis, Periklis Chatzimisios, Francisco Vazquez-Gallego and Jesus Alonso-Zarate, "A Survey on Application Layer Protocols for the Internet of Things", Transactions on IoT and Cloud Computing, 2015.

[3] Dinesh Thangavel, Xiaoping Ma, Alvin Valera, Hwee-Xian Tan and Colin Keng-Yan Tan, "Performance Evaluation of MQTT and CoAP via a Common Middleware", IEEE Ninth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 21-24 April 2014, pp. 1-6.

AUTHOR INFORMATION

MV Ramachandra Praveen, Aeronautical Engineering (Electronics), Indian Air Force, India

Prakhar Srivastava, Scientist/Engineer-SE, Space Application Centre, Indian Space Research Organisation, Ahmedabad, India

Dr. Ragini Tripathi, Head of the Department, Electronics Engineering, Shri Ramswaroop Memorial Group of Professional Colleges, Lucknow, India

Er. Razia Sultana, Assistant Communication Engineer, Storm Warning Centre, Bangladesh Meteorological Department, Dhaka, Bangladesh.

TABLE-I: Packet lengths of various MQTT messages

Parameter                               | Value
----------------------------------------|-----------------
CONNECT message length                  | 84 bytes
CONNACK message length                  | 58 bytes
PUBLISH message length                  | 435 bytes
Sampling interval for publishing data   | 1000 ms (1 s)
No. of sensor nodes                     | 25

Substituting the values of Table-I in (1), we get an aggregate data rate of 87.7 Kbps.
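As a quick numerical check, the Table-I values can be substituted directly, assuming one PUBLISH per node per sampling interval; the small gap to the reported 87.7 Kbps presumably reflects the additional CONNECT/CONNACK and acknowledgement exchanges.

```python
# Table-I values (the PUBLISH length includes the IP header)
publish_len_bytes = 435
sampling_interval_s = 1.0   # 1000 ms
num_nodes = 25

# One PUBLISH per node per interval: bits per second across all nodes
rate_bps = publish_len_bytes * 8 / sampling_interval_s * num_nodes
print(rate_bps)  # 87000.0, i.e. about 87 Kbps
```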

This aggregate data rate serves as a basis for designing or calculating the required communication-link throughput. Moreover, in the case of bandwidth-constrained links such as satellite communication links, the aggregate data rate is critical when arriving at the link-budget calculations.

Moreover, since MQTT is a TCP-based application-layer protocol, TCP metrics like congestion window size, window-scaling methodology and buffer-size requirements can be pre-determined for a given throughput. These parameters play an important role in real-time or near-real-time applications where minimum latency is desirable.
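One such pre-determinable quantity is the bandwidth-delay product, which sets the minimum TCP buffer needed to keep the link full at the aggregate rate. A rough sketch, assuming a 600 ms round-trip time typical of a geostationary satellite hop (the RTT here is an assumption, not a value from the simulation):

```python
# Bandwidth-delay product: bytes that must be in flight (and hence
# buffered by TCP) to sustain the aggregate rate over the link.
throughput_bps = 87_000   # aggregate MQTT data rate from (1)
rtt_s = 0.6               # assumed GEO-satellite round-trip time

bdp_bytes = throughput_bps * rtt_s / 8
print(bdp_bytes)  # 6525.0 bytes
```

If the receive window or socket buffer is smaller than this, TCP will stall waiting for acknowledgements and the link will never reach the computed aggregate rate.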

CONCLUSION

MQTT is a very efficient protocol for M2M communications, and it performs well in situations where reliable sensor-data delivery is required over constrained-bandwidth links. However, to understand network performance parameters such as jitter and throughput on bandwidth-constrained links, it is a prerequisite to simulate MQTT, arrive at the aggregate data rate and understand the packet-exchange mechanism. This will enable design engineers to determine the feasibility of a given deployment in advance.


Cloud Computing In Indian Aerospace and Defense Sector: Relevance and Associated Challenges


Abstract - Cloud computing as a concept has been around for quite some time, but since the launch of the Digital India programme there has been a lot of buzz about cloud computing and its revolutionary promise to transform how work is done in terms of productivity, product quality, innovation ability, cost-effectiveness and flexibility.

Cloud computing is swiftly becoming the revolutionary operating practice adopted by industries across various business domains, as it ensures economies of scale through the use of shared IT resources, including software, hardware and business applications, to achieve coherence. Scalable cloud-based IT resources are provided by third-party companies, which also take care of the associated maintenance and routine upgrades; users therefore need not commit any extra capital towards this.

This paper attempts to present a comprehensive view of the cloud computing framework, a review of its benefits, its relevance in the aerospace and defense sector, implementation areas, risks and viable mitigations, considering the complex security requirements of the Aerospace and Defense (A&D) sector.

Key Words: Cloud Infrastructure (IaaS, PaaS, SaaS), IoT, Cloud Hosting, Cloud Architecture, Cloud-Based Enterprise Resource Planning (ERP), Product Life Cycle Management (PLM), CAD, Cloud-Enabled Operations in A&D Sector, Equipment Health Monitoring, Data Stealing, Cloud Security

INTRODUCTION

The Indian aerospace and defense industry is passing through a critical phase and is evolving ways and means to ensure sustainability in this dynamic and emerging market, where global leaders are eyeing the lucrative Indian defense market. It is also a fact that, with globalization, there are no physical boundaries for carrying out business, and whoever is able to capture the market in time and meet customer requirements competitively will remain and lead.

"Survival of the fittest", a phrase that originated from Darwinian evolutionary theory, also seems appropriate in this context.

It is certain that, to compete with domain leaders and ensure a stable footprint on the global platform, the Indian aerospace and defense industry must evolve from its current way of

Cloud Computing In Indian Aerospace and Defense Sector: Relevance and Associated Challenges

Saurabh Kumar Gupta
Aerospace Systems & Equipment R&D Centre, Hindustan Aeronautics Limited (HAL), Lucknow, India, [email protected]

doing things: it should adopt best practices in business, modernize old legacy infrastructure and maximize utilization of IT tools, cloud computing, the Internet of Things (IoT), etc.

The Indian aerospace and defense industries are exploring ways to reduce overhead expenditure, internal cost elements and inefficiencies, and to enhance productivity, manage timelines and gain agility through collaborations, outsourcing, etc., in order to achieve a competitive edge.

The biggest concern of the armed forces regarding the Indian defense and aerospace industry is overcoming undue delays and cost overruns in production supplies, which result in shortages of equipment and aircraft fleets, leading to a threat to national security.

In various joint reviews and assessments by IDSA and other study groups, it has emerged that A&D firms need to strengthen their production and manufacturing capabilities and collaborate with SME partners.

A&D firms have to start moving towards greater use of software automation in manufacturing, machine-to-machine communication controls, Internet-of-Things devices and sensors, data sharing with collaborators, analytics and flexible IT consumption. All these solutions are available in a cloud-based ecosystem.

So far, very limited efforts have been made to ascertain the role of IT, and of its new instrument, cloud computing, in the Aerospace and Defense (A&D) industry.

However, in the recent past, cloud computing has emerged as a cost-effective, efficient IT solution to overcome business inefficiencies in organizations; it also assists in gaining a competitive edge in operations and services by changing the traditional way of doing business.

A cloud-based ecosystem has the following basic characteristics [2] which distinguish it from traditional IT services (Fig. 1):

1) Demand-based provisioning of services and usage monitoring without human intervention.

2) Broad network access enabling big-data transfer and high-end computing services.


3) Elasticity to scale up the IT resources as needed.

4) IT resource pooling across multiple applications and users.

5) Quantification of service utilization for better resource planning and monitoring of expenditure.

6) Cost effective IT infrastructure and services.

Fig. 1: Cloud Computing Motivators

The present technology of cloud-based computing is a blend of various other technology developments, including:

- High-speed, reliable networks

- Very-large-capacity servers and processing hardware

- Virtualisation capabilities [9]

- Web 2.0 standards

- Development of open-source software (e.g. the Apache web server, Hadoop and the Linux OS)

TYPES OF CLOUD COMPUTING SERVICES

The most common and widely adopted cloud computing services are Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) (Fig. 2).

Fig. 2: Cloud Computing Services & Architecture

INFRASTRUCTURE AS A SERVICE (IAAS):

IaaS is a cloud-based computing model in which virtualized infrastructure is offered by external cloud providers. With IaaS, user companies outsource storage, servers, data-center space and cloud networking components connected through the internet, obtaining functionality similar to that of on-premises infrastructure.

In this service model, cloud providers manage the networking, firewalls, high-capacity servers and the physical data center. Key players offering IaaS include Amazon EC2, Microsoft Azure, Google Cloud Platform, GoGrid, Rackspace and DigitalOcean, among others.

PLATFORM AS A SERVICE (PAAS)

PaaS is a level above IaaS: the service provider takes care of both the hardware and the software required for the development of the end application. PaaS gives organizations the flexibility to focus on key business areas without any concern for the base platform.

The PaaS environment enables cloud users (accessing it via a web page) to install and host data sets, development tools and business-analytics applications, without having to build and maintain the necessary hardware. Key players offering PaaS include Google App Engine, Heroku, AWS, Microsoft Azure, OpenShift, Oracle Cloud and SAP.

SOFTWARE AS A SERVICE (SAAS)

In SaaS-based cloud architecture, users directly use applications that reside on the cloud server and thus need not install them on their own systems or devices. Users can store and evaluate data and execute tasks using the application. Here, the cloud service provider delivers the entire software suite on a pay-per-use model. This model is being widely adopted by organizations worldwide.

However, users should decide on the appropriate cloud model based on their business needs, as no one size fits all. Organizations must evaluate their current needs and future prospects when deciding to shift to a cloud architecture.

There are four basic cloud delivery models, as outlined by NIST, which relate to who provides the cloud services. Agencies may employ one model or a combination of different models in delivery of applications and business services.

CLOUD HOSTING CATEGORIES

Based on hosting category, clouds can be classified as public, private and hybrid.

PUBLIC CLOUD:

Public cloud, in general, comprises SaaS services offered to users


over the internet. It is the most economical option for users in which the service provider bears the expenses of bandwidth and infrastructure. It has limited configurations, and the cost is determined by usage capacity.

Public cloud is not suitable for organizations operating with sensitive information as they have to comply with stringent security regulations [5].

However, newly formed SMEs, budget-sensitive startups and academics can adopt public cloud computing and take advantage of cloud-based resources and application tools for day-to-day operations. It is usually recommended that, if the data being handled are of a highly sensitive nature, the public cloud may not be the appropriate solution.

PRIVATE CLOUD COMPUTING

This hosting model is considered the most secure, as dedicated servers, hardware and applications are reserved for specific users with enhanced security controls. Private cloud environments are generally adopted by defense, banking, law and other organizations dealing with data of a strategic nature, which are thus bound by government guidelines.

In a private cloud, resources are not shared with others, which results in improved performance compared to the public cloud. In addition to performance, the added security layers permit organizations to handle secret information and complex work in the cloud environment. The only limitation on the adoption of a private cloud is its high cost compared to the public cloud.

HYBRID CLOUD

Hybrid cloud combines the benefits of both public and private hosting services to ensure better performance and efficiency for clients. This hosting model gives the user the flexibility to keep confidential data on the private cloud and generic data on the public cloud. In this way, it provides a cost-effective solution in contrast to a purely private cloud.
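The placement rule behind a hybrid deployment can be sketched as a simple dispatch on a sensitivity label; the labels and tier names below are hypothetical, for illustration only.

```python
def place(record):
    """Return the hosting tier for a record based on its sensitivity label."""
    if record.get("classification") in {"confidential", "secret"}:
        return "private-cloud"   # sensitive data stays in-house
    return "public-cloud"        # generic data goes to the cheaper tier

print(place({"id": 1, "classification": "confidential"}))  # private-cloud
print(place({"id": 2, "classification": "public"}))        # public-cloud
```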

CLOUD ARCHITECTURE

The cloud-based architecture consists of the following systems and sub-systems (Fig. 3):

- Front-end and back-end infrastructure

- Virtual interface

- Operating platform and application software

- Load balancing

However, despite the potential gains achieved in the civil domain through cloud computing, data security, privacy, compatibility, etc., remain major question marks for defense and aerospace, directly impacting cloud adoption. When sharing data and business applications with a third party, data security [1] is the prime constraint among the major challenges of cloud computing (Fig. 4).

Fig. 3: Cloud Architecture System

Fig. 4: Major Challenges of Cloud Computing

Cloud adoption is rapidly gaining pace in various business domains worldwide due to its promising benefits. However, the Aerospace and Defense (A&D) sector has still not explored its full potential due to data-security concerns [3]. In spite of these concerns, this paper attempts to identify the key operations in the defense and aerospace sector where cloud computing can assist in gaining a competitive edge and enhanced business performance using an IT-enabled cloud ecosystem. In the later part of this paper, security


concerns in the defense and aerospace sector and possible ways out are discussed.

CLOUD RELEVANCE IN KEY OPERATIONS OF DEFENSE AND AEROSPACE SECTOR

1. Design and Development of Accessories, Systems and Platforms

2. Platform Evaluation / Flight Trials

3. Series Productionisation

4. Maintenance, Repair and Overhaul (MRO)

In order to appreciate the role of cloud computing and the associated challenges in the defense and aerospace domain, these key areas have been further sub-categorized as follows:

1. DESIGN AND DEVELOPMENT (D&D)

Today’s design environments demand faster development and better results at lower costs. All of these can be achieved by evaluating design options early in the development process.

However, traditional design optimization and validation tools, such as back-of-the-envelope calculations, rules of thumb, physical prototypes or complex software, can be fraught with error, delay or high costs.

In contrast, cloud-based simulation modules enable designers to quickly and easily validate a design and understand how their ideas will affect its performance, all without time-consuming training, sophisticated expertise or extra expense.

The areas where cloud computing assists during the design and development phase can be summarized as follows:

1. a) CONCEPTUALIZATION PHASE:

The design conceptualization phase is one of the most critical phases in the product life cycle, considering design optimization in terms of performance and cost affordability. It generally consumes substantial time and money; here the cloud offers tools in the form of ready-made design templates, which greatly reduce the effort and time spent in this phase.

There are countless template options in the cloud which assist in conceptualization and product optimization. In addition to design templates, the cloud also offers analysis and simulation tools, which are used extensively by A&D designers in the design conceptualization phase.

1. b) PRELIMINARY AND DETAIL DESIGN PHASE

Cloud-based computing, because of its big-data storage and analytic capability, is used for NX modelling tools, FEA, CFD, Flowmaster, thermal, electrical and reliability assessment, and dynamic simulation tools for the finalization and optimization of a design. Through cloud-based DOEs, engineers and scientists gain valuable information without consuming a significant amount of engineering time and cost. Cloud computing is also enabling "generative design".

1. c) ENTERPRISE RESOURCE PLANNING

Cloud-based Enterprise Resource Planning provides the flexibility to extend the ERP interface to sister divisions, stakeholders and partners at distant locations for better planning and resource management in a dynamic manner. This leads to enhanced functional efficiency and reduced operational costs.

Apart from this, the chance of data loss in cloud-based ERP is minimal, as the cloud service provider keeps the data safe by placing backup servers at different locations. In spite of this, cloud ERP has a few limitations in comparison with premises-based ERP: cloud-based ERP is generally less customizable, and data security is also a major concern. However, these can be dealt with through customized hybrid computing.

Critical functions of ERP are materials planning (MRP), creation of purchase requisitions and orders, inventory management, supply-chain management, manufacturing planning and shop-floor management. These are increasingly finding a base in the cloud.

According to one study, use of cloud-based ERP increased from 20 percent to 59 percent. Cloud-based ERP solutions cohesively integrate core manufacturing functions and add speed and flexibility to industry-specific tasks.

1. d) PROTOTYPE MFG AND ASSEMBLY

Traditionally, prototype development was all about creating physical prototypes, qualifying them through rig testing and, in the event of performance deviations, iterating until the intended product was realized. During the development phase these iterations consume significant time, and thus the need was foreseen for advanced IT tools to hasten part manufacturing so that product performance could be ascertained in advance via rig testing. Here, cloud-based CAD/CAE/CAM, rapid prototyping and 3D printing [4] offer a pragmatic solution through early manufacturing of parts. Later, the role of the cloud can be extended to online inspection of parts through cloud-synchronized sensors and the Internet of Things (IoT) [7]. All this adds overall efficiency towards product realization in a time-bound manner.

1. e) QUALIFICATION TESTING

In the aerospace and defense domain, development and qualification testing is an important and necessary requirement to verify the design and manufacturing process. Qualification tests are conducted on components, subsystems,


systems and platforms to demonstrate that functional, structural, environmental and endurance requirements have been achieved.

In the QT schedule, tests are tailored to simulate real conditions, and the performance results are then compared against established accept-reject criteria based on mission requirements.

Here, the analytic capability and big-data repository of the cloud assist qualification-by-analysis using high-end analysis and simulation tools, and also assist qualification-by-similarity/analogy using previously generated performance data. This results in a faster development cycle and huge cost savings compared to rig testing of physical prototypes.

1. f) CLOUD BASED PRODUCT LIFE CYCLE MANAGEMENT

With the globalization of the aerospace and defense market and the entry of new players in this domain, the key pressure shifts to innovative solutions and life-cycle support at a competitive price over the next 30-40 years. Program cost is now evaluated based on the initial product cost plus the life-cycle support cost. This requires the availability of updated product information throughout the lifecycle to all stakeholders. Here, cloud-based PLM ensures the availability of updated product information, including the BOM, change management, obsolescence issues, etc., in real time to all stakeholders.

Here, the cloud offers the potential to leverage economies of scale and to enhance flexibility and agility (Fig. 5),

Fig. 5: Benefits of Cloud-Based PLM over Traditional PLM

and it ensures real-time data sharing between manufacturers and supply-chain partners. Based on the user's role in the supply chain, cloud-based PLM provides limited access/editing rights to specific users, in this way shielding sensitive information.

As per a Transparency Market Research report, the cloud-based PLM market is expected to grow from US$40 billion in 2014-15 to US$75 billion by 2022. As per that report, the cloud-based PLM services market is growing at a rate of 18% in the defense and aerospace sectors, as compared to on-premise PLM.

PLATFORM EVALUATION AND HEALTH MONITORING

After a successful qualification test as per the specification requirements of a component, Line Replaceable Unit (LRU) or system, clearance for aircraft use is issued by the certification agency. The item is then considered flight-worthy and is fitted on the platform for flight evaluation checks. During flight trials, performance data under various flight configurations are collected for performance analysis, health monitoring, and the establishment of failure-detection and diagnostic mechanisms. The flight performance data act as an important tool for establishing the framework for further refinement, modification and improvement, next-level research and technology advancement.

The data generated during flight evaluation and subsequent service operation, if managed appropriately, lead to a paradigm shift towards health monitoring of systems and LRUs.

It is also a fact that flight-data management is considered a cumbersome task, as it needs huge storage space, and the limited ability to analyze performance data in real time acts as another challenge to health monitoring of equipment and aircraft.

However, due to cloud computing's big-data processing capabilities, high-capacity recording, data transfer and IoT, real-time analysis of performance data is now possible. This leads to an understanding of the dynamic behavior of the system and later acts as a tool for failure notification, troubleshooting and diagnostic reporting [8].

The scalable, high-data-transfer-rate and multi-tenancy features of cloud computing improve health monitoring of aerospace and defense equipment. The major outcomes of cloud-based health monitoring and flight analysis are:

REAL TIME MONITORING:

Performance monitoring and real-time notification to ground crew and maintenance engineers for quick-turnaround preparedness.

BASIC & ADVANCED ANALYTICS:

Extensive data-analysis capabilities to support trending, usage monitoring, etc.

Data analysis acts as an important tool for performance refinement and next-level product development, with configurable performance indicators for airlines and users.
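A cloud-side real-time notification step of the kind described above can be sketched as a threshold check over streamed flight-data frames; the parameter names and limits below are invented for illustration.

```python
# Hypothetical limits for two monitored parameters
LIMITS = {"egt_c": 900, "vibration_ips": 1.2}

def check_frame(frame):
    """Return (parameter, value) pairs in a data frame that breach limits."""
    return [(k, v) for k, v in frame.items() if k in LIMITS and v > LIMITS[k]]

frame = {"egt_c": 915, "vibration_ips": 0.8, "n1_pct": 97}
alerts = check_frame(frame)
print(alerts)  # [('egt_c', 915)] -> notify the ground crew
```

In a real deployment the frames would arrive over a telemetry stream and the alert would trigger the notification path to the ground crew; the check itself stays this simple.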

SERIES PRODUCTIONISATION

In the aerospace and defense industry, the prime customers are the armed forces, and timely delivery of the intended product is of prime importance. However, due to various constraints, including


technology, infrastructure, resource and capacity constraints, the defense ecosystem is still not gaining pace in India.

As per the SIPRI global arms report (2010-14), India was the world's largest importer of major arms, accounting for 15 per cent of total global sales. The total value of imports for the year 2014-15 in India was Rs 2,736,676.99 Crs, and our trade balance was negative Rs 840,328.58 Crs. This is not the only alarming aspect: our imports of, and dependence on, critical technologies and systems, especially in the defense sector, pose the biggest challenge for the government, considering national security and economic reforms in the country.

In order to overcome this bleak gap in defense production, A&D firms need to strengthen their production and manufacturing capabilities. They have to start moving towards greater use of software automation in manufacturing, machine-to-machine communication controls, Internet-of-Things devices and sensors, data analytics and flexible IT consumption; all these solutions are available in the cloud-based ecosystem.

With the adoption of cloud-based ecosystems, production cycle time can be significantly reduced by saving prior tooling configurations and performing online calibration of machine tools [6] on the shop floor. It also aids the reduction of inventories through dynamic monitoring and real-time production management using mobile device management.

MAJOR MAINTENANCE, REPAIR AND OVERHAUL (MRO)

In the aerospace and defense industry, the availability of equipment and jets for their intended purpose can be ensured through a proper maintenance/repair/overhaul (MRO) support system. Routine servicing of equipment and aircraft is taken care of by the defense forces at bases, whereas major servicing and repair is confined to Defense PSUs, Ordnance Factories and government labs.

The basic and foremost requirement in MRO is to ensure that the right information (from repair procedures to equipment log history) is available to frontline operators whenever they need it, with the right tools and parts in the right place [10]. This will increase effectiveness and reduce turnaround times, thereby overcoming maintenance-caused delays.

Here the cloud comes into the picture: with its capability for handling big data and its analytic capability, platform-related records can be securely hosted in the cloud, and access can be revoked as and when needed.

At present, India constitutes 1% of global MRO. In developing countries like India, the requirement for MRO will be huge and is anticipated to rise to $2 billion by 2021. In the recent past, the Government of India has launched the open-sky policy and initiated procurement of new jets, considering the depleting squadrons and the business-aircraft fleet in the country; all these shall


emerge as a strong foundation for the Indian MRO industry. Here, the cloud can play an important and effective role in attaining these objectives if appropriately managed.

V. EMERGING CHALLENGES IN CLOUD FOR AEROSPACE & DEFENSE (A&D) SECTOR

Like all other technologies, the cloud-based ecosystem has associated challenges, as it comprehends many complex technologies. Some of the most important challenges are as follows:

- Traffic hijacking

- Insecure cloud interfaces

- Downtime

- Denial of service

- Malicious insiders

- Inadequate due diligence

- Technology vulnerabilities associated with shared resources

- Data breaches at the service provider's end

The security threats and privacy risks associated with cloud computing should be addressed as a top priority by service providers. Some proposed ways out are:

- Encryption of data at the kernel level.

- Disintegration of data into puzzle pieces, encrypted before transfer to the cloud. These puzzle pieces are reassembled behind the firewall during processing; in this way, the user can avoid the risk of man-in-the-middle attacks.

- To prevent data stealing, a login-and-password session should be introduced for accessing data or processing applications.

- Enhanced use of filters and firewalls to counter denial-of-service attacks.

- Apart from the above measures, cloud service providers should review and configure their security measures as per the security updates issued by the Cloud Security Alliance (CSA) and other auditing bodies.
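The "puzzle piece" mitigation can be sketched as split-encrypt-reassemble. A toy XOR pad stands in for a real cipher here; in practice an authenticated cipher such as AES-GCM from a vetted library should be used, and key management handled separately.

```python
import secrets

def xor(data, key):
    """XOR byte strings of equal length (toy cipher, for illustration only)."""
    return bytes(b ^ k for b, k in zip(data, key))

def split_encrypt(data, chunk_size=4):
    """Split data into pieces and encrypt each with its own random pad."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    keys = [secrets.token_bytes(len(c)) for c in chunks]
    return [xor(c, k) for c, k in zip(chunks, keys)], keys

def reassemble(enc_chunks, keys):
    """Decrypt and join the pieces behind the firewall."""
    return b"".join(xor(c, k) for c, k in zip(enc_chunks, keys))

data = b"flight log 2018"
enc_chunks, keys = split_encrypt(data)
assert reassemble(enc_chunks, keys) == data   # lossless round trip
```

An attacker intercepting any single encrypted piece in transit sees only random-looking bytes; only the party holding the keys, behind the firewall, can reassemble the whole.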

CONCLUSION

Since the inception of cloud technology, continuous evolution has succeeded in establishing a base for scalable and flexible data processing and storage using high-end servers managed by third parties. The trend toward digital transformation through the cloud promises exciting new prospects for the Aerospace and


Defense (A&D) sector.

It is a fact that technology and its associated challenges will move together; what is important is to manage these challenges. In the aerospace and defense domain, the threats are of a serious nature and could lead to a compromise of national security. Because of these associated challenges, Indian A&D is at present reluctant to adopt the cloud-based ecosystem. In the first stage, the reluctant defense and aerospace sector could make a beginning by placing non-classified or non-mission-critical generic information on the cloud. Later, with gained experience, a balanced approach can be worked out by the A&D industry, say the adoption of a hybrid model for cloud operation with data filtering, encryption, etc. New research is continuously evolving ways and means to plug cyber-security gaps that threaten confidential information and operations. Recently, the US Defense Department took a large step forward towards transitioning massive amounts of data to a commercially operated secure cloud system [11]. Such initiatives will act as a base and propose a roadmap for cloud-based systems in the aerospace and defense domain.

REFERENCES

[1] Monjur Ahmed and Mohammad Ashraf Hossain, "Cloud Computing and Security Issues in the Cloud", International Journal of Network Security & Its Applications (IJNSA), Vol. 6, No. 1, January 2014.

[2] "NIST Standards for Cloud Computing", National Institute of Standards and Technology, http://www.nist.gov/itl/cloud/upload/NIST_SP-500-291_Version-2_2013_June18

[3] "A Special Report: Cloud Computing", Aerospace Industries Association of America, Inc., 2012.

[4] Zhenying Han, "Cloud Computing Contribution to Manufacturing Industry", Master's thesis in Information Systems Science, 2015.

[5] "Forecast: Improved Economy in the Cloud: An Introduction to Cloud Computing in Government", A Microsoft US Government White Paper, March 2010.

[6] L. Columbus, "10 Ways Cloud Computing Is Revolutionizing Manufacturing", Forbes, March 2013.

[7] D. Bandyopadhyay and J. Sen, "Internet of Things: Applications and Challenges in Technology and Standardization", 2011.

[8] "Bombardier Selects Tech Mahindra as a Supplier for the C Series Aircraft's Health Management System", Marketwire Canada, 5 October 2015.

[9] F. Tao, L. Zhang, V. C. Venkatesh, Y. Luo and Y. Cheng, "Cloud Manufacturing: A Computing and Service-Oriented Manufacturing Model", Journal of Engineering Manufacture, Vol. 255(10), 2011.

[10] "How Cloud Computing Will Change the Aviation Maintenance Operation", http://www.atp.com.

[11] "Cloud Computing Strategy", US DoD, July 2012.


Artificial Intelligence based Health Analysis System

Apoorva Chauhan, Arohi Srivastava and Atrey Tripathi

Department of Computer Science and Engineering Shri Ramswaroop Memorial Group of Professional Colleges, Lucknow, India

[email protected], [email protected] and [email protected]

Abstract— Modern technologies have changed the face of healthcare services, but we still seem to be on the back foot when it comes to the prediction and prevention of diseases beforehand. The adoption of machine learning will play a vital role in changing the results of the healthcare industry by using fact-based analysis and providing necessary care that revolves around patient data.

We present an AI-based system that provides complete health assistance to a person. It analyzes the risk of various diseases that the user is likely to acquire based on his medical history and other parameters like lifestyle, eating habits, sleeping routine and exercise pattern, and provides periodic assistance and diagnosis methods to improve the user's health. This methodology can be implemented in areas like decision support systems in health care, risk management, health analysis and disease prevention. When a user enters the various parameters related to his lifestyle and a disease, the system analyzes the degree of risk and suggests the most suitable treatments for its prevention and cure. The proposed method is expected to produce accurate analysis of patient data.

Finally, we provide a model which can be used for predictive analytics by implementing machine learning algorithms to predict the likelihood (expressed as a percentage) that the user is prone to a disease.

Keywords—Machine Learning, Predictive Analysis, Health Care

INTRODUCTION

Today the healthcare industry – from public health to hospital administration to research and diagnostics – is growing at a phenomenal rate as baby boomers age and medical technologies continue to provide significant advancements. ML has a potential role to play in all these areas, while helping to address one of the industry’s most pressing concerns: rising costs.

Self-driving cars, Siri and the like are examples of machine learning and artificial intelligence in the real world, so why not use artificial intelligence in the field of healthcare? Machine Learning, Artificial Intelligence and mathematical formulations can help in diagnosing various problems in the medical field. Many researchers have devoted their efforts to developing new methods and techniques for healthcare. A system which predicts and helps analyse the health of patients, so that they can take better care of themselves, would help in decreasing the death ratio.

A lot of research has been done in this field, but most of it focuses on implementing new methods for a particular disease. In this research we concentrate on all the types of health issues a patient may have, whether cancer, allergies, diabetes, cardiovascular diseases or many more.

With predictive analytics, it can be calculated which patients are at higher risk and what could be done to reduce that risk. This helps patients move beyond a detect-and-contain paradigm towards predict-and-prevent.

LITERATURE SURVEY

• In the paper by Moustakis and Charissis, the role of ML in medical decision making was reviewed, and a detailed literature survey was presented of ML applications in medicine that could be of use to medical experts keen on applying ML algorithms to improve the efficiency and quality of medical decision-making systems. The issue of comprehensibility, i.e. how well the doctor can comprehend and thus use the results from a system that employs ML methods, is extremely vital and should be carefully considered in the evaluation and analysis.

• The work by Ian McCrae, Dr Kathryn Hempstalk and Dr Kevin Ross deeply assessed the application of machine learning to the healthcare sector. This work explored the wide possibilities of how ML can fit into healthcare, given access to the information and permission to use it. Their work


clearly shows that ML is 20% more effective than the LACE technique currently used by the US health system. They also explained three major classes of ML algorithms: classification, clustering and regression. Their main focus was on critically assessing data, since data quality is most crucial.

• The paper by Pravin Shinde and Prof. Sanjay Jadhav analysed efficient machine learning algorithms and techniques for extracting disease- and treatment-related sentences from texts in medical journals. They used NLP (Natural Language Processing) for analysis and prediction, dividing the problem into two parts: first recognising the short texts of interest, then processing the extracted information to analyse the problem.

• M. Durairaj and V. Ranjani analysed the potential of applying classification-based data mining techniques, such as rule-based methods, decision tree algorithms, Naïve Bayes and artificial neural networks (ANN), to the humongous amounts of data generated in the healthcare industry. In this work, medical issues such as cardiovascular diseases and blood pressure problems were scrutinised and assessed.

• A system was developed by Beigi et al. (2011) for classifying proteins based on features derived from Chou's pseudo amino acid composition server; it uses the Support Vector Machine (SVM) machine learning algorithm, which is arguably among the best methods for analysing cardiovascular diseases.

• The paper by Alexopoulos et al. focused on inductive ML methods in the medical analysis of cardiovascular disease. The approach was based on the See5 algorithm, an upgraded version of the C4.5 algorithm. In the reported experiments, this approach showed the capability to learn from various cases and examples and to handle missing information by building a decision tree, which could be converted to IF/THEN rules. Extra focus was given to determining the complexity and comprehensibility of the decision rules by consulting various medical experts.

PROBLEM STATEMENT

This topic came to our mind because health is one of the most serious problems of the world, and especially so in India. It is therefore very important to work in this field to improve health conditions in India as well as in the world. Many people are not even aware of the health problems they are facing until they reach a vulnerable condition. Our main motive is to develop a technology that can not only analyse a disease but also help in taking decisions

for its cure, and can also predict the future probability of a health-related problem and the measures that should be taken to prevent it. Various studies have been made on predicting and analysing the risk and cure of individual diseases, but in this paper we focus on making an AI system that proves to be a lifetime assistant for our health: one that accurately analyses the risk of all kinds of diseases, helps take decisions regarding the methods to cure and prevent them, and assists the person in improving his/her health.

PROPOSED SYSTEM

It is believed that the coming era is the era of Artificial Intelligence, as it is proving its worth in almost every field. It has also made vast waves in the field of healthcare, even leading to the claim that AI doctors will replace human doctors in the future. However, we do not aim at the complete replacement of human physicians; rather, we aim to provide them every possible help and assistance in taking the best possible decisions regarding the methods to cure and prevent diseases, as the human touch will always be required.

All this research and analysis from different perspectives is possible only because of the availability of datasets on all kinds of diseases. In our project we have covered the following modules:

1. Taking user login and his/her inputs regarding the problem he/she is facing

2. Analysing the probability of disease using machine learning on the basis of the patient's symptoms, by training the model using datasets for different diseases.

3. Taking the decision whether the patient is suffering from any disease or not. If the probability that the patient is suffering from the disease is high, we analyse the best method to cure it; if the disease is only likely to happen to the patient, we provide measures to prevent it.

4. Finally, we perform periodic analysis of the person so that the patient may remain free from all kinds of disease.

Basically, we aim not only at analysing and preventing disease but at lifetime assistance, by periodically taking the person's health details and updating his/her health analysis.
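The four modules above can be sketched as a minimal pipeline skeleton. Everything here is an illustrative placeholder: the function names, the sample inputs, the 0.72 probability and the 0.5 risk threshold are our assumptions, not the implemented system.

```python
# Minimal sketch of the four proposed modules; all values are placeholders.
def take_user_data():
    # Module 1: login and symptom/lifestyle inputs (hypothetical fields)
    return {"age": 45, "symptoms": ["fatigue"], "sysBP": 150}

def analyse_risk(user_data):
    # Module 2: a trained per-disease model would return P(disease);
    # a fixed placeholder probability stands in for it here
    return 0.72

def decide(risk, threshold=0.5):
    # Module 3: branch between cure suggestions and preventive measures
    return "suggest treatment" if risk >= threshold else "suggest prevention"

def periodic_update(user_data):
    # Module 4: re-run the analysis on refreshed user data
    return analyse_risk(user_data)

risk = analyse_risk(take_user_data())
print(decide(risk))   # -> suggest treatment
```

The threshold-based branch in `decide` mirrors the cure-vs-prevention decision described in module 3.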

Taking person’s data :-

In the first step we take data from the person, such as his/her age, previous health-related reports and other symptoms, if he/she is facing any health-related issue.

Analyzing risk of disease using machine learning :-

In this step we apply machine learning to the existing datasets for a particular disease and train our model on them. Then we feed the person's symptoms to the model


to analyse the risk or probability of the disease. There are various machine learning algorithms available for this task. According to some surveys, cardiovascular diseases, cancer, diarrhoea, chronic respiratory diseases and diabetes have been major causes of death in India. Some of the machine learning algorithms we have applied to predict the probability of a particular disease occurring are:

For predicting cardiovascular disease:

Cardiovascular diseases (CVDs) include diseases related to the heart or blood vessels, such as coronary artery disease, angina and heart attacks. They are presently among the major causes of death not only in India but in the whole world. We can use the Logistic Regression technique for analysing the probability of CVDs by passing in various variables.

The basic formula for logistic regression is:

P(Y = 1) = 1 / (1 + e^(-z))

where Y is the binary dependent variable and e is the base of the natural logarithm, and

z = β0 + β1X1 + β2X2 + … + βpXp

where β0 is the constant (intercept), βj are the coefficients and Xj are the predictors (j = 1, 2, 3, …, p).

Variables :

Each attribute is a potential risk factor. There are demographic, behavioural and medical risk factors.

Demographic:

• sex: male or female (Nominal)

• age: age of the patient (Continuous; although the recorded ages have been truncated to whole numbers, the concept of age is continuous)

Behavioural:

• currentSmoker: whether or not the patient is a current smoker (Nominal)

• cigsPerDay: the number of cigarettes the person smoked on average in one day (can be considered continuous, as one can have any number of cigarettes, even half a cigarette)

Medical (history):

• BPMeds: whether or not the patient was on blood pressure medication (Nominal)

• prevalentStroke: whether or not the patient had previously had a stroke (Nominal)

• prevalentHyp: whether or not the patient was hypertensive (Nominal)

• diabetes: whether or not the patient had diabetes (Nominal)

Medical (current):

• totChol: total cholesterol level (Continuous)

• sysBP: systolic blood pressure (Continuous)

• diaBP: diastolic blood pressure (Continuous)

• BMI: Body Mass Index (Continuous)

• heartRate: heart rate (Continuous; in medical research, variables such as heart rate, though in fact discrete, are considered continuous because of the large number of possible values)


• glucose: glucose level (Continuous)


The result is obtained with 88.14% accuracy.
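As a sketch of the logistic-regression step, the snippet below fits the model by gradient descent on synthetic data. The predictors, coefficients and the single test patient are illustrative stand-ins, not the dataset or trained model behind the 88.14% figure reported above.

```python
# Sketch: logistic regression P(Y=1) = 1/(1+e^(-z)), fitted by gradient
# descent on synthetic data (numpy only; all values are placeholders).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n = 400
# Hypothetical standardized predictors, e.g. [age, sysBP, cigsPerDay]
X = rng.normal(size=(n, 3))
true_beta = np.array([1.2, 0.8, 0.5])      # synthetic βj
true_b0 = -0.4                             # synthetic β0
y = (rng.random(n) < sigmoid(X @ true_beta + true_b0)).astype(float)

# Gradient descent on the average negative log-likelihood
beta, b0 = np.zeros(3), 0.0
for _ in range(2000):
    p = sigmoid(X @ beta + b0)
    beta -= 0.5 * (X.T @ (p - y) / n)
    b0 -= 0.5 * np.mean(p - y)

# Predicted risk for one hypothetical patient's standardized features
risk = sigmoid(np.array([1.0, 1.5, 0.3]) @ beta + b0)
print(f"predicted risk: {risk:.2f}")
```

The loop implements the z = β0 + Σ βjXj / sigmoid formulation directly; in practice a library implementation (e.g. scikit-learn's `LogisticRegression`) would be used instead.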

For predicting Cancer:-

Cancer is another major health issue today. People are mostly unaware of the disease until it reaches a higher stage, so our goal is to predict it even before it is likely to happen, so that one can take preventive measures.

To pinpoint the most relevant risk factors of cancer as well as to predict the overall risk, we can use a Support Vector Machine. Although this method takes longer to train the model, its results are more accurate than those of many other algorithms.

SVM basically uses a classifying approach: it classifies the dataset into two groups and then decides whether the person's symptoms lie in the first group or the other. The method can be extended further by classifying the dataset into more than one group. It proceeds by dividing the subjects into two groups according to a decision value ai defined on the traits Xij, which can be written as:

ai = Σj wj Xij

where wj is the weight put on the jth trait to manifest its relative importance in affecting the outcome among the others.

The decision is then taken from ai: if ai > 0 the patient falls into group 1, and if ai < 0 the patient falls into group 2.
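A rough sketch of this decision rule follows, with a linear SVM trained by sub-gradient descent on the hinge loss (the paper does not specify the training procedure, so that choice, the synthetic traits and the group separation are all our assumptions).

```python
# Sketch: SVM-style two-group decision rule a_i = sum_j w_j * X_ij + b,
# trained as a linear SVM via hinge-loss sub-gradient descent on
# synthetic traits (group labels: +1 -> group 1, -1 -> group 2).
import numpy as np

rng = np.random.default_rng(1)
# Two synthetic groups of subjects, separated along the first trait
X = np.vstack([rng.normal(-1.5, 1.0, (100, 3)), rng.normal(1.5, 1.0, (100, 3))])
y = np.array([-1.0] * 100 + [1.0] * 100)

w, b, lam, lr = np.zeros(3), 0.0, 0.01, 0.01
for _ in range(500):
    margins = y * (X @ w + b)
    mask = margins < 1                 # points violating the margin
    grad_w = lam * w                   # regularization term
    if mask.any():
        grad_w -= (y[mask, None] * X[mask]).mean(axis=0)
        b += lr * y[mask].mean()
    w -= lr * grad_w

# Decision value a_i for a new subject's traits
a = float(np.array([2.0, 0.0, 0.0]) @ w + b)
print("group 1" if a > 0 else "group 2")
```

A subject well inside the positive region yields a > 0 and is assigned to group 1, matching the sign rule above.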

For predicting Diabetes:

Diabetes is another serious, growing health problem faced by many people these days. We have applied a multivariate regression technique for its analysis.

Multivariate Regression is one of the simplest Machine Learning algorithms. It belongs to the class of Supervised Learning algorithms, i.e., we are provided with a training dataset. In multiple linear regression, we use multiple independent variables (x1, x2, …, xp) to predict y. Visual understanding of multiple linear regression is a bit more complex and depends on the number of independent variables (p). If p = 2, the (x1, x2, y) data points lie in a 3-D coordinate system (with x, y and z axes) and multiple linear regression finds the plane that best fits the data points.

Example of a multivariate regression graph

Page 130: Proceedings of Second International Conference on ...ISBN: 978-93-5291-969-7 ICT4 Proceedings of Second International Conference on Computing, Communication and Control Technology

Apoorva Chauhan, Arohi Srivastava & Atrey Tripathi

Proceedings of IC4T, 2018112

For greater numbers of independent variables, visual understanding is more abstract. For p independent variables, the data points (x1, x2, x3, …, xp, y) exist in a (p + 1)-dimensional space. What really matters is that the linear model (which is p-dimensional) can be represented by the p + 1 coefficients β0, β1, …, βp, so that y is approximated by the equation y = β0 + β1x1 + β2x2 + … + βpxp.

Finding the features on which a response variable depends (or not) is one of the most important steps in Multivariate Regression. To make our analysis simple, we assume that the features on which the response variable is dependent are already selected.

The outcome of the prediction will depend on the following variables:

The result is obtained with 86.58% accuracy.
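A minimal sketch of the multiple-linear-regression fit via least squares; the predictors and coefficients are synthetic placeholders for the diabetes variables, not the model behind the 86.58% figure.

```python
# Sketch: multiple linear regression y = β0 + β1*x1 + ... + βp*xp,
# fitted by ordinary least squares on synthetic data (numpy only).
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 3
X = rng.normal(size=(n, p))                  # predictors x1..xp
beta_true = np.array([0.5, 1.2, -0.7, 2.0])  # [β1, β2, β3, β0], synthetic
y = X @ beta_true[:p] + beta_true[p] + rng.normal(scale=0.1, size=n)

# Append a column of ones so the intercept β0 is estimated jointly
A = np.hstack([X, np.ones((n, 1))])
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print("estimated coefficients:", np.round(beta_hat, 2))
```

With low noise, the recovered coefficients closely match the generating ones, illustrating the plane/hyperplane fit described above.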

Taking the decision :-

In this step our system decides whether the patient is suffering from a disease or not and takes action according to the result.

If the person is suffering from the disease, the system's task is to decide on the best method to cure it, again using machine learning. If the person is only likely to get the disease, it gives preventive measures so that he/she can protect himself/herself from it.


Periodic updating of the patient's data :-

Finally, our system aims to periodically update the user's medical data and to periodically analyse his/her health so that it may be maintained. In this way we want to give lifetime assistance to the person.

CONCLUSION

The presented system and methodology produce results from parameters based on the user's lifestyle, i.e. food habits, sleeping routines and work patterns, and then, on the basis of the various machine learning algorithms implemented, decide which diseases are likely to hit the user, together with their symptoms, causes and treatment. Outcomes evidently reveal that the technique used in the proposed work reduces the time and workload of medical experts in analysing the obtained data about certain diseases and their treatment in order to take optimised decisions about patient care, monitoring and treatment. The text-mined document can be used in the healthcare sphere: the patient can use this very document to clearly understand a particular disease, its symptoms, side effects, medicines and treatment methods.

Machine Learning uses analytics that help in identifying patterns, while the predictive analytics used in this paper provide answers to the previously unsolved prediction of diseases based on outcomes. Our approach here was to build a framework that helps create an automated decision model for optimised results, and the continual attempts and efforts of researchers in this field will surely build such a system with the best possible efficiency. In Medical Science, various parameters like age, sex, lifestyle, mental status and eating habits are primary causes that punctuate the biological functioning of the body and gradually lead to diseases that intensify the possibility of acquiring sudden chronic or serious health problems.

System Architecture of the Proposed System

We propose an all-in-one medical kit which, based on factors related to the disease, predicts the patient's health, provides the correct medication and asks for regular updates to monitor health efficiently and effectively. A doctor can also be recommended if the situation is critical, based on the checked parameters.

FUTURE WORKS

• Improving Performance:

Advancements in technology will help us improve our system: more advanced and more accurate prediction of diseases with the help of advanced processing, such as detecting problems from facial expressions, scanning the stomach to detect problems related to it, and many more.

• Personalised Medicine:

No two patients can be given the same dose of drugs. If we know the patient's history and genetics, the correct dose of a drug can be prescribed for the patient. In personalised medicine, a patient's health and disease treatment are managed on the basis of their medical history, genetics, diet and past condition.

• Empower the Health Sector:

In a country like India there is a shortage of doctors and nurses: there are only 4.8 doctors per 10,000 patients. Hence this system will help not only in predicting patients' diseases but also in decreasing doctors' workload. It will also bridge the gap between doctors and patients: a patient often hesitates to describe the problems he is suffering from, and this system can overcome such drawbacks.

• Treatment using IoT and Robotics:

Automatic detection of blood glucose and injection of insulin when needed, without disturbing the user's lifestyle. A machine which determines the dose of antibiotics needed as per the patient's diet and sleep. Combining visual data and motor patterns in a device which could help in carrying out successful surgery. Training the machine in such a way that it can perform better than any living team of doctors.

The future work of this research has a lot of potential to analyse various problems and treat them accordingly.


REFERENCES

[1] Pravin Shinde and Sanjay Jadhav, "Health Analysis System Using Machine Learning", International Journal of Computer Science and Information Technologies, Vol. 5 (3). [Online]. Available: https://pdfs.semanticscholar.org

[2] Ian McCrae, Kathryn Hempstalk and Kevin Ross, "Introduction to Machine Learning in Healthcare". [Online]. Available: http://web.orionhealth.com

[3] G. Charissis, C. Melas, V. Moustakis and L.A. Zampetakis, "Organizational implementation of health-care information systems", ch. 20 in Handbook of Research on Developments in e-Health and Telemedicine: Technological and Social Perspectives (M.M. Cruz-Cunha, A.J. Tavares and R. Simoes, Eds.), pp. 419-450, New York, Medical Information Science Reference, 2010.

[4] R. Subha, K. Anandakumar and A. Bharathi, "Study on Cardiovascular Disease Classification Using Machine Learning Approaches", International Journal of Applied Engineering Research, Vol. 11, No. 6, 2016.

[5] P. Saranya and B. Satheeskumar, "A Survey on Feature Selection of Cancer Disease Using Data Mining Techniques", IJCSMC, Vol. 5, Issue 5, pp. 713-719, May 2016.

[6] L. Hunter and K.B. Cohen, "Biomedical Language Processing: What's Beyond PubMed?".

[7] Kwok Tai Chui, Wadee Alhalabi, Sally Shuk Han Pang, Patricia Ordóñez de Pablos, Ryan Wen Liu and Mingbo Zhao, "Disease Diagnosis in Smart Healthcare: Innovation, Technologies and Applications". [Online]. Available: www.mdpi.com/2071-1050/9/12/2309/pdf


Fusion of Remote Sensing Images using Anisotropic Diffusion Technique

1Anil Singh, 1Manish Kumar, 1Aman Gautam and 2Ashutosh Singhal

1Dept. of Electronics and Comm. Engg., MMMUT, Gorakhpur (U.P.), India. [email protected]

2Dept. of Electronics and Comm. Engg., SRMGPC, Lucknow, India. [email protected], [email protected] and [email protected]

Abstract – Merging of remote sensing images having complementary data has been performed in this paper by utilizing the Anisotropic Diffusion technique. The proposed fusion method involves decomposing the images into sub-bands by anisotropic diffusion and then synthesizing the decomposed images into a single fused image by the max-min rule. The results show improvement in the visual quality of the fused image. To validate the visual analysis of the obtained image, performance parameters such as entropy (E) and Structural Similarity (SSIM) have been evaluated.

Keywords–Anisotropic Diffusion, Image Fusion, Structural Similarity (SSIM), Entropy (E).

I. INTRODUCTION

Remote sensing images [17]-[19] are generally the outcome of coherent imaging systems like SAR, laser-illuminated imagers, etc. These images require, on one hand, high spectral resolution in order to better discriminate land covers while, on the other hand, high spatial resolution is required for precise information on texture and shapes [1]. Each kind of imaging sensor works in a particular operating range and environment. Therefore, it is practically impossible to have all information contained in a single image. This incapability of sensors to acquire all information in a single image motivates image fusion. In order to provide a compact representation of the desired remote area with minimum loss of information, all source images must be synthesized into a single image which contains all the information. Image fusion is a technique of merging images having complementary information into a single fused image [2]. The fused image has improved spatial resolution, enhances visual interpretation and contains information from both source images. Literature of the past few decades provides various studies discussing a number of multi-resolution image decomposition methodologies such as Discrete Wavelet Transform (DWT) [3], Shearlet Transform [12], and

Laplacian Pyramid (LP) [4]-[13], which are frequency-domain decompositions, while time-domain decomposition methods such as Empirical Mode Decomposition (EMD) [5] are also employed. These methods, however, produce artifacts and lack directional information and anisotropy in the fused image. The AD filter [6] removes these shortcomings by smoothing the image while keeping track of edge-containing and homogeneous regions. It is an iterative technique which segregates image pixels into an approximate and a detail layer. After decomposition of the image into two layers, fusion techniques such as Independent Component Analysis (ICA) or Principal Component Analysis (PCA) [7] can be utilized to get a fused image. Fusion techniques such as maximum, Intensity Hue Saturation (IHS), averaging, minimum and weighted-averaging fusion have also been used. In addition, Guided and Bilateral filters [8], which are smoothing, edge-preserving filters, have been widely used for image fusion. However, these techniques [7] have limitations such as halo effects and gradient-reversal artifacts. Godse et al. provide a detailed comparison between the wavelet and Fourier transforms and suggest that the wavelet transform is preferred over the Fourier transform, as wavelets sharpen the image, have good spectral quality and are multi-resolution. They then utilized wavelets with the maximum fusion rule for fusing source images, but the resultant images are blurred and show reduced contrast [9]. Sadhasivam et al. [10] carried out fusion of source images using PCA with the maximum fusion rule. However, the final fused image shows less illumination and contrast. It can be gathered from the above discussion that wavelet- and PCA-based fused images have good spectral quality and are multi-resolution but are accompanied by artifacts and redundant information.
Therefore, it is necessary to eliminate this redundant information and these artifacts from the fused image in order to retain only the necessary information from both source images. AD utilizes an intra-region smoothing approach which preserves edges, reduces


redundant information and removes artifacts [7]. The maximum- and minimum-based approach extracts the strongest features from the source images and suppresses unnecessary information in the resultant image, acting as a feature-enhancement technique. Hence it can be seen that the combination of AD as the decomposition technique and minimum-maximum as the fusion rule can be a better approach for the fusion of remote sensing images. In the proposed paper, AD is used to decompose the source images into approximate and detail layers. The max fusion rule is then enforced on the detail layer while the min fusion rule is applied to the approximate layer to produce the final resultant image. The proposed fusion results using AD and max-min are promising when evaluated using performance parameters such as SSIM and E [16].

II. PROPOSED FUSION TECHNIQUE

The proposed fusion approach comprises two stages: decomposition of the source images into sub-bands, and fusion of the sub-bands to obtain the fused image. The anisotropic diffusion method is employed for sub-band decomposition. The max fusion rule is applied to the detail layer and the min fusion rule to the approximate layer to carry out the fusion of the two layers. This yields a final fused image containing both the minimum and maximum intensities of the corresponding pixels in the source images, highlighting the components carrying maximum information and suppressing redundant information.

A. Anisotropic diffusion

AD filtering is the original work of Perona and Malik [6] and is capable of enhancing boundary details within the data. A spatially varying diffusion coefficient is chosen so as to favour intra-region smoothing over inter-region smoothing. A monotonically decreasing conduction function classifies the image into homogeneous and heterogeneous regions, while true and false edges are differentiated by the gradient magnitude. This results in restoration of image content and edge preservation, which are important for interpretation and data extraction. The diffusion is governed by:

∂I(x, y, t)/∂t = div( g(||∇I||) ∇I )

where g(·) denotes the edge-stopping (conduction) function and ||∇I|| the gradient magnitude. It has been observed that conventional AD performs better for images corrupted by additive noise. Moreover, the image gets blurred with additional iterations, which introduces blocky effects [11].
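A compact sketch of Perona-Malik diffusion using 4-neighbour finite differences and an exponential conduction function; the parameters K, λ and the iteration count are illustrative choices, not the values used in the paper's experiments.

```python
# Sketch: Perona-Malik anisotropic diffusion, dI/dt = div(g(||grad I||) grad I),
# with g(s) = exp(-(s/K)^2); parameters here are illustrative.
import numpy as np

def anisotropic_diffusion(img, n_iter=10, K=20.0, lam=0.2):
    I = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours
        dN = np.roll(I, -1, axis=0) - I
        dS = np.roll(I, 1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # Edge-stopping conduction coefficients g(||grad I||)
        cN, cS = np.exp(-(dN / K) ** 2), np.exp(-(dS / K) ** 2)
        cE, cW = np.exp(-(dE / K) ** 2), np.exp(-(dW / K) ** 2)
        # Explicit update (stable for lam <= 0.25 with 4 neighbours)
        I += lam * (cN * dN + cS * dS + cE * dE + cW * dW)
    return I

# A noisy step edge: diffusion smooths the flat regions but keeps the edge,
# since g(.) is small where the gradient (the edge) is large
img = np.hstack([np.zeros((32, 16)), 255.0 * np.ones((32, 16))])
img += np.random.default_rng(3).normal(scale=5.0, size=img.shape)
smoothed = anisotropic_diffusion(img)
print(f"noise std before/after: {img[:, 4:12].std():.1f} / {smoothed[:, 4:12].std():.1f}")
```

The noise standard deviation in the flat region drops markedly while the 0/255 edge survives, which is the intra-region smoothing behaviour described above.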

B. Maximum Fusion rule

The max rule selects, for every pair of corresponding pixels in the input images, the pixel of maximum value as the resultant pixel of the fused image [11]; it chooses only the maximum-valued coefficients from the coefficient matrix. Thus, effectively, every selected pixel of the fused image is the pixel having maximum intensity among the corresponding pixels of the source images.

f(x, y) = max[ a(x, y), b(x, y) ]

One of the advantages of the max fusion rule is that no trade-off is made over the good information of the input images. This fusion rule helps in improving the contrast of the fused image.
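The max and min fusion rules reduce to per-pixel maxima and minima, which can be sketched as follows (the tiny 2×2 layers are illustrative):

```python
# Sketch: per-pixel max/min fusion rules on decomposed layers.
import numpy as np

def fuse_max(a, b):
    return np.maximum(a, b)    # detail layers: f(x,y) = max[a(x,y), b(x,y)]

def fuse_min(a, b):
    return np.minimum(a, b)    # approximate layers: the analogous min rule

a = np.array([[10.0, 200.0], [90.0, 40.0]])   # layer from source image A
b = np.array([[50.0, 120.0], [60.0, 80.0]])   # layer from source image B
print(fuse_max(a, b))   # per-pixel maxima: 50, 200, 90, 80
```

In the proposed pipeline, `fuse_max` would be applied to the detail layers and `fuse_min` to the approximate layers before recombining them into the fused image.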

III. RESULTS AND DISCUSSIONS

The proposed fusion algorithm has been applied to two sets of remote sensing images, listed below. Figs. 1(a) and 1(b) are scenes from a rough patch of ground with roads. Vegetation is visible in 1(a), whereas 1(b) focuses on the roads. The fused image 1(c) shows both the vegetation cover and the roads. Fig. 2(a) is an image of a forest with a building, roads and a statue in front of the building. The building and forest are visible in Fig. 2(a), while 2(b) focuses on the statue in front of the building. When these two images are fused, Fig. 2(c) is obtained, in which all the characteristics are included.

To validate the obtained fusion results, performance parameters such as entropy (E) and structural similarity (SSIM) [14] are calculated.
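The two metrics can be sketched as follows. Note that the SSIM here is a simplified single-window (global) version of the reference index [14], which is normally computed over local windows, so the numbers are illustrative only; the test images are synthetic.

```python
# Sketch: entropy (E) from the grey-level histogram, plus a simplified
# single-window SSIM (the standard SSIM averages this over local windows).
import numpy as np

def entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))    # bits per pixel

def ssim_global(x, y, L=255.0):
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + C1) * (2 * cov + C2)) /
                 ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))

rng = np.random.default_rng(4)
src = rng.integers(0, 256, (64, 64)).astype(float)      # synthetic source
fused = np.clip(src + rng.normal(scale=2.0, size=src.shape), 0, 255)
e, s = entropy(fused), ssim_global(src, fused)
print(f"E = {e:.2f} bits, SSIM = {s:.3f}")
```

Higher E indicates more information content in the fused image, while SSIM near 1 indicates strong structural agreement with a source image.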


IV. CONCLUSION

Anisotropic diffusion based image fusion using the maximum fusion rule has been presented in this paper. Anisotropic diffusion is employed on the source images so as to preserve high-frequency information as well as contrast. The maximum fusion rule extracts vital information from both images and synthesizes it into the fused image. Further improvement can be made by using other fusion algorithms to enhance the obtained results.

V. REFERENCES

[1] W. Wenbo, Y. Zing and K. Tingjung, “Study of Remote Sensing Image Fusion and its Application in Image Classification,” The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 37, No. 7, pp. 1141-1146, January 2008.

[2] A. Krishn, V. Bhateja, Himanshi and A. Sahu, “Medical Image Fusion using Combination of PCA and Wavelet Analysis,” Proc. 3rd (IEEE) International Conference on Advances in Computing, Communications and Informatics (ICACCI-2014), Gr. Noida (U.P.), India, pp. 986 - 991, September 2014.

[3] A. Singh, V. Bhateja, A. Singhal and S. C. Satapathy, “Multi-focus Image Fusion Method Using 2D-Wavelet Analysis and PCA,” Information and Communication Technology for Intelligent Systems (ICTIS 2017), Vol. 1, pp. 616-623, August 2017.

[4] W. Wang and F. Chang, “A Multi-Focus Image Fusion Method Based on Laplacian Pyramid”, Journal of Computers, Vol. 6, No. 12, pp. 2559-2566, 2011.

[5] Hui, C. S., Hongbo, S., Renhua, S., Jing, T, “Fusing remote sensing images using à trous wavelet transform and empirical mode decomposition,” In: Pattern Recognition Letters, Vol. 29, No. 3, pp. 330-342, 2008.

[6] P. Perona and J. Malik, “Scale Space and Edge Detection using Anisotropic Diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 7, pp. 629-639, July 1990.

[7] A. Srivastava, V. Bhateja and A. Moin, “Combination of PCA and Contourlets for Multispectral Image Fusion,” Proc. (Springer) First International Conference on Data Engineering and Communication Technology (ICDECT-2016), Pune, India, Vol. 2, pp. xx, March, 2016.

[8] G. Geetha, S. R. Mohammad and Y. S. S. R. Murthy, “Multifocus Image Fusion Using Multiresolution Approach With Bilateral Gradient based Sharpness Criterion”, Proc. of ICAIT, Hyderabad, India, pp. 103-115, June 2012.

[9] Godse, D. A., Bormane, D. S.: Wavelet based Image Fusion using Pixel based Maximum Selection rule, In: International Journal of Engineering Science and Technology, vol. 3, no. 7, pp. 5572--5578, (2011).

[10] Sadhasivam, S. K., Keerthivasan, M. K., Muttan, S.: Implementation of Max Principle with PCA in Image Fusion for Surveillance and Navigation Application, In: Electronic Letters on Computer Vision and Image Analysis, Vol. 10, No. 1, pp. 1--10, (2011).

[11] A. Singhal, V. Bhateja, A. Singh, S. C. Satapathy, “Visible-Infrared Image Fusion Method Using Anisotropic Diffusion,” Intelligent Computing and Information and Communication. Advances in Intelligent Systems and Computing, Vol. 673, pp 525-532, January 2018.

[12] V. Bhateja, A. Srivastava, A. Moin, A. L. Ekuakille, “Multispectral medical image fusion scheme based on hybrid contourlet and Shearlet transforms domain” , Review of scientific Instrument, Vol. 89, Issue 8, August 2018.

[13] A. Sahu, V. Bhateja, Himanshi and A. Krishn, “Medical Image Fusion using Laplacian Pyramids, ” International Conference on Medical Imaging, m-Health and Emerging Communication Systems (MedCom), India, pp. 448-453, November 2014.

[14] V. Bhateja, A. Srivastava and A. Kalsi, “Fast SSIM Index for Color Images Employing Reduced-Reference Evaluation,” Proc. of (Springer) International Conference on Frontiers in Intelligent Computing Theory and Applications (FICTA 2013), Bhubaneswar, India, vol. 247, pp. 451- 458, November 2013.

[15] V. Bhateja, M. Misra, S. Urooj and A. Lay-Ekuakille, “Bilateral Despeckling Filter in Homogeneity Domain for Breast Ultrasound Images,” Proc. 3rd (IEEE) International Conference on Advances in Computing, Communications and Informatics (ICACCI-2014), Gr. Noida (U.P.), India, pp. 1027-1032, September 2014.

[16] A. Sahu, V. Bhateja, Himanshi and A. Krishn, “Medical Image fusion in Wavelets and Ridgelets domain: A comparative Evaluation,” International Journal of Rough Sets and Data Analysis, 2015.

[17] D. Jiang, D. Zhuang, Y. Huang and J. Fu, “Survey of Multispectral Image Fusion Techniques in Remote Sensing Applications,” INTECH Open Access Publisher, August 2011.

[18] W. Wenbo, Y. Zing and K. Tingjung, “Study of Remote Sensing Image Fusion and its Application in Image Classification,” The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 37, No. 7, pp. 1141-1146, January 2008.

[19] C. Pohl and J. L. Van Genderren, “Multisensor Image Fusion In Remote Sensing: Concepts, Methods and Applications,” International Journal of Remote Sensing, Vol. 19, No. 5, pp. 823-854, July 1998.

Images    E         SSIM
1(c)      7.1145    0.3211
2(c)      6.2215    0.7660
3(c)      5.6863    0.7963
4(c)      6.9491    0.6928


Development of Small Size Window Cleaning Robot using Wall Climbing Mechanism

Prof. S.C. Tewari, Sakshi Pandey, Saumya Singh and Saiyed Nabeel
ECE Department, SRMCEM, Lucknow, India

[email protected], [email protected], [email protected] and [email protected]

Abstract- A window cleaning robot is capable of climbing a vertical surface as well as cleaning glass walls. This paper deals with the design and development of a window cleaning robot using an 8-bit microcontroller along with an Electric Ducted Fan (EDF). The EDF holds the system against the wall surface, while the movements are controlled from a Bluetooth-enabled smartphone. The entire mechanical hardware is designed by assembling various sub-units, which are described briefly. The window cleaning robot can be used in homes, offices and large buildings. The results of the experimental work were demonstrated after design, fabrication and integration.

Keywords― Electric Ducted Fan (EDF), Electronic Speed Controller (ESC), LiPo battery, microcontroller.

INTRODUCTION

A wall climbing robot is a very useful device. It can be used in the maintenance, inspection and cleaning of high-rise buildings, bridges, malls, etc. It increases operational effectiveness and reduces the risk to human life involved in such operations. Cleaning is an essential and unavoidable daily routine. However, it becomes much more difficult in areas that are not easy to reach, such as high-rise buildings with glass windows and narrow towers. Earlier, temporary scaffoldings were constructed or gondola systems were used for workers to stand on at high altitude while cleaning the outer surface of tall buildings. This increased the cost and put workers at risk of falling from life-threatening heights. There is therefore a need for a cleaning system that can be used on both ground and vertical surfaces and can be controlled from a distance in less time. Some customized window cleaning machines have already been put into practical use in the field of building maintenance; however, most of them are mounted on the building from the beginning and are very expensive. Requirements for small, lightweight and portable window cleaning robots are therefore growing in the field of building protection [1].

The main aim of this paper is to design a window cleaning robot that climbs a wall or glass surface and is operated manually from a smartphone at a distance. The cleaning device must be held against the glass/wall surface and work against gravity. For this purpose an Electric Ducted Fan is used to create the suction that holds the device against the vertical surface. The basic requirements of the window cleaning device are that it should be light in weight and small in size for portability.

I. PROPOSED METHODOLOGY

The mechanism of the designed system is divided into two parts:

• Wall climbing mechanism

• Cleaning mechanism

The wall climbing mechanism involves use of Electric Ducted Fan for holding the device against the wall and movement is controlled by a smart phone. The cleaning mechanism involves use of flat sponge made mop, microfiber fabric cloth and a cleaning liquid.

• Wall Climbing Mechanism

Figure 1: Block representation of the mechanical portion of the wall climbing mechanism


The wall climbing mechanism has two subparts:

• Mechanical Part

• Electrical Part

The block representation of the mechanical part is shown in figure 1. For the mechanical part, a chassis of dimensions 160 mm × 120 mm is used. A vacuum is created in the middle of the body, where an EDF is placed to hold the robot on the vertical surface. The diameter of the vacuum chamber is 80 mm. The EDF consists of a ducted propeller and a brushless DC (BLDC) motor. The suction mechanism of the EDF is shown in figure 4. The EDF, with its high-speed BLDC motor inside the suction cup, expels air out of the vacuum chamber, so the air pressure inside the chamber drops below atmospheric pressure. The higher atmospheric pressure then exerts a force that presses the robot against the wall [2]. A 3300 KV BLDC motor with a 60 A ESC, powered by a 3-cell Lithium Polymer battery, is used to control the motor speed according to the required adhesion. Four micro metal gear motors are used for movement of the robot on the vertical plane. Pololu wheels are used for greater friction and durability; these wheels fit the output shafts of the micro metal gear motors. Figure 2 illustrates the whole setup, how the connections are made and how the robot moves on the vertical plane.
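The adhesion produced by this pressure difference follows F = ΔP · A over the 80 mm chamber described above. The sketch below is purely illustrative; the 500 Pa pressure drop is an assumed figure, not a measurement from the paper.

```python
import math

def adhesion_force(pressure_drop_pa, chamber_diameter_m):
    """Normal force pressing the robot to the wall: F = delta_P * A."""
    area = math.pi * (chamber_diameter_m / 2) ** 2   # circular chamber area
    return pressure_drop_pa * area

# 80 mm chamber diameter from the paper; 500 Pa drop is an assumed value
f = adhesion_force(500.0, 0.080)   # roughly 2.5 N of adhesion
```

Such a quick estimate shows why the EDF speed (and hence the pressure drop) must be adjustable via the ESC to match the weight and friction requirements of the robot.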

Figure 2: Flowchart of the wall climbing robot.

The electrical part of the robot is based on an ATmega microcontroller and wireless technology [2]. An Android application, Bluetooth Terminal HC-05, is used to send commands from the Android device, and an HC-05 Bluetooth module receives them. The ATmega microcontroller is connected to this module in order to establish a wireless Bluetooth connection with a PC or Bluetooth-enabled smartphone. The application provides the appropriate commands for controlling the motion of the robot on the glass surface.

An L293D motor driver shield is used to drive the micro metal gear motors according to the commands; two motor driver shields are used to drive the four motors.

The Bluetooth Terminal HC-05 software provides 4 different command modes, which help in precisely controlling the robot in the vertical plane. Figure 5 shows the control procedure of the electrical components.
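As an illustration of this command flow, here is a Python stand-in for the firmware's dispatch logic. The paper does not list the actual command characters, so the letters and motor-direction mapping below are assumptions made for the sketch.

```python
# Hypothetical command set; the paper does not specify the exact characters.
COMMANDS = {
    "F": ( 1,  1),   # both motor sides forward
    "B": (-1, -1),   # both backward
    "L": (-1,  1),   # spin left
    "R": ( 1, -1),   # spin right
    "S": ( 0,  0),   # stop
}

def handle_command(ch):
    """Map one received Bluetooth byte to (left, right) motor directions."""
    return COMMANDS.get(ch.upper(), (0, 0))   # unknown bytes stop the robot
```

On the real ATmega firmware the returned pair would set the direction pins of the two L293D shields; defaulting unknown bytes to a stop is a safe-failure choice for a wall-mounted robot.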

Cleaning Mechanism

For the cleaning mechanism, two sponges are attached at the back side of the device below the hylam sheet; the sponges are fixed with a glue gun. A cannula drip is attached to one of the sponges and supplies water/cleaning fluid to it drop by drop. The sponge soaks up the fluid and becomes wet, so that as the robot moves over the glass surface the wet sponge cleans it. Figure 6 shows the block diagram of the cleaning mechanism. The cannula drip is connected, via a drip chamber, to a bottle containing the water/cleaning fluid. The drip chamber allows an estimate of the rate at which the fluid is administered, and the roller clamp attached to the tube is used to regulate, stop or start the flow. A dry sponge is attached at the back of the device below the

Figure 5: Main Controlling Unit electrical components.

Figure 3: Block representation of the electrical portion of the wall climbing robot.


hylam sheet for proper cleaning; it dries the window/glass surface after the wet sponge has cleaned it.

MECHANICAL HARDWARE

The various hardware components required for making the robot are described below in tabular form.

S.No.  Component              Description               Quantity
1      Microcontroller IC     Atmega328                 1
2      EDF                    HI 7008                   As per requirement
3      Motor                  Micro Metal Gear motor    4
4      Programmer Board       USBASP                    1
5      Resistors              Carbon film               As per requirement
6      Battery                LiPo                      2
7      LEDs                   Red, Green                As per requirement
8      Bluetooth Module       HC-05                     1
9      Base                   Chassis                   1
10     ESC                    60A                       1
11     Voltage Regulator IC   7805                      1
12     Jumper Wires                                     As per requirement
13     Bread Board Wire                                 5 m
14     Soldering Iron         Soldron                   1
15     Soldering Wire         Bharti                    1

Figure 6: Block Diagram of Cleaning Mechanism

The wall climbing process is mainly achieved by creating suction using the electric ducted fan. Figure 7 shows the initial configuration as well as testing of the EDF.

Figure 7: Initial configuration and testing of EDF

Figure 8: Wall climbing robot after assembling all the components

The microcontroller unit and EDF unit are assembled together as shown in figure 8. All the components are supplied from a single source, i.e., the LiPo battery. The working model of the final window cleaning robot is shown in figure 9.

Figure 9: Final design of window cleaning robot.


RESULT ANALYSIS

Initially, the suction of the electric ducted fan was checked against the wall using a LiPo battery providing a 12 V, 3000 mAh supply.

The robot moves smoothly on the window, adhering by means of the suction created by the electric ducted fan. The project intends to replace or minimize human involvement in window cleaning by substituting a small cleaning robot for office windows. Its qualities are: portable, miniature size, lightweight, and able to clean all the corners of office and high-rise building windows.

CONCLUSION

The construction and maintenance of smart buildings with glass-wall frontages requires washing, cleaning and periodic inspection services. Cleaning these frontages manually entails high costs, as the working conditions are difficult and dangerous. The climbing robot presented in this project, intended for service tasks such as robotic cleaning of external glass walls, aims to relieve the human operator of hard, dangerous and risky work. The window cleaning robot consists of a four-wheel locomotion mechanism and an EDF (Electric Ducted Fan). The robot moved smoothly on the window, adhering by means of the EDF, and changed its travelling direction at right angles at the corners of the window.

FUTURE SCOPE

The Bluetooth module used in this project has a range of 10-15 metres. The range can be enhanced by adding more advanced wireless controllers, or Bluetooth could be replaced by a Wi-Fi module. A coverage path planning approach could be developed to let the robot navigate the surface autonomously and systematically; a surface coverage algorithm would enable it to cover a large area, with a driving pattern that covers the surface in a manner that increases efficiency. The robot can also be used for painting buildings with a paint spraying kit. The operating time of the small cleaning robot can be increased by using a mains (220 V, 50 Hz) to 12 V DC converter that can provide the 15-20 amperes of current required for operation.

By using ultrasonic sensor (Receiver and Detector) this robot can also serve the purpose of fault or crack detection in walls of dam, large ceilings and in mirrors of skyscrapers. It can also be used for cleaning and defect detection of wind turbine blades.

REFERENCES

[1] R. H. Faisal, N. A. Chisty, “Design and Implementation of a Wall Climbing Robot”, International Journal of Computer Applications, Vol. 179, No. 13, pp.1-5, 2018.

[2] Prof. S.S. Bhansali, Mayur R. Puri, Ajay U. Ingole, Sandeep Kaware, “Review On Wall Climbing Cleaning Robot”, International Journal For Engineering Applications And Technology, pp. 1-5, 2015.


WDF: Comparative study and applications

1Akanksha Sondhi and 2Nidhi Agarwal

Department of Electronics & Communication Engineering 1GD Goenka University, Gurugram, India

2Shri Ramswaroop Memorial University, Lucknow, India
[email protected], [email protected]

Abstract – Wave Digital Filters (WDFs) are realized in lattice or ladder configurations. They have superior characteristics with respect to coefficient precision, dynamic range requirements and stability under finite arithmetic conditions. This paper surveys various aspects of WDFs that provide a paradigm for the design and application of WDFs and their many connections with other areas, particularly signal processing. A comparative study of IIR system identification using minimal coefficients is proposed: modelling comparisons are carried out using an adaptive Lattice Wave Digital Filter (LWDF) and adaptive canonical filter realizations, and the results show the better efficiency of the adaptive LWDF. Since WDFs are based on wave quantities, they can be used in place of analog elements. However, practical analog elements are not ideal; they possess some non-linearity and are leaky in nature. Hence, the notion of leaky inductors and capacitors is taken into account, and wave theory is used to establish wave digital counterparts for these leaky analog elements.

Index Terms – Wave Digital Filter, Lattice Wave Digital Filter, System identification, Leaky passive elements, Adaptive filter.

I. INTRODUCTION

The paramountcy of WDFs is deep-rooted. They belong to a class of digital filters in which lossless filters are inserted between resistive terminations. Hence, to every single WDF there exists a reference filter from which it originates [1]. Their reference filters are derived from classical filters by the use of wave quantities instead of currents and voltages as signal parameters [2]. Numerous useful properties of classical filters are direct consequences of their passivity and, more pointedly, the losslessness of the filter two-port itself. Hence, it is essential that the correlation between a WDF and its passive reference filter is well kept even after revising the multiplier coefficients over more or less wide ranges. In WDF structures, the outstanding passband sensitivity properties of lossless filters are upheld, leading to less stringent accuracy requirements for the multiplier coefficients, good dynamic range performance, and automatically preserved stability under linear conditions [3]. Two input terminals and two output terminals are present in a WDF.

Wave digital filtering is a multidimensional concept. It brings modest requirements for good dynamic range and coefficient word lengths, forced-response stability, absence of parasitic oscillations, etc. WDFs possess exemplary stability properties; therefore, they are prime candidates for adaptive and adjustable filtering.

In past years, IIR canonical filters were used for the modelling of digital filter systems because of their ease of implementation and analysis [4]. However, problems such as a slow convergence rate, sensitivity to word-length effects and stability problems are present in these structures. This led researchers to explore the area further, which gave birth to the novel filters known as lattice wave digital filters (LWDFs). LWDFs are less sensitive to word-length effects and have enhanced stability with useful dynamic range. The application of LWDFs reduces the coefficient requirements for designing higher-order filters; by contrast, the IIR canonical structure employs twice the number of coefficients for identification purposes. The idea here is to put forward a comparative analysis of system identification models in signal processing [5, 6]. The modelling techniques compared in this paper use a lower-order known IIR system to identify a higher-order unknown IIR system [7]. Two types of structural techniques are compared: firstly, identification using the adaptive canonical structure, and secondly, identification using the lattice wave digital filter structure. The comparison is made on account of the mean, mean square error (MSE), variance, standard deviation and the required number of coefficients.

Until now, digital counterparts of passive elements such as resistors, inductors, capacitors, transformers, etc. have been obtained using the bilinear transform and wave theory [8]. However, a large frequency warping effect is seen while performing A/D conversion. The reason for this is the mapping technique involved, wherein analog frequency approaching infinity is mapped to the digital frequency π. A compression of the frequency axis occurs in the bilinear transform, which yields sharper


transition bands; however, the high-frequency region of the analog domain is severely compressed [9]. Therefore, various transformation techniques are provided in the literature to cater for these non-linearities. The techniques employed are the Al-Alaoui transformation and the fractional bilinear transform (FBT). The results obtained show linearly better mapping for A/D conversion [10].

The agenda of this paper is to compare the various digital equivalences. All the transformations compared in this text are obtained keeping in mind the practical nature of passive analog elements: some non-linear and leaky components are attached to these elements. Therefore, the wave digital counterparts of these leaky analog components are compared here. All the equivalences are presented in tabular form.

Basically, this work is organized into five sections. An introduction to wave digital filters and LWDFs is given in section I. In section II, wave digital filters are discussed in brief. In section III, the system identification procedure is described and the comparative results are drawn. In section IV, the wave digital equivalents of passive elements obtained using various transformations are compared. Finally, the conclusions drawn are described in section V.

II. BASIC PRINCIPLES OF WAVE DIGITAL FILTERS

Mathematical modelling of digital filters involves the computation of difference equations. While implementing digital filters, one should focus on sequential processing of all the arithmetic operations given by these difference equations [11].

A lattice wave digital filter is a particular class of WDF wherein both lattice branches are realized by cascading first- and second-degree all-pass sections. Adaptors are the elements used to implement these all-pass sections; ideally, the implementation is carried out using delay elements and symmetrical 2-port adaptors. Basically, an adaptor carries out arithmetic operations using one multiplier and three adders. Two coefficients, namely γ (range -1 to +1) and α (range 0 ≤ α ≤ 1/2), control the response of each adaptor. There exist 4 types of adaptors according to the value of γ and the analogous value of the α coefficient. These 4 types of adaptors are shown in fig. 1.
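The symmetrical 2-port adaptor can be written directly from its one-multiplier, three-adder form. Below is a minimal sketch using the standard adaptor equations b1 = a2 + γ(a2 − a1), b2 = a1 + γ(a2 − a1); the function name is an illustrative choice.

```python
def two_port_adaptor(a1, a2, gamma):
    """Symmetrical 2-port adaptor: one multiply, three adds (-1 < gamma < 1).

    a1, a2 are the incident waves; returns the reflected waves (b1, b2).
    """
    d = gamma * (a2 - a1)    # the single multiplication
    return a2 + d, a1 + d    # b1 = a2 + d, b2 = a1 + d
```

For gamma = 0 the adaptor reduces to a plain wave interchange (b1 = a2, b2 = a1), which corresponds to matched port resistances.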

Even under finite arithmetic conditions, system stability is preserved in the adaptive LWDF, which is not true for adaptive IIR canonic filters.

III. WDF IMPORTANCE IN IIR SYSTEM IDENTIFICATION

The process of system identification involves an adaptive algorithm that attempts to adjust the parameters of an adaptive filter in order to form an improved model of an unknown system. This is achieved by minimizing an error cost function between the output of the adaptive filter and that of the unknown system [12, 13]. A block diagram of this process is portrayed in Fig. 2. Several sections constitute the system identification process, viz. the unknown system, the estimating adaptive system and an adaptive algorithm. System identification has various applications in day-to-day life, e.g., biomedical applications, satellite communications and smart antennas in wireless communications; it is therefore an important process in signal processing.
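The minimize-the-error-cost loop just described can be made concrete with a deliberately simplified example. The paper compares adaptive IIR canonic and LWDF structures (often tuned with metaheuristics); the sketch below instead identifies a short FIR "unknown system" with the classic LMS update, purely to illustrate the identification principle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "unknown" system: a short FIR filter (the paper's systems are IIR)
h_true = np.array([0.5, -0.3, 0.2])

x = rng.standard_normal(5000)            # white-noise excitation
d = np.convolve(x, h_true)[: len(x)]     # unknown-system (desired) output

# LMS adaptation of a same-length FIR model
w = np.zeros(3)                          # adaptive model coefficients
mu = 0.01                                # step size
for n in range(2, len(x)):
    u = x[n - 2 : n + 1][::-1]           # most recent inputs x[n], x[n-1], x[n-2]
    e = d[n] - w @ u                     # error between plant and model output
    w += mu * e * u                      # LMS coefficient update
```

With noiseless data and an exact model order, `w` converges to `h_true`; in the paper's setting the adaptive structure is lower-order than the plant, which is where the LWDF's coefficient efficiency matters.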

Fig. 1. Equivalent circuits of 2-port adaptors

Figure. 2: Adaptive IIR system based system identification process

Many texts exist that describe various methods of performing IIR system identification. The basic aim of any proposed method is to be accurate and cost-effective; this can be achieved if a given unknown IIR system is identified using minimal coefficients, which can be attempted with various types of filters. Besides IIR and FIR filters, WDFs, owing to their excellent properties, give promising results in system identification. The lattice realization of an adaptive WDF has provided better results than an adaptive canonic filter. The method proposed in the literature performs system identification of an unknown IIR system whose transfer function has either symmetrical or asymmetrical coefficients [7].


The comparison is made for the identification of a 4th-order unknown IIR system by means of a 3rd-order known IIR system. Various metaheuristics can be used to obtain optimized results. The comparative study is shown in figures 3 and 4.

Figure. 4: MSE obtained in optimization of system with arbitrary coefficients

It can easily be seen from the graphs that the adaptive LWDF provides a lower MSE than the adaptive canonic filter when used to identify a higher-order system with a lower-order system. Thus the coefficient requirement is reduced and the system becomes cost-effective. The lower MSE obtained in the case of the adaptive LWDF also ensures more accurate identification of the 4th-order unknown system using the 3rd-order adaptive system.
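The comparison metrics used above (mean, MSE, variance and standard deviation of the identification error) are straightforward to compute; a small helper is sketched below, with the function name being an illustrative choice.

```python
import numpy as np

def comparison_metrics(target, model_out):
    """Mean, MSE, variance and standard deviation of the identification error."""
    e = np.asarray(target, float) - np.asarray(model_out, float)
    return {
        "mean": e.mean(),
        "mse": (e ** 2).mean(),
        "variance": e.var(),
        "std": e.std(),
    }
```

Note that for the error signal, MSE = variance + mean², so a model with zero-mean error has MSE equal to the error variance.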

IV. WAVE DIGITAL EQUIVALENTS

In WDFs, wave quantities are used to represent signals. They are given by equations (1) and (2), wherein A represents the incident wave and B the reflected wave.

Figure. 3: MSE obtained in optimization of system with symmetric coefficients

A = V + RI    (1)

B = V - RI    (2)

In the classical analogy, a port is characterized by a current I, a voltage V and a port resistance R. Various types of transformations can be applied to obtain analog-to-digital equivalences.
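Using the standard wave definitions A = V + RI and B = V − RI, the port mapping and its inverse can be sketched as follows; this is a direct transcription of the definitions, with illustrative function names.

```python
def to_waves(v, i, r):
    """Voltage/current -> incident and reflected waves at a port of resistance r."""
    return v + r * i, v - r * i       # (A, B)

def from_waves(a, b, r):
    """Inverse mapping back to voltage and current: V = (A+B)/2, I = (A-B)/(2R)."""
    return (a + b) / 2, (a - b) / (2 * r)
```

The round trip is exact, which is what makes the wave representation an equivalent (not approximate) description of the port.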

A. Bilinear Transform

In the bilinear transform, the s-plane is mapped onto the unit circle in the z-plane. Herein, the left half of the s-plane is mapped to the region inside the unit circle and the right half to the region outside it. Also, the imaginary axis of the s-plane is mapped onto the perimeter of the unit circle in the z-plane [14]. This approach is appropriate for designing filters used in the process of frequency

Table 1: Digital realizations of analog passive elements (R, L and C) and leaky impedances using the bilinear transform, Al-Alaoui transform and fractional bilinear transform

Element      Bilinear transform                       Fractional bilinear transform        Al-Alaoui transform                  Leaky impedances
Resistor     As digital sink                          As digital sink                      As digital sink                      As digital sink
Inductor     As cascade of inverter and unit delay    As adder, delay and 2 multipliers    As adder, delay and 2 multipliers    As cascade of inverter and unit delay
Capacitor    As a delay                               As adder, delay and 2 multipliers    As adder, delay and 2 multipliers    As cascade of inverter and unit delay


selection, although it does not necessarily preserve the time-domain properties. The bilinear transform is defined by equations (3) and (4).

s = (2/T) · (z - 1)/(z + 1)    (3)

z = (1 + sT/2)/(1 - sT/2)    (4)

B. Fractional Bilinear Transform

This transformation was introduced by Pei et al. in [15]. The analog-to-digital transformation using the FBT is carried out with the help of equation (5).

(5)

Although the use of the FBT increases the number of arithmetic operations needed to realize the capacitor and inductor as compared to the bilinear transform, it is worth noting that in the high-frequency region the bilinear transform adds large distortion due to nonlinear effects. On the other hand, the linearity property is preserved when the FBT is used to create the digital counterparts of the passive elements.

C. Al-Alaoui Transform

M. A. Al-Alaoui introduced the Al-Alaoui transformation for analog-to-digital conversion [16]. The transformation is obtained by interpolating the trapezoidal and rectangular integration rules, giving the result in equation (6).

s = (8/(7T)) · (z - 1)/(z + 1/7)    (6)
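Taking the commonly cited form of the Al-Alaoui operator, s = (8/(7T))(z − 1)/(z + 1/7) (stated here as an assumption for illustration), its frequency mapping can be compared numerically with the bilinear transform. At low digital frequencies both approximate s = jω; the Al-Alaoui mapping also introduces a small positive real part, since the operator is not lossless.

```python
import cmath

T = 1.0

def s_bilinear(w):
    """Analog s reached by the bilinear transform at digital frequency w (rad/sample)."""
    z = cmath.exp(1j * w * T)
    return (2 / T) * (z - 1) / (z + 1)

def s_alaoui(w):
    """Analog s reached by the (assumed) Al-Alaoui operator at digital frequency w."""
    z = cmath.exp(1j * w * T)
    return (8 / (7 * T)) * (z - 1) / (z + 1 / 7)
```

Evaluating both at a low digital frequency such as w = 0.1 rad/sample shows imaginary parts close to 0.1, i.e., near-ideal s = jω behaviour in that region; the two operators diverge as w approaches π.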

The comparative results for the digital equivalences of the analog passive elements after applying all three transformations, namely the Al-Alaoui transform, the bilinear transform and the fractional bilinear transform, are shown in table 1. The digital equivalences of leaky analog elements such as the capacitor, resistor and inductor are also shown in table 1. These wave digital counterparts of leaky analog elements can be useful for practical applications of digital filters where the full environmental conditions are also considered [17].

V. CONCLUSION

Wave digital filters prove to be better than canonic filter structures. The modelling of a higher-order unknown IIR system using a lower-order LWDF is found to be more accurate than with a canonic system. This is achieved with the help of optimization algorithms, and the comparisons are made according to the values obtained for the mean square error. The error function of the LWDF comes out lower than that of the canonic filter. Thus, the LWDF is a superior option to the canonic realization in IIR system identification. The main advantage is that the higher-order unknown IIR system is identified by the lower-order adaptive LWDF structure more accurately and efficiently, as it requires fewer coefficients for its structural realization. Also, owing to the importance of wave digital filters, the digital equivalences of passive analog elements can be derived. In this paper, the wave digital counterparts of the resistor, inductor and capacitor using the Al-Alaoui and fractional bilinear transforms are compared. The results show linearly better performance of the digital circuits; these wave digital counterparts point to enhanced linear digital circuits overcoming the nonlinear effects of the usual digital equivalents. Future research may address the identification of nonlinear systems by means of adaptive LWDF structures.

REFERENCES

[1] A. Fettweis, “Digital circuits and systems,” IEEE Trans. Circuits Syst. (Centennial Issue), vol. CAS-31, pp. 31-48, Jan 1984.

[2] V. Belevitch, Classical Network Theory. San Francisco, CA: Holden-Day, 1968.

[3] G. C. Temes and H. J. Orchard, “First-order sensitivity and worst case analysis of doubly terminated reactance two-ports,” IEEE Trans. Circuit Theory, vol. CT-20, pp. 567-571, Sept. 1973.

[4] Mostajabi T, Poshtan J, Mostajabi Z (2015) IIR model identification via evolutionary algorithms. Artificial Intelligence Review. 44(1):87-101.

[5] Jiang S, Wang Y, Ji Z (2015) A new design method for adaptive IIR system identification using hybrid particle swarm optimization and gravitational search algorithm. Nonlinear Dynamics. 79(4):2553-76.

[6] Netto SL, Agathoklis P (1998) Efficient lattice realizations of adaptive IIR algorithms. IEEE Transactions on Signal Processing. 46(1):223

[7] Sondhi A., Barsainya R., Rawat T.K. (2016) Lattice Wave Digital Filter based IIR System Identification with reduced coefficients. In: Corchado Rodriguez J., Mitra S., Thampi S., El-Alfy ES. (eds) Intelligent Systems Technologies and Applications 2016. ISTA 2016. Advances in Intelligent Systems and Computing, vol 530. Springer, Cham.

[8] Andreas Antoniou, Digital Signal Processing, McGraw-Hill Publication, 2006.

[9] R. Barsainya, T. K. Rawat, R. Mahendra, “A new realization of wave digital filters using GIC and fractional bilinear transform,” Engineering Science and Technology, an International Journal, 2015, doi: 10.1016/j.jestch.2015.08.008.

[10] Soo-Chang Pei and Hong-Jie Hsu, “Fractional bilinear transform for analog-to-digital conversion,” IEEE Trans. on Signal Processing, Vol. 56, no. 5, pp. 2122-2127, May 2008.

[11] A. Fettweis, “Wave digital filters: Theory and practice,” Proc. IEEE, Vol. 74, no. 2, pp. 270-327, Feb. 1986.

[12] Gazsi L (1985) Explicit formulas for lattice wave digital filters. IEEE Transactions on Circuits and Systems. 32(1):68-88.

[13] Yli-Kaakinen J, Saramäki T (2007) A systematic algorithm for the design of lattice wave digital filters with short-coefficient wordlength. IEEE Transactions on Circuits and Systems I. 54(8):1838-51.

[14] Richa Barsainya, Tarun Kumar Rawat, Shivani Bansal, Arnesh Majhi, “Novel design of Fractional Delay Filter using lattice wave digital filter,” IEEE International Conference on Power Electronics, Intelligent Control and Energy Systems (ICPEICES), pp. 1-6, 2016.

[15] Soo-Chang Pei and Hong-Jie Hsu, “Fractional bilinear transform for analog-to-digital conversion,” IEEE Trans. on Signal Processing, Vol. 56, no. 5, pp. 2122-2127, May 2008.

[16] M. A. Al-Alaoui, “Novel digital integrator and differentiator,” Electronics Letters, Vol. 29, no. 4, pp. 376-378, 1993.

[17] M. A. Al-Alaoui, “Novel approach to analog to digital transforms,” IEEE Trans. on Circuits and Systems I, Vol. 54, no. 2, pp. 338-350, 2007.


Page Ranking Framework: Personalization with Privacy Issues and Enhancement

1Nidhi Saxena and 2Bineet Gupta

1Department of Information Technology, 2Department of Computer Applications

Shri Ramswaroop Memorial Group of Professional Colleges, Lucknow, India [email protected], [email protected]

Fig. 1. Framework For Personalization Approach

Abstract— Web search engines play a vital role in our daily life, which has led to the expansion of web pages and web sites every day. This exponential growth of web pages requires ranking before results are provided to the user, in order to fulfill user requirements. Ranking algorithms, in turn, require user involvement, which is termed personalization. Personalization provides interest-based results through ranking techniques but reduces privacy and leads to various issues. This paper deals with the analysis of various web page ranking algorithms that use the personalization approach. A comparative study is performed showing the limitations of the algorithms along with their further enhancement. The paper also depicts how personalization breaks privacy, with an approach to enhance personalization with privacy. The paper further indicates the scope of research in the field of personalization.

Keywords— Page Ranking, Personalization, Privacy, Security, Web Search Log.

I. INTRODUCTION

The Web has now become a great source of information. This information is stored on the web, which results in an ever-increasing database size. With the continuing growth of the WWW, there is a need for search engines that can provide the user with the web pages most relevant to the search made. When a user submits a query, the search engine provides results based on query matching, but as the number of web pages matching a query is practically limitless, it is difficult or rather impossible to visit each page. Ranking therefore plays a major role in shortening the search time, and web page ranking is required in order to obtain ranked results. Ranking presents the most relevant results to the user at the top while searching; the efficiency of a search engine thus depends on ranking. Various approaches have been introduced that provide ranked results, but among them personalization is considered the most efficient and effective.

Personalization includes the technology which differentiates among individuals. Personalization uses the user profile as a basis for finding user interest. A profile can be created explicitly or implicitly.

In explicit profile creation, user profiles are generated by filling in a registration form or by specifying the web pages of interest while enquiring.

Implicit personalization is done on the basis of user behaviour. Individual profiles are created containing the search history. On the basis of earlier searches made for a query, the user's interest is known and search results are shown accordingly. Therefore, most personalized search systems use this approach to provide results after analysing personal search information.

Implicit personalization infers user interest by extracting information related to earlier searches from the Web log, which can exist on the server or on the client side, and re-ranks the results according to user behavior. This personalization approach provides significant results and has solved most of the problems in a user-friendly way, but it still has issues, such as user identification before personalization for better results, and a security issue that is difficult to resolve along with personalization. During personalization, personal information is revealed on a public server: the server is given full access to the user's personal and behavioral data on the Internet. The system does not require the user's permission for accessing such information, which violates an individual's privacy. Therefore, neither of the above methods, implicit or explicit, creates and maintains a secure personalized search engine.

Apart from this, there are security issues such as link farms, where spammers try to obtain the maximum number of follower links in a social network. Followers increase the reach of a user's direct audience, which seriously affects the ranking of web pages in search engines.

Therefore, in order to provide both web personalization and security, an algorithm must consolidate the user's personal information into an organized user profile using a secure method for personalization.

This paper provides a brief description of personalization approaches that are providing significant results. Personalization and its privacy issues are depicted, and a system which can overcome the clash between personalization and privacy protection is introduced. It deals with the protection of privacy alongside personalization.

II. RELATED WORK

Web mining allows users to find the relevant pages from the bulk of information. Personalization belongs to one category of web mining, namely Web Usage Mining (WUM), which deals with the creation of the user's search history. Apart from WUM, Web Content Mining (WCM) concentrates on the structure of documents, while Web Structure Mining (WSM) deals with the links between web pages. Among these three categories, WUM provides user-interest-based results and is therefore more efficient than the others.

A lot of earlier work on personalization includes both explicit and implicit profiling techniques. Dynamic ranking [1] introduces two algorithms that take feedback from the user, thereby involving the user in order to find the user's interest. Query expansion using social means for personalization [2] takes a user profile containing tags along with web documents. A new profile (the RLT profile) [3] creates a probabilistic profile considering the key entities of web search, such as users, websites and queries. The profile keeps topic and reading-level distributions, which are used in combination with search log data to compare different entities. Work on semantic extraction from news articles has been used in profiling; it has a great impact on personalization and can be applied to social web systems [4].

Explicit user-profiling-based approaches collect user information explicitly [5]. Documents available on the user's machine also become user information during personalization, which is used in some algorithms [6]. It is assumed that if a document has been present on the user's machine for some time, it provides an indication of the user's interest in that document and can become a great source of personalization.

A lot of work was conducted on user profiles which keep a record of each page visited by the user and the time spent on that page [7][8][9]. These algorithms proved effective for web page prediction. The web log developed keeps navigation data gathered during web search, which provides optimized page ranking. The earlier version was later improved with the addition of the click event along with the earlier records [10]. The click event shows how long the user showed interest in a particular page. The algorithm shows that as the number of factors increases, the results become more optimized and personalization becomes more effective. In 2013 a new ranking algorithm, Ratio Rank [11], was introduced, which takes in-link weights and out-link weights along with the number of visit counts; it proved to be a better approach for personalization. A further enhancement was the New Enhanced Page Rank algorithm [12], which considers the links of the web pages as well as behavior; therefore the relevancy of the web pages returned is high. Some algorithms consider both implicit and explicit creation of the user profile, based on previous search query results as well as some personal information such as the documents and e-mails the user reads [13].

Privacy is necessary while using the Internet in all areas, including searching. Some previous studies on Private Information Retrieval (PIR) [14] work on the problem of retrieving user information while keeping the query private. The system conserves the privacy of the user profile and allows access to the general information that the user willingly discloses.

Some work on privacy deals with protecting individual data entries. A substantial way of measuring privacy is to observe the difference between prior and posterior knowledge of a specific value [15][16]. Shannon's information theory is another way of measuring privacy. Other techniques include k-anonymity [17], where a user's record is released only when it is indistinguishable from the records of at least k-1 other users.
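The k-anonymity idea cited as [17] can be made concrete with a minimal sketch (ours, with invented fields and records): a released log is k-anonymous over its quasi-identifiers when every combination of quasi-identifier values is shared by at least k records.

```python
# Illustrative k-anonymity check over a toy search log. The field names
# (age_band, zip3) and records are invented for this example.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every quasi-identifier combination occurs at least k times."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(c >= k for c in counts.values())

search_log = [
    {"age_band": "20-30", "zip3": "226", "query": "mouse"},
    {"age_band": "20-30", "zip3": "226", "query": "keyboard"},
    {"age_band": "30-40", "zip3": "226", "query": "hamster"},
]
```

Here the log is 1-anonymous but not 2-anonymous, because the ("30-40", "226") group contains a single record that could identify its user.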

Several solutions to overcome link farming have also been proposed. Link-based statistics were used by Becchetti [18] to build automatic detection of web spam. Gyöngyi proposed the TrustRank algorithm [19] to fight web spam, which works on the assumption that good pages usually link to other good pages. Other algorithms that are inversions of TrustRank have been proposed to identify spam pages; for example, BadRank identifies and blacklists pages using a spam-page identifier [20]. Wu and Davison [21] proposed an algorithm for reducing spam pages in the Web [22].
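The TrustRank idea of [19] can be sketched in a few lines: it is PageRank whose teleportation vector is concentrated on a hand-vetted trusted seed set, so trust decays along links and pages never linked from the good core score low. The graph and seed choice below are invented for illustration, not taken from [19].

```python
# Illustrative TrustRank sketch: power iteration with teleportation to a
# trusted seed set instead of the uniform vector used by plain PageRank.

def trustrank(links, trusted, alpha=0.85, iters=100):
    """links[i] = pages that page i links to; trusted = hand-picked seed pages."""
    n = len(links)
    t = [1.0 / len(trusted) if i in trusted else 0.0 for i in range(n)]
    r = t[:]  # start all trust mass on the seeds
    for _ in range(iters):
        nxt = [(1 - alpha) * ti for ti in t]  # teleport back to the seeds only
        for i, outs in enumerate(links):
            if outs:
                share = alpha * r[i] / len(outs)
                for j in outs:
                    nxt[j] += share
            else:  # dangling page: return its mass to the seed distribution
                for j in range(n):
                    nxt[j] += alpha * r[i] * t[j]
        r = nxt
    return r

# Pages 0-2 form a trusted, interlinked core; page 3 is a spam page that links
# into the core but never receives a link from it, so no trust reaches it.
links = [[1, 2], [0, 2], [0, 1], [0, 1, 2]]
scores = trustrank(links, trusted={0, 1})
```

Because trust only flows along out-links from the seeds, the spam page ends up with the lowest score even though it links aggressively into the core, which is exactly the link-farm behavior described above.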

III. PERSONALIZATION WITH PRIVACY ISSUES

The most important issue that should be handled during user profiling is the destruction of privacy. Many users refuse to give personal information either implicitly or explicitly during the searching process. In both cases the user loses privacy, knowing that all of his actions are recorded without his consent. Many times the user is reluctant to fill in any registration form for explicit personalization.

In this section three domains of risk are discussed: social-data-based personalization, behavioral personalization and location-based search.

A. Social Data Based Personalization

Nowadays everyone accesses many social websites such as Facebook, Twitter, LinkedIn and many more. These websites store large amounts of personal data, such as photos, location, friends and communication. These data are sometimes used for social search as a means of personalization. This leads to a challenging issue, as these social websites contain highly sensitive information.

B. Behavior Personalization

In most cases behavior profiling tracks user behavior to a wide extent without any consent of the user. The user behavior profiling method is now becoming very popular in web search. Profiles are created based on various activities, such as sites visited and time spent. One risk associated with such personalization is that personal information and search records may be disclosed to other users on the same computer: users of the same browser can see each other's personal searches.

C. Location-Based Search

Location-based personalization is becoming more popular day by day. Apart from Google Android searches, many browsers such as Mozilla Firefox have a location programming interface that tracks the user's search location and uses it to provide location-based results. This procedure improves the results, but many users do not wish to share their location, as they may want to keep their home location secret as a privacy concern.

Privacy and security depend on the user involved and the benefits the user receives from sharing individual information. The challenge here is whether security in web search can be maintained along with personalization.

IV. PRIVACY-ENHANCING PERSONALIZED SEARCH

In order to provide privacy along with personalization, it is necessary to adopt client-side profiling. As the user's personal information is stored in a log created on the client side rather than the server side, the user has more control over the data, but this technique cannot by itself provide privacy in the true sense: the client-end profile will contain the search records of all users searching from the same machine and will disclose the search results to everyone.

Personalization requires a new framework for associating privacy. A simple enhancement to the earlier search model is to identify each user separately on a single computer. With user identification, a separate log is created holding each user's search details separately, and a new session for each user keeps search records from being disclosed to others. The system not only supports privacy with personalization but can also improve personalization, since each user's interests vary and a query can provide more than one type of result: the query "mouse" can refer to the computer device as well as the animal, so the relevancy of the searched pages varies from person to person. If the system keeps each web search history separately, this confusion is reduced and search becomes faster, more efficient and user friendly. The defined approach is thus beneficial in two ways and provides privacy by architecture.
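A minimal sketch of this per-user scheme (ours, with invented documents and logs): each identified user on the shared machine has a private log, and results for an ambiguous query such as "mouse" are re-ranked by overlap with that user's own history.

```python
# Illustrative per-user re-ranking on a shared machine. Documents, terms and
# the two user logs are invented for this example.

def rerank(results, history):
    """Order results by how many of the user's history terms each document shares."""
    def score(doc):
        return len(set(doc["terms"]) & set(history))
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Optical mouse review", "terms": ["mouse", "usb", "dpi", "computer"]},
    {"title": "House mouse biology",  "terms": ["mouse", "rodent", "habitat", "animal"]},
]
logs = {  # one private search log per identified user
    "user_a": ["computer", "usb", "keyboard"],
    "user_b": ["animal", "rodent", "wildlife"],
}

top_a = rerank(results, logs["user_a"])[0]["title"]
top_b = rerank(results, logs["user_b"])[0]["title"]
```

The same query yields a different top result for each identified user, while neither user's log is ever consulted for the other, which is the privacy-by-architecture property described above.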

Fig. 2. Personalization with Privacy Framework

The given model illustrates the personalization task in a unique way. It combines the factors discussed above with the new concept of keeping the user's identity and using it for searching and ranking.

The system maintains client-side accounts for each user who accesses the system, and no unknown person is allowed to search from that particular system. Furthermore, users are not allowed to look into each other's search results, which leads to increased privacy.

On the other side, the system does not use link priorities for ranking results, which adds a special security feature to this search engine.

V. RESULTS AND EVALUATION

In this section the performance of the system is evaluated against the following objectives: to verify the personalization in the user profile, and to consider privacy while maintaining search quality.

The approach is evaluated with different participants running the system on a single PC. The system keeps each user's profile distinct.


A. Personalization with Privacy

By using separate profiles on a single PC, the system maintains the privacy of searched records. For authentication, a biometric method has been adopted, which is itself a unique feature and provides privacy to the overall search. Each user's searches are recorded on the basis of his or her id.

Such a feature has never been adopted earlier in web search, which differentiates it from any other search engine.

B. Security

Security is maintained in UBIP by using web usage mining and by not adopting the link-structure-based page rank method. The system provides search results based on earlier searches made by the same user, using the unique profile on the PC. Therefore there is no interference from spammers, and link farms cannot be introduced by them. All the defined methods make it more secure.

Table 1. Comparative Analysis of Personalization with Privacy

| Comparison Area      | Google                                  | Bing                                    | Proposed System                                               |
|----------------------|-----------------------------------------|-----------------------------------------|---------------------------------------------------------------|
| Personalization      | Search log on client and location based | Search log on client and location based | Based on individual search log on client with identification  |
| Privacy and security | Through encryption                      | Secured private data transfer           | Very high, due to user identification and separate search log |

VI. CONCLUSION AND FUTURE SCOPE

Personalized search is a pathway to advanced search. However, personalization grants the server full access to the user's personal information, which lowers privacy, something that is always necessary on the Internet. In this paper, we examined different issues related to privacy during personalization and suggested an approach which can maintain a balance between users' privacy and search quality. The personalized framework discussed in the paper provides personalized results based on a search log created with each user's identification. Through this profile, a user's searches are not exposed to other users. The results show that the user profile is helpful in refining search quality. This paper emphasizes privacy and security issues, which are addressed through user identification and the profile-making method. Hence this approach is an enhancement in the field of personalization with the incorporation of privacy.

Yet this paper is exploratory work on a small scale. There is a need to adopt such features while maintaining the search log. The system can act as a basis for a larger-scale search engine.

REFERENCES

[1.] C. Brandt, Y. Yue, J. Bank, “Dynamic Ranked Retrieval,” ACM WSDM'11, Hong Kong, China, 2011.

[2.] D. Zhou, S. Lawless, V. Wade, “Improving Search via Personalized Query Expansion using Social Media,” Information Retrieval for Social Media, Springer, pp. 218-242, 2012.

[3.] J.Y. Kim, K. Collins-Thompson, P.N. Bennett, S.T. Dumais, “Characterizing Web Content, User Interests, and Search Behavior by Reading Level and Topic,” WSDM'12, ACM, 2012.

[4.] F. Abel, Q. Gao, G.J. Houben, K. Tao, “Semantic Enrichment of Twitter Posts for User Profile Construction on the Social Web,” ESWC’11, Springer, vol. Part II,pp. 375-389, 2011.

[5.] C. Srinivas, “Explicit User Profiles for Semantic Web Search using XML,” IJERA, vol. 4, pp. 234-241, 2012.

[6.] R.S. Bhadoria, D. Sain, R. Moriwal, “Data Mining Algorithm for Personalizing User's Profile on Web,” IJCTEE, vol. 2, 2012.

[7.] H. Liu, V. Keselj, “Combined Mining of Web Server logs and web contents for classifying user navigation patterns and predicting user’s future requests,” Data and Knowledge engineering ,ACM, pp.304-330, 2007.

[8.] R. Khanchan, M. Punithavalli, “An Efficient Web Page Prediction based on Access Time Length and Frequency,” IEEE 3rd International Conference on Electronics Computer Technology, vol. 5, pp. 273-277, 2011.

[9.] N. Matthijs, F. Radlinski, “Personalizing Web Search using Long Term Browsing History,” ACM WSDM'11, pp. 9-12, 2011.

[10.] R. Agarwal, K.V. Arya, S. Shekhar, “An Efficient Weighted Algorithm for Web Information Retrieval System,” IEEE International Conference on CICN, pp. 126-131, 2011.

[11.] R. Singh, D.K. Sharma, “Ratio Rank: Enhancing Impact of Inlinks and Outlinks,” IEEE International Advance Computing Conference, 2013.

[12.] R. Singh, D.K. Sharma, “Enhanced Ratio Rank: Enhancing Impact of Inlinks and Outlinks,” IEEE International Advance Computing Conference, 2013.

[13.] J. Teevan, S.T. Dumais, E. Horvitz, “Personalizing Search via Automated Analysis of Interests and Activities,” 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2005.

[14.] W. Gasarch, “A survey on private information retrieval,” The Bulletin of the European Association for Theoretical Computer Science (EATCS), vol. 82, pp. 72-107, 2004.

[15.] R. Agrawal, R. Srikant, “Privacy preserving data mining,” ACM SIGMOD Conference on Management of Data (SIGMOD), Dallas, Texas, May 2000.


[16.] A. Evfimievski, J. Gehrke and R. Srikant, “Limiting privacy breaches in privacy preserving data mining,” ACM SIGMOD/PODS, San Diego, CA, 2003.

[17.] L. Sweeney, “k-anonymity: a model for protecting privacy,” International Journal on Uncertainty Fuzziness and Knowledge-based Systems, vol. 10 (5), pp. 557-570, 2002.

[18.] L. Becchetti, C. Castillo, D. Donato, R. Baeza-Yates, and S. Leonardi, “Link analysis for web spam detection,” ACM Transactions on the Web, vol. 2, pp. 1-42, March 2008.

[19.] Z. Gyöngyi, H. Garcia-Molina and J. Pedersen, “Combating web spam with TrustRank,” Conference on Very Large Data Bases (VLDB), 2004.

[20.] M. Sobek, “Google PageRank - PR 0,” http://pr.efactory.de/e-pr0.shtml.

[21.] B. Wu and B.D. Davison, “Identifying link farm spam pages,” ACM Int'l Conference on World Wide Web (WWW), 2005.

[22.] B. Wu, V. Goel, and B. D. Davison, “Propagating trust and distrust to demote web spam,” Workshop on Models of Trust for the Web, 2006.

Massive MIMO – The Runway to 5G

1N. K. Srivastava, 2Ram Krishna and 3S. Chandran

1SRMGPC, Lucknow, 2Former DDG, DOT, New Delhi, 3Former AGM, Bharat Sanchar Nigam Ltd., New Delhi
[email protected], [email protected] and [email protected]

Abstract— As 5G is fast becoming a reality, there is an immediate requirement to augment existing switching capacity and performance, spectrum efficiency, latency reduction, security, and complete monitoring and management, in order to ensure a smooth launch. The International Telecommunication Union (ITU) has drawn up the standards and performance indicators to be implemented for a smooth transition to 5G. There is eager excitement around the expectation that 5G download speeds will be in the order of 1-10 Gbps, compared to the 100 Mbps offered by present 4G networks. Also, the latency of 45 ms in 4G networks would be vastly improved to 1 ms in the 5G network.

Keywords: Distributed antenna systems, Single-user MIMO, Multi-user MIMO, LTE, Massive MIMO, Spatial Modulation, Beamforming, 3GPP Releases.

I. INTRODUCTION

Upcoming and future wireless technology will be required to handle many times the amount of data compared to current loads. Multiple Wi-Fi and 4G standards already use MIMO (Multiple Input Multiple Output) principles. Simultaneous concurrent transmissions and spatial reuse of spectrum need to be improved owing to the scarce spectrum available for full coverage. Massive MIMO technology, with multitudes of antennas used at access points, gives the perfect solution. Data signals are thus available simultaneously to multiple users, as the antennas are phase-synchronized and each signal is focused on the intended user. Massive MIMO plays a dominant role in the run-in to the 5G network.

II. DISTRIBUTED ANTENNA SYSTEM (DAS)

Distributed antennas work using remote radio units (RRUs) connected to the Node-B through low-delay, high-frequency elements. This gives consistent transmission via every available antenna simultaneously, which helps in curbing path attenuation and extensive fading in mobile communication networks. DAS can spread pre-coding across different transmission interfaces to operate single-user and multi-user MIMO communication.

III. MULTIPLE INPUT MULTIPLE OUTPUT (MIMO)

MIMO technology is an essential part of 3GPP E-UTRA Long Term Evolution (LTE, 4G) as well as of wireless telecommunication specifications including IEEE 802.11n (Wi-Fi), IEEE 802.11ac (Wi-Fi), HSPA+ (3G) and WiMAX (4G). It involves sending two information streams transmitted over two or more antennas, commonly known as 2x2 MIMO (a two-transmitter, two-receiver antenna configuration). Link quality, reliability and throughput between the transmitter and receiver can be enhanced by adding more antennas at both ends. MIMO classifications are Single-User MIMO (SU-MIMO) and Multi-User MIMO (MU-MIMO).

During an allotted time frame, SU-MIMO devotes the complete transmission capacity of the access point (AP) to one device.

MU-MIMO facilitates simultaneous communication with multiple devices. Network speed is multiplied, and the waiting time of each device for a signal is reduced. SU-MIMO and MU-MIMO form part of the 4G network.


IV. MASSIVE MIMO

Conventional MIMO deploys more than one antenna at the source and destination, giving increased efficiency, reduced network error and multiplied antenna capacity. Several technology standards (LTE, 802.11n Wi-Fi, 802.11ac Wi-Fi, etc.) use MIMO as a basic element. Massive MIMO takes this technology and multiplies it manifold: such a network may have thousands of antennas. A Massive MIMO system (also known as 'Large-Scale Antenna Systems', 'Very Large MIMO', 'Hyper MIMO', 'Full-Dimension MIMO', 'ARGOS', etc.) utilizes a very large number of antennas. The additional antennas help in signal transmission and reception without missing even the minutest space. The technology simultaneously transmits and receives multiple data channels over the same radio channel. In this way multi-path propagation is exploited, augmenting the radio link capacity using numerous transmit and receive antennas. Considerable gains in capacity and power savings are achieved. Massive MIMO advantages include economical energy-saving elements, reduced delay, simplification of the media access control (MAC) layer, and resistance to intrusion and planned disruption. In the present scenario, it is possible to deploy Massive MIMO using the current infrastructure with relevant upgrades to the radio base stations.

Multiplying the capacity of a Massive MIMO network can augment the magnitude of a communication system without requiring more spectrum, which leads to sizable capacity enhancements. Since the network caters to multiple antennas, more signal paths are available, leading to appreciable gains in speed and security. The system responds well to elements broadcasting in superior frequency bands, and hence improved connectivity is available, with strong signal reception indoors. Beamforming technology is utilized by Massive MIMO networks. The existing mobile networks' practice of sharing a single block of spectrum among all users in range results in a performance bottleneck in densely populated areas. Massive MIMO networks utilize beamforming technology, enabling targeted use of spectrum and allowing much more reliable data speeds and latency across the network. Massive MIMO can also improve the SNR through beamforming, pushing the system closer to a noise-limited environment.

Massive MIMO antenna system

Expanding to accommodate close to 100 antennas in a single array at the base station will form the basis of 5G networks. This multitude of Node-B antennas, relative to the number of mobile devices, results in quasi-orthogonal channel responses and yields colossal benefits in spectral efficiency. These settings will allow many more devices to be served with the same frequency and time resources within a given cell compared to present 4G systems.

Fig. 3 (Image Ref: Mitsubishi)

The antenna arrays in a Massive MIMO network do not form the exact balloon-like beams observed in almost all existing network systems. Instead, the orthogonal nature of the signals emitted by the collection of antennas forms individual beams catering to one or more users at an instant. Scalability is achieved as operators can add even more antennas to the current set-up, increasing the signal paths and improving efficiency without hampering the existing spectrum.
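The quasi-orthogonality claim can be illustrated numerically (our sketch, not from the paper): for independent random channels, the normalized inner product between two users' channel vectors shrinks as the antenna count M grows, a property known as favorable propagation. The Gaussian channel model, seed and trial count below are illustrative choices.

```python
# Illustrative favorable-propagation demo: average normalized cross-correlation
# between two users' random channel vectors, for small and large antenna counts.
import random

_rng = random.Random(7)  # fixed seed so the demo is reproducible

def avg_cross_correlation(m, trials=200):
    """Average |h1 . h2| / m over random real Gaussian channel pairs of length m."""
    total = 0.0
    for _ in range(trials):
        h1 = [_rng.gauss(0, 1) for _ in range(m)]
        h2 = [_rng.gauss(0, 1) for _ in range(m)]
        total += abs(sum(a * b for a, b in zip(h1, h2))) / m
    return total / trials

small_m = avg_cross_correlation(4)    # few antennas: noticeable interference
large_m = avg_cross_correlation(256)  # many antennas: near-orthogonal channels
```

The normalized cross-correlation decays roughly as 1/sqrt(M), which is why a large array can serve many users on the same time-frequency resources with little mutual interference.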

Spatial Modulation

Spatial modulation utilizes the antenna index at the transmitter as an additional source of information. The data rate of a system can thus be increased without necessitating any increase in bandwidth. Only a single radio-frequency chain is sufficient, and energy efficiency is very high. Antenna selection in MIMO systems depends on the received signal strength and the channel states, whereas spatial modulation depends on the arriving user data.
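A toy encoding (our illustration, with an assumed bit-mapping) makes the idea concrete: with four transmit antennas, two bits select the active antenna index and one further bit selects a BPSK symbol, so three bits are carried per channel use through a single RF chain.

```python
# Illustrative spatial-modulation mapping for 4 transmit antennas and BPSK.
# The bit ordering and symbol convention are invented for this example.

def sm_encode(bits):
    """bits: (b2, b1, b0) -> (antenna_index, bpsk_symbol)."""
    b2, b1, b0 = bits
    antenna = 2 * b2 + b1          # two bits pick one of four antennas
    symbol = 1 if b0 == 0 else -1  # one bit picks the BPSK symbol
    return antenna, symbol

def sm_decode(antenna, symbol):
    """Inverse mapping: recover (b2, b1, b0) from the detected antenna and symbol."""
    return (antenna >> 1, antenna & 1, 0 if symbol == 1 else 1)
```

The receiver must detect which antenna was active (from the channel signature) as well as the symbol, which is why spatial modulation depends on the arriving data rather than on channel-quality-driven antenna selection.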

Massive MIMO Beamforming

Beamforming focuses the "transmit" and/or "receive" beam on one specific area, to avoid interference from outside sources and to increase gain and throughput. Massive MIMO uses 3D beamforming, which focuses the beam both vertically and horizontally. It allows the RF to focus on one area while the other elements focus on other areas. It increases coverage and densification without moving an antenna or dropping in a small cell. Frequencies below 6 GHz enjoy considerable capacity benefits through beamforming using Massive MIMO.
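The gain mechanism can be sketched with matched-filter ("conjugate") beamforming, one common choice (our sketch, not a method claimed by this paper): the unit-norm weight vector w = h/||h|| steers the array at a user with channel h, and the received amplitude then equals ||h||, which grows roughly as sqrt(M) with the antenna count for the same total transmit power. Real-valued channels are used for brevity.

```python
# Illustrative matched-filter beamforming gain for a real-valued channel h.
import math

def beamforming_gain(h):
    """Received amplitude with unit-power precoder w = h/||h|| (equals ||h||)."""
    norm = math.sqrt(sum(x * x for x in h))
    w = [x / norm for x in h]                    # unit-power precoding vector
    return sum(wi * hi for wi, hi in zip(w, h))  # coherent sum = ||h||
```

For an idealized all-ones channel, growing the array from 16 to 64 antennas doubles the received amplitude (4 to 8), which is the array-gain argument behind the SNR improvement described above.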

Fig. 4. Even while reusing the existing base station sites with the addition of new beamforming antennas, enhanced radio network performance is expected to be maintained.

Massive MIMO over 4G Networks

The technology provides:

• Enhanced Spectral efficiency

• Superior energy efficiency

• Improved coverage and higher speeds

• Strong indoor signal strength

• High speed coverage to rural areas

• Increased data speeds, from 1 Gbps to 10 Gbps

• Low infrastructure and component costs

• Easy addition of new antennas to the system to increase existing speeds and the user base

Massive MIMO and 5G

The 5G network is expected to absorb the surge in data usage. Massive MIMO is the ideal technology to fulfil the requirements of the coming 5G evolution because of its ability to simultaneously handle numerous users and devices in a confined space while providing continuous high-speed data rates and error-free output. Since the time taken for signals to travel between transmitter and receiver is considerably reduced, 5G networks will readily support VR/AR, self-driving vehicles and other AI- or ML-controlled services.

V. CHALLENGES IN MASSIVE MIMO ANTENNA DEPLOYMENT

• Complex signal processing – the increased number of antennas and frequencies at a single location raises the chance of interference. This can be managed by adopting a wider spectrum and precise processing that controls inter-site channel interference.

• Limited spectrum availability – the prevalent spectrum is already cluttered by the overload of new data connections. Using higher-frequency millimetre waves to build Massive MIMO base stations is challenging because these waves do not penetrate solid materials and are absorbed by trees and rain. By concentrating radio signals from multiple antennas into a narrow directional beam, increased signal gain can be achieved without increasing transmission power.

• Compatibility of current smartphones with Massive MIMO networks – a large number of existing smartphone devices can already take advantage of 4G MIMO networks, which provide Gigabit speeds without any modification to the handset.

• Further challenges facing smooth deployment include: making many economical, less-accurate elements work productively in sync; a precise retrieval policy for known channel properties; ample resource management for newly associated elements; using the additional degrees of freedom given by the excess service antennas; lessening internal power usage to achieve overall energy efficiency; and accommodating new positioning schemes.

VI. MASSIVE MIMO IN 3GPP RELEASES

The first LTE specifications in 3GPP Release 8 supported single-user MIMO with 2x2 and 4x4 configurations. Multi-antenna uplink reception was supported from the start because it is transparent to the devices. The subsequent 3GPP releases have gradually improved the beamforming capability, roughly as follows:

Release 8: 4x4 MIMO, 4x2 MIMO downlink

Release 9: 8TX TM8

Release 10: 8TX TM9, 8RX uplink

Release 11: Downlink CoMP, Uplink CoMP (CRAN)

Release 12: Downlink eCoMP, new 4TX codebook

Release 13: Massive MIMO 16TX

Release 14: Massive MIMO 32TX

Release 15+: 5G massive MIMO 64TX+

All LTE releases are backwards compatible, allowing the new MIMO features to be deployed on the same frequency as legacy devices. Release 15 brings the 5G New Radio with enhanced beamforming capabilities for all new devices.

VII. INDIAN SCENARIO

India has taken giant strides to be among the leading technology-growth nations of the world, and the explosive expansion of internet bandwidth in the country has seen the latest technology inducted by various telecom giants as well as mobile manufacturers. India has already rolled out its first massive MIMO deployments in Bengaluru and Kolkata under Project Leap; this first deployment was carried out by Bharti Airtel. IPL match venues were the first places where Massive MIMO was deployed by the leading telecom operators Bharti Airtel and Reliance Jio. The stadiums are being connected with various wireless broadband solutions including Massive MIMO, 4G eNodeBs, WiFi and small cells. The existing 4G deployment is expected to benefit from two times better speeds. The Government also envisages that, continuing along these lines, the 5G roll-out will happen in early 2020. A Massive MIMO radio laboratory has been set up at IIT Delhi. To augment network capacity for high-speed data and better voice services, 10,000 new mobile sites across West Bengal have been planned for the fiscal year 2018-19. Ericsson is partnering with IIT-Delhi to develop 5G-related use cases and early trials of 5G technology. Reliance Jio, Vodafone and Idea Cellular have since been spearheading the technology in their allotted telecom circles. The 3.5 GHz band is standardised for both 4G and 5G technology. Ericsson is to introduce narrow-band Internet of Things (NB-IoT) networks, while Reliance Jio has already started its commercial NB-IoT network in India. Ericsson also launched a customised radio product, called Street Macro, for Indian telecom operators, along with new solutions that support massive MIMO to help telcos migrate from 4G to 5G. Considering the growing requirement for capacity due to rapid data growth, large-scale development is taking place across the Radio, Core and Transport parts of the network.

VIII. CONCLUSION

Massive MIMO is a developing technology that enhances MIMO by an exponential step. Massive MIMO multiplies the prevalent network capacity by up to seven times using the existing spectrum, thereby augmenting spectral efficiency. While Massive MIMO is meant for 5G technology, a few existing smartphones can use it on present 4G networks, for example the iPhone 8 and iPhone X, the HTC 10 and U11, the Huawei P9 and P10, the LG G5 and G6, the Samsung Galaxy S7 and S8, and the Sony Xperia X and XZ. Previous-generation devices that do not support MIMO will also profit from the more secure and precise network that Massive MIMO provides. Massive MIMO is interference-resistant thanks to selective beamforming and power control. Beamforming using Massive MIMO for 4.9G and 5G, with its high spectral efficiency, offers significant improvements in network capacity, coverage, installation and operation with reduced OPEX. It is supported by 3GPP specifications and is commercially viable. To provide a huge number of antennas at the Node-B, further enhancements in radio and antenna technology have to be explored urgently.




Comparison between CNN and ANN in offline signature verification

1Utkarsh Shukla and 2Vikrant Bhateja
1Department of Computer Science and Engineering, 2Department of Electronics and Communication Engineering
Shri Ramswaroop Memorial College of Engineering and Management, Lucknow, Uttar Pradesh
[email protected], [email protected]

ABSTRACT - When it comes to information security, biometric systems play a significant role. Signature verification is a popular research area in the fields of pattern recognition and image processing. It is a technique used by banks, intelligence agencies and high-profile institutions to validate the identity of an individual by comparing signatures and checking for authenticity. In this paper, the approach to the verification of signatures is based on a comparison between an Artificial Neural Network (ANN) and a Convolutional Neural Network (CNN). We check which method has better efficiency and accuracy.

Keywords: Signature verification, pattern recognition, image processing, CNN, ANN

I. INTRODUCTION

Offline signatures are still the main means of authentication in government as well as private organizations [4]. Signature recognition and verification involves two tasks: one is identification of the signature owner, and the other is the decision about whether the signature is genuine or forged [5].

There exist two types of signature verification: online and offline. Online verification requires an electronic signing system which provides data such as the pen's position, azimuth/altitude angle, and pressure at each time-step of the signing, while offline verification uses solely 2D visual (pixel) data acquired from scanning signed documents. While online systems provide more information to verify identity, they are less versatile and can only be used in certain contexts (e.g. transaction authorization) because they require specific input systems [1].

When a person makes his or her signature, the signature is instantly fed into the signature verification system and the image is compared one-to-one with the stored signature file [3]. Signature verification is essential in preventing falsification of documents in numerous financial, legal, and other commercial settings. Fig. 1 shows the workflow of the signature verification system. The first step is image acquisition, which creates a digitally encoded representation of the visual characteristics of the image, since the signatures to be processed by the system must be in a digital image format. After that we normalize the signature, resize it to proper dimensions, remove the background noise, and thin the signature [7].

There are various problems faced by biometric systems regarding fraud; one of them is forgery. A forgery, also known as a fake signature, can be classified as follows: random forgeries, produced without knowing either the name of the signer or the shape of the signature; simple forgeries, produced knowing the name of the signer but without having an example of the signature; and skilled forgeries, produced by people who, after studying an original instance of the signature, attempt to imitate it as closely as possible. Since the offline signature is one of the most used authentication tools, any type of forgery may result in a huge loss. Human examiners also make mistakes, so machines help in recognizing forgeries at every level.

Figure 1. Flowchart of signature verification



II. LITERATURE REVIEW

The first algorithm regarding offline/online signature verification was published in 1977. Much research has followed, attempting various methods for both feature extraction and matching [2].

“Offline Signature Verification with Convolutional Neural Networks” aims to build an offline signature verification system using a CNN, based on the dataset from the International Conference on Document Analysis and Recognition (ICDAR) [1].

“Signature Recognition and Verification with ANN” presents an off-line signature recognition and verification system based on the moment-invariant method and an ANN. Two different neural networks are developed with distinct tasks, one for signature recognition and the other for verification. The system achieved a success rate of 100% for 30 trained signatures but failed for untrained signatures. The system must be scalable: if more accuracy in recognition and verification is wanted, extra features must be added [5].

“Offline signature recognition using neural networks approach” presents an offline verification method for signatures using a set of simple shape-based geometric features. The features that matter most are area, centre of gravity, eccentricity, kurtosis and skewness [8].

“Offline signature recognition system using histogram of oriented gradients” presents an offline signature recognition system which uses a histogram of oriented gradients. A feedforward backpropagation neural network is used for classification. The system gives a recognition rate of 96.87% with 4 training samples per individual [10].

“Hindi and English Off-line Signature Identification and Verification” performs multi-script (Hindi and English) signature verification. The system classifies a signature and tells whether it belongs to the English category or the Hindi one. An SVM classifier was employed for the identification and verification tasks [9].

III. SIGNATURE VERIFICATION THROUGH CNN

Signature verification is one of the most important processes in the banking field. The process of signature verification starts with image acquisition and ends with a search of the user's database.

A. Dataset refining and Dataset labelling

We chose the TC11 dataset, which contains 1600 English signatures; this was used for profile verification and data rendering of the person. Twenty sets were used for the experiment and classes were made accordingly. Each set contained 25 signatures, so a total of 500 signatures were considered. The data was refined and labelled using the OpenRefine (formerly Google Refine) tool.

B. Filtering

We filtered the data using mean filtering and spatial filtering. The image was grey-scaled in up to 6 steps to obtain the input image. We then segmented the image into 15x5 blocks, after which the convolutional layer was used for filtering. This removed the background disturbance and noise of the image. Through the convolutional layer we also enhanced the image with an edge-detection filter; 4x4 filters were used for more accurate values. Fig. 2 shows the filtered image.

Figure 2. Signature image after filtering
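The mean-filtering step described above can be sketched as a simple box filter. The pure-NumPy implementation below is illustrative only, not the authors' code; the kernel size matches the 4x4 filters mentioned in the text.

```python
import numpy as np

def mean_filter(img, k=4):
    """Smooth a grey-scale image with a k x k mean (box) filter,
    suppressing background noise before segmentation."""
    padded = np.pad(img.astype(float), k // 2, mode='edge')
    out = np.zeros_like(img, dtype=float)
    # sum the k*k shifted copies of the image, then average
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

noisy = np.zeros((8, 8))
noisy[4, 4] = 16.0                     # a single noise spike
print(mean_filter(noisy, 4)[4, 4])     # spike averaged out: 16/16 = 1.0
```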

θ(x, y) = tan⁻¹(σv / σu), where σu = g(x+1, y+1) − g(x, y), σv = g(x+1, y) − g(x, y+1), and g(x, y) is the grey level of the point (x, y).

Hence, through the edge detection and spatial filter, a 10 × 4 × 16 = 640-dimensional feature vector is obtained.
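The directional feature defined by the formula above can be computed for the whole image at once. A minimal sketch, assuming image coordinates g[x, y] and using arctan2 so a zero σu is handled safely:

```python
import numpy as np

def gradient_direction(g):
    """Directional feature from the text: theta(x, y) = atan(sigma_v / sigma_u),
    with sigma_u = g(x+1, y+1) - g(x, y) and sigma_v = g(x+1, y) - g(x, y+1)."""
    su = g[1:, 1:] - g[:-1, :-1]   # diagonal difference
    sv = g[1:, :-1] - g[:-1, 1:]   # anti-diagonal difference
    return np.arctan2(sv, su)      # angle of the local gradient

g = np.array([[0., 0.], [1., 1.]])   # simple horizontal edge
print(gradient_direction(g))         # pi/4 for this patch
```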

C. Wavelet Transformation

An image comprises two dimensions (x, y); when we grey-scale the image we change the intensity at each coordinate of the image. Here we use a 4 x 4 matrix for the wavelet transformation. In the recognition part of the wavelet transformation we use the kernel e^(−di²/σ). Fig. 3 shows the transformation according to the wavelet values.

Figure 3. Image of signature after 1 wavelet transformation



As can be seen, we used a 4 x 4 matrix; this was done with the help of a time-adjacency matrix, which we obtained through temporal groups.

D. Segmentation

In the segmentation part we divided the signature images using an edge-detection algorithm. With the help of the Canny method the image is thresholded at two levels, which detect the strong and the weak parts of the image. This helps in the detection of forgeries.

It comprised the following MATLAB code:

I = imread('signature.jpeg');   % read the grey-scaled input image
imshow(I);                      % display it
BW1 = edge(I, 'Canny');         % binary edge map using the Canny method

Here we took the grey-scaled image as input and ran the function. The Canny method looks for local maxima of the gradient, which is computed using a Gaussian filter. Two thresholds are used to detect the weak and the strong edges; the thresholds set the sensitivity, and we took the following readings: sigma 5, low threshold 40% and high threshold 50%. Fig. 4 and Fig. 5 show the input and output of the segmentation:

Figure 4. Input image to the segmentation.

From this input image we checked all the results of the edge-detection algorithms, and we obtained the best accuracy, about 80%, with the Canny method.

Figure 5. Output with the canny method.

Now, with this image we started feature extraction.
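The double-threshold step of the Canny method described above (low threshold 40%, high threshold 50%) can be sketched as follows. Only the threshold classification is shown; Gaussian smoothing, non-maximum suppression and hysteresis linking are omitted, and the function name and example array are illustrative assumptions.

```python
import numpy as np

def classify_edges(grad_mag, low=0.40, high=0.50):
    """Canny-style double thresholding: fractions of the maximum gradient
    magnitude split pixels into strong edges, weak edges and non-edges."""
    t_low, t_high = low * grad_mag.max(), high * grad_mag.max()
    strong = grad_mag >= t_high
    weak = (grad_mag >= t_low) & ~strong
    return strong, weak

mag = np.array([[0.0, 0.2, 0.45, 1.0]])   # toy gradient magnitudes
strong, weak = classify_edges(mag)
print(strong)   # only the 1.0 pixel is a strong edge
print(weak)     # the 0.45 pixel falls between the two thresholds
```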

E. Feature Extraction

Feature extraction is one of the most important methods for forgery detection, database rendering and signature recognition. Any noise or shift in factors like eccentricity or skewness could cause a temporal shift. In the results we obtained: area of the image 16 pixels, centroid (2, 2), eccentricity 0.365, skewness 0.041 and kurtosis 0.978. We extracted the wavelets, shadow, texture, and the graphometric and geometric features of the signature. These results can be seen in Fig. 6.

Figure 6. Feature extraction
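Region features such as area, centroid and eccentricity can be computed from image moments. The sketch below is illustrative only (the function and the solid-square example are assumptions, not the paper's data):

```python
import numpy as np

def region_features(binary):
    """Area, centroid and eccentricity of a binary signature image,
    computed from image moments."""
    ys, xs = np.nonzero(binary)
    area = len(xs)
    cx, cy = xs.mean(), ys.mean()
    # central second moments -> covariance eigenvalues -> eccentricity
    cov = np.cov(np.stack([xs - cx, ys - cy]))
    lam = np.sort(np.linalg.eigvalsh(cov))        # lam[0] <= lam[1]
    ecc = np.sqrt(1 - lam[0] / lam[1]) if lam[1] > 0 else 0.0
    return area, (cx, cy), ecc

# 4 x 4 solid square: area 16 pixels, centred shape, zero eccentricity
img = np.ones((4, 4), dtype=bool)
print(region_features(img))
```

Skewness and kurtosis of the pixel distribution can be derived the same way from the third and fourth central moments.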

F. Model Training

We chose a CNN for our model training, since in recent years CNNs have been a good and reliable choice for image processing and machine learning experiments. A CNN consists of layers, each of which plays a significant role: when data provided to a layer has been processed and a result found, that result is passed on as the input to the next layer.

Coming to the training part, we chose the regression (L1) loss function, one of the most common measures of the loss between the predicted quantity and the true answer:

Li = ∥f − yi∥1 = Σj ∣fj − (yi)j∣

Through this step we obtained 3 patched outputs, which showed that our trained model gives 87% accuracy (somewhat less than expected).
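The regression loss above is the L1 (sum of absolute differences) loss; a minimal sketch:

```python
import numpy as np

def l1_loss(f, y):
    """L1 loss from the text: L_i = ||f - y_i||_1 = sum_j |f_j - (y_i)_j|."""
    return np.abs(f - y).sum()

f = np.array([0.9, 0.1, 0.2])   # predicted scores
y = np.array([1.0, 0.0, 0.0])   # true labels
print(l1_loss(f, y))            # 0.1 + 0.1 + 0.2, i.e. about 0.4
```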

Storing this information could be one of the most important parts of fraud/forgery detection.

IV. SIGNATURE VERIFICATION THROUGH ANN

A. Preprocessing

Signatures are grey-scaled. This makes the signature more suitable and refined for feature extraction. The pre-processing stage includes the following steps: background elimination, noise reduction, width normalization and thinning. The pre-processing of an example signature is shown in Figure 7.

Figure 7. Signature image after grey-scaling

Pre-processing consists of the following steps:

a) Background elimination

b) Noise reduction

c) Thinning

B. Feature Extraction

Just as signature verification is one of the most important steps in biometrics and authentication, feature extraction is a key step in image processing. It helps in distinguishing the signatures of various types of users. It depends on two features, the shape and the texture of the image, and we also focus on properties like entropy, skewness and kurtosis.

Feature extraction contains the following steps:

1. GLOBAL FEATURE EXTRACTION

A. SIGNATURE DENSITY - This provides the number of pixels in the signature area. First the preprocessed image is read and then scanned row-wise or column-wise, which helps in counting the pixels and tells which are the black ones and which are to be counted.

B. EDGE POINTS - This counts the total number of edge points of the signature. Here we recognize whether a neighbouring pixel is black or not; we take a gap of 8 pixels and then count the next pixel. The total number of edge points is calculated using this method.

2. FACTORS OF TEXTURE - The following factors affect the texture:

A. Entropy - A heterogeneous region has high entropy, whereas a homogeneous region has low entropy.

B. Contrast - It is used to measure the local intensity variation of the image.

C. Correlation - It measures the grey-level linear dependence between specified positions.

C. SIGNATURE DATASET

We took 420 signatures for training and testing of the model. For better forgery detection we took 4 different signatures from the users; some of them (15) gave 6 signatures. For each person one forgery signature of each type was signed. The training set contains 190 original signatures (25x4 + 15x6). For testing, 22 forgeries of each type were taken, and 13 original signatures were used to test the originals.
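The texture factors above are commonly computed from a grey-level co-occurrence matrix (GLCM). The sketch below shows entropy and contrast only (correlation is omitted, since it is degenerate for a flat patch); the quantisation, the offset choice and the homogeneous test patch are illustrative assumptions.

```python
import numpy as np

def glcm(img, levels=4):
    """Grey-level co-occurrence matrix for horizontally adjacent pixels,
    normalised to a probability distribution."""
    P = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[a, b] += 1
    return P / P.sum()

def texture_factors(P):
    """Entropy and contrast of a normalised co-occurrence matrix."""
    i, j = np.indices(P.shape)
    entropy = -np.sum(P[P > 0] * np.log2(P[P > 0]))
    contrast = np.sum(P * (i - j) ** 2)
    return entropy, contrast

flat = np.zeros((4, 4), dtype=int)      # perfectly homogeneous patch
ent, con = texture_factors(glcm(flat))
print(ent, con)   # zero entropy and zero contrast, as the text suggests
```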

D. DESIGNING ANN

Network layers: 7

Neurons per layer: 8

Learning rate: 0.25

Initial weight: 1
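A network with the stated design (7 layers of 8 neurons) can be sketched as a feedforward pass in NumPy. Random initial weights are used here instead of the stated constant initial weight of 1, since identical weights make the layers degenerate; the learning rate is defined but the training loop is omitted. This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N_LAYERS, N_NEURONS, LR = 7, 8, 0.25    # design parameters from the text

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 8 input features -> 7 hidden layers of 8 neurons -> 1 genuine/forged score
sizes = [8] + [N_NEURONS] * N_LAYERS + [1]
weights = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Each layer's output becomes the input to the next layer."""
    for W in weights:
        x = sigmoid(x @ W)
    return x

score = forward(rng.random(8))
print(float(score[0]))   # a value in (0, 1); LR would drive weight updates
```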

E. TRAINING AND TESTING MODEL

Training of the model is done by the methods shown above, from pre-processing to texture recognition. When we test the model, we use the following steps:

1. Retrieve the pre-processed image from the dataset.

2. Extract the features of the image through the techniques discussed above.

3. Apply the ANN described above.

4. Match the output generated by the ANN, and show the result with the corresponding accuracy.

V. RESULT

A database of 400 signatures was tested. The accuracy we achieved with CNN was 89% and with ANN it was 77%. This shows that CNN is the preferred method for image processing, and especially for image/text mining. ANN has a high false acceptance rate (FAR), which reduces its efficiency for this task.
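FAR and FRR can be computed directly from the verification counts. The sketch below uses hypothetical counts (not the paper's data), chosen so that FAR exceeds FRR as the conclusion later reports:

```python
def verification_rates(genuine_accepted, genuine_total,
                       forged_accepted, forged_total):
    """False acceptance rate (forgeries accepted) and false rejection
    rate (genuine signatures rejected) of a verification system."""
    far = forged_accepted / forged_total
    frr = (genuine_total - genuine_accepted) / genuine_total
    return far, frr

# hypothetical counts for illustration only
far, frr = verification_rates(genuine_accepted=10, genuine_total=13,
                              forged_accepted=20, forged_total=66)
print(far, frr)   # FAR > FRR in this example
```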

The basic difference between these two neural networks lies in the layers: a CNN builds stronger and more accurate results layer by layer, whereas an ANN lacks this property in image processing.

VI. CONCLUSION

Convolutional Neural Networks are used in almost every part of image processing, so they could be effective for bodies like banks and government offices. Offline signature verification and retrieval of information is undoubtedly one of the biggest tasks nowadays, and CNNs can play an important role in it.


CNNs can be used effectively on more powerful, next-generation computers, and the method discussed above could be one of the best options for bodies working in the offline signature field. We achieved an accuracy of only 89% with CNN, compared with the ANN, where we used the Gaussian method for segmentation and extraction and achieved only 77%. This concludes that the CNN method is much more preferable than the other methods; the accuracy of the CNN could be higher still if the weights of the layers are increased.

In addition, it is suggested not to use signature normalization, because the ANN retrieves the given features of the image; with respect to that context we have used the above-mentioned features for signature representation. However, the dataset used by us differs somewhat from that of other approaches. Results show that FAR is greater than FRR. Further dependency is tested through the features obtained from the signatures in the offline signature dataset. In addition to this result, a new training algorithm, such as a genetic algorithm for adjusting the weights, could be established.

ACKNOWLEDGEMENT

I would like to express special thanks and warmth to my Research Guide, Dr. Vikrant Bhateja, who gave me the opportunity to do this research paper. His vital support, assistance and encouragement made it possible to achieve the goal. Lastly, I would like to thank the authors of the reference materials mentioned in the reference section for their commendable research.

REFERENCES

[1] G. Alvarez, B. Sheffer and M. Bryant, “Offline Signature Verification with Convolutional Neural Networks”.

[2] M. Kumar, “Signature Verification Using Neural Network”, IJCSE, ISSN 0975-3397, 2012.

[3] V. Iranmanesh, S. M. Syed Ahmad, W. A. Wan Adnan, F. L. Malallah and S. Yussof, “Online Signature Verification Using Neural Network and Pearson Correlation Features”, IEEE Conference on Open Systems, 2-4 Dec. 2013.

[4] S. Mushtaq and A. H. Mir, “Signature Verification: A Study”, International Conference on Computer and Communication Technology (ICCCT), 2013, DOI: 10.1109/ICCCT.2013.6749637.

[5] C. Oz, F. Ercal and Z. Demir, “Signature Recognition and Verification with ANN”.

[6] A. Julita, S. Fauziyah, O. Azlina, B. Mardiana, H. Hazura and A. M. Zahariah, “Online Signature Verification System”, 5th International Colloquium on Signal Processing & Its Applications (CSPA), 2009.

[7] V. A. Bharadi and H. B. Kekre, “Off-Line Signature Recognition Systems”, International Journal of Computer Applications (0975-8887), Vol. 1, No. 27, 2010.

[8] A. Karouni, B. Daya and S. Bahlak, “Offline Signature Recognition Using Neural Networks Approach”, Procedia Computer Science, Vol. 3, pp. 155-161, 2010, DOI: 10.1016/j.procs.2010.12.027.

[9] S. Pal, U. Pal and M. Blumenstein, “Hindi and English Off-line Signature Identification and Verification”.

[10] P. Patil, B. Almeida, N. Chettiar and J. Babu, “Offline Signature Recognition System Using Histogram of Oriented Gradients”, International Conference on Advances in Computing, Communication and Control (ICAC3), 2017, DOI: 10.1109/ICAC3.2017.8318766.
